Merge branch 'akpm' (Andrew's incoming - part two)
Says Andrew: "60 patches. That's good enough for -rc1 I guess. I have
quite a lot of detritus to be rechecked, work through maintainers, etc.

 - most of the remains of MM
 - rtc
 - various misc
 - cgroups
 - memcg
 - cpusets
 - procfs
 - ipc
 - rapidio
 - sysctl
 - pps
 - w1
 - drivers/misc
 - aio"

* akpm: (60 commits)
  memcg: replace ss->id_lock with a rwlock
  aio: allocate kiocbs in batches
  drivers/misc/vmw_balloon.c: fix typo in code comment
  drivers/misc/vmw_balloon.c: determine page allocation flag can_sleep outside loop
  w1: disable irqs in critical section
  drivers/w1/w1_int.c: multiple masters used same init_name
  drivers/power/ds2780_battery.c: fix deadlock upon insertion and removal
  drivers/power/ds2780_battery.c: add a nolock function to w1 interface
  drivers/power/ds2780_battery.c: create central point for calling w1 interface
  w1: ds2760 and ds2780, use ida for id and ida_simple_get() to get it
  pps gpio client: add missing dependency
  pps: new client driver using GPIO
  pps: default echo function
  include/linux/dma-mapping.h: add dma_zalloc_coherent()
  sysctl: make CONFIG_SYSCTL_SYSCALL default to n
  sysctl: add support for poll()
  RapidIO: documentation update
  drivers/net/rionet.c: fix ethernet address macros for LE platforms
  RapidIO: fix potential null deref in rio_setup_device()
  RapidIO: add mport driver for Tsi721 bridge
  ...
This commit is contained in: commit 092f4c56c1
@@ -50,6 +50,13 @@ specify the GFP_ flags (see kmalloc) for the allocation (the
 implementation may choose to ignore flags that affect the location of
 the returned memory, like GFP_DMA).

+void *
+dma_zalloc_coherent(struct device *dev, size_t size,
+			dma_addr_t *dma_handle, gfp_t flag)
+
+Wraps dma_alloc_coherent() and also zeroes the returned memory if the
+allocation attempt succeeded.
+
 void
 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
 			dma_addr_t dma_handle)

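The added helper is just the allocate-then-zero pattern. A minimal userspace sketch of the same idea, using hypothetical `plain_alloc()`/`zalloc_wrapper()` names rather than the kernel API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for dma_alloc_coherent(): the underlying allocator. */
static void *plain_alloc(size_t size)
{
	return malloc(size);
}

/* Stand-in for dma_zalloc_coherent(): wrap the allocator and zero the
 * buffer only when the allocation attempt succeeded. */
static void *zalloc_wrapper(size_t size)
{
	void *p = plain_alloc(size);

	if (p)
		memset(p, 0, size);
	return p;
}
```

Callers that previously paired an allocation with a manual memset() can drop the memset.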
@@ -418,7 +418,6 @@ total_unevictable - sum of all children's "unevictable"

 # The following additional stats are dependent on CONFIG_DEBUG_VM.

 inactive_ratio		- VM internal parameter. (see mm/page_alloc.c)
 recent_rotated_anon	- VM internal parameter. (see mm/vmscan.c)
 recent_rotated_file	- VM internal parameter. (see mm/vmscan.c)
 recent_scanned_anon	- VM internal parameter. (see mm/vmscan.c)

@@ -133,41 +133,6 @@ Who:	Pavel Machek <pavel@ucw.cz>

 ---------------------------

-What:	sys_sysctl
-When:	September 2010
-Option: CONFIG_SYSCTL_SYSCALL
-Why:	The same information is available in a more convenient from
-	/proc/sys, and none of the sysctl variables appear to be
-	important performance wise.
-
-	Binary sysctls are a long standing source of subtle kernel
-	bugs and security issues.
-
-	When I looked several months ago all I could find after
-	searching several distributions were 5 user space programs and
-	glibc (which falls back to /proc/sys) using this syscall.
-
-	The man page for sysctl(2) documents it as unusable for user
-	space programs.
-
-	sysctl(2) is not generally ABI compatible to a 32bit user
-	space application on a 64bit and a 32bit kernel.
-
-	For the last several months the policy has been no new binary
-	sysctls and no one has put forward an argument to use them.
-
-	Binary sysctls issues seem to keep happening appearing so
-	properly deprecating them (with a warning to user space) and a
-	2 year grace warning period will mean eventually we can kill
-	them and end the pain.
-
-	In the mean time individual binary sysctls can be dealt with
-	in a piecewise fashion.
-
-Who:	Eric Biederman <ebiederm@xmission.com>
-
----------------------------
-
 What:	/proc/<pid>/oom_adj
 When:	August 2012
 Why:	/proc/<pid>/oom_adj allows userspace to influence the oom killer's
@@ -144,7 +144,7 @@ and the default device ID in order to access the device on the active port.

 After the host has completed enumeration of the entire network it releases
 devices by clearing device ID locks (calls rio_clear_locks()). For each endpoint
-in the system, it sets the Master Enable bit in the Port General Control CSR
+in the system, it sets the Discovered bit in the Port General Control CSR
 to indicate that enumeration is completed and agents are allowed to execute
 passive discovery of the network.

Documentation/rapidio/tsi721.txt (new file, 49 lines)
@@ -0,0 +1,49 @@
RapidIO subsystem mport driver for IDT Tsi721 PCI Express-to-SRIO bridge.
=========================================================================

I. Overview

This driver implements all currently defined RapidIO mport callback functions.
It supports maintenance read and write operations, inbound and outbound RapidIO
doorbells, inbound maintenance port-writes and RapidIO messaging.

To generate SRIO maintenance transactions this driver uses one of Tsi721 DMA
channels. This mechanism provides access to a larger range of hop counts and
destination IDs without the need for changes in outbound window translation.

RapidIO messaging support uses dedicated messaging channels for each mailbox.
For inbound messages this driver uses destination ID matching to forward messages
into the corresponding message queue. Messaging callbacks are implemented to be
fully compatible with the RIONET driver (Ethernet over RapidIO messaging services).

II. Known problems

  None.

III. To do

  Add DMA data transfers (non-messaging).
  Add inbound region (SRIO-to-PCIe) mapping.

IV. Version History

  1.0.0 - Initial driver release.

V. License
-----------------------------------------------

  Copyright(c) 2011 Integrated Device Technology, Inc. All rights reserved.

  This program is free software; you can redistribute it and/or modify it
  under the terms of the GNU General Public License as published by the Free
  Software Foundation; either version 2 of the License, or (at your option)
  any later version.

  This program is distributed in the hope that it will be useful, but WITHOUT
  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
  FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
  more details.

  You should have received a copy of the GNU General Public License along with
  this program; if not, write to the Free Software Foundation, Inc.,
  59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
@@ -16,16 +16,6 @@

 #ifdef __HAVE_ARCH_PTE_SPECIAL

-static inline void get_huge_page_tail(struct page *page)
-{
-	/*
-	 * __split_huge_page_refcount() cannot run
-	 * from under us.
-	 */
-	VM_BUG_ON(atomic_read(&page->_count) < 0);
-	atomic_inc(&page->_count);
-}
-
 /*
  * The performance critical leaf functions are made noinline otherwise gcc
  * inlines everything into a single function which results in too much
@@ -57,8 +47,6 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 			put_page(page);
 			return 0;
 		}
-		if (PageTail(page))
-			get_huge_page_tail(page);
 		pages[*nr] = page;
 		(*nr)++;

|
@ -390,7 +390,7 @@ static noinline int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long add
|
||||
{
|
||||
unsigned long mask;
|
||||
unsigned long pte_end;
|
||||
struct page *head, *page;
|
||||
struct page *head, *page, *tail;
|
||||
pte_t pte;
|
||||
int refs;
|
||||
|
||||
@ -413,6 +413,7 @@ static noinline int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long add
|
||||
head = pte_page(pte);
|
||||
|
||||
page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
|
||||
tail = page;
|
||||
do {
|
||||
VM_BUG_ON(compound_head(page) != head);
|
||||
pages[*nr] = page;
|
||||
@ -428,10 +429,20 @@ static noinline int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long add
|
||||
|
||||
if (unlikely(pte_val(pte) != pte_val(*ptep))) {
|
||||
/* Could be optimized better */
|
||||
while (*nr) {
|
||||
put_page(page);
|
||||
(*nr)--;
|
||||
}
|
||||
*nr -= refs;
|
||||
while (refs--)
|
||||
put_page(head);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Any tail page need their mapcount reference taken before we
|
||||
* return.
|
||||
*/
|
||||
while (refs--) {
|
||||
if (PageTail(tail))
|
||||
get_huge_page_tail(tail);
|
||||
tail++;
|
||||
}
|
||||
|
||||
return 1;
|
||||
|
@@ -1608,6 +1608,7 @@ int fsl_rio_setup(struct platform_device *dev)
 	return 0;
 err:
 	iounmap(priv->regs_win);
+	release_resource(&port->iores);
 err_res:
 	kfree(priv);
 err_priv:
@@ -52,7 +52,7 @@ static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
 	unsigned long mask, result;
-	struct page *head, *page;
+	struct page *head, *page, *tail;
 	int refs;

 	result = write ? 0 : _SEGMENT_ENTRY_RO;
@@ -64,6 +64,7 @@ static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 	refs = 0;
 	head = pmd_page(pmd);
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	tail = page;
 	do {
 		VM_BUG_ON(compound_head(page) != head);
 		pages[*nr] = page;
@@ -81,6 +82,17 @@ static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		*nr -= refs;
 		while (refs--)
 			put_page(head);
 		return 0;
 	}

+	/*
+	 * Any tail page need their mapcount reference taken before we
+	 * return.
+	 */
+	while (refs--) {
+		if (PageTail(tail))
+			get_huge_page_tail(tail);
+		tail++;
+	}
+
 	return 1;
@@ -56,6 +56,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 			put_page(head);
 			return 0;
 		}
+		if (head != page)
+			get_huge_page_tail(page);

 		pages[*nr] = page;
 		(*nr)++;

@@ -108,16 +108,6 @@ static inline void get_head_page_multiple(struct page *page, int nr)
 	SetPageReferenced(page);
 }

-static inline void get_huge_page_tail(struct page *page)
-{
-	/*
-	 * __split_huge_page_refcount() cannot run
-	 * from under us.
-	 */
-	VM_BUG_ON(atomic_read(&page->_count) < 0);
-	atomic_inc(&page->_count);
-}
-
 static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
@@ -151,7 +151,7 @@ MODULE_LICENSE("GPL");
 struct vmballoon_stats {
 	unsigned int timer;

-	/* allocation statustics */
+	/* allocation statistics */
 	unsigned int alloc;
 	unsigned int alloc_fail;
 	unsigned int sleep_alloc;
@@ -412,6 +412,7 @@ static int vmballoon_reserve_page(struct vmballoon *b, bool can_sleep)
 	gfp_t flags;
 	unsigned int hv_status;
 	bool locked = false;
+	flags = can_sleep ? VMW_PAGE_ALLOC_CANSLEEP : VMW_PAGE_ALLOC_NOSLEEP;

 	do {
 		if (!can_sleep)
@@ -419,7 +420,6 @@ static int vmballoon_reserve_page(struct vmballoon *b, bool can_sleep)
 		else
 			STATS_INC(b->stats.sleep_alloc);

-		flags = can_sleep ? VMW_PAGE_ALLOC_CANSLEEP : VMW_PAGE_ALLOC_NOSLEEP;
 		page = alloc_page(flags);
 		if (!page) {
 			if (!can_sleep)
@@ -88,8 +88,8 @@ static struct rio_dev **rionet_active;
 #define dev_rionet_capable(dev) \
 	is_rionet_capable(dev->src_ops, dev->dst_ops)

-#define RIONET_MAC_MATCH(x)	(*(u32 *)x == 0x00010001)
-#define RIONET_GET_DESTID(x)	(*(u16 *)(x + 4))
+#define RIONET_MAC_MATCH(x)	(!memcmp((x), "\00\01\00\01", 4))
+#define RIONET_GET_DESTID(x)	((*((u8 *)x + 4) << 8) | *((u8 *)x + 5))

 static int rionet_rx_clean(struct net_device *ndev)
 {
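The old macros reinterpreted MAC bytes as host-endian integers, so the comparison only matched on big-endian machines. A small self-contained C illustration of why the byte-wise versions are endian-safe (function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Byte-wise match of the 00:01:00:01 prefix: independent of host
 * endianness, unlike comparing *(u32 *)mac against 0x00010001. */
static int mac_match_bytes(const unsigned char *mac)
{
	return memcmp(mac, "\x00\x01\x00\x01", 4) == 0;
}

/* The destination ID is carried big-endian in MAC bytes 4 and 5;
 * assemble it explicitly instead of casting to a u16. */
static uint16_t mac_destid(const unsigned char *mac)
{
	return (uint16_t)((mac[4] << 8) | mac[5]);
}
```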
@@ -39,6 +39,7 @@ struct ds2780_device_info {
 	struct device *dev;
 	struct power_supply bat;
 	struct device *w1_dev;
+	struct task_struct *mutex_holder;
 };

 enum current_types {
@@ -49,8 +50,8 @@ enum current_types {
 static const char model[] = "DS2780";
 static const char manufacturer[] = "Maxim/Dallas";

-static inline struct ds2780_device_info *to_ds2780_device_info(
-	struct power_supply *psy)
+static inline struct ds2780_device_info *
+to_ds2780_device_info(struct power_supply *psy)
 {
 	return container_of(psy, struct ds2780_device_info, bat);
 }
@@ -60,17 +61,28 @@ static inline struct power_supply *to_power_supply(struct device *dev)
 	return dev_get_drvdata(dev);
 }

-static inline int ds2780_read8(struct device *dev, u8 *val, int addr)
+static inline int ds2780_battery_io(struct ds2780_device_info *dev_info,
+			char *buf, int addr, size_t count, int io)
 {
-	return w1_ds2780_io(dev, val, addr, sizeof(u8), 0);
+	if (dev_info->mutex_holder == current)
+		return w1_ds2780_io_nolock(dev_info->w1_dev, buf, addr, count, io);
+	else
+		return w1_ds2780_io(dev_info->w1_dev, buf, addr, count, io);
 }

-static int ds2780_read16(struct device *dev, s16 *val, int addr)
+static inline int ds2780_read8(struct ds2780_device_info *dev_info, u8 *val,
+		int addr)
+{
+	return ds2780_battery_io(dev_info, val, addr, sizeof(u8), 0);
+}
+
+static int ds2780_read16(struct ds2780_device_info *dev_info, s16 *val,
+		int addr)
 {
 	int ret;
 	u8 raw[2];

-	ret = w1_ds2780_io(dev, raw, addr, sizeof(u8) * 2, 0);
+	ret = ds2780_battery_io(dev_info, raw, addr, sizeof(raw), 0);
 	if (ret < 0)
 		return ret;

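The deadlock fix funnels every register access through one helper that checks whether the current task already holds the w1 bus mutex (probe and remove record themselves in mutex_holder, and only then take the nolock path). A userspace sketch of that dispatch, with illustrative stand-in types and io functions rather than the kernel API:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct task_struct; only identity matters here. */
struct task { int id; };

static struct task probe_task = { 1 };
static struct task *mutex_holder;	/* like dev_info->mutex_holder */

static int io_nolock(int addr) { return addr; }		/* pretend bus transfer */
static int io_locked(int addr) { return io_nolock(addr); }	/* would take the mutex first */

/* Central access point, like ds2780_battery_io(): callers that already
 * hold the bus mutex take the nolock path to avoid self-deadlock. */
static int battery_io(struct task *current_task, int addr)
{
	if (mutex_holder == current_task)
		return io_nolock(addr);
	return io_locked(addr);
}
```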
@@ -79,16 +91,16 @@ static int ds2780_read16(struct device *dev, s16 *val, int addr)
 	return 0;
 }

-static inline int ds2780_read_block(struct device *dev, u8 *val, int addr,
-	size_t count)
+static inline int ds2780_read_block(struct ds2780_device_info *dev_info,
+	u8 *val, int addr, size_t count)
 {
-	return w1_ds2780_io(dev, val, addr, count, 0);
+	return ds2780_battery_io(dev_info, val, addr, count, 0);
 }

-static inline int ds2780_write(struct device *dev, u8 *val, int addr,
-	size_t count)
+static inline int ds2780_write(struct ds2780_device_info *dev_info, u8 *val,
+	int addr, size_t count)
 {
-	return w1_ds2780_io(dev, val, addr, count, 1);
+	return ds2780_battery_io(dev_info, val, addr, count, 1);
 }

 static inline int ds2780_store_eeprom(struct device *dev, int addr)
@@ -122,7 +134,7 @@ static int ds2780_set_sense_register(struct ds2780_device_info *dev_info,
 {
 	int ret;

-	ret = ds2780_write(dev_info->w1_dev, &conductance,
+	ret = ds2780_write(dev_info, &conductance,
 				DS2780_RSNSP_REG, sizeof(u8));
 	if (ret < 0)
 		return ret;
@@ -134,7 +146,7 @@ static int ds2780_set_sense_register(struct ds2780_device_info *dev_info,
 static int ds2780_get_rsgain_register(struct ds2780_device_info *dev_info,
 	u16 *rsgain)
 {
-	return ds2780_read16(dev_info->w1_dev, rsgain, DS2780_RSGAIN_MSB_REG);
+	return ds2780_read16(dev_info, rsgain, DS2780_RSGAIN_MSB_REG);
 }

 /* Set RSGAIN value from 0 to 1.999 in steps of 0.001 */
@@ -144,8 +156,8 @@ static int ds2780_set_rsgain_register(struct ds2780_device_info *dev_info,
 	int ret;
 	u8 raw[] = {rsgain >> 8, rsgain & 0xFF};

-	ret = ds2780_write(dev_info->w1_dev, raw,
-				DS2780_RSGAIN_MSB_REG, sizeof(u8) * 2);
+	ret = ds2780_write(dev_info, raw,
+				DS2780_RSGAIN_MSB_REG, sizeof(raw));
 	if (ret < 0)
 		return ret;

@@ -167,7 +179,7 @@ static int ds2780_get_voltage(struct ds2780_device_info *dev_info,
 	 * Bits 2 - 0 of the voltage value are in bits 7 - 5 of the
 	 * voltage LSB register
 	 */
-	ret = ds2780_read16(dev_info->w1_dev, &voltage_raw,
+	ret = ds2780_read16(dev_info, &voltage_raw,
 				DS2780_VOLT_MSB_REG);
 	if (ret < 0)
 		return ret;
@@ -196,7 +208,7 @@ static int ds2780_get_temperature(struct ds2780_device_info *dev_info,
 	 * Bits 2 - 0 of the temperature value are in bits 7 - 5 of the
 	 * temperature LSB register
 	 */
-	ret = ds2780_read16(dev_info->w1_dev, &temperature_raw,
+	ret = ds2780_read16(dev_info, &temperature_raw,
 				DS2780_TEMP_MSB_REG);
 	if (ret < 0)
 		return ret;
@@ -222,13 +234,13 @@ static int ds2780_get_current(struct ds2780_device_info *dev_info,
 	 * The units of measurement for current are dependent on the value of
 	 * the sense resistor.
 	 */
-	ret = ds2780_read8(dev_info->w1_dev, &sense_res_raw, DS2780_RSNSP_REG);
+	ret = ds2780_read8(dev_info, &sense_res_raw, DS2780_RSNSP_REG);
 	if (ret < 0)
 		return ret;

 	if (sense_res_raw == 0) {
 		dev_err(dev_info->dev, "sense resistor value is 0\n");
-		return -ENXIO;
+		return -EINVAL;
 	}
 	sense_res = 1000 / sense_res_raw;

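The driver derives the sense resistance as 1000 / raw, so a zero register value must be rejected before the division; the hunk above also switches the error code to -EINVAL (invalid value) from -ENXIO. A minimal sketch of the guard, with an illustrative errno stand-in rather than the kernel's definitions:

```c
#include <assert.h>

#define MY_EINVAL 22	/* illustrative stand-in for the kernel's EINVAL */

/* Convert the raw conductance register to a resistance value, as in
 * ds2780_get_current(): a zero register would divide by zero, so
 * reject it with an "invalid value" error instead. */
static int sense_resistance(unsigned char conductance_raw, int *res_out)
{
	if (conductance_raw == 0)
		return -MY_EINVAL;
	*res_out = 1000 / conductance_raw;
	return 0;
}
```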
@@ -248,7 +260,7 @@ static int ds2780_get_current(struct ds2780_device_info *dev_info,
 	 * Bits 7 - 0 of the current value are in bits 7 - 0 of the current
 	 * LSB register
 	 */
-	ret = ds2780_read16(dev_info->w1_dev, &current_raw, reg_msb);
+	ret = ds2780_read16(dev_info, &current_raw, reg_msb);
 	if (ret < 0)
 		return ret;

@@ -267,7 +279,7 @@ static int ds2780_get_accumulated_current(struct ds2780_device_info *dev_info,
 	 * The units of measurement for accumulated current are dependent on
 	 * the value of the sense resistor.
 	 */
-	ret = ds2780_read8(dev_info->w1_dev, &sense_res_raw, DS2780_RSNSP_REG);
+	ret = ds2780_read8(dev_info, &sense_res_raw, DS2780_RSNSP_REG);
 	if (ret < 0)
 		return ret;

@@ -285,7 +297,7 @@ static int ds2780_get_accumulated_current(struct ds2780_device_info *dev_info,
 	 * Bits 7 - 0 of the ACR value are in bits 7 - 0 of the ACR
 	 * LSB register
 	 */
-	ret = ds2780_read16(dev_info->w1_dev, &current_raw, DS2780_ACR_MSB_REG);
+	ret = ds2780_read16(dev_info, &current_raw, DS2780_ACR_MSB_REG);
 	if (ret < 0)
 		return ret;

@@ -299,7 +311,7 @@ static int ds2780_get_capacity(struct ds2780_device_info *dev_info,
 	int ret;
 	u8 raw;

-	ret = ds2780_read8(dev_info->w1_dev, &raw, DS2780_RARC_REG);
+	ret = ds2780_read8(dev_info, &raw, DS2780_RARC_REG);
 	if (ret < 0)
 		return ret;

@@ -345,7 +357,7 @@ static int ds2780_get_charge_now(struct ds2780_device_info *dev_info,
 	 * Bits 7 - 0 of the RAAC value are in bits 7 - 0 of the RAAC
 	 * LSB register
 	 */
-	ret = ds2780_read16(dev_info->w1_dev, &charge_raw, DS2780_RAAC_MSB_REG);
+	ret = ds2780_read16(dev_info, &charge_raw, DS2780_RAAC_MSB_REG);
 	if (ret < 0)
 		return ret;

@@ -356,7 +368,7 @@ static int ds2780_get_charge_now(struct ds2780_device_info *dev_info,
 static int ds2780_get_control_register(struct ds2780_device_info *dev_info,
 	u8 *control_reg)
 {
-	return ds2780_read8(dev_info->w1_dev, control_reg, DS2780_CONTROL_REG);
+	return ds2780_read8(dev_info, control_reg, DS2780_CONTROL_REG);
 }

 static int ds2780_set_control_register(struct ds2780_device_info *dev_info,
||||
static int ds2780_set_control_register(struct ds2780_device_info *dev_info,
|
||||
@ -364,7 +376,7 @@ static int ds2780_set_control_register(struct ds2780_device_info *dev_info,
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = ds2780_write(dev_info->w1_dev, &control_reg,
|
||||
ret = ds2780_write(dev_info, &control_reg,
|
||||
DS2780_CONTROL_REG, sizeof(u8));
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
@@ -503,7 +515,7 @@ static ssize_t ds2780_get_sense_resistor_value(struct device *dev,
 	struct power_supply *psy = to_power_supply(dev);
 	struct ds2780_device_info *dev_info = to_ds2780_device_info(psy);

-	ret = ds2780_read8(dev_info->w1_dev, &sense_resistor, DS2780_RSNSP_REG);
+	ret = ds2780_read8(dev_info, &sense_resistor, DS2780_RSNSP_REG);
 	if (ret < 0)
 		return ret;

@@ -584,7 +596,7 @@ static ssize_t ds2780_get_pio_pin(struct device *dev,
 	struct power_supply *psy = to_power_supply(dev);
 	struct ds2780_device_info *dev_info = to_ds2780_device_info(psy);

-	ret = ds2780_read8(dev_info->w1_dev, &sfr, DS2780_SFR_REG);
+	ret = ds2780_read8(dev_info, &sfr, DS2780_SFR_REG);
 	if (ret < 0)
 		return ret;

@@ -611,7 +623,7 @@ static ssize_t ds2780_set_pio_pin(struct device *dev,
 		return -EINVAL;
 	}

-	ret = ds2780_write(dev_info->w1_dev, &new_setting,
+	ret = ds2780_write(dev_info, &new_setting,
 				DS2780_SFR_REG, sizeof(u8));
 	if (ret < 0)
 		return ret;
@@ -632,7 +644,7 @@ static ssize_t ds2780_read_param_eeprom_bin(struct file *filp,
 			DS2780_EEPROM_BLOCK1_END -
 			DS2780_EEPROM_BLOCK1_START + 1 - off);

-	return ds2780_read_block(dev_info->w1_dev, buf,
+	return ds2780_read_block(dev_info, buf,
 				DS2780_EEPROM_BLOCK1_START + off, count);
 }

@@ -650,7 +662,7 @@ static ssize_t ds2780_write_param_eeprom_bin(struct file *filp,
 			DS2780_EEPROM_BLOCK1_END -
 			DS2780_EEPROM_BLOCK1_START + 1 - off);

-	ret = ds2780_write(dev_info->w1_dev, buf,
+	ret = ds2780_write(dev_info, buf,
 				DS2780_EEPROM_BLOCK1_START + off, count);
 	if (ret < 0)
 		return ret;
@@ -685,9 +697,8 @@ static ssize_t ds2780_read_user_eeprom_bin(struct file *filp,
 			DS2780_EEPROM_BLOCK0_END -
 			DS2780_EEPROM_BLOCK0_START + 1 - off);

-	return ds2780_read_block(dev_info->w1_dev, buf,
+	return ds2780_read_block(dev_info, buf,
 				DS2780_EEPROM_BLOCK0_START + off, count);
-
 }

 static ssize_t ds2780_write_user_eeprom_bin(struct file *filp,
@@ -704,7 +715,7 @@ static ssize_t ds2780_write_user_eeprom_bin(struct file *filp,
 			DS2780_EEPROM_BLOCK0_END -
 			DS2780_EEPROM_BLOCK0_START + 1 - off);

-	ret = ds2780_write(dev_info->w1_dev, buf,
+	ret = ds2780_write(dev_info, buf,
 				DS2780_EEPROM_BLOCK0_START + off, count);
 	if (ret < 0)
 		return ret;
@@ -768,6 +779,7 @@ static int __devinit ds2780_battery_probe(struct platform_device *pdev)
 	dev_info->bat.properties	= ds2780_battery_props;
 	dev_info->bat.num_properties	= ARRAY_SIZE(ds2780_battery_props);
 	dev_info->bat.get_property	= ds2780_battery_get_property;
+	dev_info->mutex_holder		= current;

 	ret = power_supply_register(&pdev->dev, &dev_info->bat);
 	if (ret) {
@@ -797,6 +809,8 @@ static int __devinit ds2780_battery_probe(struct platform_device *pdev)
 		goto fail_remove_bin_file;
 	}

+	dev_info->mutex_holder = NULL;
+
 	return 0;

 fail_remove_bin_file:
@@ -816,6 +830,8 @@ static int __devexit ds2780_battery_remove(struct platform_device *pdev)
 {
 	struct ds2780_device_info *dev_info = platform_get_drvdata(pdev);

+	dev_info->mutex_holder = current;
+
 	/* remove attributes */
 	sysfs_remove_group(&dev_info->bat.dev->kobj, &ds2780_attr_group);

@@ -29,4 +29,13 @@ config PPS_CLIENT_PARPORT
 	  If you say yes here you get support for a PPS source connected
 	  with the interrupt pin of your parallel port.

+config PPS_CLIENT_GPIO
+	tristate "PPS client using GPIO"
+	depends on PPS && GENERIC_HARDIRQS
+	help
+	  If you say yes here you get support for a PPS source using
+	  GPIO. To be useful you must also register a platform device
+	  specifying the GPIO pin and other options, usually in your board
+	  setup.
+
 endif
@@ -5,5 +5,6 @@
 obj-$(CONFIG_PPS_CLIENT_KTIMER)	+= pps-ktimer.o
 obj-$(CONFIG_PPS_CLIENT_LDISC)	+= pps-ldisc.o
 obj-$(CONFIG_PPS_CLIENT_PARPORT)	+= pps_parport.o
+obj-$(CONFIG_PPS_CLIENT_GPIO)	+= pps-gpio.o

 ccflags-$(CONFIG_PPS_DEBUG) := -DDEBUG
drivers/pps/clients/pps-gpio.c (new file, 227 lines)
@@ -0,0 +1,227 @@
/*
 * pps-gpio.c -- PPS client driver using GPIO
 *
 *
 * Copyright (C) 2010 Ricardo Martins <rasm@fe.up.pt>
 * Copyright (C) 2011 James Nuss <jamesnuss@nanometrics.ca>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

#define PPS_GPIO_NAME "pps-gpio"
#define pr_fmt(fmt) PPS_GPIO_NAME ": " fmt

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/pps_kernel.h>
#include <linux/pps-gpio.h>
#include <linux/gpio.h>
#include <linux/list.h>

/* Info for each registered platform device */
struct pps_gpio_device_data {
	int irq;			/* IRQ used as PPS source */
	struct pps_device *pps;		/* PPS source device */
	struct pps_source_info info;	/* PPS source information */
	const struct pps_gpio_platform_data *pdata;
};

/*
 * Report the PPS event
 */

static irqreturn_t pps_gpio_irq_handler(int irq, void *data)
{
	const struct pps_gpio_device_data *info;
	struct pps_event_time ts;
	int rising_edge;

	/* Get the time stamp first */
	pps_get_ts(&ts);

	info = data;

	rising_edge = gpio_get_value(info->pdata->gpio_pin);
	if ((rising_edge && !info->pdata->assert_falling_edge) ||
			(!rising_edge && info->pdata->assert_falling_edge))
		pps_event(info->pps, &ts, PPS_CAPTUREASSERT, NULL);
	else if (info->pdata->capture_clear &&
			((rising_edge && info->pdata->assert_falling_edge) ||
			(!rising_edge && !info->pdata->assert_falling_edge)))
		pps_event(info->pps, &ts, PPS_CAPTURECLEAR, NULL);

	return IRQ_HANDLED;
}

static int pps_gpio_setup(struct platform_device *pdev)
{
	int ret;
	const struct pps_gpio_platform_data *pdata = pdev->dev.platform_data;

	ret = gpio_request(pdata->gpio_pin, pdata->gpio_label);
	if (ret) {
		pr_warning("failed to request GPIO %u\n", pdata->gpio_pin);
		return -EINVAL;
	}

	ret = gpio_direction_input(pdata->gpio_pin);
	if (ret) {
		pr_warning("failed to set pin direction\n");
		gpio_free(pdata->gpio_pin);
		return -EINVAL;
	}

	return 0;
}

static unsigned long
get_irqf_trigger_flags(const struct pps_gpio_platform_data *pdata)
{
	unsigned long flags = pdata->assert_falling_edge ?
		IRQF_TRIGGER_FALLING : IRQF_TRIGGER_RISING;

	if (pdata->capture_clear) {
		flags |= ((flags & IRQF_TRIGGER_RISING) ?
				IRQF_TRIGGER_FALLING : IRQF_TRIGGER_RISING);
	}

	return flags;
}
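get_irqf_trigger_flags() above picks the IRQ edge that corresponds to the assert event and, when clear events are captured too, adds the opposite edge. The same logic with stand-in bit values (the TRIG_* macros below are illustrative, not the kernel's IRQF_TRIGGER_* constants):

```c
#include <assert.h>

#define TRIG_RISING  0x1	/* stand-in for IRQF_TRIGGER_RISING */
#define TRIG_FALLING 0x2	/* stand-in for IRQF_TRIGGER_FALLING */

/* Edge selection as in get_irqf_trigger_flags(): start with the edge
 * matching the assert event, then OR in the opposite edge if clear
 * events must be captured as well. */
static unsigned long trigger_flags(int assert_falling_edge, int capture_clear)
{
	unsigned long flags = assert_falling_edge ? TRIG_FALLING : TRIG_RISING;

	if (capture_clear)
		flags |= (flags & TRIG_RISING) ? TRIG_FALLING : TRIG_RISING;
	return flags;
}
```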
static int pps_gpio_probe(struct platform_device *pdev)
{
	struct pps_gpio_device_data *data;
	int irq;
	int ret;
	int err;
	int pps_default_params;
	const struct pps_gpio_platform_data *pdata = pdev->dev.platform_data;


	/* GPIO setup */
	ret = pps_gpio_setup(pdev);
	if (ret)
		return -EINVAL;

	/* IRQ setup */
	irq = gpio_to_irq(pdata->gpio_pin);
	if (irq < 0) {
		pr_err("failed to map GPIO to IRQ: %d\n", irq);
		err = -EINVAL;
		goto return_error;
	}

	/* allocate space for device info */
	data = kzalloc(sizeof(struct pps_gpio_device_data), GFP_KERNEL);
	if (data == NULL) {
		err = -ENOMEM;
		goto return_error;
	}

	/* initialize PPS specific parts of the bookkeeping data structure. */
	data->info.mode = PPS_CAPTUREASSERT | PPS_OFFSETASSERT |
		PPS_ECHOASSERT | PPS_CANWAIT | PPS_TSFMT_TSPEC;
	if (pdata->capture_clear)
		data->info.mode |= PPS_CAPTURECLEAR | PPS_OFFSETCLEAR |
			PPS_ECHOCLEAR;
	data->info.owner = THIS_MODULE;
	snprintf(data->info.name, PPS_MAX_NAME_LEN - 1, "%s.%d",
		 pdev->name, pdev->id);

	/* register PPS source */
	pps_default_params = PPS_CAPTUREASSERT | PPS_OFFSETASSERT;
	if (pdata->capture_clear)
		pps_default_params |= PPS_CAPTURECLEAR | PPS_OFFSETCLEAR;
	data->pps = pps_register_source(&data->info, pps_default_params);
	if (data->pps == NULL) {
		kfree(data);
		pr_err("failed to register IRQ %d as PPS source\n", irq);
		err = -EINVAL;
		goto return_error;
	}

	data->irq = irq;
	data->pdata = pdata;

	/* register IRQ interrupt handler */
	ret = request_irq(irq, pps_gpio_irq_handler,
			get_irqf_trigger_flags(pdata), data->info.name, data);
	if (ret) {
		pps_unregister_source(data->pps);
		kfree(data);
		pr_err("failed to acquire IRQ %d\n", irq);
		err = -EINVAL;
		goto return_error;
	}

	platform_set_drvdata(pdev, data);
	dev_info(data->pps->dev, "Registered IRQ %d as PPS source\n", irq);

	return 0;

return_error:
	gpio_free(pdata->gpio_pin);
	return err;
}

static int pps_gpio_remove(struct platform_device *pdev)
{
	struct pps_gpio_device_data *data = platform_get_drvdata(pdev);
	const struct pps_gpio_platform_data *pdata = data->pdata;

	platform_set_drvdata(pdev, NULL);
	free_irq(data->irq, data);
	gpio_free(pdata->gpio_pin);
	pps_unregister_source(data->pps);
	pr_info("removed IRQ %d as PPS source\n", data->irq);
	kfree(data);
	return 0;
}

static struct platform_driver pps_gpio_driver = {
	.probe = pps_gpio_probe,
	.remove = __devexit_p(pps_gpio_remove),
	.driver = {
		.name = PPS_GPIO_NAME,
		.owner = THIS_MODULE
	},
};

static int __init pps_gpio_init(void)
{
	int ret = platform_driver_register(&pps_gpio_driver);
	if (ret < 0)
		pr_err("failed to register platform driver\n");
	return ret;
}

static void __exit pps_gpio_exit(void)
{
	platform_driver_unregister(&pps_gpio_driver);
	pr_debug("unregistered platform driver\n");
}

module_init(pps_gpio_init);
module_exit(pps_gpio_exit);

MODULE_AUTHOR("Ricardo Martins <rasm@fe.up.pt>");
MODULE_AUTHOR("James Nuss <jamesnuss@nanometrics.ca>");
MODULE_DESCRIPTION("Use GPIO pin as PPS source");
MODULE_LICENSE("GPL");
MODULE_VERSION("1.0.0");
@@ -51,17 +51,6 @@ static void pps_ktimer_event(unsigned long ptr)
 	mod_timer(&ktimer, jiffies + HZ);
 }
 
-/*
- * The echo function
- */
-
-static void pps_ktimer_echo(struct pps_device *pps, int event, void *data)
-{
-	dev_info(pps->dev, "echo %s %s\n",
-		event & PPS_CAPTUREASSERT ? "assert" : "",
-		event & PPS_CAPTURECLEAR ? "clear" : "");
-}
-
 /*
  * The PPS info struct
  */
@@ -72,7 +61,6 @@ static struct pps_source_info pps_ktimer_info = {
 	.mode		= PPS_CAPTUREASSERT | PPS_OFFSETASSERT |
			  PPS_ECHOASSERT |
			  PPS_CANWAIT | PPS_TSFMT_TSPEC,
-	.echo		= pps_ktimer_echo,
 	.owner		= THIS_MODULE,
 };
@@ -133,14 +133,6 @@ out_both:
 	return;
 }
 
-/* the PPS echo function */
-static void pps_echo(struct pps_device *pps, int event, void *data)
-{
-	dev_info(pps->dev, "echo %s %s\n",
-		event & PPS_CAPTUREASSERT ? "assert" : "",
-		event & PPS_CAPTURECLEAR ? "clear" : "");
-}
-
 static void parport_attach(struct parport *port)
 {
 	struct pps_client_pp *device;
@@ -151,7 +143,6 @@ static void parport_attach(struct parport *port)
			       PPS_OFFSETASSERT | PPS_OFFSETCLEAR | \
			       PPS_ECHOASSERT | PPS_ECHOCLEAR | \
			       PPS_CANWAIT | PPS_TSFMT_TSPEC,
-		.echo		= pps_echo,
 		.owner		= THIS_MODULE,
 		.dev		= NULL
 	};
@@ -52,6 +52,14 @@ static void pps_add_offset(struct pps_ktime *ts, struct pps_ktime *offset)
 	ts->sec += offset->sec;
 }
 
+static void pps_echo_client_default(struct pps_device *pps, int event,
+		void *data)
+{
+	dev_info(pps->dev, "echo %s %s\n",
+		event & PPS_CAPTUREASSERT ? "assert" : "",
+		event & PPS_CAPTURECLEAR ? "clear" : "");
+}
+
 /*
  * Exported functions
  */
@@ -80,13 +88,6 @@ struct pps_device *pps_register_source(struct pps_source_info *info,
 		err = -EINVAL;
 		goto pps_register_source_exit;
 	}
-	if ((info->mode & (PPS_ECHOASSERT | PPS_ECHOCLEAR)) != 0 &&
-			info->echo == NULL) {
-		pr_err("%s: echo function is not defined\n",
-					info->name);
-		err = -EINVAL;
-		goto pps_register_source_exit;
-	}
 	if ((info->mode & (PPS_TSFMT_TSPEC | PPS_TSFMT_NTPFP)) == 0) {
 		pr_err("%s: unspecified time format\n",
					info->name);
@@ -108,6 +109,11 @@ struct pps_device *pps_register_source(struct pps_source_info *info,
 	pps->params.mode = default_params;
 	pps->info = *info;
 
+	/* check for default echo function */
+	if ((pps->info.mode & (PPS_ECHOASSERT | PPS_ECHOCLEAR)) &&
+	    pps->info.echo == NULL)
+		pps->info.echo = pps_echo_client_default;
+
 	init_waitqueue_head(&pps->queue);
 	spin_lock_init(&pps->lock);
@@ -1,6 +1,8 @@
 #
 # RapidIO configuration
 #
+source "drivers/rapidio/devices/Kconfig"
+
 config RAPIDIO_DISC_TIMEOUT
 	int "Discovery timeout duration (seconds)"
 	depends on RAPIDIO
@@ -20,8 +22,6 @@ config RAPIDIO_ENABLE_RX_TX_PORTS
 	  ports for Input/Output direction to allow other traffic
 	  than Maintenance transfers.
 
-source "drivers/rapidio/switches/Kconfig"
-
 config RAPIDIO_DEBUG
 	bool "RapidIO subsystem debug messages"
 	depends on RAPIDIO
@@ -32,3 +32,5 @@ config RAPIDIO_DEBUG
 	  going on.
 
 	  If you are unsure about this, say N here.
+
+source "drivers/rapidio/switches/Kconfig"

@@ -4,5 +4,6 @@
 obj-y += rio.o rio-access.o rio-driver.o rio-scan.o rio-sysfs.o
 
 obj-$(CONFIG_RAPIDIO) += switches/
+obj-$(CONFIG_RAPIDIO) += devices/
 
 subdir-ccflags-$(CONFIG_RAPIDIO_DEBUG) := -DDEBUG
drivers/rapidio/devices/Kconfig | 10 (new file)
@@ -0,0 +1,10 @@
#
# RapidIO master port configuration
#

config RAPIDIO_TSI721
	bool "IDT Tsi721 PCI Express SRIO Controller support"
	depends on RAPIDIO && PCIEPORTBUS
	default "n"
	---help---
	  Include support for IDT Tsi721 PCI Express Serial RapidIO controller.

drivers/rapidio/devices/Makefile | 5 (new file)
@@ -0,0 +1,5 @@
#
# Makefile for RapidIO devices
#

obj-$(CONFIG_RAPIDIO_TSI721) += tsi721.o

drivers/rapidio/devices/tsi721.c | 2360 (new file)
(file diff suppressed because it is too large)

drivers/rapidio/devices/tsi721.h | 766 (new file)
@@ -0,0 +1,766 @@
/*
 * Tsi721 PCIExpress-to-SRIO bridge definitions
 *
 * Copyright 2011, Integrated Device Technology, Inc.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the Free
 * Software Foundation; either version 2 of the License, or (at your option)
 * any later version.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 *
 * You should have received a copy of the GNU General Public License along with
 * this program; if not, write to the Free Software Foundation, Inc., 59
 * Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 */

#ifndef __TSI721_H
#define __TSI721_H

#define DRV_NAME	"tsi721"

#define DEFAULT_HOPCOUNT	0xff
#define DEFAULT_DESTID		0xff

/* PCI device ID */
#define PCI_DEVICE_ID_TSI721	0x80ab

#define BAR_0	0
#define BAR_1	1
#define BAR_2	2
#define BAR_4	4

#define TSI721_PC2SR_BARS	2
#define TSI721_PC2SR_WINS	8
#define TSI721_PC2SR_ZONES	8
#define TSI721_MAINT_WIN	0	/* Window for outbound maintenance requests */
#define IDB_QUEUE		0	/* Inbound Doorbell Queue to use */
#define IDB_QSIZE		512	/* Inbound Doorbell Queue size */

/* Memory space sizes */
#define TSI721_REG_SPACE_SIZE	(512 * 1024)		/* 512K */
#define TSI721_DB_WIN_SIZE	(16 * 1024 * 1024)	/* 16MB */

#define RIO_TT_CODE_8	0x00000000
#define RIO_TT_CODE_16	0x00000001

#define TSI721_DMA_MAXCH	8
#define TSI721_DMA_MINSTSSZ	32
#define TSI721_DMA_STSBLKSZ	8

#define TSI721_SRIO_MAXCH	8

#define DBELL_SID(buf)	(((u8)buf[2] << 8) | (u8)buf[3])
#define DBELL_TID(buf)	(((u8)buf[4] << 8) | (u8)buf[5])
#define DBELL_INF(buf)	(((u8)buf[0] << 8) | (u8)buf[1])
#define TSI721_RIO_PW_MSG_SIZE	16	/* Tsi721 saves only 16 bytes of PW msg */

/* Register definitions */

/*
 * Registers in PCIe configuration space
 */

#define TSI721_PCIECFG_MSIXTBL	0x0a4
#define TSI721_MSIXTBL_OFFSET	0x2c000
#define TSI721_PCIECFG_MSIXPBA	0x0a8
#define TSI721_MSIXPBA_OFFSET	0x2a000
#define TSI721_PCIECFG_EPCTL	0x400

/*
 * Event Management Registers
 */

#define TSI721_RIO_EM_INT_STAT		0x10910
#define TSI721_RIO_EM_INT_STAT_PW_RX	0x00010000

#define TSI721_RIO_EM_INT_ENABLE	0x10914
#define TSI721_RIO_EM_INT_ENABLE_PW_RX	0x00010000

#define TSI721_RIO_EM_DEV_INT_EN	0x10930
#define TSI721_RIO_EM_DEV_INT_EN_INT	0x00000001

/*
 * Port-Write Block Registers
 */

#define TSI721_RIO_PW_CTL		0x10a04
#define TSI721_RIO_PW_CTL_PW_TIMER	0xf0000000
#define TSI721_RIO_PW_CTL_PWT_DIS	(0 << 28)
#define TSI721_RIO_PW_CTL_PWT_103	(1 << 28)
#define TSI721_RIO_PW_CTL_PWT_205	(1 << 29)
#define TSI721_RIO_PW_CTL_PWT_410	(1 << 30)
#define TSI721_RIO_PW_CTL_PWT_820	(1 << 31)
#define TSI721_RIO_PW_CTL_PWC_MODE	0x01000000
#define TSI721_RIO_PW_CTL_PWC_CONT	0x00000000
#define TSI721_RIO_PW_CTL_PWC_REL	0x01000000

#define TSI721_RIO_PW_RX_STAT		0x10a10
#define TSI721_RIO_PW_RX_STAT_WR_SIZE	0x0000f000
#define TSI_RIO_PW_RX_STAT_WDPTR	0x00000100
#define TSI721_RIO_PW_RX_STAT_PW_SHORT	0x00000008
#define TSI721_RIO_PW_RX_STAT_PW_TRUNC	0x00000004
#define TSI721_RIO_PW_RX_STAT_PW_DISC	0x00000002
#define TSI721_RIO_PW_RX_STAT_PW_VAL	0x00000001

#define TSI721_RIO_PW_RX_CAPT(x)	(0x10a20 + (x)*4)

/*
 * Inbound Doorbells
 */

#define TSI721_IDB_ENTRY_SIZE	64

#define TSI721_IDQ_CTL(x)	(0x20000 + (x) * 1000)
#define TSI721_IDQ_SUSPEND	0x00000002
#define TSI721_IDQ_INIT		0x00000001

#define TSI721_IDQ_STS(x)	(0x20004 + (x) * 1000)
#define TSI721_IDQ_RUN		0x00200000

#define TSI721_IDQ_MASK(x)	(0x20008 + (x) * 1000)
#define TSI721_IDQ_MASK_MASK	0xffff0000
#define TSI721_IDQ_MASK_PATT	0x0000ffff

#define TSI721_IDQ_RP(x)	(0x2000c + (x) * 1000)
#define TSI721_IDQ_RP_PTR	0x0007ffff

#define TSI721_IDQ_WP(x)	(0x20010 + (x) * 1000)
#define TSI721_IDQ_WP_PTR	0x0007ffff

#define TSI721_IDQ_BASEL(x)	(0x20014 + (x) * 1000)
#define TSI721_IDQ_BASEL_ADDR	0xffffffc0
#define TSI721_IDQ_BASEU(x)	(0x20018 + (x) * 1000)
#define TSI721_IDQ_SIZE(x)	(0x2001c + (x) * 1000)
#define TSI721_IDQ_SIZE_VAL(size)	(__fls(size) - 4)
#define TSI721_IDQ_SIZE_MIN	512
#define TSI721_IDQ_SIZE_MAX	(512 * 1024)
#define TSI721_SR_CHINT(x)	(0x20040 + (x) * 1000)
#define TSI721_SR_CHINTE(x)	(0x20044 + (x) * 1000)
#define TSI721_SR_CHINTSET(x)	(0x20048 + (x) * 1000)
#define TSI721_SR_CHINT_ODBOK	0x00000020
#define TSI721_SR_CHINT_IDBQRCV	0x00000010
#define TSI721_SR_CHINT_SUSP	0x00000008
#define TSI721_SR_CHINT_ODBTO	0x00000004
#define TSI721_SR_CHINT_ODBRTRY	0x00000002
#define TSI721_SR_CHINT_ODBERR	0x00000001
#define TSI721_SR_CHINT_ALL	0x0000003f

#define TSI721_IBWIN_NUM	8

#define TSI721_IBWINLB(x)	(0x29000 + (x) * 20)
#define TSI721_IBWINLB_BA	0xfffff000
#define TSI721_IBWINLB_WEN	0x00000001

#define TSI721_SR2PC_GEN_INTE	0x29800
#define TSI721_SR2PC_PWE	0x29804
#define TSI721_SR2PC_GEN_INT	0x29808

#define TSI721_DEV_INTE		0x29840
#define TSI721_DEV_INT		0x29844
#define TSI721_DEV_INTSET	0x29848
#define TSI721_DEV_INT_SMSG_CH	0x00000800
#define TSI721_DEV_INT_SMSG_NCH	0x00000400
#define TSI721_DEV_INT_SR2PC_CH	0x00000200
#define TSI721_DEV_INT_SRIO	0x00000020

#define TSI721_DEV_CHAN_INTE	0x2984c
#define TSI721_DEV_CHAN_INT	0x29850

#define TSI721_INT_SR2PC_CHAN_M	0xff000000
#define TSI721_INT_SR2PC_CHAN(x)	(1 << (24 + (x)))
#define TSI721_INT_IMSG_CHAN_M	0x00ff0000
#define TSI721_INT_IMSG_CHAN(x)	(1 << (16 + (x)))
#define TSI721_INT_OMSG_CHAN_M	0x0000ff00
#define TSI721_INT_OMSG_CHAN(x)	(1 << (8 + (x)))

/*
 * PC2SR block registers
 */
#define TSI721_OBWIN_NUM	TSI721_PC2SR_WINS

#define TSI721_OBWINLB(x)	(0x40000 + (x) * 20)
#define TSI721_OBWINLB_BA	0xffff8000
#define TSI721_OBWINLB_WEN	0x00000001

#define TSI721_OBWINUB(x)	(0x40004 + (x) * 20)

#define TSI721_OBWINSZ(x)	(0x40008 + (x) * 20)
#define TSI721_OBWINSZ_SIZE	0x00001f00
#define TSI721_OBWIN_SIZE(size)	(__fls(size) - 15)

#define TSI721_ZONE_SEL		0x41300
#define TSI721_ZONE_SEL_RD_WRB	0x00020000
#define TSI721_ZONE_SEL_GO	0x00010000
#define TSI721_ZONE_SEL_WIN	0x00000038
#define TSI721_ZONE_SEL_ZONE	0x00000007

#define TSI721_LUT_DATA0	0x41304
#define TSI721_LUT_DATA0_ADD	0xfffff000
#define TSI721_LUT_DATA0_RDTYPE	0x00000f00
#define TSI721_LUT_DATA0_NREAD	0x00000100
#define TSI721_LUT_DATA0_MNTRD	0x00000200
#define TSI721_LUT_DATA0_RDCRF	0x00000020
#define TSI721_LUT_DATA0_WRCRF	0x00000010
#define TSI721_LUT_DATA0_WRTYPE	0x0000000f
#define TSI721_LUT_DATA0_NWR	0x00000001
#define TSI721_LUT_DATA0_MNTWR	0x00000002
#define TSI721_LUT_DATA0_NWR_R	0x00000004

#define TSI721_LUT_DATA1	0x41308

#define TSI721_LUT_DATA2	0x4130c
#define TSI721_LUT_DATA2_HC	0xff000000
#define TSI721_LUT_DATA2_ADD65	0x000c0000
#define TSI721_LUT_DATA2_TT	0x00030000
#define TSI721_LUT_DATA2_DSTID	0x0000ffff

#define TSI721_PC2SR_INTE	0x41310

#define TSI721_DEVCTL		0x48004
#define TSI721_DEVCTL_SRBOOT_CMPL	0x00000004

#define TSI721_I2C_INT_ENABLE	0x49120

/*
 * Block DMA Engine Registers
 *   x = 0..7
 */

#define TSI721_DMAC_DWRCNT(x)	(0x51000 + (x) * 0x1000)
#define TSI721_DMAC_DRDCNT(x)	(0x51004 + (x) * 0x1000)

#define TSI721_DMAC_CTL(x)	(0x51008 + (x) * 0x1000)
#define TSI721_DMAC_CTL_SUSP	0x00000002
#define TSI721_DMAC_CTL_INIT	0x00000001

#define TSI721_DMAC_INT(x)	(0x5100c + (x) * 0x1000)
#define TSI721_DMAC_INT_STFULL	0x00000010
#define TSI721_DMAC_INT_DONE	0x00000008
#define TSI721_DMAC_INT_SUSP	0x00000004
#define TSI721_DMAC_INT_ERR	0x00000002
#define TSI721_DMAC_INT_IOFDONE	0x00000001
#define TSI721_DMAC_INT_ALL	0x0000001f

#define TSI721_DMAC_INTSET(x)	(0x51010 + (x) * 0x1000)

#define TSI721_DMAC_STS(x)	(0x51014 + (x) * 0x1000)
#define TSI721_DMAC_STS_ABORT	0x00400000
#define TSI721_DMAC_STS_RUN	0x00200000
#define TSI721_DMAC_STS_CS	0x001f0000

#define TSI721_DMAC_INTE(x)	(0x51018 + (x) * 0x1000)

#define TSI721_DMAC_DPTRL(x)	(0x51024 + (x) * 0x1000)
#define TSI721_DMAC_DPTRL_MASK	0xffffffe0

#define TSI721_DMAC_DPTRH(x)	(0x51028 + (x) * 0x1000)

#define TSI721_DMAC_DSBL(x)	(0x5102c + (x) * 0x1000)
#define TSI721_DMAC_DSBL_MASK	0xffffffc0

#define TSI721_DMAC_DSBH(x)	(0x51030 + (x) * 0x1000)

#define TSI721_DMAC_DSSZ(x)	(0x51034 + (x) * 0x1000)
#define TSI721_DMAC_DSSZ_SIZE_M	0x0000000f
#define TSI721_DMAC_DSSZ_SIZE(size)	(__fls(size) - 4)

#define TSI721_DMAC_DSRP(x)	(0x51038 + (x) * 0x1000)
#define TSI721_DMAC_DSRP_MASK	0x0007ffff

#define TSI721_DMAC_DSWP(x)	(0x5103c + (x) * 0x1000)
#define TSI721_DMAC_DSWP_MASK	0x0007ffff

#define TSI721_BDMA_INTE	0x5f000

/*
 * Messaging definitions
 */
#define TSI721_MSG_BUFFER_SIZE	RIO_MAX_MSG_SIZE
#define TSI721_MSG_MAX_SIZE	RIO_MAX_MSG_SIZE
#define TSI721_IMSG_MAXCH	8
#define TSI721_IMSG_CHNUM	TSI721_IMSG_MAXCH
#define TSI721_IMSGD_MIN_RING_SIZE	32
#define TSI721_IMSGD_RING_SIZE	512

#define TSI721_OMSG_CHNUM	4	/* One channel per MBOX */
#define TSI721_OMSGD_MIN_RING_SIZE	32
#define TSI721_OMSGD_RING_SIZE	512

/*
 * Outbound Messaging Engine Registers
 *   x = 0..7
 */

#define TSI721_OBDMAC_DWRCNT(x)	(0x61000 + (x) * 0x1000)

#define TSI721_OBDMAC_DRDCNT(x)	(0x61004 + (x) * 0x1000)

#define TSI721_OBDMAC_CTL(x)	(0x61008 + (x) * 0x1000)
#define TSI721_OBDMAC_CTL_MASK	0x00000007
#define TSI721_OBDMAC_CTL_RETRY_THR	0x00000004
#define TSI721_OBDMAC_CTL_SUSPEND	0x00000002
#define TSI721_OBDMAC_CTL_INIT	0x00000001

#define TSI721_OBDMAC_INT(x)	(0x6100c + (x) * 0x1000)
#define TSI721_OBDMAC_INTSET(x)	(0x61010 + (x) * 0x1000)
#define TSI721_OBDMAC_INTE(x)	(0x61018 + (x) * 0x1000)
#define TSI721_OBDMAC_INT_MASK	0x0000001F
#define TSI721_OBDMAC_INT_ST_FULL	0x00000010
#define TSI721_OBDMAC_INT_DONE	0x00000008
#define TSI721_OBDMAC_INT_SUSPENDED	0x00000004
#define TSI721_OBDMAC_INT_ERROR	0x00000002
#define TSI721_OBDMAC_INT_IOF_DONE	0x00000001
#define TSI721_OBDMAC_INT_ALL	TSI721_OBDMAC_INT_MASK

#define TSI721_OBDMAC_STS(x)	(0x61014 + (x) * 0x1000)
#define TSI721_OBDMAC_STS_MASK	0x007f0000
#define TSI721_OBDMAC_STS_ABORT	0x00400000
#define TSI721_OBDMAC_STS_RUN	0x00200000
#define TSI721_OBDMAC_STS_CS	0x001f0000

#define TSI721_OBDMAC_PWE(x)	(0x6101c + (x) * 0x1000)
#define TSI721_OBDMAC_PWE_MASK	0x00000002
#define TSI721_OBDMAC_PWE_ERROR_EN	0x00000002

#define TSI721_OBDMAC_DPTRL(x)	(0x61020 + (x) * 0x1000)
#define TSI721_OBDMAC_DPTRL_MASK	0xfffffff0

#define TSI721_OBDMAC_DPTRH(x)	(0x61024 + (x) * 0x1000)
#define TSI721_OBDMAC_DPTRH_MASK	0xffffffff

#define TSI721_OBDMAC_DSBL(x)	(0x61040 + (x) * 0x1000)
#define TSI721_OBDMAC_DSBL_MASK	0xffffffc0

#define TSI721_OBDMAC_DSBH(x)	(0x61044 + (x) * 0x1000)
#define TSI721_OBDMAC_DSBH_MASK	0xffffffff

#define TSI721_OBDMAC_DSSZ(x)	(0x61048 + (x) * 0x1000)
#define TSI721_OBDMAC_DSSZ_MASK	0x0000000f

#define TSI721_OBDMAC_DSRP(x)	(0x6104c + (x) * 0x1000)
#define TSI721_OBDMAC_DSRP_MASK	0x0007ffff

#define TSI721_OBDMAC_DSWP(x)	(0x61050 + (x) * 0x1000)
#define TSI721_OBDMAC_DSWP_MASK	0x0007ffff

#define TSI721_RQRPTO		0x60010
#define TSI721_RQRPTO_MASK	0x00ffffff
#define TSI721_RQRPTO_VAL	400	/* Response TO value */
/*
 * Inbound Messaging Engine Registers
 *   x = 0..7
 */

#define TSI721_IB_DEVID_GLOBAL	0xffff
#define TSI721_IBDMAC_FQBL(x)	(0x61200 + (x) * 0x1000)
#define TSI721_IBDMAC_FQBL_MASK	0xffffffc0

#define TSI721_IBDMAC_FQBH(x)	(0x61204 + (x) * 0x1000)
#define TSI721_IBDMAC_FQBH_MASK	0xffffffff

#define TSI721_IBDMAC_FQSZ_ENTRY_INX	TSI721_IMSGD_RING_SIZE
#define TSI721_IBDMAC_FQSZ(x)	(0x61208 + (x) * 0x1000)
#define TSI721_IBDMAC_FQSZ_MASK	0x0000000f

#define TSI721_IBDMAC_FQRP(x)	(0x6120c + (x) * 0x1000)
#define TSI721_IBDMAC_FQRP_MASK	0x0007ffff

#define TSI721_IBDMAC_FQWP(x)	(0x61210 + (x) * 0x1000)
#define TSI721_IBDMAC_FQWP_MASK	0x0007ffff

#define TSI721_IBDMAC_FQTH(x)	(0x61214 + (x) * 0x1000)
#define TSI721_IBDMAC_FQTH_MASK	0x0007ffff

#define TSI721_IB_DEVID		0x60020
#define TSI721_IB_DEVID_MASK	0x0000ffff

#define TSI721_IBDMAC_CTL(x)	(0x61240 + (x) * 0x1000)
#define TSI721_IBDMAC_CTL_MASK	0x00000003
#define TSI721_IBDMAC_CTL_SUSPEND	0x00000002
#define TSI721_IBDMAC_CTL_INIT	0x00000001

#define TSI721_IBDMAC_STS(x)	(0x61244 + (x) * 0x1000)
#define TSI721_IBDMAC_STS_MASK	0x007f0000
#define TSI721_IBSMAC_STS_ABORT	0x00400000
#define TSI721_IBSMAC_STS_RUN	0x00200000
#define TSI721_IBSMAC_STS_CS	0x001f0000

#define TSI721_IBDMAC_INT(x)	(0x61248 + (x) * 0x1000)
#define TSI721_IBDMAC_INTSET(x)	(0x6124c + (x) * 0x1000)
#define TSI721_IBDMAC_INTE(x)	(0x61250 + (x) * 0x1000)
#define TSI721_IBDMAC_INT_MASK	0x0000100f
#define TSI721_IBDMAC_INT_SRTO	0x00001000
#define TSI721_IBDMAC_INT_SUSPENDED	0x00000008
#define TSI721_IBDMAC_INT_PC_ERROR	0x00000004
#define TSI721_IBDMAC_INT_FQ_LOW	0x00000002
#define TSI721_IBDMAC_INT_DQ_RCV	0x00000001
#define TSI721_IBDMAC_INT_ALL	TSI721_IBDMAC_INT_MASK

#define TSI721_IBDMAC_PWE(x)	(0x61254 + (x) * 0x1000)
#define TSI721_IBDMAC_PWE_MASK	0x00001700
#define TSI721_IBDMAC_PWE_SRTO	0x00001000
#define TSI721_IBDMAC_PWE_ILL_FMT	0x00000400
#define TSI721_IBDMAC_PWE_ILL_DEC	0x00000200
#define TSI721_IBDMAC_PWE_IMP_SP	0x00000100

#define TSI721_IBDMAC_DQBL(x)	(0x61300 + (x) * 0x1000)
#define TSI721_IBDMAC_DQBL_MASK	0xffffffc0
#define TSI721_IBDMAC_DQBL_ADDR	0xffffffc0

#define TSI721_IBDMAC_DQBH(x)	(0x61304 + (x) * 0x1000)
#define TSI721_IBDMAC_DQBH_MASK	0xffffffff

#define TSI721_IBDMAC_DQRP(x)	(0x61308 + (x) * 0x1000)
#define TSI721_IBDMAC_DQRP_MASK	0x0007ffff

#define TSI721_IBDMAC_DQWR(x)	(0x6130c + (x) * 0x1000)
#define TSI721_IBDMAC_DQWR_MASK	0x0007ffff

#define TSI721_IBDMAC_DQSZ(x)	(0x61314 + (x) * 0x1000)
#define TSI721_IBDMAC_DQSZ_MASK	0x0000000f

/*
 * Messaging Engine Interrupts
 */

#define TSI721_SMSG_PWE		0x6a004

#define TSI721_SMSG_INTE	0x6a000
#define TSI721_SMSG_INT		0x6a008
#define TSI721_SMSG_INTSET	0x6a010
#define TSI721_SMSG_INT_MASK	0x0086ffff
#define TSI721_SMSG_INT_UNS_RSP	0x00800000
#define TSI721_SMSG_INT_ECC_NCOR	0x00040000
#define TSI721_SMSG_INT_ECC_COR	0x00020000
#define TSI721_SMSG_INT_ECC_NCOR_CH	0x0000ff00
#define TSI721_SMSG_INT_ECC_COR_CH	0x000000ff

#define TSI721_SMSG_ECC_LOG	0x6a014
#define TSI721_SMSG_ECC_LOG_MASK	0x00070007
#define TSI721_SMSG_ECC_LOG_ECC_NCOR_M	0x00070000
#define TSI721_SMSG_ECC_LOG_ECC_COR_M	0x00000007

#define TSI721_RETRY_GEN_CNT	0x6a100
#define TSI721_RETRY_GEN_CNT_MASK	0xffffffff

#define TSI721_RETRY_RX_CNT	0x6a104
#define TSI721_RETRY_RX_CNT_MASK	0xffffffff

#define TSI721_SMSG_ECC_COR_LOG(x)	(0x6a300 + (x) * 4)
#define TSI721_SMSG_ECC_COR_LOG_MASK	0x000000ff

#define TSI721_SMSG_ECC_NCOR(x)	(0x6a340 + (x) * 4)
#define TSI721_SMSG_ECC_NCOR_MASK	0x000000ff

/*
 * Block DMA Descriptors
 */

struct tsi721_dma_desc {
	__le32 type_id;

#define TSI721_DMAD_DEVID	0x0000ffff
#define TSI721_DMAD_CRF		0x00010000
#define TSI721_DMAD_PRIO	0x00060000
#define TSI721_DMAD_RTYPE	0x00780000
#define TSI721_DMAD_IOF		0x08000000
#define TSI721_DMAD_DTYPE	0xe0000000

	__le32 bcount;

#define TSI721_DMAD_BCOUNT1	0x03ffffff	/* if DTYPE == 1 */
#define TSI721_DMAD_BCOUNT2	0x0000000f	/* if DTYPE == 2 */
#define TSI721_DMAD_TT		0x0c000000
#define TSI721_DMAD_RADDR0	0xc0000000

	union {
		__le32 raddr_lo;	/* if DTYPE == (1 || 2) */
		__le32 next_lo;		/* if DTYPE == 3 */
	};

#define TSI721_DMAD_CFGOFF	0x00ffffff
#define TSI721_DMAD_HOPCNT	0xff000000

	union {
		__le32 raddr_hi;	/* if DTYPE == (1 || 2) */
		__le32 next_hi;		/* if DTYPE == 3 */
	};

	union {
		struct {		/* if DTYPE == 1 */
			__le32 bufptr_lo;
			__le32 bufptr_hi;
			__le32 s_dist;
			__le32 s_size;
		} t1;
		__le32 data[4];		/* if DTYPE == 2 */
		u32 reserved[4];	/* if DTYPE == 3 */
	};
} __aligned(32);
/*
 * Inbound Messaging Descriptor
 */
struct tsi721_imsg_desc {
	__le32 type_id;

#define TSI721_IMD_DEVID	0x0000ffff
#define TSI721_IMD_CRF		0x00010000
#define TSI721_IMD_PRIO		0x00060000
#define TSI721_IMD_TT		0x00180000
#define TSI721_IMD_DTYPE	0xe0000000

	__le32 msg_info;

#define TSI721_IMD_BCOUNT	0x00000ff8
#define TSI721_IMD_SSIZE	0x0000f000
#define TSI721_IMD_LETER	0x00030000
#define TSI721_IMD_XMBOX	0x003c0000
#define TSI721_IMD_MBOX		0x00c00000
#define TSI721_IMD_CS		0x78000000
#define TSI721_IMD_HO		0x80000000

	__le32 bufptr_lo;
	__le32 bufptr_hi;
	u32    reserved[12];

} __aligned(64);

/*
 * Outbound Messaging Descriptor
 */
struct tsi721_omsg_desc {
	__le32 type_id;

#define TSI721_OMD_DEVID	0x0000ffff
#define TSI721_OMD_CRF		0x00010000
#define TSI721_OMD_PRIO		0x00060000
#define TSI721_OMD_IOF		0x08000000
#define TSI721_OMD_DTYPE	0xe0000000
#define TSI721_OMD_RSRVD	0x17f80000

	__le32 msg_info;

#define TSI721_OMD_BCOUNT	0x00000ff8
#define TSI721_OMD_SSIZE	0x0000f000
#define TSI721_OMD_LETER	0x00030000
#define TSI721_OMD_XMBOX	0x003c0000
#define TSI721_OMD_MBOX		0x00c00000
#define TSI721_OMD_TT		0x0c000000

	union {
		__le32 bufptr_lo;	/* if DTYPE == 4 */
		__le32 next_lo;		/* if DTYPE == 5 */
	};

	union {
		__le32 bufptr_hi;	/* if DTYPE == 4 */
		__le32 next_hi;		/* if DTYPE == 5 */
	};

} __aligned(16);

struct tsi721_dma_sts {
	__le64	desc_sts[8];
} __aligned(64);

struct tsi721_desc_sts_fifo {
	union {
		__le64	da64;
		struct {
			__le32	lo;
			__le32	hi;
		} da32;
	} stat[8];
} __aligned(64);

/* Descriptor types for BDMA and Messaging blocks */
enum dma_dtype {
	DTYPE1 = 1, /* Data Transfer DMA Descriptor */
	DTYPE2 = 2, /* Immediate Data Transfer DMA Descriptor */
	DTYPE3 = 3, /* Block Pointer DMA Descriptor */
	DTYPE4 = 4, /* Outbound Msg DMA Descriptor */
	DTYPE5 = 5, /* OB Messaging Block Pointer Descriptor */
	DTYPE6 = 6  /* Inbound Messaging Descriptor */
};

enum dma_rtype {
	NREAD = 0,
	LAST_NWRITE_R = 1,
	ALL_NWRITE = 2,
	ALL_NWRITE_R = 3,
	MAINT_RD = 4,
	MAINT_WR = 5
};

/*
 * mport Driver Definitions
 */
#define TSI721_DMA_CHNUM	TSI721_DMA_MAXCH

#define TSI721_DMACH_MAINT	0	/* DMA channel for maint requests */
#define TSI721_DMACH_MAINT_NBD	32	/* Number of BDs for maint requests */

#define MSG_DMA_ENTRY_INX_TO_SIZE(x)	((0x10 << (x)) & 0xFFFF0)
enum tsi721_smsg_int_flag {
	SMSG_INT_NONE		= 0x00000000,
	SMSG_INT_ECC_COR_CH	= 0x000000ff,
	SMSG_INT_ECC_NCOR_CH	= 0x0000ff00,
	SMSG_INT_ECC_COR	= 0x00020000,
	SMSG_INT_ECC_NCOR	= 0x00040000,
	SMSG_INT_UNS_RSP	= 0x00800000,
	SMSG_INT_ALL		= 0x0006ffff
};

/* Structures */

struct tsi721_bdma_chan {
	int		bd_num;		/* number of buffer descriptors */
	void		*bd_base;	/* start of DMA descriptors */
	dma_addr_t	bd_phys;
	void		*sts_base;	/* start of DMA BD status FIFO */
	dma_addr_t	sts_phys;
	int		sts_size;
};

struct tsi721_imsg_ring {
	u32		size;
	/* VA/PA of data buffers for incoming messages */
	void		*buf_base;
	dma_addr_t	buf_phys;
	/* VA/PA of circular free buffer list */
	void		*imfq_base;
	dma_addr_t	imfq_phys;
	/* VA/PA of Inbound message descriptors */
	void		*imd_base;
	dma_addr_t	imd_phys;
	/* Inbound Queue buffer pointers */
	void		*imq_base[TSI721_IMSGD_RING_SIZE];

	u32		rx_slot;
	void		*dev_id;
	u32		fq_wrptr;
	u32		desc_rdptr;
	spinlock_t	lock;
};

struct tsi721_omsg_ring {
	u32		size;
	/* VA/PA of OB Msg descriptors */
	void		*omd_base;
	dma_addr_t	omd_phys;
	/* VA/PA of OB Msg data buffers */
	void		*omq_base[TSI721_OMSGD_RING_SIZE];
	dma_addr_t	omq_phys[TSI721_OMSGD_RING_SIZE];
	/* VA/PA of OB Msg descriptor status FIFO */
	void		*sts_base;
	dma_addr_t	sts_phys;
	u32		sts_size;	/* # of allocated status entries */
	u32		sts_rdptr;

	u32		tx_slot;
	void		*dev_id;
	u32		wr_count;
	spinlock_t	lock;
};

enum tsi721_flags {
	TSI721_USING_MSI	= (1 << 0),
	TSI721_USING_MSIX	= (1 << 1),
	TSI721_IMSGID_SET	= (1 << 2),
};

#ifdef CONFIG_PCI_MSI
/*
 * MSI-X Table Entries (0 ... 69)
 */
#define TSI721_MSIX_DMACH_DONE(x)	(0 + (x))
#define TSI721_MSIX_DMACH_INT(x)	(8 + (x))
#define TSI721_MSIX_BDMA_INT		16
#define TSI721_MSIX_OMSG_DONE(x)	(17 + (x))
#define TSI721_MSIX_OMSG_INT(x)		(25 + (x))
#define TSI721_MSIX_IMSG_DQ_RCV(x)	(33 + (x))
#define TSI721_MSIX_IMSG_INT(x)		(41 + (x))
#define TSI721_MSIX_MSG_INT		49
#define TSI721_MSIX_SR2PC_IDBQ_RCV(x)	(50 + (x))
#define TSI721_MSIX_SR2PC_CH_INT(x)	(58 + (x))
#define TSI721_MSIX_SR2PC_INT		66
#define TSI721_MSIX_PC2SR_INT		67
#define TSI721_MSIX_SRIO_MAC_INT	68
#define TSI721_MSIX_I2C_INT		69

/* MSI-X vector and init table entry indexes */
enum tsi721_msix_vect {
	TSI721_VECT_IDB,
	TSI721_VECT_PWRX, /* PW_RX is part of SRIO MAC Interrupt reporting */
	TSI721_VECT_OMB0_DONE,
	TSI721_VECT_OMB1_DONE,
	TSI721_VECT_OMB2_DONE,
	TSI721_VECT_OMB3_DONE,
	TSI721_VECT_OMB0_INT,
	TSI721_VECT_OMB1_INT,
	TSI721_VECT_OMB2_INT,
	TSI721_VECT_OMB3_INT,
	TSI721_VECT_IMB0_RCV,
	TSI721_VECT_IMB1_RCV,
	TSI721_VECT_IMB2_RCV,
	TSI721_VECT_IMB3_RCV,
	TSI721_VECT_IMB0_INT,
	TSI721_VECT_IMB1_INT,
	TSI721_VECT_IMB2_INT,
	TSI721_VECT_IMB3_INT,
	TSI721_VECT_MAX
};

#define IRQ_DEVICE_NAME_MAX	64

struct msix_irq {
	u16	vector;
	char	irq_name[IRQ_DEVICE_NAME_MAX];
};
#endif /* CONFIG_PCI_MSI */

struct tsi721_device {
	struct pci_dev	*pdev;
	struct rio_mport *mport;
	u32		flags;
	void __iomem	*regs;
#ifdef CONFIG_PCI_MSI
	struct msix_irq	msix[TSI721_VECT_MAX];
#endif
	/* Doorbells */
	void __iomem	*odb_base;
	void		*idb_base;
	dma_addr_t	idb_dma;
	struct work_struct idb_work;
	u32		db_discard_count;

	/* Inbound Port-Write */
	struct work_struct pw_work;
	struct kfifo	pw_fifo;
	spinlock_t	pw_fifo_lock;
	u32		pw_discard_count;

	/* BDMA Engine */
	struct tsi721_bdma_chan bdma[TSI721_DMA_CHNUM];

	/* Inbound Messaging */
	int		imsg_init[TSI721_IMSG_CHNUM];
	struct tsi721_imsg_ring imsg_ring[TSI721_IMSG_CHNUM];

	/* Outbound Messaging */
	int		omsg_init[TSI721_OMSG_CHNUM];
	struct tsi721_omsg_ring	omsg_ring[TSI721_OMSG_CHNUM];
};

#endif
@@ -516,7 +516,7 @@ static struct rio_dev __devinit *rio_setup_device(struct rio_net *net,
 	return rdev;
 
 cleanup:
-	if (rio_is_switch(rdev))
+	if (rswitch)
 		kfree(rswitch->route_table);
 
 	kfree(rdev);
@@ -923,7 +923,7 @@ static int __devinit rio_enum_peer(struct rio_net *net, struct rio_mport *port,
  * rio_enum_complete- Tests if enumeration of a network is complete
  * @port: Master port to send transaction
  *
- * Tests the Component Tag CSR for non-zero value (enumeration
+ * Tests the PGCCSR discovered bit for non-zero value (enumeration
  * complete flag). Return %1 if enumeration is complete or %0 if
  * enumeration is incomplete.
  */
@@ -933,7 +933,7 @@ static int rio_enum_complete(struct rio_mport *port)
 
 	rio_local_read_config_32(port, port->phys_efptr + RIO_PORT_GEN_CTL_CSR,
 			&regval);
-	return (regval & RIO_PORT_GEN_MASTER) ? 1 : 0;
+	return (regval & RIO_PORT_GEN_DISCOVERED) ? 1 : 0;
 }
 
 /**
@@ -21,16 +21,13 @@
 #include "rtc-core.h"
 
 
-static DEFINE_IDR(rtc_idr);
-static DEFINE_MUTEX(idr_lock);
+static DEFINE_IDA(rtc_ida);
 struct class *rtc_class;
 
 static void rtc_device_release(struct device *dev)
 {
 	struct rtc_device *rtc = to_rtc_device(dev);
-	mutex_lock(&idr_lock);
-	idr_remove(&rtc_idr, rtc->id);
-	mutex_unlock(&idr_lock);
+	ida_simple_remove(&rtc_ida, rtc->id);
 	kfree(rtc);
 }
 
@@ -146,25 +143,16 @@ struct rtc_device *rtc_device_register(const char *name, struct device *dev,
 	struct rtc_wkalrm alrm;
 	int id, err;
 
-	if (idr_pre_get(&rtc_idr, GFP_KERNEL) == 0) {
-		err = -ENOMEM;
+	id = ida_simple_get(&rtc_ida, 0, 0, GFP_KERNEL);
+	if (id < 0) {
+		err = id;
 		goto exit;
 	}
 
-
-	mutex_lock(&idr_lock);
-	err = idr_get_new(&rtc_idr, NULL, &id);
-	mutex_unlock(&idr_lock);
-
-	if (err < 0)
-		goto exit;
-
-	id = id & MAX_ID_MASK;
-
 	rtc = kzalloc(sizeof(struct rtc_device), GFP_KERNEL);
 	if (rtc == NULL) {
 		err = -ENOMEM;
-		goto exit_idr;
+		goto exit_ida;
 	}
 
 	rtc->id = id;
@@ -222,10 +210,8 @@ struct rtc_device *rtc_device_register(const char *name, struct device *dev,
 exit_kfree:
 	kfree(rtc);
 
-exit_idr:
-	mutex_lock(&idr_lock);
-	idr_remove(&rtc_idr, id);
-	mutex_unlock(&idr_lock);
+exit_ida:
+	ida_simple_remove(&rtc_ida, id);
 
 exit:
 	dev_err(dev, "rtc core: unable to register %s, err = %d\n",
@@ -276,7 +262,7 @@ static void __exit rtc_exit(void)
 {
 	rtc_dev_exit();
 	class_destroy(rtc_class);
-	idr_destroy(&rtc_idr);
+	ida_destroy(&rtc_ida);
 }
 
 subsys_initcall(rtc_init);
@@ -34,6 +34,7 @@ enum ds_type {
 	ds_1388,
 	ds_3231,
 	m41t00,
+	mcp7941x,
 	rx_8025,
 	// rs5c372 too?  different address...
 };
@@ -43,6 +44,7 @@ enum ds_type {
 #define DS1307_REG_SECS		0x00	/* 00-59 */
 #	define DS1307_BIT_CH		0x80
 #	define DS1340_BIT_nEOSC		0x80
+#	define MCP7941X_BIT_ST		0x80
 #define DS1307_REG_MIN		0x01	/* 00-59 */
 #define DS1307_REG_HOUR		0x02	/* 00-23, or 1-12{am,pm} */
 #	define DS1307_BIT_12HR		0x40	/* in REG_HOUR */
@@ -50,6 +52,7 @@ enum ds_type {
 #	define DS1340_BIT_CENTURY_EN	0x80	/* in REG_HOUR */
 #	define DS1340_BIT_CENTURY	0x40	/* in REG_HOUR */
 #define DS1307_REG_WDAY		0x03	/* 01-07 */
+#	define MCP7941X_BIT_VBATEN	0x08
 #define DS1307_REG_MDAY		0x04	/* 01-31 */
 #define DS1307_REG_MONTH	0x05	/* 01-12 */
 #	define DS1337_BIT_CENTURY	0x80	/* in REG_MONTH */
@@ -137,6 +140,8 @@ static const struct chip_desc chips[] = {
 },
 [m41t00] = {
 },
+[mcp7941x] = {
+},
 [rx_8025] = {
 }, };
 
@@ -149,6 +154,7 @@ static const struct i2c_device_id ds1307_id[] = {
 	{ "ds1340", ds_1340 },
 	{ "ds3231", ds_3231 },
 	{ "m41t00", m41t00 },
+	{ "mcp7941x", mcp7941x },
 	{ "pt7c4338", ds_1307 },
 	{ "rx8025", rx_8025 },
 	{ }
@@ -365,6 +371,10 @@ static int ds1307_set_time(struct device *dev, struct rtc_time *t)
 		buf[DS1307_REG_HOUR] |= DS1340_BIT_CENTURY_EN
				| DS1340_BIT_CENTURY;
 		break;
+	case mcp7941x:
+		buf[DS1307_REG_SECS] |= MCP7941X_BIT_ST;
+		buf[DS1307_REG_WDAY] |= MCP7941X_BIT_VBATEN;
+		break;
 	default:
 		break;
 	}
@@ -808,6 +818,23 @@ read_rtc:
			i2c_smbus_write_byte_data(client, DS1340_REG_FLAG, 0);
			dev_warn(&client->dev, "SET TIME!\n");
 		}
 		break;
+	case mcp7941x:
+		/* make sure that the backup battery is enabled */
+		if (!(ds1307->regs[DS1307_REG_WDAY] & MCP7941X_BIT_VBATEN)) {
+			i2c_smbus_write_byte_data(client, DS1307_REG_WDAY,
+					ds1307->regs[DS1307_REG_WDAY]
+					| MCP7941X_BIT_VBATEN);
+		}
+
+		/* clock halted? turn it on, so clock can tick. */
+		if (!(tmp & MCP7941X_BIT_ST)) {
+			i2c_smbus_write_byte_data(client, DS1307_REG_SECS,
+					MCP7941X_BIT_ST);
+			dev_warn(&client->dev, "SET TIME!\n");
+			goto read_rtc;
+		}
+
+		break;
 	case rx_8025:
 	case ds_1337:
@@ -309,7 +309,7 @@ static irqreturn_t mc13xxx_rtc_reset_handler(int irq, void *dev)
 	return IRQ_HANDLED;
 }
 
-static int __devinit mc13xxx_rtc_probe(struct platform_device *pdev)
+static int __init mc13xxx_rtc_probe(struct platform_device *pdev)
 {
 	int ret;
 	struct mc13xxx_rtc *priv;
@@ -378,7 +378,7 @@ err_reset_irq_request:
 	return ret;
 }
 
-static int __devexit mc13xxx_rtc_remove(struct platform_device *pdev)
+static int __exit mc13xxx_rtc_remove(struct platform_device *pdev)
 {
 	struct mc13xxx_rtc *priv = platform_get_drvdata(pdev);
 
@@ -410,7 +410,7 @@ const struct platform_device_id mc13xxx_rtc_idtable[] = {
 
 static struct platform_driver mc13xxx_rtc_driver = {
 	.id_table = mc13xxx_rtc_idtable,
-	.remove = __devexit_p(mc13xxx_rtc_remove),
+	.remove = __exit_p(mc13xxx_rtc_remove),
 	.driver = {
 		.name = DRIVER_NAME,
 		.owner = THIS_MODULE,
@@ -114,43 +114,7 @@ static struct bin_attribute w1_ds2760_bin_attr = {
 	.read = w1_ds2760_read_bin,
 };
 
-static DEFINE_IDR(bat_idr);
-static DEFINE_MUTEX(bat_idr_lock);
-
-static int new_bat_id(void)
-{
-	int ret;
-
-	while (1) {
-		int id;
-
-		ret = idr_pre_get(&bat_idr, GFP_KERNEL);
-		if (ret == 0)
-			return -ENOMEM;
-
-		mutex_lock(&bat_idr_lock);
-		ret = idr_get_new(&bat_idr, NULL, &id);
-		mutex_unlock(&bat_idr_lock);
-
-		if (ret == 0) {
-			ret = id & MAX_ID_MASK;
-			break;
-		} else if (ret == -EAGAIN) {
-			continue;
-		} else {
-			break;
-		}
-	}
-
-	return ret;
-}
-
-static void release_bat_id(int id)
-{
-	mutex_lock(&bat_idr_lock);
-	idr_remove(&bat_idr, id);
-	mutex_unlock(&bat_idr_lock);
-}
+static DEFINE_IDA(bat_ida);
 
 static int w1_ds2760_add_slave(struct w1_slave *sl)
 {
@@ -158,7 +122,7 @@ static int w1_ds2760_add_slave(struct w1_slave *sl)
 	int id;
 	struct platform_device *pdev;
 
-	id = new_bat_id();
+	id = ida_simple_get(&bat_ida, 0, 0, GFP_KERNEL);
 	if (id < 0) {
 		ret = id;
 		goto noid;
@@ -187,7 +151,7 @@ bin_attr_failed:
 pdev_add_failed:
 	platform_device_unregister(pdev);
 pdev_alloc_failed:
-	release_bat_id(id);
+	ida_simple_remove(&bat_ida, id);
 noid:
 success:
 	return ret;
@@ -199,7 +163,7 @@ static void w1_ds2760_remove_slave(struct w1_slave *sl)
 	int id = pdev->id;
 
 	platform_device_unregister(pdev);
-	release_bat_id(id);
+	ida_simple_remove(&bat_ida, id);
 	sysfs_remove_bin_file(&sl->dev.kobj, &w1_ds2760_bin_attr);
 }
 
@@ -217,14 +181,14 @@ static int __init w1_ds2760_init(void)
 {
 	printk(KERN_INFO "1-Wire driver for the DS2760 battery monitor "
	       " chip  - (c) 2004-2005, Szabolcs Gyurko\n");
-	idr_init(&bat_idr);
+	ida_init(&bat_ida);
 	return w1_register_family(&w1_ds2760_family);
 }
 
 static void __exit w1_ds2760_exit(void)
 {
 	w1_unregister_family(&w1_ds2760_family);
-	idr_destroy(&bat_idr);
+	ida_destroy(&bat_ida);
 }
 
 EXPORT_SYMBOL(w1_ds2760_read);
@@ -26,20 +26,14 @@
 #include "../w1_family.h"
 #include "w1_ds2780.h"
 
-int w1_ds2780_io(struct device *dev, char *buf, int addr, size_t count,
-			int io)
+static int w1_ds2780_do_io(struct device *dev, char *buf, int addr,
+			size_t count, int io)
 {
 	struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
 
-	if (!dev)
-		return -ENODEV;
+	if (addr > DS2780_DATA_SIZE || addr < 0)
+		return 0;
 
-	mutex_lock(&sl->master->mutex);
-
-	if (addr > DS2780_DATA_SIZE || addr < 0) {
-		count = 0;
-		goto out;
-	}
 	count = min_t(int, count, DS2780_DATA_SIZE - addr);
 
 	if (w1_reset_select_slave(sl) == 0) {
@@ -47,7 +41,6 @@ int w1_ds2780_io(struct device *dev, char *buf, int addr, size_t count,
			w1_write_8(sl->master, W1_DS2780_WRITE_DATA);
			w1_write_8(sl->master, addr);
			w1_write_block(sl->master, buf, count);
-			/* XXX w1_write_block returns void, not n_written */
 		} else {
			w1_write_8(sl->master, W1_DS2780_READ_DATA);
			w1_write_8(sl->master, addr);
@@ -55,13 +48,42 @@ int w1_ds2780_io(struct device *dev, char *buf, int addr, size_t count,
 		}
 	}
 
-out:
-	mutex_unlock(&sl->master->mutex);
-
 	return count;
 }
 
+int w1_ds2780_io(struct device *dev, char *buf, int addr, size_t count,
+			int io)
+{
+	struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+	int ret;
+
+	if (!dev)
+		return -ENODEV;
+
+	mutex_lock(&sl->master->mutex);
+
+	ret = w1_ds2780_do_io(dev, buf, addr, count, io);
+
+	mutex_unlock(&sl->master->mutex);
+
+	return ret;
+}
 EXPORT_SYMBOL(w1_ds2780_io);
 
+int w1_ds2780_io_nolock(struct device *dev, char *buf, int addr, size_t count,
+			int io)
+{
+	int ret;
+
+	if (!dev)
+		return -ENODEV;
+
+	ret = w1_ds2780_do_io(dev, buf, addr, count, io);
+
+	return ret;
+}
+EXPORT_SYMBOL(w1_ds2780_io_nolock);
+
 int w1_ds2780_eeprom_cmd(struct device *dev, int addr, int cmd)
 {
 	struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
@@ -99,43 +121,7 @@ static struct bin_attribute w1_ds2780_bin_attr = {
 	.read = w1_ds2780_read_bin,
 };
 
-static DEFINE_IDR(bat_idr);
-static DEFINE_MUTEX(bat_idr_lock);
-
-static int new_bat_id(void)
-{
-	int ret;
-
-	while (1) {
-		int id;
-
-		ret = idr_pre_get(&bat_idr, GFP_KERNEL);
-		if (ret == 0)
-			return -ENOMEM;
-
-		mutex_lock(&bat_idr_lock);
-		ret = idr_get_new(&bat_idr, NULL, &id);
-		mutex_unlock(&bat_idr_lock);
-
-		if (ret == 0) {
-			ret = id & MAX_ID_MASK;
-			break;
-		} else if (ret == -EAGAIN) {
-			continue;
-		} else {
-			break;
-		}
-	}
-
-	return ret;
-}
-
-static void release_bat_id(int id)
-{
-	mutex_lock(&bat_idr_lock);
-	idr_remove(&bat_idr, id);
-	mutex_unlock(&bat_idr_lock);
-}
+static DEFINE_IDA(bat_ida);
 
 static int w1_ds2780_add_slave(struct w1_slave *sl)
 {
@@ -143,7 +129,7 @@ static int w1_ds2780_add_slave(struct w1_slave *sl)
 	int id;
 	struct platform_device *pdev;
 
-	id = new_bat_id();
+	id = ida_simple_get(&bat_ida, 0, 0, GFP_KERNEL);
 	if (id < 0) {
 		ret = id;
 		goto noid;
@@ -172,7 +158,7 @@ bin_attr_failed:
 pdev_add_failed:
 	platform_device_unregister(pdev);
 pdev_alloc_failed:
-	release_bat_id(id);
+	ida_simple_remove(&bat_ida, id);
 noid:
 	return ret;
 }
@@ -183,7 +169,7 @@ static void w1_ds2780_remove_slave(struct w1_slave *sl)
 	int id = pdev->id;
 
 	platform_device_unregister(pdev);
-	release_bat_id(id);
+	ida_simple_remove(&bat_ida, id);
 	sysfs_remove_bin_file(&sl->dev.kobj, &w1_ds2780_bin_attr);
 }
 
@@ -199,14 +185,14 @@ static struct w1_family w1_ds2780_family = {
 
 static int __init w1_ds2780_init(void)
 {
-	idr_init(&bat_idr);
+	ida_init(&bat_ida);
 	return w1_register_family(&w1_ds2780_family);
 }
 
 static void __exit w1_ds2780_exit(void)
 {
 	w1_unregister_family(&w1_ds2780_family);
-	idr_destroy(&bat_idr);
+	ida_destroy(&bat_ida);
 }
 
 module_init(w1_ds2780_init);
@@ -124,6 +124,8 @@
 
 extern int w1_ds2780_io(struct device *dev, char *buf, int addr, size_t count,
			int io);
+extern int w1_ds2780_io_nolock(struct device *dev, char *buf, int addr,
+			size_t count, int io);
 extern int w1_ds2780_eeprom_cmd(struct device *dev, int addr, int cmd);
 
 #endif /* !_W1_DS2780_H */
@@ -78,6 +78,7 @@ static struct w1_master * w1_alloc_dev(u32 id, int slave_count, int slave_ttl,
 	memcpy(&dev->dev, device, sizeof(struct device));
-	dev_set_name(&dev->dev, "w1_bus_master%u", dev->id);
+	snprintf(dev->name, sizeof(dev->name), "w1_bus_master%u", dev->id);
+	dev->dev.init_name = dev->name;
 
 	dev->driver = driver;
 
@@ -158,13 +158,18 @@ EXPORT_SYMBOL_GPL(w1_write_8);
 static u8 w1_read_bit(struct w1_master *dev)
 {
 	int result;
+	unsigned long flags;
 
+	/* sample timing is critical here */
+	local_irq_save(flags);
 	dev->bus_master->write_bit(dev->bus_master->data, 0);
 	w1_delay(6);
 	dev->bus_master->write_bit(dev->bus_master->data, 1);
 	w1_delay(9);
 
 	result = dev->bus_master->read_bit(dev->bus_master->data);
+	local_irq_restore(flags);
+
 	w1_delay(55);
 
 	return result & 0x1;
fs/aio.c
@@ -440,8 +440,6 @@ void exit_aio(struct mm_struct *mm)
 static struct kiocb *__aio_get_req(struct kioctx *ctx)
 {
 	struct kiocb *req = NULL;
-	struct aio_ring *ring;
-	int okay = 0;
 
 	req = kmem_cache_alloc(kiocb_cachep, GFP_KERNEL);
 	if (unlikely(!req))
@@ -459,39 +457,114 @@ static struct kiocb *__aio_get_req(struct kioctx *ctx)
 	INIT_LIST_HEAD(&req->ki_run_list);
 	req->ki_eventfd = NULL;
 
-	/* Check if the completion queue has enough free space to
-	 * accept an event from this io.
-	 */
-	spin_lock_irq(&ctx->ctx_lock);
-	ring = kmap_atomic(ctx->ring_info.ring_pages[0], KM_USER0);
-	if (ctx->reqs_active < aio_ring_avail(&ctx->ring_info, ring)) {
-		list_add(&req->ki_list, &ctx->active_reqs);
-		ctx->reqs_active++;
-		okay = 1;
-	}
-	kunmap_atomic(ring, KM_USER0);
-	spin_unlock_irq(&ctx->ctx_lock);
-
-	if (!okay) {
-		kmem_cache_free(kiocb_cachep, req);
-		req = NULL;
-	}
-
 	return req;
 }
 
-static inline struct kiocb *aio_get_req(struct kioctx *ctx)
+/*
+ * struct kiocb's are allocated in batches to reduce the number of
+ * times the ctx lock is acquired and released.
+ */
+#define KIOCB_BATCH_SIZE	32L
+struct kiocb_batch {
+	struct list_head head;
+	long count; /* number of requests left to allocate */
+};
+
+static void kiocb_batch_init(struct kiocb_batch *batch, long total)
+{
+	INIT_LIST_HEAD(&batch->head);
+	batch->count = total;
+}
+
+static void kiocb_batch_free(struct kiocb_batch *batch)
+{
+	struct kiocb *req, *n;
+
+	list_for_each_entry_safe(req, n, &batch->head, ki_batch) {
+		list_del(&req->ki_batch);
+		kmem_cache_free(kiocb_cachep, req);
+	}
+}
+
+/*
+ * Allocate a batch of kiocbs.  This avoids taking and dropping the
+ * context lock a lot during setup.
+ */
+static int kiocb_batch_refill(struct kioctx *ctx, struct kiocb_batch *batch)
+{
+	unsigned short allocated, to_alloc;
+	long avail;
+	bool called_fput = false;
+	struct kiocb *req, *n;
+	struct aio_ring *ring;
+
+	to_alloc = min(batch->count, KIOCB_BATCH_SIZE);
+	for (allocated = 0; allocated < to_alloc; allocated++) {
+		req = __aio_get_req(ctx);
+		if (!req)
+			/* allocation failed, go with what we've got */
+			break;
+		list_add(&req->ki_batch, &batch->head);
+	}
+
+	if (allocated == 0)
+		goto out;
+
+retry:
+	spin_lock_irq(&ctx->ctx_lock);
+	ring = kmap_atomic(ctx->ring_info.ring_pages[0]);
+
+	avail = aio_ring_avail(&ctx->ring_info, ring) - ctx->reqs_active;
+	BUG_ON(avail < 0);
+	if (avail == 0 && !called_fput) {
+		/*
+		 * Handle a potential starvation case.  It is possible that
+		 * we hold the last reference on a struct file, causing us
+		 * to delay the final fput to non-irq context.  In this case,
+		 * ctx->reqs_active is artificially high.  Calling the fput
+		 * routine here may free up a slot in the event completion
+		 * ring, allowing this allocation to succeed.
+		 */
+		kunmap_atomic(ring);
+		spin_unlock_irq(&ctx->ctx_lock);
+		aio_fput_routine(NULL);
+		called_fput = true;
+		goto retry;
+	}
+
+	if (avail < allocated) {
+		/* Trim back the number of requests. */
+		list_for_each_entry_safe(req, n, &batch->head, ki_batch) {
+			list_del(&req->ki_batch);
+			kmem_cache_free(kiocb_cachep, req);
+			if (--allocated <= avail)
+				break;
+		}
+	}
+
+	batch->count -= allocated;
+	list_for_each_entry(req, &batch->head, ki_batch) {
+		list_add(&req->ki_list, &ctx->active_reqs);
+		ctx->reqs_active++;
+	}
+
+	kunmap_atomic(ring);
+	spin_unlock_irq(&ctx->ctx_lock);
+
+out:
+	return allocated;
+}
+
+static inline struct kiocb *aio_get_req(struct kioctx *ctx,
+					struct kiocb_batch *batch)
 {
 	struct kiocb *req;
-	/* Handle a potential starvation case -- should be exceedingly rare as
-	 * requests will be stuck on fput_head only if the aio_fput_routine is
-	 * delayed and the requests were the last user of the struct file.
-	 */
-	req = __aio_get_req(ctx);
-	if (unlikely(NULL == req)) {
-		aio_fput_routine(NULL);
-		req = __aio_get_req(ctx);
-	}
+
+	if (list_empty(&batch->head))
+		if (kiocb_batch_refill(ctx, batch) == 0)
+			return NULL;
+	req = list_first_entry(&batch->head, struct kiocb, ki_batch);
+	list_del(&req->ki_batch);
 	return req;
 }
 
@@ -1515,7 +1588,8 @@ static ssize_t aio_setup_iocb(struct kiocb *kiocb, bool compat)
 }
 
 static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
-			 struct iocb *iocb, bool compat)
+			 struct iocb *iocb, struct kiocb_batch *batch,
+			 bool compat)
 {
 	struct kiocb *req;
 	struct file *file;
@@ -1541,7 +1615,7 @@ static int io_submit_one(struct kioctx *ctx, struct iocb __user *user_iocb,
 	if (unlikely(!file))
 		return -EBADF;
 
-	req = aio_get_req(ctx);		/* returns with 2 references to req */
+	req = aio_get_req(ctx, batch);	/* returns with 2 references to req */
 	if (unlikely(!req)) {
 		fput(file);
 		return -EAGAIN;
@@ -1621,8 +1695,9 @@ long do_io_submit(aio_context_t ctx_id, long nr,
 {
 	struct kioctx *ctx;
 	long ret = 0;
-	int i;
+	int i = 0;
 	struct blk_plug plug;
+	struct kiocb_batch batch;
 
 	if (unlikely(nr < 0))
 		return -EINVAL;
@@ -1639,6 +1714,8 @@ long do_io_submit(aio_context_t ctx_id, long nr,
 		return -EINVAL;
 	}
 
+	kiocb_batch_init(&batch, nr);
+
 	blk_start_plug(&plug);
 
 	/*
@@ -1659,12 +1736,13 @@ long do_io_submit(aio_context_t ctx_id, long nr,
			break;
 		}
 
-		ret = io_submit_one(ctx, user_iocb, &tmp, compat);
+		ret = io_submit_one(ctx, user_iocb, &tmp, &batch, compat);
 		if (ret)
			break;
 	}
 	blk_finish_plug(&plug);
 
+	kiocb_batch_free(&batch);
 	put_ioctx(ctx);
 	return i ? i : ret;
 }
@@ -795,7 +795,16 @@ static int load_elf_binary(struct linux_binprm *bprm, struct pt_regs *regs)
			 * might try to exec.  This is because the brk will
			 * follow the loader, and is not movable.  */
 #if defined(CONFIG_X86) || defined(CONFIG_ARM)
-			load_bias = 0;
+			/* Memory randomization might have been switched off
+			 * in runtime via sysctl.
+			 * If that is the case, retain the original non-zero
+			 * load_bias value in order to establish proper
+			 * non-randomized mappings.
+			 */
+			if (current->flags & PF_RANDOMIZE)
+				load_bias = 0;
+			else
+				load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
 #else
			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
 #endif
@@ -46,11 +46,26 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
 	case HFS_EXT_CNID:
 		hfs_inode_read_fork(tree->inode, mdb->drXTExtRec, mdb->drXTFlSize,
				    mdb->drXTFlSize, be32_to_cpu(mdb->drXTClpSiz));
+		if (HFS_I(tree->inode)->alloc_blocks >
+					HFS_I(tree->inode)->first_blocks) {
+			printk(KERN_ERR "hfs: invalid btree extent records\n");
+			unlock_new_inode(tree->inode);
+			goto free_inode;
+		}
+
 		tree->inode->i_mapping->a_ops = &hfs_btree_aops;
 		break;
 	case HFS_CAT_CNID:
 		hfs_inode_read_fork(tree->inode, mdb->drCTExtRec, mdb->drCTFlSize,
				    mdb->drCTFlSize, be32_to_cpu(mdb->drCTClpSiz));
+
+		if (!HFS_I(tree->inode)->first_blocks) {
+			printk(KERN_ERR "hfs: invalid btree extent records "
+								"(0 size).\n");
+			unlock_new_inode(tree->inode);
+			goto free_inode;
+		}
+
 		tree->inode->i_mapping->a_ops = &hfs_btree_aops;
 		break;
 	default:
@@ -59,11 +74,6 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
 	}
 	unlock_new_inode(tree->inode);
 
-	if (!HFS_I(tree->inode)->first_blocks) {
-		printk(KERN_ERR "hfs: invalid btree extent records (0 size).\n");
-		goto free_inode;
-	}
-
 	mapping = tree->inode->i_mapping;
 	page = read_mapping_page(mapping, 0, NULL);
 	if (IS_ERR(page))
@@ -20,6 +20,7 @@
 #include <linux/statfs.h>
 #include <linux/cdrom.h>
 #include <linux/parser.h>
+#include <linux/mpage.h>
 
 #include "isofs.h"
 #include "zisofs.h"
@@ -1148,7 +1149,13 @@ struct buffer_head *isofs_bread(struct inode *inode, sector_t block)
 
 static int isofs_readpage(struct file *file, struct page *page)
 {
-	return block_read_full_page(page, isofs_get_block);
+	return mpage_readpage(page, isofs_get_block);
+}
+
+static int isofs_readpages(struct file *file, struct address_space *mapping,
+			struct list_head *pages, unsigned nr_pages)
+{
+	return mpage_readpages(mapping, pages, nr_pages, isofs_get_block);
 }
 
 static sector_t _isofs_bmap(struct address_space *mapping, sector_t block)
@@ -1158,6 +1165,7 @@ static sector_t _isofs_bmap(struct address_space *mapping, sector_t block)
 
 static const struct address_space_operations isofs_aops = {
 	.readpage = isofs_readpage,
+	.readpages = isofs_readpages,
 	.bmap = _isofs_bmap
 };
 
fs/proc/base.c
@@ -1652,12 +1652,46 @@ out:
 	return error;
 }
 
+static int proc_pid_fd_link_getattr(struct vfsmount *mnt, struct dentry *dentry,
+		struct kstat *stat)
+{
+	struct inode *inode = dentry->d_inode;
+	struct task_struct *task = get_proc_task(inode);
+	int rc;
+
+	if (task == NULL)
+		return -ESRCH;
+
+	rc = -EACCES;
+	if (lock_trace(task))
+		goto out_task;
+
+	generic_fillattr(inode, stat);
+	unlock_trace(task);
+	rc = 0;
+out_task:
+	put_task_struct(task);
+	return rc;
+}
+
 static const struct inode_operations proc_pid_link_inode_operations = {
 	.readlink	= proc_pid_readlink,
 	.follow_link	= proc_pid_follow_link,
 	.setattr	= proc_setattr,
 };
 
+static const struct inode_operations proc_fdinfo_link_inode_operations = {
+	.setattr	= proc_setattr,
+	.getattr	= proc_pid_fd_link_getattr,
+};
+
+static const struct inode_operations proc_fd_link_inode_operations = {
+	.readlink	= proc_pid_readlink,
+	.follow_link	= proc_pid_follow_link,
+	.setattr	= proc_setattr,
+	.getattr	= proc_pid_fd_link_getattr,
+};
+
 
 /* building an inode */
 
@@ -1889,49 +1923,61 @@ out:
 
 static int proc_fd_info(struct inode *inode, struct path *path, char *info)
 {
-	struct task_struct *task = get_proc_task(inode);
-	struct files_struct *files = NULL;
+	struct task_struct *task;
+	struct files_struct *files;
 	struct file *file;
 	int fd = proc_fd(inode);
+	int rc;
 
-	if (task) {
-		files = get_files_struct(task);
-		put_task_struct(task);
-	}
-	if (files) {
-		/*
-		 * We are not taking a ref to the file structure, so we must
-		 * hold ->file_lock.
-		 */
-		spin_lock(&files->file_lock);
-		file = fcheck_files(files, fd);
-		if (file) {
-			unsigned int f_flags;
-			struct fdtable *fdt;
+	task = get_proc_task(inode);
+	if (!task)
+		return -ENOENT;
 
-			fdt = files_fdtable(files);
-			f_flags = file->f_flags & ~O_CLOEXEC;
-			if (FD_ISSET(fd, fdt->close_on_exec))
-				f_flags |= O_CLOEXEC;
+	rc = -EACCES;
+	if (lock_trace(task))
+		goto out_task;
 
-			if (path) {
-				*path = file->f_path;
-				path_get(&file->f_path);
-			}
-			if (info)
-				snprintf(info, PROC_FDINFO_MAX,
-					 "pos:\t%lli\n"
-					 "flags:\t0%o\n",
-					 (long long) file->f_pos,
-					 f_flags);
-			spin_unlock(&files->file_lock);
-			put_files_struct(files);
-			return 0;
+	rc = -ENOENT;
+	files = get_files_struct(task);
+	if (files == NULL)
+		goto out_unlock;
+
+	/*
+	 * We are not taking a ref to the file structure, so we must
+	 * hold ->file_lock.
+	 */
+	spin_lock(&files->file_lock);
+	file = fcheck_files(files, fd);
+	if (file) {
+		unsigned int f_flags;
+		struct fdtable *fdt;
+
+		fdt = files_fdtable(files);
+		f_flags = file->f_flags & ~O_CLOEXEC;
+		if (FD_ISSET(fd, fdt->close_on_exec))
+			f_flags |= O_CLOEXEC;
+
+		if (path) {
+			*path = file->f_path;
+			path_get(&file->f_path);
 		}
-	}
-	return -ENOENT;
+		if (info)
+			snprintf(info, PROC_FDINFO_MAX,
+				 "pos:\t%lli\n"
+				 "flags:\t0%o\n",
+				 (long long) file->f_pos,
+				 f_flags);
+		rc = 0;
+	} else
+		rc = -ENOENT;
+	spin_unlock(&files->file_lock);
+	put_files_struct(files);
+
+out_unlock:
+	unlock_trace(task);
+out_task:
+	put_task_struct(task);
+	return rc;
 }
 
 static int proc_fd_link(struct inode *inode, struct path *path)
@@ -2026,7 +2072,7 @@ static struct dentry *proc_fd_instantiate(struct inode *dir,
 	spin_unlock(&files->file_lock);
 	put_files_struct(files);
 
-	inode->i_op = &proc_pid_link_inode_operations;
+	inode->i_op = &proc_fd_link_inode_operations;
 	inode->i_size = 64;
 	ei->op.proc_get_link = proc_fd_link;
 	d_set_d_op(dentry, &tid_fd_dentry_operations);
@@ -2058,7 +2104,12 @@ static struct dentry *proc_lookupfd_common(struct inode *dir,
 	if (fd == ~0U)
 		goto out;
 
+	result = ERR_PTR(-EACCES);
+	if (lock_trace(task))
+		goto out;
+
 	result = instantiate(dir, dentry, task, &fd);
+	unlock_trace(task);
 out:
 	put_task_struct(task);
 out_no_task:
@@ -2078,23 +2129,28 @@ static int proc_readfd_common(struct file * filp, void * dirent,
 	retval = -ENOENT;
 	if (!p)
 		goto out_no_task;
+
+	retval = -EACCES;
+	if (lock_trace(p))
+		goto out;
+
 	retval = 0;
 
 	fd = filp->f_pos;
 	switch (fd) {
		case 0:
			if (filldir(dirent, ".", 1, 0, inode->i_ino, DT_DIR) < 0)
-				goto out;
+				goto out_unlock;
			filp->f_pos++;
		case 1:
			ino = parent_ino(dentry);
			if (filldir(dirent, "..", 2, 1, ino, DT_DIR) < 0)
-				goto out;
+				goto out_unlock;
			filp->f_pos++;
		default:
			files = get_files_struct(p);
			if (!files)
-				goto out;
+				goto out_unlock;
			rcu_read_lock();
			for (fd = filp->f_pos-2;
			     fd < files_fdtable(files)->max_fds;
@@ -2118,6 +2174,9 @@ static int proc_readfd_common(struct file * filp, void * dirent,
			rcu_read_unlock();
			put_files_struct(files);
 	}
+
+out_unlock:
+	unlock_trace(p);
 out:
 	put_task_struct(p);
 out_no_task:
@@ -2195,6 +2254,7 @@ static struct dentry *proc_fdinfo_instantiate(struct inode *dir,
 	ei->fd = fd;
 	inode->i_mode = S_IFREG | S_IRUSR;
 	inode->i_fop = &proc_fdinfo_file_operations;
+	inode->i_op = &proc_fdinfo_link_inode_operations;
 	d_set_d_op(dentry, &tid_fd_dentry_operations);
 	d_add(dentry, inode);
 	/* Close the race of the process dying before we return the dentry */
@@ -3,6 +3,7 @@
  */
 #include <linux/init.h>
 #include <linux/sysctl.h>
+#include <linux/poll.h>
 #include <linux/proc_fs.h>
 #include <linux/security.h>
 #include <linux/namei.h>
@@ -14,6 +15,15 @@ static const struct inode_operations proc_sys_inode_operations;
 static const struct file_operations proc_sys_dir_file_operations;
 static const struct inode_operations proc_sys_dir_operations;
 
+void proc_sys_poll_notify(struct ctl_table_poll *poll)
+{
+	if (!poll)
+		return;
+
+	atomic_inc(&poll->event);
+	wake_up_interruptible(&poll->wait);
+}
+
 static struct inode *proc_sys_make_inode(struct super_block *sb,
		struct ctl_table_header *head, struct ctl_table *table)
 {
@@ -176,6 +186,39 @@ static ssize_t proc_sys_write(struct file *filp, const char __user *buf,
 	return proc_sys_call_handler(filp, (void __user *)buf, count, ppos, 1);
 }
 
+static int proc_sys_open(struct inode *inode, struct file *filp)
+{
+	struct ctl_table *table = PROC_I(inode)->sysctl_entry;
+
+	if (table->poll)
+		filp->private_data = proc_sys_poll_event(table->poll);
+
+	return 0;
+}
+
+static unsigned int proc_sys_poll(struct file *filp, poll_table *wait)
+{
+	struct inode *inode = filp->f_path.dentry->d_inode;
+	struct ctl_table *table = PROC_I(inode)->sysctl_entry;
+	unsigned long event = (unsigned long)filp->private_data;
+	unsigned int ret = DEFAULT_POLLMASK;
+
+	if (!table->proc_handler)
+		goto out;
+
+	if (!table->poll)
+		goto out;
+
+	poll_wait(filp, &table->poll->wait, wait);
+
+	if (event != atomic_read(&table->poll->event)) {
+		filp->private_data = proc_sys_poll_event(table->poll);
+		ret = POLLIN | POLLRDNORM | POLLERR | POLLPRI;
+	}
+
+out:
+	return ret;
+}
+
 static int proc_sys_fill_cache(struct file *filp, void *dirent,
				filldir_t filldir,
@@ -364,12 +407,15 @@ static int proc_sys_getattr(struct vfsmount *mnt, struct dentry *dentry, struct
 }
 
 static const struct file_operations proc_sys_file_operations = {
+	.open		= proc_sys_open,
+	.poll		= proc_sys_poll,
 	.read		= proc_sys_read,
 	.write		= proc_sys_write,
 	.llseek		= default_llseek,
 };
 
 static const struct file_operations proc_sys_dir_file_operations = {
 	.read		= generic_read_dir,
 	.readdir	= proc_sys_readdir,
 	.llseek		= generic_file_llseek,
 };
@@ -23,7 +23,6 @@
  * caches is sufficient.
  */
 
-#include <linux/module.h>
 #include <linux/fs.h>
 #include <linux/pagemap.h>
 #include <linux/highmem.h>
@@ -288,14 +287,7 @@ static int __init init_ramfs_fs(void)
 {
 	return register_filesystem(&ramfs_fs_type);
 }
-
-static void __exit exit_ramfs_fs(void)
-{
-	unregister_filesystem(&ramfs_fs_type);
-}
 
 module_init(init_ramfs_fs)
-module_exit(exit_ramfs_fs)
 
 int __init init_rootfs(void)
 {
@@ -311,5 +303,3 @@ int __init init_rootfs(void)
 
 	return err;
 }
-
-MODULE_LICENSE("GPL");
@@ -117,6 +117,7 @@ struct kiocb {
 
 	struct list_head	ki_list;	/* the aio core uses this
 						 * for cancellation */
+	struct list_head	ki_batch;	/* batch allocation */
 
 	/*
 	 * If the aio_resfd field of the userspace iocb is not zero,
@@ -516,7 +516,7 @@ struct cgroup_subsys {
 	struct list_head sibling;
 	/* used when use_id == true */
 	struct idr idr;
-	spinlock_t id_lock;
+	rwlock_t id_lock;
 
 	/* should be defined only by modular subsystems */
 	struct module *module;
@@ -1,6 +1,7 @@
 #ifndef _LINUX_DMA_MAPPING_H
 #define _LINUX_DMA_MAPPING_H
 
+#include <linux/string.h>
 #include <linux/device.h>
 #include <linux/err.h>
 #include <linux/dma-attrs.h>
@@ -117,6 +118,15 @@ static inline int dma_set_seg_boundary(struct device *dev, unsigned long mask)
 	return -EIO;
 }
 
+static inline void *dma_zalloc_coherent(struct device *dev, size_t size,
+					dma_addr_t *dma_handle, gfp_t flag)
+{
+	void *ret = dma_alloc_coherent(dev, size, dma_handle, flag);
+	if (ret)
+		memset(ret, 0, size);
+	return ret;
+}
+
 #ifdef CONFIG_HAS_DMA
 static inline int dma_get_cache_alignment(void)
 {
@@ -30,11 +30,11 @@
 #define ANON_INODE_FS_MAGIC	0x09041934
 #define PSTOREFS_MAGIC		0x6165676C
 
-#define MINIX_SUPER_MAGIC	0x137F		/* original minix fs */
-#define MINIX_SUPER_MAGIC2	0x138F		/* minix fs, 30 char names */
-#define MINIX2_SUPER_MAGIC	0x2468		/* minix V2 fs */
-#define MINIX2_SUPER_MAGIC2	0x2478		/* minix V2 fs, 30 char names */
-#define MINIX3_SUPER_MAGIC	0x4d5a		/* minix V3 fs */
+#define MINIX_SUPER_MAGIC	0x137F		/* minix v1 fs, 14 char names */
+#define MINIX_SUPER_MAGIC2	0x138F		/* minix v1 fs, 30 char names */
+#define MINIX2_SUPER_MAGIC	0x2468		/* minix v2 fs, 14 char names */
+#define MINIX2_SUPER_MAGIC2	0x2478		/* minix v2 fs, 30 char names */
+#define MINIX3_SUPER_MAGIC	0x4d5a		/* minix v3 fs, 60 char names */
 
 #define MSDOS_SUPER_MAGIC	0x4d44		/* MD */
 #define NCP_SUPER_MAGIC		0x564c		/* Guess, what 0x564c is :-) */
@@ -78,8 +78,8 @@ extern void mem_cgroup_uncharge_end(void);
 extern void mem_cgroup_uncharge_page(struct page *page);
 extern void mem_cgroup_uncharge_cache_page(struct page *page);
 
-extern void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask);
-int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *mem);
+extern void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask);
+int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg);
 
 extern struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page);
 extern struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
@@ -88,26 +88,28 @@ extern struct mem_cgroup *try_get_mem_cgroup_from_mm(struct mm_struct *mm);
 static inline
 int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
 {
-	struct mem_cgroup *mem;
+	struct mem_cgroup *memcg;
 	rcu_read_lock();
-	mem = mem_cgroup_from_task(rcu_dereference((mm)->owner));
+	memcg = mem_cgroup_from_task(rcu_dereference((mm)->owner));
 	rcu_read_unlock();
-	return cgroup == mem;
+	return cgroup == memcg;
 }
 
-extern struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *mem);
+extern struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *memcg);
 
 extern int
 mem_cgroup_prepare_migration(struct page *page,
 	struct page *newpage, struct mem_cgroup **ptr, gfp_t gfp_mask);
-extern void mem_cgroup_end_migration(struct mem_cgroup *mem,
+extern void mem_cgroup_end_migration(struct mem_cgroup *memcg,
 	struct page *oldpage, struct page *newpage, bool migration_ok);
 
 /*
  * For memory reclaim.
  */
-int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg);
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg);
+int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
+				    struct zone *zone);
+int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg,
+				    struct zone *zone);
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
 					int nid, int zid, unsigned int lrumask);
@@ -148,7 +150,7 @@ static inline void mem_cgroup_dec_page_stat(struct page *page,
 unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
 						gfp_t gfp_mask,
 						unsigned long *total_scanned);
-u64 mem_cgroup_get_limit(struct mem_cgroup *mem);
+u64 mem_cgroup_get_limit(struct mem_cgroup *memcg);
 
 void mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -244,18 +246,20 @@ static inline struct mem_cgroup *try_get_mem_cgroup_from_mm(struct mm_struct *mm
 	return NULL;
 }
 
-static inline int mm_match_cgroup(struct mm_struct *mm, struct mem_cgroup *mem)
+static inline int mm_match_cgroup(struct mm_struct *mm,
+		struct mem_cgroup *memcg)
 {
 	return 1;
 }
 
 static inline int task_in_mem_cgroup(struct task_struct *task,
-				     const struct mem_cgroup *mem)
+				     const struct mem_cgroup *memcg)
 {
 	return 1;
 }
 
-static inline struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *mem)
+static inline struct cgroup_subsys_state
+		*mem_cgroup_css(struct mem_cgroup *memcg)
 {
 	return NULL;
 }
@@ -267,22 +271,22 @@ mem_cgroup_prepare_migration(struct page *page, struct page *newpage,
 	return 0;
 }
 
-static inline void mem_cgroup_end_migration(struct mem_cgroup *mem,
+static inline void mem_cgroup_end_migration(struct mem_cgroup *memcg,
 		struct page *oldpage, struct page *newpage, bool migration_ok)
 {
 }
 
-static inline int mem_cgroup_get_reclaim_priority(struct mem_cgroup *mem)
+static inline int mem_cgroup_get_reclaim_priority(struct mem_cgroup *memcg)
 {
 	return 0;
 }
 
-static inline void mem_cgroup_note_reclaim_priority(struct mem_cgroup *mem,
+static inline void mem_cgroup_note_reclaim_priority(struct mem_cgroup *memcg,
 						int priority)
 {
 }
 
-static inline void mem_cgroup_record_reclaim_priority(struct mem_cgroup *mem,
+static inline void mem_cgroup_record_reclaim_priority(struct mem_cgroup *memcg,
 						int priority)
 {
 }
@@ -293,13 +297,13 @@ static inline bool mem_cgroup_disabled(void)
 }
 
 static inline int
-mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg)
+mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
 {
 	return 1;
 }
 
 static inline int
-mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg)
+mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
 {
 	return 1;
 }
@@ -348,7 +352,7 @@ unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
 }
 
 static inline
-u64 mem_cgroup_get_limit(struct mem_cgroup *mem)
+u64 mem_cgroup_get_limit(struct mem_cgroup *memcg)
 {
 	return 0;
 }
@@ -356,36 +356,50 @@ static inline struct page *compound_head(struct page *page)
 	return page;
 }
 
+/*
+ * The atomic page->_mapcount, starts from -1: so that transitions
+ * both from it and to it can be tracked, using atomic_inc_and_test
+ * and atomic_add_negative(-1).
+ */
+static inline void reset_page_mapcount(struct page *page)
+{
+	atomic_set(&(page)->_mapcount, -1);
+}
+
+static inline int page_mapcount(struct page *page)
+{
+	return atomic_read(&(page)->_mapcount) + 1;
+}
+
 static inline int page_count(struct page *page)
 {
 	return atomic_read(&compound_head(page)->_count);
 }
 
-static inline void get_page(struct page *page)
+static inline void get_huge_page_tail(struct page *page)
 {
 	/*
-	 * Getting a normal page or the head of a compound page
-	 * requires to already have an elevated page->_count. Only if
-	 * we're getting a tail page, the elevated page->_count is
-	 * required only in the head page, so for tail pages the
-	 * bugcheck only verifies that the page->_count isn't
-	 * negative.
+	 * __split_huge_page_refcount() cannot run
+	 * from under us.
 	 */
-	VM_BUG_ON(atomic_read(&page->_count) < !PageTail(page));
-	atomic_inc(&page->_count);
+	VM_BUG_ON(page_mapcount(page) < 0);
+	VM_BUG_ON(atomic_read(&page->_count) != 0);
+	atomic_inc(&page->_mapcount);
+}
+
+extern bool __get_page_tail(struct page *page);
+
+static inline void get_page(struct page *page)
+{
+	if (unlikely(PageTail(page)))
+		if (likely(__get_page_tail(page)))
+			return;
 	/*
-	 * Getting a tail page will elevate both the head and tail
-	 * page->_count(s).
+	 * Getting a normal page or the head of a compound page
+	 * requires to already have an elevated page->_count.
 	 */
-	if (unlikely(PageTail(page))) {
-		/*
-		 * This is safe only because
-		 * __split_huge_page_refcount can't run under
-		 * get_page().
-		 */
-		VM_BUG_ON(atomic_read(&page->first_page->_count) <= 0);
-		atomic_inc(&page->first_page->_count);
-	}
 	VM_BUG_ON(atomic_read(&page->_count) <= 0);
 	atomic_inc(&page->_count);
 }
 
 static inline struct page *virt_to_head_page(const void *x)
@@ -803,21 +817,6 @@ static inline pgoff_t page_index(struct page *page)
 	return page->index;
 }
 
-/*
- * The atomic page->_mapcount, like _count, starts from -1:
- * so that transitions both from it and to it can be tracked,
- * using atomic_inc_and_test and atomic_add_negative(-1).
- */
-static inline void reset_page_mapcount(struct page *page)
-{
-	atomic_set(&(page)->_mapcount, -1);
-}
-
-static inline int page_mapcount(struct page *page)
-{
-	return atomic_read(&(page)->_mapcount) + 1;
-}
-
 /*
  * Return true if this page is mapped into pagetables.
  */
@@ -62,10 +62,23 @@ struct page {
 		struct {
 
 			union {
-				atomic_t _mapcount;	/* Count of ptes mapped in mms,
-							 * to show when page is mapped
-							 * & limit reverse map searches.
-							 */
+				/*
+				 * Count of ptes mapped in
+				 * mms, to show when page is
+				 * mapped & limit reverse map
+				 * searches.
+				 *
+				 * Used also for tail pages
+				 * refcounting instead of
+				 * _count. Tail pages cannot
+				 * be mapped and keeping the
+				 * tail page _count zero at
+				 * all times guarantees
+				 * get_page_unless_zero() will
+				 * never succeed on tail
+				 * pages.
+				 */
+				atomic_t _mapcount;
 
 				struct {
 					unsigned inuse:16;
include/linux/pps-gpio.h (new file, 32 lines)
@@ -0,0 +1,32 @@
+/*
+ * pps-gpio.h -- PPS client for GPIOs
+ *
+ *
+ * Copyright (C) 2011 James Nuss <jamesnuss@nanometrics.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _PPS_GPIO_H
+#define _PPS_GPIO_H
+
+struct pps_gpio_platform_data {
+	bool assert_falling_edge;
+	bool capture_clear;
+	unsigned int gpio_pin;
+	const char *gpio_label;
+};
+
+#endif
@@ -39,5 +39,6 @@
 #define RIO_DID_IDTCPS1616		0x0379
 #define RIO_DID_IDTVPS1616		0x0377
 #define RIO_DID_IDTSPS1616		0x0378
+#define RIO_DID_TSI721			0x80ab
 
 #endif				/* LINUX_RIO_IDS_H */
@@ -83,13 +83,6 @@ struct seminfo {
 
 struct task_struct;
 
-/* One semaphore structure for each semaphore in the system. */
-struct sem {
-	int	semval;		/* current value */
-	int	sempid;		/* pid of last operation */
-	struct list_head sem_pending; /* pending single-sop operations */
-};
-
 /* One sem_array data structure for each set of semaphores in the system. */
 struct sem_array {
 	struct kern_ipc_perm	____cacheline_aligned_in_smp
@@ -103,51 +96,21 @@ struct sem_array {
 	int			complex_count;	/* pending complex operations */
 };
 
-/* One queue for each sleeping process in the system. */
-struct sem_queue {
-	struct list_head	simple_list; /* queue of pending operations */
-	struct list_head	list;	 /* queue of pending operations */
-	struct task_struct	*sleeper; /* this process */
-	struct sem_undo		*undo;	 /* undo structure */
-	int			pid;	 /* process id of requesting process */
-	int			status;	 /* completion status of operation */
-	struct sembuf		*sops;	 /* array of pending operations */
-	int			nsops;	 /* number of operations */
-	int			alter;	 /* does the operation alter the array? */
-};
-
-/* Each task has a list of undo requests. They are executed automatically
- * when the process exits.
- */
-struct sem_undo {
-	struct list_head	list_proc;	/* per-process list: all undos from one process. */
-						/* rcu protected */
-	struct rcu_head		rcu;		/* rcu struct for sem_undo() */
-	struct sem_undo_list	*ulp;		/* sem_undo_list for the process */
-	struct list_head	list_id;	/* per semaphore array list: all undos for one array */
-	int			semid;		/* semaphore set identifier */
-	short			*semadj;	/* array of adjustments, one per semaphore */
-};
-
-/* sem_undo_list controls shared access to the list of sem_undo structures
- * that may be shared among all a CLONE_SYSVSEM task group.
- */
-struct sem_undo_list {
-	atomic_t	refcnt;
-	spinlock_t	lock;
-	struct list_head	list_proc;
-};
+#ifdef CONFIG_SYSVIPC
 
 struct sysv_sem {
 	struct sem_undo_list *undo_list;
 };
 
-#ifdef CONFIG_SYSVIPC
-
 extern int copy_semundo(unsigned long clone_flags, struct task_struct *tsk);
 extern void exit_sem(struct task_struct *tsk);
 
 #else
 
+struct sysv_sem {
+	/* empty */
+};
+
 static inline int copy_semundo(unsigned long clone_flags, struct task_struct *tsk)
 {
 	return 0;
@@ -931,6 +931,7 @@ enum
 #ifdef __KERNEL__
 #include <linux/list.h>
 #include <linux/rcupdate.h>
+#include <linux/wait.h>
 
 /* For the /proc/sys support */
 struct ctl_table;
@@ -1011,6 +1012,26 @@ extern int proc_do_large_bitmap(struct ctl_table *, int,
  * cover common cases.
  */
 
+/* Support for userspace poll() to watch for changes */
+struct ctl_table_poll {
+	atomic_t event;
+	wait_queue_head_t wait;
+};
+
+static inline void *proc_sys_poll_event(struct ctl_table_poll *poll)
+{
+	return (void *)(unsigned long)atomic_read(&poll->event);
+}
+
+void proc_sys_poll_notify(struct ctl_table_poll *poll);
+
+#define __CTL_TABLE_POLL_INITIALIZER(name) {				\
+	.event = ATOMIC_INIT(0),					\
+	.wait = __WAIT_QUEUE_HEAD_INITIALIZER(name.wait) }
+
+#define DEFINE_CTL_TABLE_POLL(name)					\
+	struct ctl_table_poll name = __CTL_TABLE_POLL_INITIALIZER(name)
+
 /* A sysctl table is an array of struct ctl_table: */
 struct ctl_table
 {
@@ -1021,6 +1042,7 @@ struct ctl_table
 	struct ctl_table *child;
 	struct ctl_table *parent;	/* Automatically set */
 	proc_handler *proc_handler;	/* Callback for text formatting */
+	struct ctl_table_poll *poll;
 	void *extra1;
 	void *extra2;
 };
@@ -37,6 +37,14 @@ struct new_utsname {
 #include <linux/nsproxy.h>
 #include <linux/err.h>
 
+enum uts_proc {
+	UTS_PROC_OSTYPE,
+	UTS_PROC_OSRELEASE,
+	UTS_PROC_VERSION,
+	UTS_PROC_HOSTNAME,
+	UTS_PROC_DOMAINNAME,
+};
+
 struct user_namespace;
 extern struct user_namespace init_user_ns;
 
@@ -80,6 +88,14 @@ static inline struct uts_namespace *copy_utsname(unsigned long flags,
 }
 #endif
 
+#ifdef CONFIG_PROC_SYSCTL
+extern void uts_proc_notify(enum uts_proc proc);
+#else
+static inline void uts_proc_notify(enum uts_proc proc)
+{
+}
+#endif
+
 static inline struct new_utsname *utsname(void)
 {
 	return &current->nsproxy->uts_ns->name;
@@ -947,7 +947,7 @@ config UID16
 config SYSCTL_SYSCALL
 	bool "Sysctl syscall support" if EXPERT
 	depends on PROC_SYSCTL
-	default y
+	default n
 	select SYSCTL
 	---help---
 	  sys_sysctl uses binary paths that have been found challenging
@@ -959,7 +959,7 @@ config SYSCTL_SYSCALL
 	  trying to save some space it is probably safe to disable this,
 	  making your kernel marginally smaller.
 
-	  If unsure say Y here.
+	  If unsure say N here.
 
 config KALLSYMS
 	bool "Load all symbols for debugging/ksymoops" if EXPERT
@@ -28,7 +28,7 @@ int __initdata rd_doload;	/* 1 = load RAM disk, 0 = don't load */
 int root_mountflags = MS_RDONLY | MS_SILENT;
 static char * __initdata root_device_name;
 static char __initdata saved_root_name[64];
-static int __initdata root_wait;
+static int root_wait;
 
 dev_t ROOT_DEV;
 
@@ -85,12 +85,15 @@ no_match:
 
 /**
  * devt_from_partuuid - looks up the dev_t of a partition by its UUID
- * @uuid:	36 byte char array containing a hex ascii UUID
+ * @uuid:	min 36 byte char array containing a hex ascii UUID
  *
  * The function will return the first partition which contains a matching
  * UUID value in its partition_meta_info struct.  This does not search
  * by filesystem UUIDs.
  *
+ * If @uuid is followed by a "/PARTNROFF=%d", then the number will be
+ * extracted and used as an offset from the partition identified by the UUID.
+ *
  * Returns the matching dev_t on success or 0 on failure.
  */
 static dev_t devt_from_partuuid(char *uuid_str)
@@ -98,6 +101,28 @@ static dev_t devt_from_partuuid(char *uuid_str)
 	dev_t res = 0;
 	struct device *dev = NULL;
 	u8 uuid[16];
+	struct gendisk *disk;
+	struct hd_struct *part;
+	int offset = 0;
+
+	if (strlen(uuid_str) < 36)
+		goto done;
+
+	/* Check for optional partition number offset attributes. */
+	if (uuid_str[36]) {
+		char c = 0;
+		/* Explicitly fail on poor PARTUUID syntax. */
+		if (sscanf(&uuid_str[36],
+			   "/PARTNROFF=%d%c", &offset, &c) != 1) {
+			printk(KERN_ERR "VFS: PARTUUID= is invalid.\n"
+			 "Expected PARTUUID=<valid-uuid-id>[/PARTNROFF=%%d]\n");
+			if (root_wait)
+				printk(KERN_ERR
+				     "Disabling rootwait; root= is invalid.\n");
+			root_wait = 0;
+			goto done;
+		}
+	}
 
 	/* Pack the requested UUID in the expected format. */
 	part_pack_uuid(uuid_str, uuid);
@@ -107,8 +132,21 @@ static dev_t devt_from_partuuid(char *uuid_str)
 		goto done;
 
 	res = dev->devt;
-	put_device(dev);
 
+	/* Attempt to find the partition by offset. */
+	if (!offset)
+		goto no_offset;
+
+	res = 0;
+	disk = part_to_disk(dev_to_part(dev));
+	part = disk_get_part(disk, dev_to_part(dev)->partno + offset);
+	if (part) {
+		res = part_devt(part);
+		put_device(part_to_dev(part));
+	}
+
+no_offset:
+	put_device(dev);
 done:
 	return res;
 }
@@ -126,6 +164,8 @@ done:
  *	   used when disk name of partitioned disk ends on a digit.
  *	6) PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF representing the
  *	   unique id of a partition if the partition table provides it.
+ *	7) PARTUUID=<UUID>/PARTNROFF=<int> to select a partition in relation to
+ *	   a partition with a known unique id.
  *
  *	If name doesn't have fall into the categories above, we return (0,0).
  *	block_class is used to check if something is a disk name. If the disk
@@ -143,8 +183,6 @@ dev_t name_to_dev_t(char *name)
 #ifdef CONFIG_BLOCK
 	if (strncmp(name, "PARTUUID=", 9) == 0) {
 		name += 9;
-		if (strlen(name) != 36)
-			goto fail;
 		res = devt_from_partuuid(name);
 		if (!res)
 			goto fail;
@@ -119,6 +119,20 @@ identify_ramdisk_image(int fd, int start_block, decompress_fn *decompressor)
 		goto done;
 	}
 
+	/*
+	 * Read 512 bytes further to check if cramfs is padded
+	 */
+	sys_lseek(fd, start_block * BLOCK_SIZE + 0x200, 0);
+	sys_read(fd, buf, size);
+
+	if (cramfsb->magic == CRAMFS_MAGIC) {
+		printk(KERN_NOTICE
+		       "RAMDISK: cramfs filesystem found at block %d\n",
+		       start_block);
+		nblocks = (cramfsb->size + BLOCK_SIZE - 1) >> BLOCK_SIZE_BITS;
+		goto done;
+	}
+
 	/*
 	 * Read block 1 to test for minix and ext2 superblock
 	 */
ipc/sem.c
@@ -90,6 +90,52 @@
 #include <asm/uaccess.h>
 #include "util.h"
 
+/* One semaphore structure for each semaphore in the system. */
+struct sem {
+	int	semval;		/* current value */
+	int	sempid;		/* pid of last operation */
+	struct list_head sem_pending; /* pending single-sop operations */
+};
+
+/* One queue for each sleeping process in the system. */
+struct sem_queue {
+	struct list_head	simple_list; /* queue of pending operations */
+	struct list_head	list;	 /* queue of pending operations */
+	struct task_struct	*sleeper; /* this process */
+	struct sem_undo		*undo;	 /* undo structure */
+	int			pid;	 /* process id of requesting process */
+	int			status;	 /* completion status of operation */
+	struct sembuf		*sops;	 /* array of pending operations */
+	int			nsops;	 /* number of operations */
+	int			alter;	 /* does *sops alter the array? */
+};
+
+/* Each task has a list of undo requests. They are executed automatically
+ * when the process exits.
+ */
+struct sem_undo {
+	struct list_head	list_proc;	/* per-process list: *
+						 * all undos from one process
+						 * rcu protected */
+	struct rcu_head		rcu;		/* rcu struct for sem_undo */
+	struct sem_undo_list	*ulp;		/* back ptr to sem_undo_list */
+	struct list_head	list_id;	/* per semaphore array list:
+						 * all undos for one array */
+	int			semid;		/* semaphore set identifier */
+	short			*semadj;	/* array of adjustments */
+						/* one per semaphore */
+};
+
+/* sem_undo_list controls shared access to the list of sem_undo structures
+ * that may be shared among all a CLONE_SYSVSEM task group.
+ */
+struct sem_undo_list {
+	atomic_t		refcnt;
+	spinlock_t		lock;
+	struct list_head	list_proc;
+};
+
+
 #define sem_ids(ns)	((ns)->ids[IPC_SEM_IDS])
 
 #define sem_unlock(sma)	ipc_unlock(&(sma)->sem_perm)
@@ -1426,6 +1472,8 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 
 	queue.status = -EINTR;
 	queue.sleeper = current;
+
+sleep_again:
 	current->state = TASK_INTERRUPTIBLE;
 	sem_unlock(sma);
 
@@ -1460,7 +1508,6 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	 * Array removed? If yes, leave without sem_unlock().
 	 */
 	if (IS_ERR(sma)) {
-		error = -EIDRM;
 		goto out_free;
 	}
 
@@ -1479,6 +1526,13 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
 	 */
 	if (timeout && jiffies_left == 0)
 		error = -EAGAIN;
+
+	/*
+	 * If the wakeup was spurious, just retry
+	 */
+	if (error == -EINTR && !signal_pending(current))
+		goto sleep_again;
+
 	unlink_queue(sma, &queue);
 
 out_unlock_free:
@@ -2027,7 +2027,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 		goto out_free_group_list;
 
 	/* prevent changes to the threadgroup list while we take a snapshot. */
-	rcu_read_lock();
+	read_lock(&tasklist_lock);
 	if (!thread_group_leader(leader)) {
 		/*
 		 * a race with de_thread from another thread's exec() may strip
@@ -2036,7 +2036,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 		 * throw this task away and try again (from cgroup_procs_write);
 		 * this is "double-double-toil-and-trouble-check locking".
 		 */
-		rcu_read_unlock();
+		read_unlock(&tasklist_lock);
 		retval = -EAGAIN;
 		goto out_free_group_list;
 	}
@@ -2057,7 +2057,7 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 	} while_each_thread(leader, tsk);
 	/* remember the number of threads in the array for later. */
 	group_size = i;
-	rcu_read_unlock();
+	read_unlock(&tasklist_lock);
 
 	/*
 	 * step 1: check that we can legitimately attach to the cgroup.
@@ -2135,14 +2135,17 @@ int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
 		oldcgrp = task_cgroup_from_root(tsk, root);
 		if (cgrp == oldcgrp)
 			continue;
-		/* attach each task to each subsystem */
-		for_each_subsys(root, ss) {
-			if (ss->attach_task)
-				ss->attach_task(cgrp, tsk);
-		}
 		/* if the thread is PF_EXITING, it can just get skipped. */
 		retval = cgroup_task_migrate(cgrp, oldcgrp, tsk, true);
-		BUG_ON(retval != 0 && retval != -ESRCH);
+		if (retval == 0) {
+			/* attach each task to each subsystem */
+			for_each_subsys(root, ss) {
+				if (ss->attach_task)
+					ss->attach_task(cgrp, tsk);
+			}
+		} else {
+			BUG_ON(retval != -ESRCH);
+		}
 	}
 	/* nothing is sensitive to fork() after this point. */
 
@@ -4880,9 +4883,9 @@ void free_css_id(struct cgroup_subsys *ss, struct cgroup_subsys_state *css)
 
 	rcu_assign_pointer(id->css, NULL);
 	rcu_assign_pointer(css->id, NULL);
-	spin_lock(&ss->id_lock);
+	write_lock(&ss->id_lock);
 	idr_remove(&ss->idr, id->id);
-	spin_unlock(&ss->id_lock);
+	write_unlock(&ss->id_lock);
 	kfree_rcu(id, rcu_head);
 }
 EXPORT_SYMBOL_GPL(free_css_id);
@@ -4908,10 +4911,10 @@ static struct css_id *get_new_cssid(struct cgroup_subsys *ss, int depth)
 		error = -ENOMEM;
 		goto err_out;
 	}
-	spin_lock(&ss->id_lock);
+	write_lock(&ss->id_lock);
 	/* Don't use 0. allocates an ID of 1-65535 */
 	error = idr_get_new_above(&ss->idr, newid, 1, &myid);
-	spin_unlock(&ss->id_lock);
+	write_unlock(&ss->id_lock);
 
 	/* Returns error when there are no free spaces for new ID.*/
 	if (error) {
@@ -4926,9 +4929,9 @@ static struct css_id *get_new_cssid(struct cgroup_subsys *ss, int depth)
 	return newid;
 remove_idr:
 	error = -ENOSPC;
-	spin_lock(&ss->id_lock);
+	write_lock(&ss->id_lock);
 	idr_remove(&ss->idr, myid);
-	spin_unlock(&ss->id_lock);
+	write_unlock(&ss->id_lock);
 err_out:
 	kfree(newid);
 	return ERR_PTR(error);
@@ -4940,7 +4943,7 @@ static int __init_or_module cgroup_init_idr(struct cgroup_subsys *ss,
 {
 	struct css_id *newid;
 
-	spin_lock_init(&ss->id_lock);
+	rwlock_init(&ss->id_lock);
 	idr_init(&ss->idr);
 
 	newid = get_new_cssid(ss, 0);
@@ -5035,9 +5038,9 @@ css_get_next(struct cgroup_subsys *ss, int id,
 		 * scan next entry from bitmap(tree), tmpid is updated after
 		 * idr_get_next().
 		 */
-		spin_lock(&ss->id_lock);
+		read_lock(&ss->id_lock);
 		tmp = idr_get_next(&ss->idr, &tmpid);
-		spin_unlock(&ss->id_lock);
+		read_unlock(&ss->id_lock);
 
 		if (!tmp)
 			break;
kernel/cpuset.c
@@ -949,6 +949,8 @@ static void cpuset_migrate_mm(struct mm_struct *mm, const nodemask_t *from,
 static void cpuset_change_task_nodemask(struct task_struct *tsk,
                                         nodemask_t *newmems)
 {
+        bool masks_disjoint = !nodes_intersects(*newmems, tsk->mems_allowed);
+
 repeat:
         /*
          * Allow tasks that have access to memory reserves because they have
@@ -963,7 +965,6 @@ repeat:
         nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
         mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1);

-
         /*
          * ensure checking ->mems_allowed_change_disable after setting all new
          * allowed nodes.
@@ -980,9 +981,11 @@ repeat:

         /*
          * Allocation of memory is very fast, we needn't sleep when waiting
-         * for the read-side.
+         * for the read-side. No wait is necessary, however, if at least one
+         * node remains unchanged.
          */
-        while (ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
+        while (masks_disjoint &&
+                        ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
                 task_unlock(tsk);
                 if (!task_curr(tsk))
                         yield();
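The new masks_disjoint test means cpuset_change_task_nodemask() only spins waiting for readers when the old and new mems_allowed share no node at all; if at least one node survives the change, allocations cannot fail mid-update, so the wait is skipped. On a fixed-width mask the test reduces to a bitwise AND (toy sketch; real nodemasks are multi-word bitmaps):

```c
#include <stdbool.h>
#include <stdint.h>

/* nodes_intersects() analogue on a 64-node mask: do the old and new
 * mems_allowed share at least one memory node? */
bool nodes_intersects64(uint64_t a, uint64_t b)
{
    return (a & b) != 0;
}

/* The patch's condition: wait for the read side only when the masks
 * are disjoint, i.e. no node is valid across the whole update. */
bool must_wait_for_readers(uint64_t old_mems, uint64_t new_mems)
{
    bool masks_disjoint = !nodes_intersects64(new_mems, old_mems);

    return masks_disjoint;
}
```

For example, moving a task from nodes {0,1} to nodes {1,2} keeps node 1 valid throughout, so no wait is needed; moving from {0,1} to {2,3} leaves a window with no valid node, so the writer must wait.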
kernel/sys.c
@@ -1286,6 +1286,7 @@ SYSCALL_DEFINE2(sethostname, char __user *, name, int, len)
                 memset(u->nodename + len, 0, sizeof(u->nodename) - len);
                 errno = 0;
         }
+        uts_proc_notify(UTS_PROC_HOSTNAME);
         up_write(&uts_sem);
         return errno;
 }
@@ -1336,6 +1337,7 @@ SYSCALL_DEFINE2(setdomainname, char __user *, name, int, len)
                 memset(u->domainname + len, 0, sizeof(u->domainname) - len);
                 errno = 0;
         }
+        uts_proc_notify(UTS_PROC_DOMAINNAME);
         up_write(&uts_sem);
         return errno;
 }
kernel/utsname_sysctl.c
@@ -13,6 +13,7 @@
 #include <linux/uts.h>
 #include <linux/utsname.h>
 #include <linux/sysctl.h>
+#include <linux/wait.h>

 static void *get_uts(ctl_table *table, int write)
 {
@@ -51,12 +52,19 @@ static int proc_do_uts_string(ctl_table *table, int write,
         uts_table.data = get_uts(table, write);
         r = proc_dostring(&uts_table,write,buffer,lenp, ppos);
         put_uts(table, write, uts_table.data);
+
+        if (write)
+                proc_sys_poll_notify(table->poll);
+
         return r;
 }
 #else
 #define proc_do_uts_string NULL
 #endif

+static DEFINE_CTL_TABLE_POLL(hostname_poll);
+static DEFINE_CTL_TABLE_POLL(domainname_poll);
+
 static struct ctl_table uts_kern_table[] = {
         {
                 .procname = "ostype",
@@ -85,6 +93,7 @@ static struct ctl_table uts_kern_table[] = {
                 .maxlen = sizeof(init_uts_ns.name.nodename),
                 .mode = 0644,
                 .proc_handler = proc_do_uts_string,
+                .poll = &hostname_poll,
         },
         {
                 .procname = "domainname",
@@ -92,6 +101,7 @@ static struct ctl_table uts_kern_table[] = {
                 .maxlen = sizeof(init_uts_ns.name.domainname),
                 .mode = 0644,
                 .proc_handler = proc_do_uts_string,
+                .poll = &domainname_poll,
         },
         {}
 };
@@ -105,6 +115,19 @@ static struct ctl_table uts_root_table[] = {
         {}
 };

+#ifdef CONFIG_PROC_SYSCTL
+/*
+ * Notify userspace about a change in a certain entry of uts_kern_table,
+ * identified by the parameter proc.
+ */
+void uts_proc_notify(enum uts_proc proc)
+{
+        struct ctl_table *table = &uts_kern_table[proc];
+
+        proc_sys_poll_notify(table->poll);
+}
+#endif
+
 static int __init utsname_sysctl_init(void)
 {
         register_sysctl_table(uts_root_table);
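The .poll hookup above is the consumer side of the new sysctl poll() support: a write to /proc/sys/kernel/hostname lands in proc_do_uts_string() (or sethostname() via uts_proc_notify()), which calls proc_sys_poll_notify() to bump the entry's event count and wake pollers. The underlying counter protocol can be sketched as follows (illustrative only, not the kernel's struct ctl_table_poll):

```c
#include <stdatomic.h>

/* Per-entry poll state, analogous to struct ctl_table_poll. */
struct poll_entry {
    atomic_uint event;   /* bumped on every write to the entry */
};

/* Writer side: what proc_sys_poll_notify(table->poll) does conceptually. */
void poll_notify(struct poll_entry *p)
{
    atomic_fetch_add(&p->event, 1);
    /* the kernel additionally wakes tasks sleeping in poll() here */
}

/* Poller side: has the entry changed since this poller last looked?
 * Each poller remembers the event count it saw at open/read time. */
int poll_changed(struct poll_entry *p, unsigned int *last_seen)
{
    unsigned int now = atomic_load(&p->event);

    if (now != *last_seen) {
        *last_seen = now;
        return 1;   /* sysctl reports this as POLLERR | POLLPRI */
    }
    return 0;
}
```

Because each poller keeps its own last_seen counter, a single notify is observed exactly once per poller, with no per-poller state on the writer side.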
11
lib/idr.c
@@ -944,6 +944,7 @@ int ida_simple_get(struct ida *ida, unsigned int start, unsigned int end,
 {
         int ret, id;
         unsigned int max;
+        unsigned long flags;

         BUG_ON((int)start < 0);
         BUG_ON((int)end < 0);
@@ -959,7 +960,7 @@ again:
         if (!ida_pre_get(ida, gfp_mask))
                 return -ENOMEM;

-        spin_lock(&simple_ida_lock);
+        spin_lock_irqsave(&simple_ida_lock, flags);
         ret = ida_get_new_above(ida, start, &id);
         if (!ret) {
                 if (id > max) {
@@ -969,7 +970,7 @@ again:
                         ret = id;
                 }
         }
-        spin_unlock(&simple_ida_lock);
+        spin_unlock_irqrestore(&simple_ida_lock, flags);

         if (unlikely(ret == -EAGAIN))
                 goto again;
@@ -985,10 +986,12 @@ EXPORT_SYMBOL(ida_simple_get);
  */
 void ida_simple_remove(struct ida *ida, unsigned int id)
 {
+        unsigned long flags;
+
         BUG_ON((int)id < 0);
-        spin_lock(&simple_ida_lock);
+        spin_lock_irqsave(&simple_ida_lock, flags);
         ida_remove(ida, id);
-        spin_unlock(&simple_ida_lock);
+        spin_unlock_irqrestore(&simple_ida_lock, flags);
 }
 EXPORT_SYMBOL(ida_simple_remove);
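The ida_simple_get()/ida_simple_remove() hunks only switch the locking to the _irqsave/_irqrestore variants, so the simple-IDA lock may be taken from contexts with interrupts disabled (the w1 battery drivers in this series allocate IDs from such paths). The allocation semantics themselves, the lowest free ID in [start, end) or -ENOSPC when the range is full, can be sketched as (toy userspace code, not the kernel implementation):

```c
#include <errno.h>
#include <stdbool.h>

#define IDA_MAX 128
static bool ida_used[IDA_MAX];

/* Hand out the lowest free ID in [start, end); end == 0 means
 * "no upper bound", as with ida_simple_get(). */
int simple_ida_get(unsigned int start, unsigned int end)
{
    unsigned int max = (end == 0 || end > IDA_MAX) ? IDA_MAX : end;

    for (unsigned int id = start; id < max; id++) {
        if (!ida_used[id]) {
            ida_used[id] = true;
            return (int)id;
        }
    }
    return -ENOSPC;   /* like ida_simple_get() when the range is exhausted */
}

/* Return an ID to the pool, like ida_simple_remove(). */
void simple_ida_remove(unsigned int id)
{
    ida_used[id] = false;
}
```

Because freed IDs become the lowest candidates again, IDs are recycled densely, which is what lets the w1/ds2760/ds2780 drivers use them directly as device numbers.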
mm/huge_memory.c
@@ -990,7 +990,7 @@ struct page *follow_trans_huge_pmd(struct mm_struct *mm,
         page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
         VM_BUG_ON(!PageCompound(page));
         if (flags & FOLL_GET)
-                get_page(page);
+                get_page_foll(page);

 out:
         return page;
@@ -1202,6 +1202,7 @@ static void __split_huge_page_refcount(struct page *page)
         unsigned long head_index = page->index;
         struct zone *zone = page_zone(page);
         int zonestat;
+        int tail_count = 0;

         /* prevent PageLRU to go away from under us, and freeze lru stats */
         spin_lock_irq(&zone->lru_lock);
@@ -1210,11 +1211,27 @@ static void __split_huge_page_refcount(struct page *page)
         for (i = 1; i < HPAGE_PMD_NR; i++) {
                 struct page *page_tail = page + i;

-                /* tail_page->_count cannot change */
-                atomic_sub(atomic_read(&page_tail->_count), &page->_count);
-                BUG_ON(page_count(page) <= 0);
-                atomic_add(page_mapcount(page) + 1, &page_tail->_count);
-                BUG_ON(atomic_read(&page_tail->_count) <= 0);
+                /* tail_page->_mapcount cannot change */
+                BUG_ON(page_mapcount(page_tail) < 0);
+                tail_count += page_mapcount(page_tail);
+                /* check for overflow */
+                BUG_ON(tail_count < 0);
+                BUG_ON(atomic_read(&page_tail->_count) != 0);
+                /*
+                 * tail_page->_count is zero and not changing from
+                 * under us. But get_page_unless_zero() may be running
+                 * from under us on the tail_page. If we used
+                 * atomic_set() below instead of atomic_add(), we
+                 * would then run atomic_set() concurrently with
+                 * get_page_unless_zero(), and atomic_set() is
+                 * implemented in C not using locked ops. spin_unlock
+                 * on x86 sometime uses locked ops because of PPro
+                 * errata 66, 92, so unless somebody can guarantee
+                 * atomic_set() here would be safe on all archs (and
+                 * not only on x86), it's safer to use atomic_add().
+                 */
+                atomic_add(page_mapcount(page) + page_mapcount(page_tail) + 1,
+                           &page_tail->_count);

                 /* after clearing PageTail the gup refcount can be released */
                 smp_mb();
@@ -1232,10 +1249,7 @@ static void __split_huge_page_refcount(struct page *page)
                                       (1L << PG_uptodate)));
                 page_tail->flags |= (1L << PG_dirty);

-                /*
-                 * 1) clear PageTail before overwriting first_page
-                 * 2) clear PageTail before clearing PageHead for VM_BUG_ON
-                 */
+                /* clear PageTail before overwriting first_page */
                 smp_wmb();

                 /*
@@ -1252,7 +1266,6 @@ static void __split_huge_page_refcount(struct page *page)
                  * status is achieved setting a reserved bit in the
                  * pmd, not by clearing the present bit.
                  */
-                BUG_ON(page_mapcount(page_tail));
                 page_tail->_mapcount = page->_mapcount;

                 BUG_ON(page_tail->mapping);
@@ -1269,6 +1282,8 @@ static void __split_huge_page_refcount(struct page *page)

                 lru_add_page_tail(zone, page, page_tail);
         }
+        atomic_sub(tail_count, &page->_count);
+        BUG_ON(atomic_read(&page->_count) <= 0);

         __dec_zone_page_state(page, NR_ANON_TRANSPARENT_HUGEPAGES);
         __mod_zone_page_state(zone, NR_ANON_PAGES, HPAGE_PMD_NR);
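One effect of the __split_huge_page_refcount() rework above is batching: instead of one atomic_sub() on the head page per tail page, the loop accumulates the per-tail references in the local tail_count and applies a single atomic_sub() after the loop. The batching idea in isolation, with C11 atomics and illustrative names:

```c
#include <stdatomic.h>

#define NTAIL 8   /* stand-in for HPAGE_PMD_NR - 1 */

/* One atomic op on the head counter per tail (the old scheme). */
void release_per_tail(atomic_int *head_count, const int tail_refs[NTAIL])
{
    for (int i = 0; i < NTAIL; i++)
        atomic_fetch_sub(head_count, tail_refs[i]);
}

/* Accumulate locally first, then a single atomic_sub
 * (the patch's tail_count scheme). */
void release_batched(atomic_int *head_count, const int tail_refs[NTAIL])
{
    int tail_count = 0;

    for (int i = 0; i < NTAIL; i++)
        tail_count += tail_refs[i];    /* plain adds, no bus locking */
    atomic_fetch_sub(head_count, tail_count);
}
```

Both end in the same final count, but the batched form issues one locked operation instead of NTAIL, and for a 2MB THP the real loop runs 511 times, so the saving is not cosmetic.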
mm/internal.h
@@ -37,6 +37,52 @@ static inline void __put_page(struct page *page)
         atomic_dec(&page->_count);
 }

+static inline void __get_page_tail_foll(struct page *page,
+                                        bool get_page_head)
+{
+        /*
+         * If we're getting a tail page, the elevated page->_count is
+         * required only in the head page and we will elevate the head
+         * page->_count and tail page->_mapcount.
+         *
+         * We elevate page_tail->_mapcount for tail pages to force
+         * page_tail->_count to be zero at all times to avoid getting
+         * false positives from get_page_unless_zero() with
+         * speculative page access (like in
+         * page_cache_get_speculative()) on tail pages.
+         */
+        VM_BUG_ON(atomic_read(&page->first_page->_count) <= 0);
+        VM_BUG_ON(atomic_read(&page->_count) != 0);
+        VM_BUG_ON(page_mapcount(page) < 0);
+        if (get_page_head)
+                atomic_inc(&page->first_page->_count);
+        atomic_inc(&page->_mapcount);
+}
+
+/*
+ * This is meant to be called as the FOLL_GET operation of
+ * follow_page() and it must be called while holding the proper PT
+ * lock while the pte (or pmd_trans_huge) is still mapping the page.
+ */
+static inline void get_page_foll(struct page *page)
+{
+        if (unlikely(PageTail(page)))
+                /*
+                 * This is safe only because
+                 * __split_huge_page_refcount() can't run under
+                 * get_page_foll() because we hold the proper PT lock.
+                 */
+                __get_page_tail_foll(page, true);
+        else {
+                /*
+                 * Getting a normal page or the head of a compound page
+                 * requires to already have an elevated page->_count.
+                 */
+                VM_BUG_ON(atomic_read(&page->_count) <= 0);
+                atomic_inc(&page->_count);
+        }
+}
+
 extern unsigned long highest_memmap_pfn;

 /*
1008
mm/memcontrol.c
File diff suppressed because it is too large
mm/memory.c
@@ -1503,7 +1503,7 @@ split_fallthrough:
         }

         if (flags & FOLL_GET)
-                get_page(page);
+                get_page_foll(page);
         if (flags & FOLL_TOUCH) {
                 if ((flags & FOLL_WRITE) &&
                     !pte_dirty(pte) && !PageDirty(page))
mm/page_cgroup.c
@@ -133,10 +133,13 @@ struct page *lookup_cgroup_page(struct page_cgroup *pc)
 static void *__meminit alloc_page_cgroup(size_t size, int nid)
 {
         void *addr = NULL;
+        gfp_t flags = GFP_KERNEL | __GFP_NOWARN;

-        addr = alloc_pages_exact_nid(nid, size, GFP_KERNEL | __GFP_NOWARN);
-        if (addr)
+        addr = alloc_pages_exact_nid(nid, size, flags);
+        if (addr) {
+                kmemleak_alloc(addr, size, 1, flags);
                 return addr;
+        }

         if (node_state(nid, N_HIGH_MEMORY))
                 addr = vmalloc_node(size, nid);
@@ -357,7 +360,7 @@ struct swap_cgroup_ctrl {
         spinlock_t lock;
 };

-struct swap_cgroup_ctrl swap_cgroup_ctrl[MAX_SWAPFILES];
+static struct swap_cgroup_ctrl swap_cgroup_ctrl[MAX_SWAPFILES];

 struct swap_cgroup {
         unsigned short id;
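alloc_page_cgroup() keeps its two-step strategy, try the cheap exact-node page allocation first and fall back to vmalloc_node(), while hoisting the gfp flags into a local and informing kmemleak of the fast-path allocation. The try-then-fallback shape in a toy userspace analogue (illustrative only; userspace has no kmemleak, NUMA nodes, or gfp flags):

```c
#include <stdlib.h>

/* Try a page-aligned allocation first; fall back to plain malloc(),
 * mirroring the alloc_pages_exact_nid() -> vmalloc_node() fallback.
 * used_fallback reports which path succeeded. */
void *alloc_with_fallback(size_t size, int *used_fallback)
{
    /* aligned_alloc() requires the size to be a multiple of the
     * alignment, so round up to a 4 KiB "page". */
    size_t rounded = (size + 4095) & ~(size_t)4095;
    void *addr = aligned_alloc(4096, rounded);

    if (addr) {
        *used_fallback = 0;
        return addr;          /* fast path, like alloc_pages_exact_nid() */
    }
    *used_fallback = 1;
    return malloc(size);      /* slower fallback, like vmalloc_node() */
}
```

The kernel version prefers the first path because physically contiguous, node-local pages are cheaper to map and access; vmalloc space is the safety net when contiguous memory is scarce.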
83
mm/swap.c
@@ -78,39 +78,22 @@ static void put_compound_page(struct page *page)
 {
         if (unlikely(PageTail(page))) {
                 /* __split_huge_page_refcount can run under us */
-                struct page *page_head = page->first_page;
-                smp_rmb();
-                /*
-                 * If PageTail is still set after smp_rmb() we can be sure
-                 * that the page->first_page we read wasn't a dangling pointer.
-                 * See __split_huge_page_refcount() smp_wmb().
-                 */
-                if (likely(PageTail(page) && get_page_unless_zero(page_head))) {
+                struct page *page_head = compound_trans_head(page);
+
+                if (likely(page != page_head &&
+                           get_page_unless_zero(page_head))) {
                         unsigned long flags;
                         /*
-                         * Verify that our page_head wasn't converted
-                         * to a a regular page before we got a
-                         * reference on it.
+                         * page_head wasn't a dangling pointer but it
+                         * may not be a head page anymore by the time
+                         * we obtain the lock. That is ok as long as it
+                         * can't be freed from under us.
                          */
-                        if (unlikely(!PageHead(page_head))) {
-                                /* PageHead is cleared after PageTail */
-                                smp_rmb();
-                                VM_BUG_ON(PageTail(page));
-                                goto out_put_head;
-                        }
-                        /*
-                         * Only run compound_lock on a valid PageHead,
-                         * after having it pinned with
-                         * get_page_unless_zero() above.
-                         */
-                        smp_mb();
-                        /* page_head wasn't a dangling pointer */
                         flags = compound_lock_irqsave(page_head);
                         if (unlikely(!PageTail(page))) {
                                 /* __split_huge_page_refcount run before us */
                                 compound_unlock_irqrestore(page_head, flags);
                                 VM_BUG_ON(PageHead(page_head));
-out_put_head:
                                 if (put_page_testzero(page_head))
                                         __put_single_page(page_head);
 out_put_single:
@@ -121,16 +104,17 @@ static void put_compound_page(struct page *page)
                         VM_BUG_ON(page_head != page->first_page);
                         /*
                          * We can release the refcount taken by
-                         * get_page_unless_zero now that
-                         * split_huge_page_refcount is blocked on the
-                         * compound_lock.
+                         * get_page_unless_zero() now that
+                         * __split_huge_page_refcount() is blocked on
+                         * the compound_lock.
                          */
                         if (put_page_testzero(page_head))
                                 VM_BUG_ON(1);
                         /* __split_huge_page_refcount will wait now */
-                        VM_BUG_ON(atomic_read(&page->_count) <= 0);
-                        atomic_dec(&page->_count);
+                        VM_BUG_ON(page_mapcount(page) <= 0);
+                        atomic_dec(&page->_mapcount);
                         VM_BUG_ON(atomic_read(&page_head->_count) <= 0);
+                        VM_BUG_ON(atomic_read(&page->_count) != 0);
                         compound_unlock_irqrestore(page_head, flags);
                         if (put_page_testzero(page_head)) {
                                 if (PageHead(page_head))
@@ -160,6 +144,45 @@ void put_page(struct page *page)
 }
 EXPORT_SYMBOL(put_page);

+/*
+ * This function is exported but must not be called by anything other
+ * than get_page(). It implements the slow path of get_page().
+ */
+bool __get_page_tail(struct page *page)
+{
+        /*
+         * This takes care of get_page() if run on a tail page
+         * returned by one of the get_user_pages/follow_page variants.
+         * get_user_pages/follow_page itself doesn't need the compound
+         * lock because it runs __get_page_tail_foll() under the
+         * proper PT lock that already serializes against
+         * split_huge_page().
+         */
+        unsigned long flags;
+        bool got = false;
+        struct page *page_head = compound_trans_head(page);
+
+        if (likely(page != page_head && get_page_unless_zero(page_head))) {
+                /*
+                 * page_head wasn't a dangling pointer but it
+                 * may not be a head page anymore by the time
+                 * we obtain the lock. That is ok as long as it
+                 * can't be freed from under us.
+                 */
+                flags = compound_lock_irqsave(page_head);
+                /* here __split_huge_page_refcount won't run anymore */
+                if (likely(PageTail(page))) {
+                        __get_page_tail_foll(page, false);
+                        got = true;
+                }
+                compound_unlock_irqrestore(page_head, flags);
+                if (unlikely(!got))
+                        put_page(page_head);
+        }
+        return got;
+}
+EXPORT_SYMBOL(__get_page_tail);
+
 /**
  * put_pages_list() - release a list of pages
  * @pages: list of pages threaded on page->lru
mm/vmscan.c
@@ -1767,7 +1767,7 @@ static int inactive_anon_is_low(struct zone *zone, struct scan_control *sc)
         if (scanning_global_lru(sc))
                 low = inactive_anon_is_low_global(zone);
         else
-                low = mem_cgroup_inactive_anon_is_low(sc->mem_cgroup);
+                low = mem_cgroup_inactive_anon_is_low(sc->mem_cgroup, zone);
         return low;
 }
 #else
@@ -1810,7 +1810,7 @@ static int inactive_file_is_low(struct zone *zone, struct scan_control *sc)
         if (scanning_global_lru(sc))
                 low = inactive_file_is_low_global(zone);
         else
-                low = mem_cgroup_inactive_file_is_low(sc->mem_cgroup);
+                low = mem_cgroup_inactive_file_is_low(sc->mem_cgroup, zone);
         return low;
 }