Char/Misc driver patches for 4.1-rc1

Here's the big char/misc driver patchset for 4.1-rc1.
 
 Lots of different driver subsystem updates here, nothing major, full
 details are in the shortlog below.
 
 All of this has been in linux-next for a while.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iEYEABECAAYFAlU2IMEACgkQMUfUDdst+yloDQCfbyIRL23WVAn9ckQse/y8gbjB
 OT4AoKTJbwndDP9Kb/lrj2tjd9QjNVrC
 =xhen
 -----END PGP SIGNATURE-----

Merge tag 'char-misc-4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here's the big char/misc driver patchset for 4.1-rc1.

  Lots of different driver subsystem updates here, nothing major, full
  details are in the shortlog.

  All of this has been in linux-next for a while"

* tag 'char-misc-4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (133 commits)
  mei: trace: remove unused TRACE_SYSTEM_STRING
  DTS: ARM: OMAP3-N900: Add lis3lv02d support
  Documentation: DT: lis302: update wakeup binding
  lis3lv02d: DT: add wakeup unit 2 and wakeup threshold
  lis3lv02d: DT: use s32 to support negative values
  Drivers: hv: hv_balloon: correctly handle num_pages>INT_MAX case
  Drivers: hv: hv_balloon: correctly handle val.freeram<num_pages case
  mei: replace check for connection instead of transitioning
  mei: use mei_cl_is_connected consistently
  mei: fix mei_poll operation
  hv_vmbus: Add gradually increased delay for retries in vmbus_post_msg()
  Drivers: hv: hv_balloon: survive ballooning request with num_pages=0
  Drivers: hv: hv_balloon: eliminate jumps in piecewise linear floor function
  Drivers: hv: hv_balloon: do not online pages in offline blocks
  hv: remove the per-channel workqueue
  hv: don't schedule new works in vmbus_onoffer()/vmbus_onoffer_rescind()
  hv: run non-blocking message handlers in the dispatch tasklet
  coresight: moving to new "hwtracing" directory
  coresight-tmc: Adding a status interface to sysfs
  coresight: remove the unnecessary configuration coresight-default-sink
  ...
Linus Torvalds 2015-04-21 09:42:58 -07:00
commit 1fc149933f
119 changed files with 4045 additions and 1494 deletions

@@ -61,7 +61,6 @@ Example:
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0 0x20010000 0 0x1000>;
coresight-default-sink;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {

@@ -0,0 +1,18 @@
USB GPIO Extcon device
This is a virtual device used to generate USB cable states from the USB ID pin
connected to a GPIO pin.
Required properties:
- compatible: Should be "linux,extcon-usb-gpio"
- id-gpio: gpio for USB ID pin. See gpio binding.
Example: extcon-usb-gpio node in dra7-evm.dts, as listed below:
extcon_usb1 {
compatible = "linux,extcon-usb-gpio";
id-gpio = <&gpio6 1 GPIO_ACTIVE_HIGH>;
};
&omap_dwc3_1 {
extcon = <&extcon_usb1>;
};

@@ -0,0 +1,75 @@
* Ingenic JZ4780 NAND/external memory controller (NEMC)
This file documents the device tree bindings for the NEMC external memory
controller in the Ingenic JZ4780.
Required properties:
- compatible: Should be set to one of:
"ingenic,jz4780-nemc" (JZ4780)
- reg: Should specify the NEMC controller registers location and length.
- clocks: Clock for the NEMC controller.
- #address-cells: Must be set to 2.
- #size-cells: Must be set to 1.
- ranges: A set of ranges for each bank describing the physical memory layout.
Each should specify the following 4 integer values:
<cs number> 0 <physical address of mapping> <size of mapping>
Each child of the NEMC node describes a device connected to the NEMC.
Required child node properties:
- reg: Should contain at least one register specifier, given in the following
format:
<cs number> <offset> <size>
Multiple registers can be specified across multiple banks. This is needed,
for example, for packaged NAND devices with multiple dies. Such devices
should be grouped into a single node.
Optional child node properties:
- ingenic,nemc-bus-width: Specifies the bus width in bits. Defaults to 8 bits.
- ingenic,nemc-tAS: Address setup time in nanoseconds.
- ingenic,nemc-tAH: Address hold time in nanoseconds.
- ingenic,nemc-tBP: Burst pitch time in nanoseconds.
- ingenic,nemc-tAW: Access wait time in nanoseconds.
- ingenic,nemc-tSTRV: Static memory recovery time in nanoseconds.
If a child node references multiple banks in its "reg" property, the same value
for all optional parameters will be configured for all banks. If any optional
parameters are omitted, they will be left unchanged from whatever they are
configured to when the NEMC device is probed (which may be the reset value as
given in the hardware reference manual, or a value configured by the boot
loader).
Example (NEMC node with a NAND child device attached at CS1):
nemc: nemc@13410000 {
compatible = "ingenic,jz4780-nemc";
reg = <0x13410000 0x10000>;
#address-cells = <2>;
#size-cells = <1>;
ranges = <1 0 0x1b000000 0x1000000
2 0 0x1a000000 0x1000000
3 0 0x19000000 0x1000000
4 0 0x18000000 0x1000000
5 0 0x17000000 0x1000000
6 0 0x16000000 0x1000000>;
clocks = <&cgu JZ4780_CLK_NEMC>;
nand: nand@1 {
compatible = "ingenic,jz4780-nand";
reg = <1 0 0x1000000>;
ingenic,nemc-tAS = <10>;
ingenic,nemc-tAH = <5>;
ingenic,nemc-tBP = <10>;
ingenic,nemc-tAW = <15>;
ingenic,nemc-tSTRV = <100>;
...
};
};

@@ -46,11 +46,18 @@ Optional properties for all bus drivers:
interrupt 2
- st,wakeup-{x,y,z}-{lo,hi}: set wakeup condition on x/y/z axis for
upper/lower limit
- st,wakeup-threshold: set wakeup threshold
- st,wakeup2-{x,y,z}-{lo,hi}: set wakeup condition on x/y/z axis for
upper/lower limit for second wakeup
engine.
- st,wakeup2-threshold: set wakeup threshold for second wakeup
engine.
- st,highpass-cutoff-hz=: 1, 2, 4 or 8 for 1Hz, 2Hz, 4Hz or 8Hz of
highpass cut-off frequency
- st,hipass{1,2}-disable: disable highpass 1/2.
- st,default-rate=: set the default rate
- st,axis-{x,y,z}=: set the axis to map to the three coordinates
- st,axis-{x,y,z}=: set the axis to map to the three coordinates.
Negative values can be used for inverted axis.
- st,{min,max}-limit-{x,y,z} set the min/max limits for x/y/z axis
(used by self-test)

@@ -1,6 +1,6 @@
Qualcomm SPMI Controller (PMIC Arbiter)
The SPMI PMIC Arbiter is found on the Snapdragon 800 Series. It is an SPMI
The SPMI PMIC Arbiter is found on Snapdragon chipsets. It is an SPMI
controller with wrapping arbitration logic to allow for multiple on-chip
devices to control a single SPMI master.
@@ -19,6 +19,10 @@ Required properties:
"core" - core registers
"intr" - interrupt controller registers
"cnfg" - configuration registers
Registers used only for V2 PMIC Arbiter:
"chnls" - tx-channel per virtual slave registers.
"obsrvr" - rx-channel (called observer) per virtual slave registers.
- reg : address + size pairs describing the PMIC arb register sets; order must
correspond with the order of entries in reg-names
- #address-cells : must be set to 2

@@ -276,6 +276,7 @@ IOMAP
devm_ioport_unmap()
devm_ioremap()
devm_ioremap_nocache()
devm_ioremap_wc()
devm_ioremap_resource() : checks resource, requests memory region, ioremaps
devm_iounmap()
pcim_iomap()
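
The hunk above adds devm_ioremap_wc() to the managed-interface list. As a
hedged sketch of how a driver would use it (the driver name and resource
layout are hypothetical, not from this pull):

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -ENODEV;

	/* Write-combined mapping, released automatically by devres
	 * when the device is unbound. */
	base = devm_ioremap_wc(&pdev->dev, res->start, resource_size(res));
	if (!base)
		return -ENOMEM;

	return 0;
}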

@@ -14,7 +14,7 @@ document is concerned with the latter.
HW assisted tracing is becoming increasingly useful when dealing with systems
that have many SoCs and other components like GPU and DMA engines. ARM has
developed a HW assisted tracing solution by means of different components, each
being added to a design at systhesis time to cater to specific tracing needs.
being added to a design at synthesis time to cater to specific tracing needs.
Components are generally categorised as sources, links and sinks and are
(usually) discovered using the AMBA bus.

@@ -958,7 +958,7 @@ ARM/CORESIGHT FRAMEWORK AND DRIVERS
M: Mathieu Poirier <mathieu.poirier@linaro.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/coresight/*
F: drivers/hwtracing/coresight/*
F: Documentation/trace/coresight.txt
F: Documentation/devicetree/bindings/arm/coresight.txt
F: Documentation/ABI/testing/sysfs-bus-coresight-devices-*
@@ -1828,7 +1828,7 @@ S: Supported
F: drivers/spi/spi-atmel.*
ATMEL SSC DRIVER
M: Bo Shen <voice.shen@atmel.com>
M: Nicolas Ferre <nicolas.ferre@atmel.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: drivers/misc/atmel-ssc.c

@@ -1610,59 +1610,6 @@ config DEBUG_SET_MODULE_RONX
against certain classes of kernel exploits.
If in doubt, say "N".
menuconfig CORESIGHT
bool "CoreSight Tracing Support"
select ARM_AMBA
help
This framework provides a kernel interface for the CoreSight debug
and trace drivers to register themselves with. It's intended to build
a topological view of the CoreSight components based on a DT
specification and configure the right series of components when a
trace source gets enabled.
source "drivers/hwtracing/coresight/Kconfig"
if CORESIGHT
config CORESIGHT_LINKS_AND_SINKS
bool "CoreSight Link and Sink drivers"
help
This enables support for CoreSight link and sink drivers that are
responsible for transporting and collecting the trace data
respectively. Link and sinks are dynamically aggregated with a trace
entity at run time to form a complete trace path.
config CORESIGHT_LINK_AND_SINK_TMC
bool "Coresight generic TMC driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for the Trace Memory Controller driver. Depending
on its configuration the device can act as a link (embedded trace router
- ETR) or sink (embedded trace FIFO). The driver complies with the
generic implementation of the component without special enhancement or
added features.
config CORESIGHT_SINK_TPIU
bool "Coresight generic TPIU driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for the Trace Port Interface Unit driver, responsible
for bridging the gap between the on-chip coresight components and a trace
port collection engine, typically connected to an external host for use
case capturing more traces than the on-board coresight memory can handle.
config CORESIGHT_SINK_ETBV10
bool "Coresight ETBv1.0 driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for the Embedded Trace Buffer version 1.0 driver
that complies with the generic implementation of the component without
special enhancement or added features.
config CORESIGHT_SOURCE_ETM3X
bool "CoreSight Embedded Trace Macrocell 3.x driver"
select CORESIGHT_LINKS_AND_SINKS
help
This driver provides support for processor ETM3.x and PTM1.x modules,
which allows tracing the instructions that a processor is executing.
This is primarily useful for instruction level tracing. Depending on
the ETM version, data tracing may also be available.
endif
endmenu

@@ -275,7 +275,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0 0xe3c42000 0 0x1000>;
coresight-default-sink;
clocks = <&clk_375m>;
clock-names = "apb_pclk";
port {

@@ -150,7 +150,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0x5401b000 0x1000>;
coresight-default-sink;
clocks = <&emu_src_ck>;
clock-names = "apb_pclk";
port {

@@ -145,7 +145,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0x5401b000 0x1000>;
coresight-default-sink;
clocks = <&emu_src_ck>;
clock-names = "apb_pclk";
port {

@@ -609,6 +609,58 @@
pinctrl-0 = <&i2c3_pins>;
clock-frequency = <400000>;
lis302dl: lis3lv02d@1d {
compatible = "st,lis3lv02d";
reg = <0x1d>;
Vdd-supply = <&vaux1>;
Vdd_IO-supply = <&vio>;
interrupt-parent = <&gpio6>;
interrupts = <21 20>; /* 181 and 180 */
/* click flags */
st,click-single-x;
st,click-single-y;
st,click-single-z;
/* Limits are 0.5g * value */
st,click-threshold-x = <8>;
st,click-threshold-y = <8>;
st,click-threshold-z = <10>;
/* Click must be longer than time limit */
st,click-time-limit = <9>;
/* Kind of debounce filter */
st,click-latency = <50>;
/* Interrupt line 2 for click detection */
st,irq2-click;
st,wakeup-x-hi;
st,wakeup-y-hi;
st,wakeup-threshold = <(800/18)>; /* millig-value / 18 to get HW values */
st,wakeup2-z-hi;
st,wakeup2-threshold = <(900/18)>; /* millig-value / 18 to get HW values */
st,hipass1-disable;
st,hipass2-disable;
st,axis-x = <1>; /* LIS3_DEV_X */
st,axis-y = <(-2)>; /* LIS3_INV_DEV_Y */
st,axis-z = <(-3)>; /* LIS3_INV_DEV_Z */
st,min-limit-x = <(-32)>;
st,min-limit-y = <3>;
st,min-limit-z = <3>;
st,max-limit-x = <(-3)>;
st,max-limit-y = <32>;
st,max-limit-z = <32>;
};
};
&mmc1 {

@@ -362,7 +362,6 @@
compatible = "arm,coresight-etb10", "arm,primecell";
reg = <0 0x20010000 0 0x1000>;
coresight-default-sink;
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {

@@ -89,4 +89,6 @@ config DEBUG_ALIGN_RODATA
If in doubt, say N
source "drivers/hwtracing/coresight/Kconfig"
endmenu

@@ -67,6 +67,7 @@ static inline void __iomem *ioremap(unsigned long offset, unsigned long size)
extern void iounmap(volatile void __iomem *addr);
#define ioremap_nocache(off,size) ioremap(off,size)
#define ioremap_wc ioremap_nocache
/*
* IO bus memory addresses are also 1:1 with the physical address

@@ -225,6 +225,8 @@
#define HV_STATUS_INVALID_HYPERCALL_CODE 2
#define HV_STATUS_INVALID_HYPERCALL_INPUT 3
#define HV_STATUS_INVALID_ALIGNMENT 4
#define HV_STATUS_INSUFFICIENT_MEMORY 11
#define HV_STATUS_INVALID_CONNECTION_ID 18
#define HV_STATUS_INSUFFICIENT_BUFFERS 19
typedef struct _HV_REFERENCE_TSC_PAGE {

@@ -163,5 +163,5 @@ obj-$(CONFIG_POWERCAP) += powercap/
obj-$(CONFIG_MCB) += mcb/
obj-$(CONFIG_RAS) += ras/
obj-$(CONFIG_THUNDERBOLT) += thunderbolt/
obj-$(CONFIG_CORESIGHT) += coresight/
obj-$(CONFIG_CORESIGHT) += hwtracing/coresight/
obj-$(CONFIG_ANDROID) += android/

@@ -300,11 +300,14 @@ static const struct file_operations rng_chrdev_ops = {
.llseek = noop_llseek,
};
static const struct attribute_group *rng_dev_groups[];
static struct miscdevice rng_miscdev = {
.minor = RNG_MISCDEV_MINOR,
.name = RNG_MODULE_NAME,
.nodename = "hwrng",
.fops = &rng_chrdev_ops,
.groups = rng_dev_groups,
};
@@ -377,37 +380,22 @@ static DEVICE_ATTR(rng_available, S_IRUGO,
hwrng_attr_available_show,
NULL);
static struct attribute *rng_dev_attrs[] = {
&dev_attr_rng_current.attr,
&dev_attr_rng_available.attr,
NULL
};
ATTRIBUTE_GROUPS(rng_dev);
static void __exit unregister_miscdev(void)
{
device_remove_file(rng_miscdev.this_device, &dev_attr_rng_available);
device_remove_file(rng_miscdev.this_device, &dev_attr_rng_current);
misc_deregister(&rng_miscdev);
}
static int __init register_miscdev(void)
{
int err;
err = misc_register(&rng_miscdev);
if (err)
goto out;
err = device_create_file(rng_miscdev.this_device,
&dev_attr_rng_current);
if (err)
goto err_misc_dereg;
err = device_create_file(rng_miscdev.this_device,
&dev_attr_rng_available);
if (err)
goto err_remove_current;
out:
return err;
err_remove_current:
device_remove_file(rng_miscdev.this_device, &dev_attr_rng_current);
err_misc_dereg:
misc_deregister(&rng_miscdev);
goto out;
return misc_register(&rng_miscdev);
}
static int hwrng_fillfn(void *unused)
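
The hw_random conversion above is the consumer side of the new miscdevice
->groups support: attribute files are created by misc_register() itself, so
the hand-rolled device_create_file()/device_remove_file() error unwinding
goes away. A minimal sketch of the resulting pattern (the "foo" names are
hypothetical):

#include <linux/device.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

static ssize_t mode_show(struct device *dev,
			 struct device_attribute *attr, char *buf)
{
	return sprintf(buf, "default\n");	/* hypothetical attribute */
}
static DEVICE_ATTR_RO(mode);

static struct attribute *foo_dev_attrs[] = {
	&dev_attr_mode.attr,
	NULL
};
ATTRIBUTE_GROUPS(foo_dev);		/* defines foo_dev_groups */

static const struct file_operations foo_fops = {
	.owner = THIS_MODULE,
};

static struct miscdevice foo_miscdev = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "foo",
	.fops	= &foo_fops,
	.groups	= foo_dev_groups,	/* created atomically with the device */
};

/* misc_register(&foo_miscdev) now also creates the sysfs files. */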

@@ -133,7 +133,7 @@ static int rng_remove(struct platform_device *dev)
return 0;
}
static struct of_device_id rng_match[] = {
static const struct of_device_id rng_match[] = {
{ .compatible = "1682m-rng", },
{ .compatible = "pasemi,pwrficient-rng", },
{ },
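
This and several following hunks apply one mechanical cleanup: OF match
tables become const, letting them live in read-only data. The resulting
shape, sketched with a hypothetical driver:

#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static const struct of_device_id foo_of_match[] = {
	{ .compatible = "acme,foo" },	/* hypothetical compatible */
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, foo_of_match);

static struct platform_driver foo_driver = {
	.driver = {
		.name		= "foo",
		.of_match_table	= foo_of_match,	/* accepts const tables */
	},
};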

@@ -61,7 +61,7 @@ static int powernv_rng_probe(struct platform_device *pdev)
return 0;
}
static struct of_device_id powernv_rng_match[] = {
static const struct of_device_id powernv_rng_match[] = {
{ .compatible = "ibm,power-rng",},
{},
};

@@ -123,7 +123,7 @@ static int ppc4xx_rng_remove(struct platform_device *dev)
return 0;
}
static struct of_device_id ppc4xx_rng_match[] = {
static const struct of_device_id ppc4xx_rng_match[] = {
{ .compatible = "ppc4xx-rng", },
{ .compatible = "amcc,ppc460ex-rng", },
{ .compatible = "amcc,ppc440epx-rng", },

@@ -510,13 +510,15 @@ static int i8k_proc_show(struct seq_file *seq, void *offset)
* 9) AC power
* 10) Fn Key status
*/
return seq_printf(seq, "%s %s %s %d %d %d %d %d %d %d\n",
I8K_PROC_FMT,
bios_version,
i8k_get_dmi_data(DMI_PRODUCT_SERIAL),
cpu_temp,
left_fan, right_fan, left_speed, right_speed,
ac_power, fn_key);
seq_printf(seq, "%s %s %s %d %d %d %d %d %d %d\n",
I8K_PROC_FMT,
bios_version,
i8k_get_dmi_data(DMI_PRODUCT_SERIAL),
cpu_temp,
left_fan, right_fan, left_speed, right_speed,
ac_power, fn_key);
return 0;
}
static int i8k_open_fs(struct inode *inode, struct file *file)
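
The i8k change above stops propagating seq_printf()'s return value out of
the show() routine; success is reported explicitly instead. The idiom,
sketched with hypothetical contents:

#include <linux/seq_file.h>

static int foo_proc_show(struct seq_file *seq, void *offset)
{
	/* Ignore seq_printf()'s return value; on overflow the seq_file
	 * core re-invokes show() with a bigger buffer. */
	seq_printf(seq, "%s %d\n", "foo", 42);
	return 0;
}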

@@ -2667,7 +2667,7 @@ static struct pci_driver ipmi_pci_driver = {
};
#endif /* CONFIG_PCI */
static struct of_device_id ipmi_match[];
static const struct of_device_id ipmi_match[];
static int ipmi_probe(struct platform_device *dev)
{
#ifdef CONFIG_OF
@@ -2764,7 +2764,7 @@ static int ipmi_remove(struct platform_device *dev)
return 0;
}
static struct of_device_id ipmi_match[] =
static const struct of_device_id ipmi_match[] =
{
{ .type = "ipmi", .compatible = "ipmi-kcs",
.data = (void *)(unsigned long) SI_KCS },

@@ -140,12 +140,17 @@ static int misc_open(struct inode * inode, struct file * file)
goto fail;
}
/*
* Place the miscdevice in the file's
* private_data so it can be used by the
* file operations, including f_op->open below
*/
file->private_data = c;
err = 0;
replace_fops(file, new_fops);
if (file->f_op->open) {
file->private_data = c;
if (file->f_op->open)
err = file->f_op->open(inode,file);
}
fail:
mutex_unlock(&misc_mtx);
return err;
@@ -169,7 +174,9 @@ static const struct file_operations misc_fops = {
* the minor number requested is used.
*
* The structure passed is linked into the kernel and may not be
* destroyed until it has been unregistered.
* destroyed until it has been unregistered. By default, an open()
* syscall to the device sets file->private_data to point to the
* structure. Drivers don't need open in fops for this.
*
* A zero is returned on success and a negative errno code for
* failure.
@@ -205,8 +212,9 @@ int misc_register(struct miscdevice * misc)
dev = MKDEV(MISC_MAJOR, misc->minor);
misc->this_device = device_create(misc_class, misc->parent, dev,
misc, "%s", misc->name);
misc->this_device =
device_create_with_groups(misc_class, misc->parent, dev,
misc, misc->groups, "%s", misc->name);
if (IS_ERR(misc->this_device)) {
int i = DYNAMIC_MINORS - misc->minor - 1;
if (i < DYNAMIC_MINORS && i >= 0)
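
Per the documentation added above, misc_open() now seeds file->private_data
with the miscdevice before calling the driver's open(), so simple drivers
need no open() at all. A hedged sketch of a driver recovering its state that
way (struct and names hypothetical):

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/miscdevice.h>

struct foo_dev {
	struct miscdevice misc;
	int counter;			/* hypothetical driver state */
};

static ssize_t foo_read(struct file *file, char __user *buf,
			size_t len, loff_t *ppos)
{
	/* misc_open() stored the registered miscdevice here for us. */
	struct miscdevice *misc = file->private_data;
	struct foo_dev *foo = container_of(misc, struct foo_dev, misc);

	(void)foo;			/* ... produce data from foo ... */
	return 0;
}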

@@ -355,7 +355,7 @@ static inline bool use_multiport(struct ports_device *portdev)
* early_init
*/
if (!portdev->vdev)
return 0;
return false;
return __virtio_test_bit(portdev->vdev, VIRTIO_CONSOLE_F_MULTIPORT);
}

@@ -1237,6 +1237,8 @@ static ssize_t xillybus_write(struct file *filp, const char __user *userbuf,
unsigned char *tail;
int i;
howmany = 0;
end_offset_plus1 = bufpos >>
channel->log2_element_size;

@@ -31,7 +31,7 @@ MODULE_LICENSE("GPL v2");
static const char xillyname[] = "xillybus_of";
/* Match table for of_platform binding */
static struct of_device_id xillybus_of_match[] = {
static const struct of_device_id xillybus_of_match[] = {
{ .compatible = "xillybus,xillybus-1.00.a", },
{ .compatible = "xlnx,xillybus-1.00.a", }, /* Deprecated */
{}

@@ -55,6 +55,16 @@ config EXTCON_MAX77693
Maxim MAX77693 PMIC. The MAX77693 MUIC is a USB port accessory
detector and switch.
config EXTCON_MAX77843
tristate "MAX77843 EXTCON Support"
depends on MFD_MAX77843
select IRQ_DOMAIN
select REGMAP_I2C
help
If you say yes here you get support for the MUIC device of
Maxim MAX77843. The MAX77843 MUIC is a USB port accessory
detector and switch.
config EXTCON_MAX8997
tristate "MAX8997 EXTCON Support"
depends on MFD_MAX8997 && IRQ_DOMAIN
@@ -93,4 +103,11 @@ config EXTCON_SM5502
Silicon Mitus SM5502. The SM5502 is a USB port accessory
detector and switch.
config EXTCON_USB_GPIO
tristate "USB GPIO extcon support"
depends on GPIOLIB
help
Say Y here to enable GPIO based USB cable detection extcon support.
Used typically if GPIO is used for USB ID pin detection.
endif # MULTISTATE_SWITCH

@@ -2,13 +2,15 @@
# Makefile for external connector class (extcon) devices
#
obj-$(CONFIG_EXTCON) += extcon-class.o
obj-$(CONFIG_EXTCON) += extcon.o
obj-$(CONFIG_EXTCON_ADC_JACK) += extcon-adc-jack.o
obj-$(CONFIG_EXTCON_ARIZONA) += extcon-arizona.o
obj-$(CONFIG_EXTCON_GPIO) += extcon-gpio.o
obj-$(CONFIG_EXTCON_MAX14577) += extcon-max14577.o
obj-$(CONFIG_EXTCON_MAX77693) += extcon-max77693.o
obj-$(CONFIG_EXTCON_MAX77843) += extcon-max77843.o
obj-$(CONFIG_EXTCON_MAX8997) += extcon-max8997.o
obj-$(CONFIG_EXTCON_PALMAS) += extcon-palmas.o
obj-$(CONFIG_EXTCON_RT8973A) += extcon-rt8973a.o
obj-$(CONFIG_EXTCON_SM5502) += extcon-sm5502.o
obj-$(CONFIG_EXTCON_USB_GPIO) += extcon-usb-gpio.o

@@ -136,18 +136,35 @@ static const char *arizona_cable[] = {
static void arizona_start_hpdet_acc_id(struct arizona_extcon_info *info);
static void arizona_extcon_do_magic(struct arizona_extcon_info *info,
unsigned int magic)
static void arizona_extcon_hp_clamp(struct arizona_extcon_info *info,
bool clamp)
{
struct arizona *arizona = info->arizona;
unsigned int mask = 0, val = 0;
int ret;
switch (arizona->type) {
case WM5110:
mask = ARIZONA_HP1L_SHRTO | ARIZONA_HP1L_FLWR |
ARIZONA_HP1L_SHRTI;
if (clamp)
val = ARIZONA_HP1L_SHRTO;
else
val = ARIZONA_HP1L_FLWR | ARIZONA_HP1L_SHRTI;
break;
default:
mask = ARIZONA_RMV_SHRT_HP1L;
if (clamp)
val = ARIZONA_RMV_SHRT_HP1L;
break;
};
mutex_lock(&arizona->dapm->card->dapm_mutex);
arizona->hpdet_magic = magic;
arizona->hpdet_clamp = clamp;
/* Keep the HP output stages disabled while doing the magic */
if (magic) {
/* Keep the HP output stages disabled while doing the clamp */
if (clamp) {
ret = regmap_update_bits(arizona->regmap,
ARIZONA_OUTPUT_ENABLES_1,
ARIZONA_OUT1L_ENA |
@@ -158,20 +175,20 @@ static void arizona_extcon_do_magic(struct arizona_extcon_info *info,
ret);
}
ret = regmap_update_bits(arizona->regmap, 0x225, 0x4000,
magic);
ret = regmap_update_bits(arizona->regmap, ARIZONA_HP_CTRL_1L,
mask, val);
if (ret != 0)
dev_warn(arizona->dev, "Failed to do magic: %d\n",
dev_warn(arizona->dev, "Failed to do clamp: %d\n",
ret);
ret = regmap_update_bits(arizona->regmap, 0x226, 0x4000,
magic);
ret = regmap_update_bits(arizona->regmap, ARIZONA_HP_CTRL_1R,
mask, val);
if (ret != 0)
dev_warn(arizona->dev, "Failed to do magic: %d\n",
dev_warn(arizona->dev, "Failed to do clamp: %d\n",
ret);
/* Restore the desired state while not doing the magic */
if (!magic) {
/* Restore the desired state while not doing the clamp */
if (!clamp) {
ret = regmap_update_bits(arizona->regmap,
ARIZONA_OUTPUT_ENABLES_1,
ARIZONA_OUT1L_ENA |
@@ -603,7 +620,7 @@ done:
ARIZONA_HP_IMPEDANCE_RANGE_MASK | ARIZONA_HP_POLL,
0);
arizona_extcon_do_magic(info, 0);
arizona_extcon_hp_clamp(info, false);
if (id_gpio)
gpio_set_value_cansleep(id_gpio, 0);
@@ -648,7 +665,7 @@ static void arizona_identify_headphone(struct arizona_extcon_info *info)
if (info->mic)
arizona_stop_mic(info);
arizona_extcon_do_magic(info, 0x4000);
arizona_extcon_hp_clamp(info, true);
ret = regmap_update_bits(arizona->regmap,
ARIZONA_ACCESSORY_DETECT_MODE_1,
@@ -699,7 +716,7 @@ static void arizona_start_hpdet_acc_id(struct arizona_extcon_info *info)
info->hpdet_active = true;
arizona_extcon_do_magic(info, 0x4000);
arizona_extcon_hp_clamp(info, true);
ret = regmap_update_bits(arizona->regmap,
ARIZONA_ACCESSORY_DETECT_MODE_1,

@@ -539,8 +539,6 @@ static void max14577_muic_irq_work(struct work_struct *work)
dev_err(info->dev, "failed to handle MUIC interrupt\n");
mutex_unlock(&info->mutex);
return;
}
/*
@@ -730,8 +728,7 @@ static int max14577_muic_probe(struct platform_device *pdev)
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
"failed: irq request (IRQ: %d,"
" error :%d)\n",
"failed: irq request (IRQ: %d, error :%d)\n",
muic_irq->irq, ret);
return ret;
}

@@ -190,8 +190,8 @@ enum max77693_muic_acc_type {
/* The below accessories have same ADC value so ADCLow and
ADC1K bit is used to separate specific accessory */
/* ADC|VBVolot|ADCLow|ADC1K| */
MAX77693_MUIC_GND_USB_OTG = 0x100, /* 0x0| 0| 0| 0| */
MAX77693_MUIC_GND_USB_OTG_VB = 0x104, /* 0x0| 1| 0| 0| */
MAX77693_MUIC_GND_USB_HOST = 0x100, /* 0x0| 0| 0| 0| */
MAX77693_MUIC_GND_USB_HOST_VB = 0x104, /* 0x0| 1| 0| 0| */
MAX77693_MUIC_GND_AV_CABLE_LOAD = 0x102,/* 0x0| 0| 1| 0| */
MAX77693_MUIC_GND_MHL = 0x103, /* 0x0| 0| 1| 1| */
MAX77693_MUIC_GND_MHL_VB = 0x107, /* 0x0| 1| 1| 1| */
@@ -228,7 +228,7 @@ static const char *max77693_extcon_cable[] = {
[EXTCON_CABLE_SLOW_CHARGER] = "Slow-charger",
[EXTCON_CABLE_CHARGE_DOWNSTREAM] = "Charge-downstream",
[EXTCON_CABLE_MHL] = "MHL",
[EXTCON_CABLE_MHL_TA] = "MHL_TA",
[EXTCON_CABLE_MHL_TA] = "MHL-TA",
[EXTCON_CABLE_JIG_USB_ON] = "JIG-USB-ON",
[EXTCON_CABLE_JIG_USB_OFF] = "JIG-USB-OFF",
[EXTCON_CABLE_JIG_UART_OFF] = "JIG-UART-OFF",
@@ -403,8 +403,8 @@ static int max77693_muic_get_cable_type(struct max77693_muic_info *info,
/**
* [0x1|VBVolt|ADCLow|ADC1K]
* [0x1| 0| 0| 0] USB_OTG
* [0x1| 1| 0| 0] USB_OTG_VB
* [0x1| 0| 0| 0] USB_HOST
* [0x1| 1| 0| 0] USB_HOST_VB
* [0x1| 0| 1| 0] Audio Video cable with load
* [0x1| 0| 1| 1] MHL without charging cable
* [0x1| 1| 1| 1] MHL with charging cable
@@ -523,7 +523,7 @@ static int max77693_muic_dock_handler(struct max77693_muic_info *info,
* - Support charging and data connection through micro-usb port
* if USB cable is connected between target and host
* device.
* - Support OTG device (Mouse/Keyboard)
* - Support OTG(On-The-Go) device (Ex: Mouse/Keyboard)
*/
ret = max77693_muic_set_path(info, info->path_usb, attached);
if (ret < 0)
@@ -609,9 +609,9 @@ static int max77693_muic_adc_ground_handler(struct max77693_muic_info *info)
MAX77693_CABLE_GROUP_ADC_GND, &attached);
switch (cable_type_gnd) {
case MAX77693_MUIC_GND_USB_OTG:
case MAX77693_MUIC_GND_USB_OTG_VB:
/* USB_OTG, PATH: AP_USB */
case MAX77693_MUIC_GND_USB_HOST:
case MAX77693_MUIC_GND_USB_HOST_VB:
/* USB_HOST, PATH: AP_USB */
ret = max77693_muic_set_path(info, CONTROL1_SW_USB, attached);
if (ret < 0)
return ret;
@@ -704,7 +704,7 @@
switch (cable_type) {
case MAX77693_MUIC_ADC_GROUND:
/* USB_OTG/MHL/Audio */
/* USB_HOST/MHL/Audio */
max77693_muic_adc_ground_handler(info);
break;
case MAX77693_MUIC_ADC_FACTORY_MODE_USB_OFF:
@@ -823,19 +823,19 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
case MAX77693_MUIC_GND_MHL:
case MAX77693_MUIC_GND_MHL_VB:
/*
* MHL cable with MHL_TA(USB/TA) cable
* MHL cable with MHL-TA(USB/TA) cable
* - MHL cable includes two ports (HDMI line and a separate
* micro-usb port). When the target connects the MHL cable,
* extcon driver check whether MHL_TA(USB/TA) cable is
* connected. If MHL_TA cable is connected, extcon
* extcon driver check whether MHL-TA(USB/TA) cable is
* connected. If MHL-TA cable is connected, extcon
* driver notify state to notifiee for charging battery.
*
* Features of 'MHL_TA(USB/TA) with MHL cable'
* Features of 'MHL-TA(USB/TA) with MHL cable'
* - Support MHL
* - Support charging through micro-usb port without
* data connection
*/
extcon_set_cable_state(info->edev, "MHL_TA", attached);
extcon_set_cable_state(info->edev, "MHL-TA", attached);
if (!cable_attached)
extcon_set_cable_state(info->edev,
"MHL", cable_attached);
@@ -886,7 +886,7 @@ static int max77693_muic_chg_handler(struct max77693_muic_info *info)
* - Support charging and data connection through micro-
* usb port if USB cable is connected between target
* and host device
* - Support OTG device (Mouse/Keyboard)
* - Support OTG(On-The-Go) device (Ex: Mouse/Keyboard)
*/
ret = max77693_muic_set_path(info, info->path_usb,
attached);
@@ -1019,8 +1019,6 @@ static void max77693_muic_irq_work(struct work_struct *work)
dev_err(info->dev, "failed to handle MUIC interrupt\n");
mutex_unlock(&info->mutex);
return;
}
static irqreturn_t max77693_muic_irq_handler(int irq, void *data)
@@ -1171,8 +1169,7 @@ static int max77693_muic_probe(struct platform_device *pdev)
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
"failed: irq request (IRQ: %d,"
" error :%d)\n",
"failed: irq request (IRQ: %d, error :%d)\n",
muic_irq->irq, ret);
return ret;
}

@@ -0,0 +1,881 @@
/*
* extcon-max77843.c - Maxim MAX77843 extcon driver to support
* MUIC(Micro USB Interface Controller)
*
* Copyright (C) 2015 Samsung Electronics
* Author: Jaewon Kim <jaewon02.kim@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <linux/extcon.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/mfd/max77843-private.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/workqueue.h>
#define DELAY_MS_DEFAULT 15000 /* unit: millisecond */
enum max77843_muic_status {
MAX77843_MUIC_STATUS1 = 0,
MAX77843_MUIC_STATUS2,
MAX77843_MUIC_STATUS3,
MAX77843_MUIC_STATUS_NUM,
};
struct max77843_muic_info {
struct device *dev;
struct max77843 *max77843;
struct extcon_dev *edev;
struct mutex mutex;
struct work_struct irq_work;
struct delayed_work wq_detcable;
u8 status[MAX77843_MUIC_STATUS_NUM];
int prev_cable_type;
int prev_chg_type;
int prev_gnd_type;
bool irq_adc;
bool irq_chg;
};
enum max77843_muic_cable_group {
MAX77843_CABLE_GROUP_ADC = 0,
MAX77843_CABLE_GROUP_ADC_GND,
MAX77843_CABLE_GROUP_CHG,
};
enum max77843_muic_adc_debounce_time {
MAX77843_DEBOUNCE_TIME_5MS = 0,
MAX77843_DEBOUNCE_TIME_10MS,
MAX77843_DEBOUNCE_TIME_25MS,
MAX77843_DEBOUNCE_TIME_38_62MS,
};
/* Define accessory cable type */
enum max77843_muic_accessory_type {
MAX77843_MUIC_ADC_GROUND = 0,
MAX77843_MUIC_ADC_SEND_END_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S1_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S2_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S3_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S4_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S5_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S6_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S7_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S8_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S9_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S10_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S11_BUTTON,
MAX77843_MUIC_ADC_REMOTE_S12_BUTTON,
MAX77843_MUIC_ADC_RESERVED_ACC_1,
MAX77843_MUIC_ADC_RESERVED_ACC_2,
MAX77843_MUIC_ADC_RESERVED_ACC_3,
MAX77843_MUIC_ADC_RESERVED_ACC_4,
MAX77843_MUIC_ADC_RESERVED_ACC_5,
MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE2,
MAX77843_MUIC_ADC_PHONE_POWERED_DEV,
MAX77843_MUIC_ADC_TTY_CONVERTER,
MAX77843_MUIC_ADC_UART_CABLE,
MAX77843_MUIC_ADC_CEA936A_TYPE1_CHG,
MAX77843_MUIC_ADC_FACTORY_MODE_USB_OFF,
MAX77843_MUIC_ADC_FACTORY_MODE_USB_ON,
MAX77843_MUIC_ADC_AV_CABLE_NOLOAD,
MAX77843_MUIC_ADC_CEA936A_TYPE2_CHG,
MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF,
MAX77843_MUIC_ADC_FACTORY_MODE_UART_ON,
MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE1,
MAX77843_MUIC_ADC_OPEN,
/* The below accessories should check
not only ADC value but also ADC1K and VBVolt value. */
/* Offset|ADC1K|VBVolt| */
MAX77843_MUIC_GND_USB_HOST = 0x100, /* 0x1| 0| 0| */
MAX77843_MUIC_GND_USB_HOST_VB = 0x101, /* 0x1| 0| 1| */
MAX77843_MUIC_GND_MHL = 0x102, /* 0x1| 1| 0| */
MAX77843_MUIC_GND_MHL_VB = 0x103, /* 0x1| 1| 1| */
};
/* Define charger cable type */
enum max77843_muic_charger_type {
MAX77843_MUIC_CHG_NONE = 0,
MAX77843_MUIC_CHG_USB,
MAX77843_MUIC_CHG_DOWNSTREAM,
MAX77843_MUIC_CHG_DEDICATED,
MAX77843_MUIC_CHG_SPECIAL_500MA,
MAX77843_MUIC_CHG_SPECIAL_1A,
MAX77843_MUIC_CHG_SPECIAL_BIAS,
MAX77843_MUIC_CHG_RESERVED,
MAX77843_MUIC_CHG_GND,
};
enum {
MAX77843_CABLE_USB = 0,
MAX77843_CABLE_USB_HOST,
MAX77843_CABLE_TA,
MAX77843_CABLE_CHARGE_DOWNSTREAM,
MAX77843_CABLE_FAST_CHARGER,
MAX77843_CABLE_SLOW_CHARGER,
MAX77843_CABLE_MHL,
MAX77843_CABLE_MHL_TA,
MAX77843_CABLE_JIG_USB_ON,
MAX77843_CABLE_JIG_USB_OFF,
MAX77843_CABLE_JIG_UART_ON,
MAX77843_CABLE_JIG_UART_OFF,
MAX77843_CABLE_NUM,
};
static const char *max77843_extcon_cable[] = {
[MAX77843_CABLE_USB] = "USB",
[MAX77843_CABLE_USB_HOST] = "USB-HOST",
[MAX77843_CABLE_TA] = "TA",
[MAX77843_CABLE_CHARGE_DOWNSTREAM] = "CHARGER-DOWNSTREAM",
[MAX77843_CABLE_FAST_CHARGER] = "FAST-CHARGER",
[MAX77843_CABLE_SLOW_CHARGER] = "SLOW-CHARGER",
[MAX77843_CABLE_MHL] = "MHL",
[MAX77843_CABLE_MHL_TA] = "MHL-TA",
[MAX77843_CABLE_JIG_USB_ON] = "JIG-USB-ON",
[MAX77843_CABLE_JIG_USB_OFF] = "JIG-USB-OFF",
[MAX77843_CABLE_JIG_UART_ON] = "JIG-UART-ON",
[MAX77843_CABLE_JIG_UART_OFF] = "JIG-UART-OFF",
};
struct max77843_muic_irq {
unsigned int irq;
const char *name;
unsigned int virq;
};
static struct max77843_muic_irq max77843_muic_irqs[] = {
{ MAX77843_MUIC_IRQ_INT1_ADC, "MUIC-ADC" },
{ MAX77843_MUIC_IRQ_INT1_ADCERROR, "MUIC-ADC_ERROR" },
{ MAX77843_MUIC_IRQ_INT1_ADC1K, "MUIC-ADC1K" },
{ MAX77843_MUIC_IRQ_INT2_CHGTYP, "MUIC-CHGTYP" },
{ MAX77843_MUIC_IRQ_INT2_CHGDETRUN, "MUIC-CHGDETRUN" },
{ MAX77843_MUIC_IRQ_INT2_DCDTMR, "MUIC-DCDTMR" },
{ MAX77843_MUIC_IRQ_INT2_DXOVP, "MUIC-DXOVP" },
{ MAX77843_MUIC_IRQ_INT2_VBVOLT, "MUIC-VBVOLT" },
{ MAX77843_MUIC_IRQ_INT3_VBADC, "MUIC-VBADC" },
{ MAX77843_MUIC_IRQ_INT3_VDNMON, "MUIC-VDNMON" },
{ MAX77843_MUIC_IRQ_INT3_DNRES, "MUIC-DNRES" },
{ MAX77843_MUIC_IRQ_INT3_MPNACK, "MUIC-MPNACK"},
{ MAX77843_MUIC_IRQ_INT3_MRXBUFOW, "MUIC-MRXBUFOW"},
{ MAX77843_MUIC_IRQ_INT3_MRXTRF, "MUIC-MRXTRF"},
{ MAX77843_MUIC_IRQ_INT3_MRXPERR, "MUIC-MRXPERR"},
{ MAX77843_MUIC_IRQ_INT3_MRXRDY, "MUIC-MRXRDY"},
};
static const struct regmap_config max77843_muic_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.max_register = MAX77843_MUIC_REG_END,
};
static const struct regmap_irq max77843_muic_irq[] = {
/* INT1 interrupt */
{ .reg_offset = 0, .mask = MAX77843_MUIC_ADC, },
{ .reg_offset = 0, .mask = MAX77843_MUIC_ADCERROR, },
{ .reg_offset = 0, .mask = MAX77843_MUIC_ADC1K, },
/* INT2 interrupt */
{ .reg_offset = 1, .mask = MAX77843_MUIC_CHGTYP, },
{ .reg_offset = 1, .mask = MAX77843_MUIC_CHGDETRUN, },
{ .reg_offset = 1, .mask = MAX77843_MUIC_DCDTMR, },
{ .reg_offset = 1, .mask = MAX77843_MUIC_DXOVP, },
{ .reg_offset = 1, .mask = MAX77843_MUIC_VBVOLT, },
/* INT3 interrupt */
{ .reg_offset = 2, .mask = MAX77843_MUIC_VBADC, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_VDNMON, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_DNRES, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_MPNACK, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_MRXBUFOW, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_MRXTRF, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_MRXPERR, },
{ .reg_offset = 2, .mask = MAX77843_MUIC_MRXRDY, },
};
static const struct regmap_irq_chip max77843_muic_irq_chip = {
.name = "max77843-muic",
.status_base = MAX77843_MUIC_REG_INT1,
.mask_base = MAX77843_MUIC_REG_INTMASK1,
.mask_invert = true,
.num_regs = 3,
.irqs = max77843_muic_irq,
.num_irqs = ARRAY_SIZE(max77843_muic_irq),
};
static int max77843_muic_set_path(struct max77843_muic_info *info,
u8 val, bool attached)
{
struct max77843 *max77843 = info->max77843;
int ret = 0;
unsigned int ctrl1, ctrl2;
if (attached)
ctrl1 = val;
else
ctrl1 = CONTROL1_SW_OPEN;
ret = regmap_update_bits(max77843->regmap_muic,
MAX77843_MUIC_REG_CONTROL1,
CONTROL1_COM_SW, ctrl1);
if (ret < 0) {
dev_err(info->dev, "Cannot switch MUIC port\n");
return ret;
}
if (attached)
ctrl2 = MAX77843_MUIC_CONTROL2_CPEN_MASK;
else
ctrl2 = MAX77843_MUIC_CONTROL2_LOWPWR_MASK;
ret = regmap_update_bits(max77843->regmap_muic,
MAX77843_MUIC_REG_CONTROL2,
MAX77843_MUIC_CONTROL2_LOWPWR_MASK |
MAX77843_MUIC_CONTROL2_CPEN_MASK, ctrl2);
if (ret < 0) {
dev_err(info->dev, "Cannot update lowpower mode\n");
return ret;
}
dev_dbg(info->dev,
"CONTROL1 : 0x%02x, CONTROL2 : 0x%02x, state : %s\n",
ctrl1, ctrl2, attached ? "attached" : "detached");
return 0;
}
static int max77843_muic_get_cable_type(struct max77843_muic_info *info,
enum max77843_muic_cable_group group, bool *attached)
{
int adc, chg_type, cable_type, gnd_type;
adc = info->status[MAX77843_MUIC_STATUS1] &
MAX77843_MUIC_STATUS1_ADC_MASK;
adc >>= STATUS1_ADC_SHIFT;
switch (group) {
case MAX77843_CABLE_GROUP_ADC:
if (adc == MAX77843_MUIC_ADC_OPEN) {
*attached = false;
cable_type = info->prev_cable_type;
info->prev_cable_type = MAX77843_MUIC_ADC_OPEN;
} else {
*attached = true;
cable_type = info->prev_cable_type = adc;
}
break;
case MAX77843_CABLE_GROUP_CHG:
chg_type = info->status[MAX77843_MUIC_STATUS2] &
MAX77843_MUIC_STATUS2_CHGTYP_MASK;
/* Check GROUND accessory with charger cable */
if (adc == MAX77843_MUIC_ADC_GROUND) {
if (chg_type == MAX77843_MUIC_CHG_NONE) {
/* The following state when charger cable is
* disconnected but the GROUND accessory still
* connected */
*attached = false;
cable_type = info->prev_chg_type;
info->prev_chg_type = MAX77843_MUIC_CHG_NONE;
} else {
/* The following state when charger cable is
* connected on the GROUND accessory */
*attached = true;
cable_type = MAX77843_MUIC_CHG_GND;
info->prev_chg_type = MAX77843_MUIC_CHG_GND;
}
break;
}
if (chg_type == MAX77843_MUIC_CHG_NONE) {
*attached = false;
cable_type = info->prev_chg_type;
info->prev_chg_type = MAX77843_MUIC_CHG_NONE;
} else {
*attached = true;
cable_type = info->prev_chg_type = chg_type;
}
break;
case MAX77843_CABLE_GROUP_ADC_GND:
if (adc == MAX77843_MUIC_ADC_OPEN) {
*attached = false;
cable_type = info->prev_gnd_type;
info->prev_gnd_type = MAX77843_MUIC_ADC_OPEN;
} else {
*attached = true;
/* Offset|ADC1K|VBVolt|
* 0x1| 0| 0| USB-HOST
* 0x1| 0| 1| USB-HOST with VB
* 0x1| 1| 0| MHL
* 0x1| 1| 1| MHL with VB */
/* Get ADC1K register bit */
gnd_type = (info->status[MAX77843_MUIC_STATUS1] &
MAX77843_MUIC_STATUS1_ADC1K_MASK);
/* Get VBVolt register bit */
gnd_type |= (info->status[MAX77843_MUIC_STATUS2] &
MAX77843_MUIC_STATUS2_VBVOLT_MASK);
gnd_type >>= STATUS2_VBVOLT_SHIFT;
/* Offset of GND cable */
gnd_type |= MAX77843_MUIC_GND_USB_HOST;
cable_type = info->prev_gnd_type = gnd_type;
}
break;
default:
dev_err(info->dev, "Unknown cable group (%d)\n", group);
cable_type = -EINVAL;
break;
}
return cable_type;
}
static int max77843_muic_adc_gnd_handler(struct max77843_muic_info *info)
{
int ret, gnd_cable_type;
bool attached;
gnd_cable_type = max77843_muic_get_cable_type(info,
MAX77843_CABLE_GROUP_ADC_GND, &attached);
dev_dbg(info->dev, "external connector is %s (gnd:0x%02x)\n",
attached ? "attached" : "detached", gnd_cable_type);
switch (gnd_cable_type) {
case MAX77843_MUIC_GND_USB_HOST:
case MAX77843_MUIC_GND_USB_HOST_VB:
ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "USB-HOST", attached);
break;
case MAX77843_MUIC_GND_MHL_VB:
case MAX77843_MUIC_GND_MHL:
ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "MHL", attached);
break;
default:
dev_err(info->dev, "failed to detect %s accessory(gnd:0x%x)\n",
attached ? "attached" : "detached", gnd_cable_type);
return -EINVAL;
}
return 0;
}
static int max77843_muic_jig_handler(struct max77843_muic_info *info,
int cable_type, bool attached)
{
int ret;
dev_dbg(info->dev, "external connector is %s (adc:0x%02x)\n",
attached ? "attached" : "detached", cable_type);
switch (cable_type) {
case MAX77843_MUIC_ADC_FACTORY_MODE_USB_OFF:
ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "JIG-USB-OFF", attached);
break;
case MAX77843_MUIC_ADC_FACTORY_MODE_USB_ON:
ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "JIG-USB-ON", attached);
break;
case MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF:
ret = max77843_muic_set_path(info, CONTROL1_SW_UART, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "JIG-UART-OFF", attached);
break;
default:
ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
if (ret < 0)
return ret;
break;
}
return 0;
}
static int max77843_muic_adc_handler(struct max77843_muic_info *info)
{
int ret, cable_type;
bool attached;
cable_type = max77843_muic_get_cable_type(info,
MAX77843_CABLE_GROUP_ADC, &attached);
dev_dbg(info->dev,
"external connector is %s (adc:0x%02x, prev_adc:0x%x)\n",
attached ? "attached" : "detached", cable_type,
info->prev_cable_type);
switch (cable_type) {
case MAX77843_MUIC_ADC_GROUND:
ret = max77843_muic_adc_gnd_handler(info);
if (ret < 0)
return ret;
break;
case MAX77843_MUIC_ADC_FACTORY_MODE_USB_OFF:
case MAX77843_MUIC_ADC_FACTORY_MODE_USB_ON:
case MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF:
ret = max77843_muic_jig_handler(info, cable_type, attached);
if (ret < 0)
return ret;
break;
case MAX77843_MUIC_ADC_SEND_END_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S1_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S2_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S3_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S4_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S5_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S6_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S7_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S8_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S9_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S10_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S11_BUTTON:
case MAX77843_MUIC_ADC_REMOTE_S12_BUTTON:
case MAX77843_MUIC_ADC_RESERVED_ACC_1:
case MAX77843_MUIC_ADC_RESERVED_ACC_2:
case MAX77843_MUIC_ADC_RESERVED_ACC_3:
case MAX77843_MUIC_ADC_RESERVED_ACC_4:
case MAX77843_MUIC_ADC_RESERVED_ACC_5:
case MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE2:
case MAX77843_MUIC_ADC_PHONE_POWERED_DEV:
case MAX77843_MUIC_ADC_TTY_CONVERTER:
case MAX77843_MUIC_ADC_UART_CABLE:
case MAX77843_MUIC_ADC_CEA936A_TYPE1_CHG:
case MAX77843_MUIC_ADC_AV_CABLE_NOLOAD:
case MAX77843_MUIC_ADC_CEA936A_TYPE2_CHG:
case MAX77843_MUIC_ADC_FACTORY_MODE_UART_ON:
case MAX77843_MUIC_ADC_AUDIO_DEVICE_TYPE1:
case MAX77843_MUIC_ADC_OPEN:
dev_err(info->dev,
"accessory is %s but it isn't used (adc:0x%x)\n",
attached ? "attached" : "detached", cable_type);
return -EAGAIN;
default:
dev_err(info->dev,
"failed to detect %s accessory (adc:0x%x)\n",
attached ? "attached" : "detached", cable_type);
return -EINVAL;
}
return 0;
}
static int max77843_muic_chg_handler(struct max77843_muic_info *info)
{
int ret, chg_type, gnd_type;
bool attached;
chg_type = max77843_muic_get_cable_type(info,
MAX77843_CABLE_GROUP_CHG, &attached);
dev_dbg(info->dev,
"external connector is %s(chg_type:0x%x, prev_chg_type:0x%x)\n",
attached ? "attached" : "detached",
chg_type, info->prev_chg_type);
switch (chg_type) {
case MAX77843_MUIC_CHG_USB:
ret = max77843_muic_set_path(info, CONTROL1_SW_USB, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "USB", attached);
break;
case MAX77843_MUIC_CHG_DOWNSTREAM:
ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev,
"CHARGER-DOWNSTREAM", attached);
break;
case MAX77843_MUIC_CHG_DEDICATED:
ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "TA", attached);
break;
case MAX77843_MUIC_CHG_SPECIAL_500MA:
ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "SLOW-CHAREGER", attached);
break;
case MAX77843_MUIC_CHG_SPECIAL_1A:
ret = max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
if (ret < 0)
return ret;
extcon_set_cable_state(info->edev, "FAST-CHARGER", attached);
break;
case MAX77843_MUIC_CHG_GND:
gnd_type = max77843_muic_get_cable_type(info,
MAX77843_CABLE_GROUP_ADC_GND, &attached);
/* Charger cable on MHL accessory is attached or detached */
if (gnd_type == MAX77843_MUIC_GND_MHL_VB)
extcon_set_cable_state(info->edev, "MHL-TA", true);
else if (gnd_type == MAX77843_MUIC_GND_MHL)
extcon_set_cable_state(info->edev, "MHL-TA", false);
break;
case MAX77843_MUIC_CHG_NONE:
break;
default:
dev_err(info->dev,
"failed to detect %s accessory (chg_type:0x%x)\n",
attached ? "attached" : "detached", chg_type);
max77843_muic_set_path(info, CONTROL1_SW_OPEN, attached);
return -EINVAL;
}
return 0;
}
static void max77843_muic_irq_work(struct work_struct *work)
{
struct max77843_muic_info *info = container_of(work,
struct max77843_muic_info, irq_work);
struct max77843 *max77843 = info->max77843;
int ret = 0;
mutex_lock(&info->mutex);
ret = regmap_bulk_read(max77843->regmap_muic,
MAX77843_MUIC_REG_STATUS1, info->status,
MAX77843_MUIC_STATUS_NUM);
if (ret) {
dev_err(info->dev, "Cannot read STATUS registers\n");
mutex_unlock(&info->mutex);
return;
}
if (info->irq_adc) {
ret = max77843_muic_adc_handler(info);
if (ret)
dev_err(info->dev, "Unknown cable type\n");
info->irq_adc = false;
}
if (info->irq_chg) {
ret = max77843_muic_chg_handler(info);
if (ret)
dev_err(info->dev, "Unknown charger type\n");
info->irq_chg = false;
}
mutex_unlock(&info->mutex);
}
static irqreturn_t max77843_muic_irq_handler(int irq, void *data)
{
struct max77843_muic_info *info = data;
int i, irq_type = -1;
for (i = 0; i < ARRAY_SIZE(max77843_muic_irqs); i++)
if (irq == max77843_muic_irqs[i].virq)
irq_type = max77843_muic_irqs[i].irq;
switch (irq_type) {
case MAX77843_MUIC_IRQ_INT1_ADC:
case MAX77843_MUIC_IRQ_INT1_ADCERROR:
case MAX77843_MUIC_IRQ_INT1_ADC1K:
info->irq_adc = true;
break;
case MAX77843_MUIC_IRQ_INT2_CHGTYP:
case MAX77843_MUIC_IRQ_INT2_CHGDETRUN:
case MAX77843_MUIC_IRQ_INT2_DCDTMR:
case MAX77843_MUIC_IRQ_INT2_DXOVP:
case MAX77843_MUIC_IRQ_INT2_VBVOLT:
info->irq_chg = true;
break;
case MAX77843_MUIC_IRQ_INT3_VBADC:
case MAX77843_MUIC_IRQ_INT3_VDNMON:
case MAX77843_MUIC_IRQ_INT3_DNRES:
case MAX77843_MUIC_IRQ_INT3_MPNACK:
case MAX77843_MUIC_IRQ_INT3_MRXBUFOW:
case MAX77843_MUIC_IRQ_INT3_MRXTRF:
case MAX77843_MUIC_IRQ_INT3_MRXPERR:
case MAX77843_MUIC_IRQ_INT3_MRXRDY:
break;
default:
dev_err(info->dev, "Cannot recognize IRQ(%d)\n", irq_type);
break;
}
schedule_work(&info->irq_work);
return IRQ_HANDLED;
}
static void max77843_muic_detect_cable_wq(struct work_struct *work)
{
struct max77843_muic_info *info = container_of(to_delayed_work(work),
struct max77843_muic_info, wq_detcable);
struct max77843 *max77843 = info->max77843;
int chg_type, adc, ret;
bool attached;
mutex_lock(&info->mutex);
ret = regmap_bulk_read(max77843->regmap_muic,
MAX77843_MUIC_REG_STATUS1, info->status,
MAX77843_MUIC_STATUS_NUM);
if (ret) {
dev_err(info->dev, "Cannot read STATUS registers\n");
goto err_cable_wq;
}
adc = max77843_muic_get_cable_type(info,
MAX77843_CABLE_GROUP_ADC, &attached);
if (attached && adc != MAX77843_MUIC_ADC_OPEN) {
ret = max77843_muic_adc_handler(info);
if (ret < 0) {
dev_err(info->dev, "Cannot detect accessory\n");
goto err_cable_wq;
}
}
chg_type = max77843_muic_get_cable_type(info,
MAX77843_CABLE_GROUP_CHG, &attached);
if (attached && chg_type != MAX77843_MUIC_CHG_NONE) {
ret = max77843_muic_chg_handler(info);
if (ret < 0) {
dev_err(info->dev, "Cannot detect charger accessory\n");
goto err_cable_wq;
}
}
err_cable_wq:
mutex_unlock(&info->mutex);
}
static int max77843_muic_set_debounce_time(struct max77843_muic_info *info,
enum max77843_muic_adc_debounce_time time)
{
struct max77843 *max77843 = info->max77843;
int ret;
switch (time) {
case MAX77843_DEBOUNCE_TIME_5MS:
case MAX77843_DEBOUNCE_TIME_10MS:
case MAX77843_DEBOUNCE_TIME_25MS:
case MAX77843_DEBOUNCE_TIME_38_62MS:
ret = regmap_update_bits(max77843->regmap_muic,
MAX77843_MUIC_REG_CONTROL4,
MAX77843_MUIC_CONTROL4_ADCDBSET_MASK,
time << CONTROL4_ADCDBSET_SHIFT);
if (ret < 0) {
dev_err(info->dev, "Cannot write MUIC regmap\n");
return ret;
}
break;
default:
dev_err(info->dev, "Invalid ADC debounce time\n");
return -EINVAL;
}
return 0;
}
static int max77843_init_muic_regmap(struct max77843 *max77843)
{
int ret;
max77843->i2c_muic = i2c_new_dummy(max77843->i2c->adapter,
I2C_ADDR_MUIC);
if (!max77843->i2c_muic) {
dev_err(&max77843->i2c->dev,
"Cannot allocate I2C device for MUIC\n");
return -ENOMEM;
}
i2c_set_clientdata(max77843->i2c_muic, max77843);
max77843->regmap_muic = devm_regmap_init_i2c(max77843->i2c_muic,
&max77843_muic_regmap_config);
if (IS_ERR(max77843->regmap_muic)) {
ret = PTR_ERR(max77843->regmap_muic);
goto err_muic_i2c;
}
ret = regmap_add_irq_chip(max77843->regmap_muic, max77843->irq,
IRQF_TRIGGER_LOW | IRQF_ONESHOT | IRQF_SHARED,
0, &max77843_muic_irq_chip, &max77843->irq_data_muic);
if (ret < 0) {
dev_err(&max77843->i2c->dev, "Cannot add MUIC IRQ chip\n");
goto err_muic_i2c;
}
return 0;
err_muic_i2c:
i2c_unregister_device(max77843->i2c_muic);
return ret;
}
static int max77843_muic_probe(struct platform_device *pdev)
{
struct max77843 *max77843 = dev_get_drvdata(pdev->dev.parent);
struct max77843_muic_info *info;
unsigned int id;
int i, ret;
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
info->dev = &pdev->dev;
info->max77843 = max77843;
platform_set_drvdata(pdev, info);
mutex_init(&info->mutex);
/* Initialize i2c and regmap */
ret = max77843_init_muic_regmap(max77843);
if (ret) {
dev_err(&pdev->dev, "Failed to init MUIC regmap\n");
return ret;
}
/* Turn off auto detection configuration */
ret = regmap_update_bits(max77843->regmap_muic,
MAX77843_MUIC_REG_CONTROL4,
MAX77843_MUIC_CONTROL4_USBAUTO_MASK |
MAX77843_MUIC_CONTROL4_FCTAUTO_MASK,
CONTROL4_AUTO_DISABLE);
/* Initialize extcon device */
info->edev = devm_extcon_dev_allocate(&pdev->dev,
max77843_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(&pdev->dev, "Failed to allocate memory for extcon\n");
ret = -ENODEV;
goto err_muic_irq;
}
ret = devm_extcon_dev_register(&pdev->dev, info->edev);
if (ret) {
dev_err(&pdev->dev, "Failed to register extcon device\n");
goto err_muic_irq;
}
/* Set ADC debounce time */
max77843_muic_set_debounce_time(info, MAX77843_DEBOUNCE_TIME_25MS);
/* Set initial path for UART */
max77843_muic_set_path(info, CONTROL1_SW_UART, true);
/* Check revision number of MUIC device */
ret = regmap_read(max77843->regmap_muic, MAX77843_MUIC_REG_ID, &id);
if (ret < 0) {
dev_err(&pdev->dev, "Failed to read revision number\n");
goto err_muic_irq;
}
dev_info(info->dev, "MUIC device ID : 0x%x\n", id);
/* Support virtual irq domain for max77843 MUIC device */
INIT_WORK(&info->irq_work, max77843_muic_irq_work);
for (i = 0; i < ARRAY_SIZE(max77843_muic_irqs); i++) {
struct max77843_muic_irq *muic_irq = &max77843_muic_irqs[i];
unsigned int virq = 0;
virq = regmap_irq_get_virq(max77843->irq_data_muic,
muic_irq->irq);
if (virq <= 0) {
ret = -EINVAL;
goto err_muic_irq;
}
muic_irq->virq = virq;
ret = devm_request_threaded_irq(&pdev->dev, virq, NULL,
max77843_muic_irq_handler, IRQF_NO_SUSPEND,
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
"Failed to request irq (IRQ: %d, error: %d)\n",
muic_irq->irq, ret);
goto err_muic_irq;
}
}
/* Detect accessory after completing the initialization of platform */
INIT_DELAYED_WORK(&info->wq_detcable, max77843_muic_detect_cable_wq);
queue_delayed_work(system_power_efficient_wq,
&info->wq_detcable, msecs_to_jiffies(DELAY_MS_DEFAULT));
return 0;
err_muic_irq:
regmap_del_irq_chip(max77843->irq, max77843->irq_data_muic);
i2c_unregister_device(max77843->i2c_muic);
return ret;
}
static int max77843_muic_remove(struct platform_device *pdev)
{
struct max77843_muic_info *info = platform_get_drvdata(pdev);
struct max77843 *max77843 = info->max77843;
cancel_work_sync(&info->irq_work);
regmap_del_irq_chip(max77843->irq, max77843->irq_data_muic);
i2c_unregister_device(max77843->i2c_muic);
return 0;
}
static const struct platform_device_id max77843_muic_id[] = {
{ "max77843-muic", },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(platform, max77843_muic_id);
static struct platform_driver max77843_muic_driver = {
.driver = {
.name = "max77843-muic",
},
.probe = max77843_muic_probe,
.remove = max77843_muic_remove,
.id_table = max77843_muic_id,
};
static int __init max77843_muic_init(void)
{
return platform_driver_register(&max77843_muic_driver);
}
subsys_initcall(max77843_muic_init);
MODULE_DESCRIPTION("Maxim MAX77843 Extcon driver");
MODULE_AUTHOR("Jaewon Kim <jaewon02.kim@samsung.com>");
MODULE_LICENSE("GPL");

@@ -579,8 +579,6 @@ static void max8997_muic_irq_work(struct work_struct *work)
dev_err(info->dev, "failed to handle MUIC interrupt\n");
mutex_unlock(&info->mutex);
return;
}
static irqreturn_t max8997_muic_irq_handler(int irq, void *data)
@@ -689,8 +687,7 @@ static int max8997_muic_probe(struct platform_device *pdev)
muic_irq->name, info);
if (ret) {
dev_err(&pdev->dev,
"failed: irq request (IRQ: %d,"
" error :%d)\n",
"failed: irq request (IRQ: %d, error :%d)\n",
muic_irq->irq, ret);
goto err_irq;
}

@@ -582,10 +582,8 @@ static int rt8973a_muic_i2c_probe(struct i2c_client *i2c,
return -EINVAL;
info = devm_kzalloc(&i2c->dev, sizeof(*info), GFP_KERNEL);
if (!info) {
dev_err(&i2c->dev, "failed to allocate memory\n");
if (!info)
return -ENOMEM;
}
i2c_set_clientdata(i2c, info);
info->dev = &i2c->dev;
@@ -681,7 +679,7 @@ static int rt8973a_muic_i2c_remove(struct i2c_client *i2c)
return 0;
}
static struct of_device_id rt8973a_dt_match[] = {
static const struct of_device_id rt8973a_dt_match[] = {
{ .compatible = "richtek,rt8973a-muic" },
{ },
};

@@ -359,8 +359,8 @@ static unsigned int sm5502_muic_get_cable_type(struct sm5502_muic_info *info)
break;
default:
dev_dbg(info->dev,
"cannot identify the cable type: adc(0x%x) "
"dev_type1(0x%x)\n", adc, dev_type1);
"cannot identify the cable type: adc(0x%x)\n",
adc);
return -EINVAL;
};
break;
@@ -659,7 +659,7 @@ static int sm5502_muic_i2c_remove(struct i2c_client *i2c)
return 0;
}
static struct of_device_id sm5502_dt_match[] = {
static const struct of_device_id sm5502_dt_match[] = {
{ .compatible = "siliconmitus,sm5502-muic" },
{ },
};

@@ -0,0 +1,237 @@
/**
* drivers/extcon/extcon-usb-gpio.c - USB GPIO extcon driver
*
* Copyright (C) 2015 Texas Instruments Incorporated - http://www.ti.com
* Author: Roger Quadros <rogerq@ti.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/extcon.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_gpio.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
#define USB_GPIO_DEBOUNCE_MS 20 /* ms */
struct usb_extcon_info {
struct device *dev;
struct extcon_dev *edev;
struct gpio_desc *id_gpiod;
int id_irq;
unsigned long debounce_jiffies;
struct delayed_work wq_detcable;
};
/* List of detectable cables */
enum {
EXTCON_CABLE_USB = 0,
EXTCON_CABLE_USB_HOST,
EXTCON_CABLE_END,
};
static const char *usb_extcon_cable[] = {
[EXTCON_CABLE_USB] = "USB",
[EXTCON_CABLE_USB_HOST] = "USB-HOST",
NULL,
};
static void usb_extcon_detect_cable(struct work_struct *work)
{
int id;
struct usb_extcon_info *info = container_of(to_delayed_work(work),
struct usb_extcon_info,
wq_detcable);
/* check ID and update cable state */
id = gpiod_get_value_cansleep(info->id_gpiod);
if (id) {
/*
* ID = 1 means USB HOST cable detached.
* As we don't have event for USB peripheral cable attached,
* we simulate USB peripheral attach here.
*/
extcon_set_cable_state(info->edev,
usb_extcon_cable[EXTCON_CABLE_USB_HOST],
false);
extcon_set_cable_state(info->edev,
usb_extcon_cable[EXTCON_CABLE_USB],
true);
} else {
/*
* ID = 0 means USB HOST cable attached.
* As we don't have event for USB peripheral cable detached,
* we simulate USB peripheral detach here.
*/
extcon_set_cable_state(info->edev,
usb_extcon_cable[EXTCON_CABLE_USB],
false);
extcon_set_cable_state(info->edev,
usb_extcon_cable[EXTCON_CABLE_USB_HOST],
true);
}
}
static irqreturn_t usb_irq_handler(int irq, void *dev_id)
{
struct usb_extcon_info *info = dev_id;
queue_delayed_work(system_power_efficient_wq, &info->wq_detcable,
info->debounce_jiffies);
return IRQ_HANDLED;
}
static int usb_extcon_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct usb_extcon_info *info;
int ret;
if (!np)
return -EINVAL;
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
info->dev = dev;
info->id_gpiod = devm_gpiod_get(&pdev->dev, "id");
if (IS_ERR(info->id_gpiod)) {
dev_err(dev, "failed to get ID GPIO\n");
return PTR_ERR(info->id_gpiod);
}
ret = gpiod_set_debounce(info->id_gpiod,
USB_GPIO_DEBOUNCE_MS * 1000);
if (ret < 0)
info->debounce_jiffies = msecs_to_jiffies(USB_GPIO_DEBOUNCE_MS);
INIT_DELAYED_WORK(&info->wq_detcable, usb_extcon_detect_cable);
info->id_irq = gpiod_to_irq(info->id_gpiod);
if (info->id_irq < 0) {
dev_err(dev, "failed to get ID IRQ\n");
return info->id_irq;
}
ret = devm_request_threaded_irq(dev, info->id_irq, NULL,
usb_irq_handler,
IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
pdev->name, info);
if (ret < 0) {
dev_err(dev, "failed to request handler for ID IRQ\n");
return ret;
}
info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable);
if (IS_ERR(info->edev)) {
dev_err(dev, "failed to allocate extcon device\n");
return -ENOMEM;
}
ret = devm_extcon_dev_register(dev, info->edev);
if (ret < 0) {
dev_err(dev, "failed to register extcon device\n");
return ret;
}
platform_set_drvdata(pdev, info);
device_init_wakeup(dev, 1);
/* Perform initial detection */
usb_extcon_detect_cable(&info->wq_detcable.work);
return 0;
}
static int usb_extcon_remove(struct platform_device *pdev)
{
struct usb_extcon_info *info = platform_get_drvdata(pdev);
cancel_delayed_work_sync(&info->wq_detcable);
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int usb_extcon_suspend(struct device *dev)
{
struct usb_extcon_info *info = dev_get_drvdata(dev);
int ret = 0;
if (device_may_wakeup(dev)) {
ret = enable_irq_wake(info->id_irq);
if (ret)
return ret;
}
/*
* We don't want to process any IRQs after this point
* as GPIOs used behind I2C subsystem might not be
* accessible until resume completes. So disable IRQ.
*/
disable_irq(info->id_irq);
return ret;
}
static int usb_extcon_resume(struct device *dev)
{
struct usb_extcon_info *info = dev_get_drvdata(dev);
int ret = 0;
if (device_may_wakeup(dev)) {
ret = disable_irq_wake(info->id_irq);
if (ret)
return ret;
}
enable_irq(info->id_irq);
return ret;
}
#endif
static SIMPLE_DEV_PM_OPS(usb_extcon_pm_ops,
usb_extcon_suspend, usb_extcon_resume);
static const struct of_device_id usb_extcon_dt_match[] = {
{ .compatible = "linux,extcon-usb-gpio", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, usb_extcon_dt_match);
static struct platform_driver usb_extcon_driver = {
.probe = usb_extcon_probe,
.remove = usb_extcon_remove,
.driver = {
.name = "extcon-usb-gpio",
.pm = &usb_extcon_pm_ops,
.of_match_table = usb_extcon_dt_match,
},
};
module_platform_driver(usb_extcon_driver);
MODULE_AUTHOR("Roger Quadros <rogerq@ti.com>");
MODULE_DESCRIPTION("USB GPIO extcon driver");
MODULE_LICENSE("GPL v2");
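For reference, a minimal consumer sketch of the 4.1-era extcon API this driver exposes; the function name and the provider lookup string are hypothetical:

#include <linux/extcon.h>

/* Hypothetical consumer: look up the provider registered above and read
 * the simulated "USB-HOST" cable state. Returns 1 if attached, 0 if
 * detached, or a negative errno. */
static int my_usb_query_host(void)
{
	struct extcon_dev *edev = extcon_get_extcon_dev("extcon-usb-gpio.0");

	if (!edev)
		return -ENODEV;	/* provider not probed yet */

	return extcon_get_cable_state(edev, "USB-HOST");
}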

View File

@ -158,6 +158,7 @@ static ssize_t name_show(struct device *dev, struct device_attribute *attr,
/* Optional callback given by the user */
if (edev->print_name) {
int ret = edev->print_name(edev, buf);
if (ret >= 0)
return ret;
}
@ -444,6 +445,9 @@ int extcon_register_interest(struct extcon_specific_cable_nb *obj,
const char *extcon_name, const char *cable_name,
struct notifier_block *nb)
{
unsigned long flags;
int ret;
if (!obj || !cable_name || !nb)
return -EINVAL;
@ -461,8 +465,11 @@ int extcon_register_interest(struct extcon_specific_cable_nb *obj,
obj->internal_nb.notifier_call = _call_per_cable;
return raw_notifier_chain_register(&obj->edev->nh,
spin_lock_irqsave(&obj->edev->lock, flags);
ret = raw_notifier_chain_register(&obj->edev->nh,
&obj->internal_nb);
spin_unlock_irqrestore(&obj->edev->lock, flags);
return ret;
} else {
struct class_dev_iter iter;
struct extcon_dev *extd;
@ -495,10 +502,17 @@ EXPORT_SYMBOL_GPL(extcon_register_interest);
*/
int extcon_unregister_interest(struct extcon_specific_cable_nb *obj)
{
unsigned long flags;
int ret;
if (!obj)
return -EINVAL;
return raw_notifier_chain_unregister(&obj->edev->nh, &obj->internal_nb);
spin_lock_irqsave(&obj->edev->lock, flags);
ret = raw_notifier_chain_unregister(&obj->edev->nh, &obj->internal_nb);
spin_unlock_irqrestore(&obj->edev->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(extcon_unregister_interest);
@ -515,7 +529,14 @@ EXPORT_SYMBOL_GPL(extcon_unregister_interest);
int extcon_register_notifier(struct extcon_dev *edev,
struct notifier_block *nb)
{
return raw_notifier_chain_register(&edev->nh, nb);
unsigned long flags;
int ret;
spin_lock_irqsave(&edev->lock, flags);
ret = raw_notifier_chain_register(&edev->nh, nb);
spin_unlock_irqrestore(&edev->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(extcon_register_notifier);
@ -527,7 +548,14 @@ EXPORT_SYMBOL_GPL(extcon_register_notifier);
int extcon_unregister_notifier(struct extcon_dev *edev,
struct notifier_block *nb)
{
return raw_notifier_chain_unregister(&edev->nh, nb);
unsigned long flags;
int ret;
spin_lock_irqsave(&edev->lock, flags);
ret = raw_notifier_chain_unregister(&edev->nh, nb);
spin_unlock_irqrestore(&edev->lock, flags);
return ret;
}
EXPORT_SYMBOL_GPL(extcon_unregister_notifier);
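As a usage note, a minimal sketch (hypothetical names; the extcon device name is illustrative) of the interest API whose notifier chain the hunks above now serialize under edev->lock:

#include <linux/extcon.h>
#include <linux/notifier.h>

static int my_host_event(struct notifier_block *nb, unsigned long event,
			 void *ptr)
{
	/* event carries the new state of the matched cable */
	return NOTIFY_OK;
}

static struct notifier_block my_host_nb = {
	.notifier_call = my_host_event,
};

static struct extcon_specific_cable_nb my_host_cable;

/* Typically called from a consumer driver's probe: */
static int my_register_host_interest(void)
{
	return extcon_register_interest(&my_host_cable, "extcon-usb-gpio.0",
					"USB-HOST", &my_host_nb);
}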

View File

@ -71,7 +71,8 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
struct vmbus_channel_msginfo *open_info = NULL;
void *in, *out;
unsigned long flags;
int ret, t, err = 0;
int ret, err = 0;
unsigned long t;
spin_lock_irqsave(&newchannel->lock, flags);
if (newchannel->state == CHANNEL_OPEN_STATE) {
@ -89,9 +90,10 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
out = (void *)__get_free_pages(GFP_KERNEL|__GFP_ZERO,
get_order(send_ringbuffer_size + recv_ringbuffer_size));
if (!out)
return -ENOMEM;
if (!out) {
err = -ENOMEM;
goto error0;
}
in = (void *)((unsigned long)out + send_ringbuffer_size);
@ -135,7 +137,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
GFP_KERNEL);
if (!open_info) {
err = -ENOMEM;
goto error0;
goto error_gpadl;
}
init_completion(&open_info->waitevent);
@ -151,7 +153,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
if (userdatalen > MAX_USER_DEFINED_BYTES) {
err = -EINVAL;
goto error0;
goto error_gpadl;
}
if (userdatalen)
@ -195,10 +197,14 @@ error1:
list_del(&open_info->msglistentry);
spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
error_gpadl:
vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
error0:
free_pages((unsigned long)out,
get_order(send_ringbuffer_size + recv_ringbuffer_size));
kfree(open_info);
newchannel->state = CHANNEL_OPEN_STATE;
return err;
}
EXPORT_SYMBOL_GPL(vmbus_open);
@ -534,6 +540,12 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
free_pages((unsigned long)channel->ringbuffer_pages,
get_order(channel->ringbuffer_pagecount * PAGE_SIZE));
/*
* If the channel has been rescinded; process device removal.
*/
if (channel->rescind)
hv_process_channel_removal(channel,
channel->offermsg.child_relid);
return ret;
}
@ -569,23 +581,9 @@ void vmbus_close(struct vmbus_channel *channel)
}
EXPORT_SYMBOL_GPL(vmbus_close);
/**
* vmbus_sendpacket() - Send the specified buffer on the given channel
* @channel: Pointer to vmbus_channel structure.
* @buffer: Pointer to the buffer containing the data to send.
* @bufferlen: Maximum size of what the buffer will hold
* @requestid: Identifier of the request
* @type: Type of packet that is being sent e.g. negotiate, time
* packet etc.
*
* Sends data in @buffer directly to hyper-v via the vmbus.
* This will send the data unparsed to hyper-v.
*
* Mainly used by Hyper-V drivers.
*/
int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
u32 bufferlen, u64 requestid,
enum vmbus_packet_type type, u32 flags)
enum vmbus_packet_type type, u32 flags, bool kick_q)
{
struct vmpacket_descriptor desc;
u32 packetlen = sizeof(struct vmpacket_descriptor) + bufferlen;
@ -613,21 +611,61 @@ int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
if (ret == 0 && signal)
/*
* Signalling the host is conditional on many factors:
* 1. The ring state changed from being empty to non-empty.
* This is tracked by the variable "signal".
* 2. The variable kick_q tracks if more data will be placed
* on the ring. We will not signal if more data is
* to be placed.
*
* If we cannot write to the ring-buffer, signal the host
* even if we may not have written anything. This is a rare
* enough condition that it should not matter.
*/
if (((ret == 0) && kick_q && signal) || (ret))
vmbus_setevent(channel);
return ret;
}
EXPORT_SYMBOL(vmbus_sendpacket_ctl);
/**
* vmbus_sendpacket() - Send the specified buffer on the given channel
* @channel: Pointer to vmbus_channel structure.
* @buffer: Pointer to the buffer containing the data to send.
* @bufferlen: Maximum size of what the buffer will hold
* @requestid: Identifier of the request
* @type: Type of packet that is being sent e.g. negotiate, time
* packet etc.
*
* Sends data in @buffer directly to hyper-v via the vmbus.
* This will send the data unparsed to hyper-v.
*
* Mainly used by Hyper-V drivers.
*/
int vmbus_sendpacket(struct vmbus_channel *channel, void *buffer,
u32 bufferlen, u64 requestid,
enum vmbus_packet_type type, u32 flags)
{
return vmbus_sendpacket_ctl(channel, buffer, bufferlen, requestid,
type, flags, true);
}
EXPORT_SYMBOL(vmbus_sendpacket);
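/*
 * Note that vmbus_sendpacket() is now a thin wrapper passing kick_q = true
 * to vmbus_sendpacket_ctl(), so existing callers keep the old
 * "signal whenever the ring went non-empty" behaviour; the pagebuffer
 * variant below follows the same pattern.
 */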
/*
* vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
* packets using a GPADL Direct packet type.
* vmbus_sendpacket_pagebuffer_ctl - Send a range of single-page buffer
* packets using a GPADL Direct packet type. This interface allows you
* to control notifying the host. This will be useful for sending
* batched data. Also the sender can control the send flags
* explicitly.
*/
int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
struct hv_page_buffer pagebuffers[],
u32 pagecount, void *buffer, u32 bufferlen,
u64 requestid)
u64 requestid,
u32 flags,
bool kick_q)
{
int ret;
int i;
@ -655,7 +693,7 @@ int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
/* Setup the descriptor */
desc.type = VM_PKT_DATA_USING_GPA_DIRECT;
desc.flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
desc.flags = flags;
desc.dataoffset8 = descsize >> 3; /* in 8-byte granularity */
desc.length8 = (u16)(packetlen_aligned >> 3);
desc.transactionid = requestid;
@ -676,11 +714,40 @@ int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3, &signal);
if (ret == 0 && signal)
/*
* Signalling the host is conditional on many factors:
* 1. The ring state changed from being empty to non-empty.
* This is tracked by the variable "signal".
* 2. The variable kick_q tracks if more data will be placed
* on the ring. We will not signal if more data is
* to be placed.
*
* If we cannot write to the ring-buffer, signal the host
* even if we may not have written anything. This is a rare
* enough condition that it should not matter.
*/
if (((ret == 0) && kick_q && signal) || (ret))
vmbus_setevent(channel);
return ret;
}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer_ctl);
/*
* vmbus_sendpacket_pagebuffer - Send a range of single-page buffer
* packets using a GPADL Direct packet type.
*/
int vmbus_sendpacket_pagebuffer(struct vmbus_channel *channel,
struct hv_page_buffer pagebuffers[],
u32 pagecount, void *buffer, u32 bufferlen,
u64 requestid)
{
u32 flags = VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED;
return vmbus_sendpacket_pagebuffer_ctl(channel, pagebuffers, pagecount,
buffer, bufferlen, requestid,
flags, true);
}
EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer);
/*

View File

@ -32,12 +32,6 @@
#include "hyperv_vmbus.h"
struct vmbus_channel_message_table_entry {
enum vmbus_channel_message_type message_type;
void (*message_handler)(struct vmbus_channel_message_header *msg);
};
/**
* vmbus_prep_negotiate_resp() - Create default response for Hyper-V Negotiate message
* @icmsghdrp: Pointer to msg header structure
@ -139,54 +133,29 @@ EXPORT_SYMBOL_GPL(vmbus_prep_negotiate_resp);
*/
static struct vmbus_channel *alloc_channel(void)
{
static atomic_t chan_num = ATOMIC_INIT(0);
struct vmbus_channel *channel;
channel = kzalloc(sizeof(*channel), GFP_ATOMIC);
if (!channel)
return NULL;
channel->id = atomic_inc_return(&chan_num);
spin_lock_init(&channel->inbound_lock);
spin_lock_init(&channel->lock);
INIT_LIST_HEAD(&channel->sc_list);
INIT_LIST_HEAD(&channel->percpu_list);
channel->controlwq = create_workqueue("hv_vmbus_ctl");
if (!channel->controlwq) {
kfree(channel);
return NULL;
}
return channel;
}
/*
* release_channel - Release the vmbus channel object itself
*/
static void release_channel(struct work_struct *work)
{
struct vmbus_channel *channel = container_of(work,
struct vmbus_channel,
work);
destroy_workqueue(channel->controlwq);
kfree(channel);
}
/*
* free_channel - Release the resources used by the vmbus channel object
*/
static void free_channel(struct vmbus_channel *channel)
{
/*
* We have to release the channel's workqueue/thread in the vmbus's
* workqueue/thread context
* ie we can't destroy ourselves.
*/
INIT_WORK(&channel->work, release_channel);
queue_work(vmbus_connection.work_queue, &channel->work);
kfree(channel);
}
static void percpu_channel_enq(void *arg)
@ -204,33 +173,21 @@ static void percpu_channel_deq(void *arg)
list_del(&channel->percpu_list);
}
/*
* vmbus_process_rescind_offer -
* Rescind the offer by initiating a device removal
*/
static void vmbus_process_rescind_offer(struct work_struct *work)
void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid)
{
struct vmbus_channel *channel = container_of(work,
struct vmbus_channel,
work);
struct vmbus_channel_relid_released msg;
unsigned long flags;
struct vmbus_channel *primary_channel;
struct vmbus_channel_relid_released msg;
struct device *dev;
if (channel->device_obj) {
dev = get_device(&channel->device_obj->device);
if (dev) {
vmbus_device_unregister(channel->device_obj);
put_device(dev);
}
}
memset(&msg, 0, sizeof(struct vmbus_channel_relid_released));
msg.child_relid = channel->offermsg.child_relid;
msg.child_relid = relid;
msg.header.msgtype = CHANNELMSG_RELID_RELEASED;
vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released));
if (channel == NULL)
return;
if (channel->target_cpu != get_cpu()) {
put_cpu();
smp_call_function_single(channel->target_cpu,
@ -259,7 +216,6 @@ void vmbus_free_channels(void)
list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
vmbus_device_unregister(channel->device_obj);
kfree(channel->device_obj);
free_channel(channel);
}
}
@ -268,15 +224,11 @@ void vmbus_free_channels(void)
* vmbus_process_offer - Process the offer by creating a channel/device
* associated with this offer
*/
static void vmbus_process_offer(struct work_struct *work)
static void vmbus_process_offer(struct vmbus_channel *newchannel)
{
struct vmbus_channel *newchannel = container_of(work,
struct vmbus_channel,
work);
struct vmbus_channel *channel;
bool fnew = true;
bool enq = false;
int ret;
unsigned long flags;
/* Make sure this is a new offer */
@ -335,10 +287,11 @@ static void vmbus_process_offer(struct work_struct *work)
}
newchannel->state = CHANNEL_OPEN_STATE;
channel->num_sc++;
if (channel->sc_creation_callback != NULL)
channel->sc_creation_callback(newchannel);
goto done_init_rescind;
return;
}
goto err_free_chan;
@ -361,33 +314,35 @@ static void vmbus_process_offer(struct work_struct *work)
&newchannel->offermsg.offer.if_instance,
newchannel);
if (!newchannel->device_obj)
goto err_free_chan;
goto err_deq_chan;
/*
* Add the new device to the bus. This will kick off device-driver
* binding which eventually invokes the device driver's AddDevice()
* method.
*/
ret = vmbus_device_register(newchannel->device_obj);
if (ret != 0) {
if (vmbus_device_register(newchannel->device_obj) != 0) {
pr_err("unable to add child device object (relid %d)\n",
newchannel->offermsg.child_relid);
spin_lock_irqsave(&vmbus_connection.channel_lock, flags);
list_del(&newchannel->listentry);
spin_unlock_irqrestore(&vmbus_connection.channel_lock, flags);
newchannel->offermsg.child_relid);
kfree(newchannel->device_obj);
goto err_free_chan;
goto err_deq_chan;
}
done_init_rescind:
spin_lock_irqsave(&newchannel->lock, flags);
/* The next possible work is rescind handling */
INIT_WORK(&newchannel->work, vmbus_process_rescind_offer);
/* Check if rescind offer was already received */
if (newchannel->rescind)
queue_work(newchannel->controlwq, &newchannel->work);
spin_unlock_irqrestore(&newchannel->lock, flags);
return;
err_deq_chan:
spin_lock_irqsave(&vmbus_connection.channel_lock, flags);
list_del(&newchannel->listentry);
spin_unlock_irqrestore(&vmbus_connection.channel_lock, flags);
if (newchannel->target_cpu != get_cpu()) {
put_cpu();
smp_call_function_single(newchannel->target_cpu,
percpu_channel_deq, newchannel, true);
} else {
percpu_channel_deq(newchannel);
put_cpu();
}
err_free_chan:
free_channel(newchannel);
}
@ -411,6 +366,8 @@ static const struct hv_vmbus_device_id hp_devs[] = {
{ HV_SCSI_GUID, },
/* Network */
{ HV_NIC_GUID, },
/* NetworkDirect Guest RDMA */
{ HV_ND_GUID, },
};
@ -511,8 +468,7 @@ static void vmbus_onoffer(struct vmbus_channel_message_header *hdr)
newchannel->monitor_grp = (u8)offer->monitorid / 32;
newchannel->monitor_bit = (u8)offer->monitorid % 32;
INIT_WORK(&newchannel->work, vmbus_process_offer);
queue_work(newchannel->controlwq, &newchannel->work);
vmbus_process_offer(newchannel);
}
/*
@ -525,28 +481,34 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
struct vmbus_channel_rescind_offer *rescind;
struct vmbus_channel *channel;
unsigned long flags;
struct device *dev;
rescind = (struct vmbus_channel_rescind_offer *)hdr;
channel = relid2channel(rescind->child_relid);
if (channel == NULL)
/* Just return here, no channel found */
if (channel == NULL) {
hv_process_channel_removal(NULL, rescind->child_relid);
return;
}
spin_lock_irqsave(&channel->lock, flags);
channel->rescind = true;
/*
* channel->work.func != vmbus_process_rescind_offer means we are still
* processing offer request and the rescind offer processing should be
* postponed. It will be done at the very end of vmbus_process_offer()
* as rescind flag is being checked there.
*/
if (channel->work.func == vmbus_process_rescind_offer)
/* work is initialized for vmbus_process_rescind_offer() from
* vmbus_process_offer() where the channel got created */
queue_work(channel->controlwq, &channel->work);
spin_unlock_irqrestore(&channel->lock, flags);
if (channel->device_obj) {
/*
* We will have to unregister this device from the
* driver core.
*/
dev = get_device(&channel->device_obj->device);
if (dev) {
vmbus_device_unregister(channel->device_obj);
put_device(dev);
}
} else {
hv_process_channel_removal(channel,
channel->offermsg.child_relid);
}
}
/*
@ -731,25 +693,25 @@ static void vmbus_onversion_response(
}
/* Channel message dispatch table */
static struct vmbus_channel_message_table_entry
struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT] = {
{CHANNELMSG_INVALID, NULL},
{CHANNELMSG_OFFERCHANNEL, vmbus_onoffer},
{CHANNELMSG_RESCIND_CHANNELOFFER, vmbus_onoffer_rescind},
{CHANNELMSG_REQUESTOFFERS, NULL},
{CHANNELMSG_ALLOFFERS_DELIVERED, vmbus_onoffers_delivered},
{CHANNELMSG_OPENCHANNEL, NULL},
{CHANNELMSG_OPENCHANNEL_RESULT, vmbus_onopen_result},
{CHANNELMSG_CLOSECHANNEL, NULL},
{CHANNELMSG_GPADL_HEADER, NULL},
{CHANNELMSG_GPADL_BODY, NULL},
{CHANNELMSG_GPADL_CREATED, vmbus_ongpadl_created},
{CHANNELMSG_GPADL_TEARDOWN, NULL},
{CHANNELMSG_GPADL_TORNDOWN, vmbus_ongpadl_torndown},
{CHANNELMSG_RELID_RELEASED, NULL},
{CHANNELMSG_INITIATE_CONTACT, NULL},
{CHANNELMSG_VERSION_RESPONSE, vmbus_onversion_response},
{CHANNELMSG_UNLOAD, NULL},
{CHANNELMSG_INVALID, 0, NULL},
{CHANNELMSG_OFFERCHANNEL, 0, vmbus_onoffer},
{CHANNELMSG_RESCIND_CHANNELOFFER, 0, vmbus_onoffer_rescind},
{CHANNELMSG_REQUESTOFFERS, 0, NULL},
{CHANNELMSG_ALLOFFERS_DELIVERED, 1, vmbus_onoffers_delivered},
{CHANNELMSG_OPENCHANNEL, 0, NULL},
{CHANNELMSG_OPENCHANNEL_RESULT, 1, vmbus_onopen_result},
{CHANNELMSG_CLOSECHANNEL, 0, NULL},
{CHANNELMSG_GPADL_HEADER, 0, NULL},
{CHANNELMSG_GPADL_BODY, 0, NULL},
{CHANNELMSG_GPADL_CREATED, 1, vmbus_ongpadl_created},
{CHANNELMSG_GPADL_TEARDOWN, 0, NULL},
{CHANNELMSG_GPADL_TORNDOWN, 1, vmbus_ongpadl_torndown},
{CHANNELMSG_RELID_RELEASED, 0, NULL},
{CHANNELMSG_INITIATE_CONTACT, 0, NULL},
{CHANNELMSG_VERSION_RESPONSE, 1, vmbus_onversion_response},
{CHANNELMSG_UNLOAD, 0, NULL},
};
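/*
 * The new middle column is the handler_type from the
 * vmbus_message_handler_type enum added to hyperv_vmbus.h below:
 * 0 (VMHT_BLOCKING) handlers may sleep and are deferred to the
 * connection workqueue, while 1 (VMHT_NON_BLOCKING) handlers run
 * directly in the message-dispatch tasklet.
 */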
/*
@ -787,7 +749,7 @@ int vmbus_request_offers(void)
{
struct vmbus_channel_message_header *msg;
struct vmbus_channel_msginfo *msginfo;
int ret, t;
int ret;
msginfo = kmalloc(sizeof(*msginfo) +
sizeof(struct vmbus_channel_message_header),
@ -795,8 +757,6 @@ int vmbus_request_offers(void)
if (!msginfo)
return -ENOMEM;
init_completion(&msginfo->waitevent);
msg = (struct vmbus_channel_message_header *)msginfo->msg;
msg->msgtype = CHANNELMSG_REQUESTOFFERS;
@ -810,14 +770,6 @@ int vmbus_request_offers(void)
goto cleanup;
}
t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
if (t == 0) {
ret = -ETIMEDOUT;
goto cleanup;
}
cleanup:
kfree(msginfo);
@ -826,9 +778,8 @@ cleanup:
/*
* Retrieve the (sub) channel on which to send an outgoing request.
* When a primary channel has multiple sub-channels, we choose a
* channel whose VCPU binding is closest to the VCPU on which
* this call is being made.
* When a primary channel has multiple sub-channels, we try to
* distribute the load equally amongst all available channels.
*/
struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary)
{
@ -836,11 +787,19 @@ struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary)
int cur_cpu;
struct vmbus_channel *cur_channel;
struct vmbus_channel *outgoing_channel = primary;
int cpu_distance, new_cpu_distance;
int next_channel;
int i = 1;
if (list_empty(&primary->sc_list))
return outgoing_channel;
next_channel = primary->next_oc++;
if (next_channel > (primary->num_sc)) {
primary->next_oc = 0;
return outgoing_channel;
}
cur_cpu = hv_context.vp_index[get_cpu()];
put_cpu();
list_for_each_safe(cur, tmp, &primary->sc_list) {
@ -851,18 +810,10 @@ struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary)
if (cur_channel->target_vp == cur_cpu)
return cur_channel;
cpu_distance = ((outgoing_channel->target_vp > cur_cpu) ?
(outgoing_channel->target_vp - cur_cpu) :
(cur_cpu - outgoing_channel->target_vp));
if (i == next_channel)
return cur_channel;
new_cpu_distance = ((cur_channel->target_vp > cur_cpu) ?
(cur_channel->target_vp - cur_cpu) :
(cur_cpu - cur_channel->target_vp));
if (cpu_distance < new_cpu_distance)
continue;
outgoing_channel = cur_channel;
i++;
}
return outgoing_channel;
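The per-primary next_oc counter is a simple round-robin cursor: each call hands out the next sub-channel in sc_list (or the channel bound to the current VCPU if one matches), and once the cursor runs past num_sc the primary channel itself is returned and the cursor resets, so outgoing requests are spread across the primary and all sub-channels instead of being pinned to the nearest VCPU.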

View File

@ -216,10 +216,21 @@ int vmbus_connect(void)
cleanup:
pr_err("Unable to connect to host\n");
vmbus_connection.conn_state = DISCONNECTED;
if (vmbus_connection.work_queue)
vmbus_connection.conn_state = DISCONNECTED;
vmbus_disconnect();
kfree(msginfo);
return ret;
}
void vmbus_disconnect(void)
{
if (vmbus_connection.work_queue) {
drain_workqueue(vmbus_connection.work_queue);
destroy_workqueue(vmbus_connection.work_queue);
}
if (vmbus_connection.int_page) {
free_pages((unsigned long)vmbus_connection.int_page, 0);
@ -230,10 +241,6 @@ cleanup:
free_pages((unsigned long)vmbus_connection.monitor_pages[1], 0);
vmbus_connection.monitor_pages[0] = NULL;
vmbus_connection.monitor_pages[1] = NULL;
kfree(msginfo);
return ret;
}
/*
@ -311,10 +318,8 @@ static void process_chn_event(u32 relid)
*/
channel = pcpu_relid2channel(relid);
if (!channel) {
pr_err("channel not found for relid - %u\n", relid);
if (!channel)
return;
}
/*
* A channel once created is persistent even when there
@ -349,10 +354,7 @@ static void process_chn_event(u32 relid)
else
bytes_to_read = 0;
} while (read_state && (bytes_to_read != 0));
} else {
pr_err("no channel callback for relid - %u\n", relid);
}
}
/*
@ -420,6 +422,7 @@ int vmbus_post_msg(void *buffer, size_t buflen)
union hv_connection_id conn_id;
int ret = 0;
int retries = 0;
u32 msec = 1;
conn_id.asu32 = 0;
conn_id.u.id = VMBUS_MESSAGE_CONNECTION_ID;
@ -429,13 +432,20 @@ int vmbus_post_msg(void *buffer, size_t buflen)
* insufficient resources. Retry the operation a couple of
* times before giving up.
*/
while (retries < 10) {
while (retries < 20) {
ret = hv_post_message(conn_id, 1, buffer, buflen);
switch (ret) {
case HV_STATUS_INVALID_CONNECTION_ID:
/*
* We could get this if we send messages too
* frequently.
*/
ret = -EAGAIN;
break;
case HV_STATUS_INSUFFICIENT_MEMORY:
case HV_STATUS_INSUFFICIENT_BUFFERS:
ret = -ENOMEM;
case -ENOMEM:
break;
case HV_STATUS_SUCCESS:
return ret;
@ -445,7 +455,9 @@ int vmbus_post_msg(void *buffer, size_t buflen)
}
retries++;
msleep(100);
msleep(msec);
if (msec < 2048)
msec *= 2;
}
return ret;
}
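With msec starting at 1 and doubling up to the 2048 ms cap, a fully failing sequence sleeps 1 + 2 + ... + 1024 ms over the first eleven retries and 2048 ms for each of the remaining nine, so vmbus_post_msg() now backs off for roughly 20.5 seconds in the worst case instead of the previous flat 100 ms per retry (about one second total).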

View File

@ -312,7 +312,11 @@ static void hv_init_clockevent_device(struct clock_event_device *dev, int cpu)
dev->features = CLOCK_EVT_FEAT_ONESHOT;
dev->cpumask = cpumask_of(cpu);
dev->rating = 1000;
dev->owner = THIS_MODULE;
/*
* Avoid setting dev->owner = THIS_MODULE deliberately, as doing so will
* result in clockevents_config_and_register() taking additional
* references to the hv_vmbus module making it impossible to unload.
*/
dev->set_mode = hv_ce_setmode;
dev->set_next_event = hv_ce_set_next_event;
@ -469,6 +473,20 @@ void hv_synic_init(void *arg)
return;
}
/*
* hv_synic_clockevents_cleanup - Cleanup clockevent devices
*/
void hv_synic_clockevents_cleanup(void)
{
int cpu;
if (!(ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE))
return;
for_each_online_cpu(cpu)
clockevents_unbind_device(hv_context.clk_evt[cpu], cpu);
}
/*
* hv_synic_cleanup - Cleanup routine for hv_synic_init().
*/
@ -477,11 +495,17 @@ void hv_synic_cleanup(void *arg)
union hv_synic_sint shared_sint;
union hv_synic_simp simp;
union hv_synic_siefp siefp;
union hv_synic_scontrol sctrl;
int cpu = smp_processor_id();
if (!hv_context.synic_initialized)
return;
/* Turn off clockevent device */
if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE)
hv_ce_setmode(CLOCK_EVT_MODE_SHUTDOWN,
hv_context.clk_evt[cpu]);
rdmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
shared_sint.masked = 1;
@ -502,6 +526,10 @@ void hv_synic_cleanup(void *arg)
wrmsrl(HV_X64_MSR_SIEFP, siefp.as_uint64);
free_page((unsigned long)hv_context.synic_message_page[cpu]);
free_page((unsigned long)hv_context.synic_event_page[cpu]);
/* Disable the global synic bit */
rdmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
sctrl.enable = 0;
wrmsrl(HV_X64_MSR_SCONTROL, sctrl.as_uint64);
hv_synic_free_cpu(cpu);
}

View File

@ -428,14 +428,13 @@ struct dm_info_msg {
* currently hot added. We hot add in multiples of 128M
* chunks; it is possible that we may not be able to bring
* online all the pages in the region. The range
* covered_start_pfn : covered_end_pfn defines the pages that can
* covered_end_pfn defines the pages that can
* be brought online.
*/
struct hv_hotadd_state {
struct list_head list;
unsigned long start_pfn;
unsigned long covered_start_pfn;
unsigned long covered_end_pfn;
unsigned long ha_end_pfn;
unsigned long end_pfn;
@ -503,6 +502,8 @@ struct hv_dynmem_device {
* Number of pages we have currently ballooned out.
*/
unsigned int num_pages_ballooned;
unsigned int num_pages_onlined;
unsigned int num_pages_added;
/*
* State to manage the ballooning (up) operation.
@ -534,7 +535,6 @@ struct hv_dynmem_device {
struct task_struct *thread;
struct mutex ha_region_mutex;
struct completion waiter_event;
/*
* A list of hot-add regions.
@ -554,46 +554,32 @@ static struct hv_dynmem_device dm_device;
static void post_status(struct hv_dynmem_device *dm);
#ifdef CONFIG_MEMORY_HOTPLUG
static void acquire_region_mutex(bool trylock)
{
if (trylock) {
reinit_completion(&dm_device.waiter_event);
while (!mutex_trylock(&dm_device.ha_region_mutex))
wait_for_completion(&dm_device.waiter_event);
} else {
mutex_lock(&dm_device.ha_region_mutex);
}
}
static void release_region_mutex(bool trylock)
{
if (trylock) {
mutex_unlock(&dm_device.ha_region_mutex);
} else {
mutex_unlock(&dm_device.ha_region_mutex);
complete(&dm_device.waiter_event);
}
}
static int hv_memory_notifier(struct notifier_block *nb, unsigned long val,
void *v)
{
struct memory_notify *mem = (struct memory_notify *)v;
switch (val) {
case MEM_GOING_ONLINE:
acquire_region_mutex(true);
mutex_lock(&dm_device.ha_region_mutex);
break;
case MEM_ONLINE:
dm_device.num_pages_onlined += mem->nr_pages;
case MEM_CANCEL_ONLINE:
release_region_mutex(true);
mutex_unlock(&dm_device.ha_region_mutex);
if (dm_device.ha_waiting) {
dm_device.ha_waiting = false;
complete(&dm_device.ol_waitevent);
}
break;
case MEM_GOING_OFFLINE:
case MEM_OFFLINE:
mutex_lock(&dm_device.ha_region_mutex);
dm_device.num_pages_onlined -= mem->nr_pages;
mutex_unlock(&dm_device.ha_region_mutex);
break;
case MEM_GOING_OFFLINE:
case MEM_CANCEL_OFFLINE:
break;
}
@ -646,7 +632,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
init_completion(&dm_device.ol_waitevent);
dm_device.ha_waiting = true;
release_region_mutex(false);
mutex_unlock(&dm_device.ha_region_mutex);
nid = memory_add_physaddr_to_nid(PFN_PHYS(start_pfn));
ret = add_memory(nid, PFN_PHYS((start_pfn)),
(HA_CHUNK << PAGE_SHIFT));
@ -665,6 +651,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
}
has->ha_end_pfn -= HA_CHUNK;
has->covered_end_pfn -= processed_pfn;
mutex_lock(&dm_device.ha_region_mutex);
break;
}
@ -675,7 +662,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
* have not been "onlined" within the allowed time.
*/
wait_for_completion_timeout(&dm_device.ol_waitevent, 5*HZ);
acquire_region_mutex(false);
mutex_lock(&dm_device.ha_region_mutex);
post_status(&dm_device);
}
@ -691,8 +678,7 @@ static void hv_online_page(struct page *pg)
list_for_each(cur, &dm_device.ha_region_list) {
has = list_entry(cur, struct hv_hotadd_state, list);
cur_start_pgp = (unsigned long)
pfn_to_page(has->covered_start_pfn);
cur_start_pgp = (unsigned long)pfn_to_page(has->start_pfn);
cur_end_pgp = (unsigned long)pfn_to_page(has->covered_end_pfn);
if (((unsigned long)pg >= cur_start_pgp) &&
@ -704,7 +690,6 @@ static void hv_online_page(struct page *pg)
__online_page_set_limits(pg);
__online_page_increment_counters(pg);
__online_page_free(pg);
has->covered_start_pfn++;
}
}
}
@ -748,10 +733,9 @@ static bool pfn_covered(unsigned long start_pfn, unsigned long pfn_cnt)
* is, update it.
*/
if (has->covered_end_pfn != start_pfn) {
if (has->covered_end_pfn != start_pfn)
has->covered_end_pfn = start_pfn;
has->covered_start_pfn = start_pfn;
}
return true;
}
@ -794,9 +778,18 @@ static unsigned long handle_pg_range(unsigned long pg_start,
pgs_ol = has->ha_end_pfn - start_pfn;
if (pgs_ol > pfn_cnt)
pgs_ol = pfn_cnt;
hv_bring_pgs_online(start_pfn, pgs_ol);
/*
* Check if the corresponding memory block is already
* online by checking its last previously backed page.
* In case it is we need to bring rest (which was not
* backed previously) online too.
*/
if (start_pfn > has->start_pfn &&
!PageReserved(pfn_to_page(start_pfn - 1)))
hv_bring_pgs_online(start_pfn, pgs_ol);
has->covered_end_pfn += pgs_ol;
has->covered_start_pfn += pgs_ol;
pfn_cnt -= pgs_ol;
}
@ -857,7 +850,6 @@ static unsigned long process_hot_add(unsigned long pg_start,
list_add_tail(&ha_region->list, &dm_device.ha_region_list);
ha_region->start_pfn = rg_start;
ha_region->ha_end_pfn = rg_start;
ha_region->covered_start_pfn = pg_start;
ha_region->covered_end_pfn = pg_start;
ha_region->end_pfn = rg_start + rg_size;
}
@ -886,7 +878,7 @@ static void hot_add_req(struct work_struct *dummy)
resp.hdr.size = sizeof(struct dm_hot_add_response);
#ifdef CONFIG_MEMORY_HOTPLUG
acquire_region_mutex(false);
mutex_lock(&dm_device.ha_region_mutex);
pg_start = dm->ha_wrk.ha_page_range.finfo.start_page;
pfn_cnt = dm->ha_wrk.ha_page_range.finfo.page_cnt;
@ -918,7 +910,9 @@ static void hot_add_req(struct work_struct *dummy)
if (do_hot_add)
resp.page_count = process_hot_add(pg_start, pfn_cnt,
rg_start, rg_sz);
release_region_mutex(false);
dm->num_pages_added += resp.page_count;
mutex_unlock(&dm_device.ha_region_mutex);
#endif
/*
* The result field of the response structure has the
@ -982,8 +976,8 @@ static unsigned long compute_balloon_floor(void)
* 128 72 (1/2)
* 512 168 (1/4)
* 2048 360 (1/8)
* 8192 768 (1/16)
* 32768 1536 (1/32)
* 8192 744 (1/16)
* 32768 1512 (1/32)
*/
if (totalram_pages < MB2PAGES(128))
min_pages = MB2PAGES(8) + (totalram_pages >> 1);
@ -992,9 +986,9 @@ static unsigned long compute_balloon_floor(void)
else if (totalram_pages < MB2PAGES(2048))
min_pages = MB2PAGES(104) + (totalram_pages >> 3);
else if (totalram_pages < MB2PAGES(8192))
min_pages = MB2PAGES(256) + (totalram_pages >> 4);
min_pages = MB2PAGES(232) + (totalram_pages >> 4);
else
min_pages = MB2PAGES(512) + (totalram_pages >> 5);
min_pages = MB2PAGES(488) + (totalram_pages >> 5);
#undef MB2PAGES
return min_pages;
}
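/*
 * Worked check of the table above: for an 8192 MB guest,
 * min_pages = MB2PAGES(232) + totalram_pages / 16
 *           = 232 MB + 512 MB = 744 MB, matching the 1/16 row;
 * for 32768 MB, 488 + 32768 / 32 = 488 + 1024 = 1512 MB.
 */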
@ -1031,17 +1025,21 @@ static void post_status(struct hv_dynmem_device *dm)
status.hdr.trans_id = atomic_inc_return(&trans_id);
/*
* The host expects the guest to report free memory.
* Further, the host expects the pressure information to
* include the ballooned out pages.
* For a given amount of memory that we are managing, we
* need to compute a floor below which we should not balloon.
* Compute this and add it to the pressure report.
* The host expects the guest to report free and committed memory.
* Furthermore, the host expects the pressure information to include
* the ballooned out pages. For a given amount of memory that we are
* managing, we need to compute a floor below which we should not
* balloon. Compute this and add it to the pressure report.
* We also need to report all offline pages (num_pages_added -
* num_pages_onlined) as committed to the host, otherwise it can try
* asking us to balloon them out.
*/
status.num_avail = val.freeram;
status.num_committed = vm_memory_committed() +
dm->num_pages_ballooned +
compute_balloon_floor();
dm->num_pages_ballooned +
(dm->num_pages_added > dm->num_pages_onlined ?
dm->num_pages_added - dm->num_pages_onlined : 0) +
compute_balloon_floor();
/*
* If our transaction ID is no longer current, just don't
@ -1083,11 +1081,12 @@ static void free_balloon_pages(struct hv_dynmem_device *dm,
static int alloc_balloon_pages(struct hv_dynmem_device *dm, int num_pages,
struct dm_balloon_response *bl_resp, int alloc_unit,
bool *alloc_error)
static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
unsigned int num_pages,
struct dm_balloon_response *bl_resp,
int alloc_unit)
{
int i = 0;
unsigned int i = 0;
struct page *pg;
if (num_pages < alloc_unit)
@ -1106,11 +1105,8 @@ static int alloc_balloon_pages(struct hv_dynmem_device *dm, int num_pages,
__GFP_NOMEMALLOC | __GFP_NOWARN,
get_order(alloc_unit << PAGE_SHIFT));
if (!pg) {
*alloc_error = true;
if (!pg)
return i * alloc_unit;
}
dm->num_pages_ballooned += alloc_unit;
@ -1137,14 +1133,15 @@ static int alloc_balloon_pages(struct hv_dynmem_device *dm, int num_pages,
static void balloon_up(struct work_struct *dummy)
{
int num_pages = dm_device.balloon_wrk.num_pages;
int num_ballooned = 0;
unsigned int num_pages = dm_device.balloon_wrk.num_pages;
unsigned int num_ballooned = 0;
struct dm_balloon_response *bl_resp;
int alloc_unit;
int ret;
bool alloc_error;
bool done = false;
int i;
struct sysinfo val;
unsigned long floor;
/* The host balloons pages in 2M granularity. */
WARN_ON_ONCE(num_pages % PAGES_IN_2M != 0);
@ -1155,6 +1152,15 @@ static void balloon_up(struct work_struct *dummy)
*/
alloc_unit = 512;
si_meminfo(&val);
floor = compute_balloon_floor();
/* Refuse to balloon below the floor, keep the 2M granularity. */
if (val.freeram < num_pages || val.freeram - num_pages < floor) {
num_pages = val.freeram > floor ? (val.freeram - floor) : 0;
num_pages -= num_pages % PAGES_IN_2M;
}
while (!done) {
bl_resp = (struct dm_balloon_response *)send_buffer;
memset(send_buffer, 0, PAGE_SIZE);
@ -1164,18 +1170,15 @@ static void balloon_up(struct work_struct *dummy)
num_pages -= num_ballooned;
alloc_error = false;
num_ballooned = alloc_balloon_pages(&dm_device, num_pages,
bl_resp, alloc_unit,
&alloc_error);
bl_resp, alloc_unit);
if (alloc_unit != 1 && num_ballooned == 0) {
alloc_unit = 1;
continue;
}
if ((alloc_unit == 1 && alloc_error) ||
(num_ballooned == num_pages)) {
if (num_ballooned == 0 || num_ballooned == num_pages) {
bl_resp->more_pages = 0;
done = true;
dm_device.state = DM_INITIALIZED;
@ -1414,7 +1417,8 @@ static void balloon_onchannelcallback(void *context)
static int balloon_probe(struct hv_device *dev,
const struct hv_vmbus_device_id *dev_id)
{
int ret, t;
int ret;
unsigned long t;
struct dm_version_request version_req;
struct dm_capabilities cap_msg;
@ -1439,7 +1443,6 @@ static int balloon_probe(struct hv_device *dev,
dm_device.next_version = DYNMEM_PROTOCOL_VERSION_WIN7;
init_completion(&dm_device.host_event);
init_completion(&dm_device.config_event);
init_completion(&dm_device.waiter_event);
INIT_LIST_HEAD(&dm_device.ha_region_list);
mutex_init(&dm_device.ha_region_mutex);
INIT_WORK(&dm_device.balloon_wrk.wrk, balloon_up);

View File

@ -340,12 +340,8 @@ static int util_probe(struct hv_device *dev,
set_channel_read_state(dev->channel, false);
ret = vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE, NULL, 0,
srv->util_cb, dev->channel);
if (ret)
goto error;
hv_set_drvdata(dev, srv);
/*
* Based on the host; initialize the framework and
* service version numbers we will negotiate.
@ -365,6 +361,11 @@ static int util_probe(struct hv_device *dev,
hb_srv_version = HB_VERSION;
}
ret = vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE, NULL, 0,
srv->util_cb, dev->channel);
if (ret)
goto error;
return 0;
error:
@ -379,9 +380,9 @@ static int util_remove(struct hv_device *dev)
{
struct hv_util_service *srv = hv_get_drvdata(dev);
vmbus_close(dev->channel);
if (srv->util_deinit)
srv->util_deinit();
vmbus_close(dev->channel);
kfree(srv->recv_buffer);
return 0;

View File

@ -49,6 +49,17 @@ enum hv_cpuid_function {
HVCPUID_IMPLEMENTATION_LIMITS = 0x40000005,
};
#define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE 0x400
#define HV_X64_MSR_CRASH_P0 0x40000100
#define HV_X64_MSR_CRASH_P1 0x40000101
#define HV_X64_MSR_CRASH_P2 0x40000102
#define HV_X64_MSR_CRASH_P3 0x40000103
#define HV_X64_MSR_CRASH_P4 0x40000104
#define HV_X64_MSR_CRASH_CTL 0x40000105
#define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63)
/* Define version of the synthetic interrupt controller. */
#define HV_SYNIC_VERSION (1)
@ -572,6 +583,8 @@ extern void hv_synic_init(void *irqarg);
extern void hv_synic_cleanup(void *arg);
extern void hv_synic_clockevents_cleanup(void);
/*
* Host version information.
*/
@ -672,6 +685,23 @@ struct vmbus_msginfo {
extern struct vmbus_connection vmbus_connection;
enum vmbus_message_handler_type {
/* The related handler can sleep. */
VMHT_BLOCKING = 0,
/* The related handler must NOT sleep. */
VMHT_NON_BLOCKING = 1,
};
struct vmbus_channel_message_table_entry {
enum vmbus_channel_message_type message_type;
enum vmbus_message_handler_type handler_type;
void (*message_handler)(struct vmbus_channel_message_header *msg);
};
extern struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT];
/* General vmbus interface */
struct hv_device *vmbus_device_create(const uuid_le *type,
@ -692,6 +722,7 @@ void vmbus_free_channels(void);
/* Connection interface */
int vmbus_connect(void);
void vmbus_disconnect(void);
int vmbus_post_msg(void *buffer, size_t buflen);

View File

@ -33,9 +33,12 @@
#include <linux/hyperv.h>
#include <linux/kernel_stat.h>
#include <linux/clockchips.h>
#include <linux/cpu.h>
#include <asm/hyperv.h>
#include <asm/hypervisor.h>
#include <asm/mshyperv.h>
#include <linux/notifier.h>
#include <linux/ptrace.h>
#include "hyperv_vmbus.h"
static struct acpi_device *hv_acpi_dev;
@ -44,6 +47,31 @@ static struct tasklet_struct msg_dpc;
static struct completion probe_event;
static int irq;
static int hyperv_panic_event(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct pt_regs *regs;
regs = current_pt_regs();
wrmsrl(HV_X64_MSR_CRASH_P0, regs->ip);
wrmsrl(HV_X64_MSR_CRASH_P1, regs->ax);
wrmsrl(HV_X64_MSR_CRASH_P2, regs->bx);
wrmsrl(HV_X64_MSR_CRASH_P3, regs->cx);
wrmsrl(HV_X64_MSR_CRASH_P4, regs->dx);
/*
* Let Hyper-V know there is crash data available
*/
wrmsrl(HV_X64_MSR_CRASH_CTL, HV_CRASH_CTL_CRASH_NOTIFY);
return NOTIFY_DONE;
}
static struct notifier_block hyperv_panic_block = {
.notifier_call = hyperv_panic_event,
};
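/*
 * The notifier above fills the five crash MSRs with basic register state
 * and then sets HV_CRASH_CTL_CRASH_NOTIFY so the host knows crash data is
 * available; it is only registered when the hypervisor advertises
 * HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (see vmbus_bus_init() below).
 */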
struct resource hyperv_mmio = {
.name = "hyperv mmio",
.flags = IORESOURCE_MEM,
@ -507,14 +535,26 @@ static int vmbus_probe(struct device *child_device)
*/
static int vmbus_remove(struct device *child_device)
{
struct hv_driver *drv = drv_to_hv_drv(child_device->driver);
struct hv_driver *drv;
struct hv_device *dev = device_to_hv_device(child_device);
u32 relid = dev->channel->offermsg.child_relid;
if (drv->remove)
drv->remove(dev);
else
pr_err("remove not set for driver %s\n",
dev_name(child_device));
if (child_device->driver) {
drv = drv_to_hv_drv(child_device->driver);
if (drv->remove)
drv->remove(dev);
else {
hv_process_channel_removal(dev->channel, relid);
pr_err("remove not set for driver %s\n",
dev_name(child_device));
}
} else {
/*
* We don't have a driver for this device; deal with the
* rescind message by removing the channel.
*/
hv_process_channel_removal(dev->channel, relid);
}
return 0;
}
@ -573,6 +613,10 @@ static void vmbus_onmessage_work(struct work_struct *work)
{
struct onmessage_work_context *ctx;
/* Do not process messages if we're in DISCONNECTED state */
if (vmbus_connection.conn_state == DISCONNECTED)
return;
ctx = container_of(work, struct onmessage_work_context,
work);
vmbus_onmessage(&ctx->msg);
@ -613,21 +657,36 @@ static void vmbus_on_msg_dpc(unsigned long data)
void *page_addr = hv_context.synic_message_page[cpu];
struct hv_message *msg = (struct hv_message *)page_addr +
VMBUS_MESSAGE_SINT;
struct vmbus_channel_message_header *hdr;
struct vmbus_channel_message_table_entry *entry;
struct onmessage_work_context *ctx;
while (1) {
if (msg->header.message_type == HVMSG_NONE) {
if (msg->header.message_type == HVMSG_NONE)
/* no msg */
break;
} else {
hdr = (struct vmbus_channel_message_header *)msg->u.payload;
if (hdr->msgtype >= CHANNELMSG_COUNT) {
WARN_ONCE(1, "unknown msgtype=%d\n", hdr->msgtype);
goto msg_handled;
}
entry = &channel_message_table[hdr->msgtype];
if (entry->handler_type == VMHT_BLOCKING) {
ctx = kmalloc(sizeof(*ctx), GFP_ATOMIC);
if (ctx == NULL)
continue;
INIT_WORK(&ctx->work, vmbus_onmessage_work);
memcpy(&ctx->msg, msg, sizeof(*msg));
queue_work(vmbus_connection.work_queue, &ctx->work);
}
queue_work(vmbus_connection.work_queue, &ctx->work);
} else
entry->message_handler(hdr);
msg_handled:
msg->header.message_type = HVMSG_NONE;
/*
@ -704,6 +763,39 @@ static void vmbus_isr(void)
}
}
#ifdef CONFIG_HOTPLUG_CPU
static int hyperv_cpu_disable(void)
{
return -ENOSYS;
}
static void hv_cpu_hotplug_quirk(bool vmbus_loaded)
{
static void *previous_cpu_disable;
/*
* Offlining a CPU when running on newer hypervisors (WS2012R2, Win8,
* ...) is not supported at this moment as channel interrupts are
* distributed across all of them.
*/
if ((vmbus_proto_version == VERSION_WS2008) ||
(vmbus_proto_version == VERSION_WIN7))
return;
if (vmbus_loaded) {
previous_cpu_disable = smp_ops.cpu_disable;
smp_ops.cpu_disable = hyperv_cpu_disable;
pr_notice("CPU offlining is not supported by hypervisor\n");
} else if (previous_cpu_disable)
smp_ops.cpu_disable = previous_cpu_disable;
}
#else
static void hv_cpu_hotplug_quirk(bool vmbus_loaded)
{
}
#endif
/*
* vmbus_bus_init -Main vmbus driver initialization routine.
*
@ -744,6 +836,16 @@ static int vmbus_bus_init(int irq)
if (ret)
goto err_alloc;
hv_cpu_hotplug_quirk(true);
/*
* Only register if the crash MSRs are available
*/
if (ms_hyperv.features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) {
atomic_notifier_chain_register(&panic_notifier_list,
&hyperv_panic_block);
}
vmbus_request_offers();
return 0;
@ -840,10 +942,8 @@ int vmbus_device_register(struct hv_device *child_device_obj)
{
int ret = 0;
static atomic_t device_num = ATOMIC_INIT(0);
dev_set_name(&child_device_obj->device, "vmbus_0_%d",
atomic_inc_return(&device_num));
dev_set_name(&child_device_obj->device, "vmbus_%d",
child_device_obj->channel->id);
child_device_obj->device.bus = &hv_bus;
child_device_obj->device.parent = &hv_acpi_dev->dev;
@ -992,11 +1092,19 @@ cleanup:
static void __exit vmbus_exit(void)
{
int cpu;
vmbus_connection.conn_state = DISCONNECTED;
hv_synic_clockevents_cleanup();
hv_remove_vmbus_irq();
vmbus_free_channels();
bus_unregister(&hv_bus);
hv_cleanup();
for_each_online_cpu(cpu)
smp_call_function_single(cpu, hv_synic_cleanup, NULL, 1);
acpi_bus_unregister_driver(&vmbus_acpi_driver);
hv_cpu_hotplug_quirk(false);
vmbus_disconnect();
}

View File

@ -0,0 +1,61 @@
#
# Coresight configuration
#
menuconfig CORESIGHT
bool "CoreSight Tracing Support"
select ARM_AMBA
help
This framework provides a kernel interface for the CoreSight debug
and trace drivers to register themselves with. It's intended to build
a topological view of the CoreSight components based on a DT
specification and configure the right series of components when a
trace source gets enabled.
if CORESIGHT
config CORESIGHT_LINKS_AND_SINKS
bool "CoreSight Link and Sink drivers"
help
This enables support for CoreSight link and sink drivers that are
responsible for transporting and collecting the trace data
respectively. Links and sinks are dynamically aggregated with a trace
entity at run time to form a complete trace path.
config CORESIGHT_LINK_AND_SINK_TMC
bool "Coresight generic TMC driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for the Trace Memory Controller driver.
Depending on its configuration the device can act as a link (embedded
trace router - ETR) or sink (embedded trace FIFO). The driver
complies with the generic implementation of the component without
special enhancement or added features.
config CORESIGHT_SINK_TPIU
bool "Coresight generic TPIU driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for the Trace Port Interface Unit driver,
responsible for bridging the gap between the on-chip coresight
components and a trace port collection engine, typically connected
to an external host for use cases capturing more traces than the
on-board coresight memory can handle.
config CORESIGHT_SINK_ETBV10
bool "Coresight ETBv1.0 driver"
depends on CORESIGHT_LINKS_AND_SINKS
help
This enables support for the Embedded Trace Buffer version 1.0 driver
that complies with the generic implementation of the component without
special enhancement or added features.
config CORESIGHT_SOURCE_ETM3X
bool "CoreSight Embedded Trace Macrocell 3.x driver"
depends on !ARM64
select CORESIGHT_LINKS_AND_SINKS
help
This driver provides support for processor ETM3.x and PTM1.x modules,
which allow tracing the instructions that a processor is executing.
This is primarily useful for instruction level tracing. Depending on
the ETM version, data tracing may also be available.
endif

View File

@ -313,8 +313,8 @@ static ssize_t etb_read(struct file *file, char __user *data,
*ppos += len;
dev_dbg(drvdata->dev, "%s: %d bytes copied, %d bytes left\n",
__func__, len, (int) (depth * 4 - *ppos));
dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n",
__func__, len, (int)(depth * 4 - *ppos));
return len;
}

View File

@ -107,7 +107,7 @@ static int replicator_remove(struct platform_device *pdev)
return 0;
}
static struct of_device_id replicator_match[] = {
static const struct of_device_id replicator_match[] = {
{.compatible = "arm,coresight-replicator"},
{}
};

View File

@ -533,8 +533,8 @@ static ssize_t tmc_read(struct file *file, char __user *data, size_t len,
*ppos += len;
dev_dbg(drvdata->dev, "%s: %d bytes copied, %d bytes left\n",
__func__, len, (int) (drvdata->size - *ppos));
dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n",
__func__, len, (int)(drvdata->size - *ppos));
return len;
}
@ -565,6 +565,59 @@ static const struct file_operations tmc_fops = {
.llseek = no_llseek,
};
static ssize_t status_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
unsigned long flags;
u32 tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg;
u32 tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr;
u32 devid;
struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent);
ret = clk_prepare_enable(drvdata->clk);
if (ret)
goto out;
spin_lock_irqsave(&drvdata->spinlock, flags);
CS_UNLOCK(drvdata->base);
tmc_rsz = readl_relaxed(drvdata->base + TMC_RSZ);
tmc_sts = readl_relaxed(drvdata->base + TMC_STS);
tmc_rrp = readl_relaxed(drvdata->base + TMC_RRP);
tmc_rwp = readl_relaxed(drvdata->base + TMC_RWP);
tmc_trg = readl_relaxed(drvdata->base + TMC_TRG);
tmc_ctl = readl_relaxed(drvdata->base + TMC_CTL);
tmc_ffsr = readl_relaxed(drvdata->base + TMC_FFSR);
tmc_ffcr = readl_relaxed(drvdata->base + TMC_FFCR);
tmc_mode = readl_relaxed(drvdata->base + TMC_MODE);
tmc_pscr = readl_relaxed(drvdata->base + TMC_PSCR);
devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
CS_LOCK(drvdata->base);
spin_unlock_irqrestore(&drvdata->spinlock, flags);
clk_disable_unprepare(drvdata->clk);
return sprintf(buf,
"Depth:\t\t0x%x\n"
"Status:\t\t0x%x\n"
"RAM read ptr:\t0x%x\n"
"RAM wrt ptr:\t0x%x\n"
"Trigger cnt:\t0x%x\n"
"Control:\t0x%x\n"
"Flush status:\t0x%x\n"
"Flush ctrl:\t0x%x\n"
"Mode:\t\t0x%x\n"
"PSRC:\t\t0x%x\n"
"DEVID:\t\t0x%x\n",
tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg,
tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr, devid);
out:
return -EINVAL;
}
static DEVICE_ATTR_RO(status);
static ssize_t trigger_cntr_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@ -593,18 +646,21 @@ static DEVICE_ATTR_RW(trigger_cntr);
static struct attribute *coresight_etb_attrs[] = {
&dev_attr_trigger_cntr.attr,
&dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(coresight_etb);
static struct attribute *coresight_etr_attrs[] = {
&dev_attr_trigger_cntr.attr,
&dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(coresight_etr);
static struct attribute *coresight_etf_attrs[] = {
&dev_attr_trigger_cntr.attr,
&dev_attr_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(coresight_etf);

View File

@ -305,7 +305,9 @@ static int coresight_build_paths(struct coresight_device *csdev,
list_add(&csdev->path_link, path);
if (csdev->type == CORESIGHT_DEV_TYPE_SINK && csdev->activated) {
if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
csdev->activated) {
if (enable)
ret = coresight_enable_path(path);
else

View File

@ -22,6 +22,7 @@
#include <linux/platform_device.h>
#include <linux/amba/bus.h>
#include <linux/coresight.h>
#include <linux/cpumask.h>
#include <asm/smp_plat.h>
@ -104,7 +105,7 @@ static int of_coresight_alloc_memory(struct device *dev,
struct coresight_platform_data *of_get_coresight_platform_data(
struct device *dev, struct device_node *node)
{
int i = 0, ret = 0;
int i = 0, ret = 0, cpu;
struct coresight_platform_data *pdata;
struct of_endpoint endpoint, rendpoint;
struct device *rdev;
@ -178,17 +179,10 @@ struct coresight_platform_data *of_get_coresight_platform_data(
/* Affinity defaults to CPU0 */
pdata->cpu = 0;
dn = of_parse_phandle(node, "cpu", 0);
if (dn) {
const u32 *cell;
int len, index;
u64 hwid;
cell = of_get_property(dn, "reg", &len);
if (cell) {
hwid = of_read_number(cell, of_n_addr_cells(dn));
index = get_logical_index(hwid);
if (index != -EINVAL)
pdata->cpu = index;
for (cpu = 0; dn && cpu < nr_cpu_ids; cpu++) {
if (dn == of_get_cpu_node(cpu, NULL)) {
pdata->cpu = cpu;
break;
}
}

View File

@ -56,9 +56,9 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
res = request_mem_region(priv->mapbase, CHAM_HEADER_SIZE,
KBUILD_MODNAME);
if (IS_ERR(res)) {
if (!res) {
dev_err(&pdev->dev, "Failed to request PCI memory\n");
ret = PTR_ERR(res);
ret = -EBUSY;
goto out_disable;
}
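request_mem_region() returns NULL on failure rather than an ERR_PTR-encoded pointer, so the old IS_ERR() check could never detect a conflict; the fix tests for NULL and reports -EBUSY.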

View File

@ -83,6 +83,15 @@ config FSL_IFC
bool
depends on FSL_SOC
config JZ4780_NEMC
bool "Ingenic JZ4780 SoC NEMC driver"
default y
depends on MACH_JZ4780
help
This driver is for the NAND/External Memory Controller (NEMC) in
the Ingenic JZ4780. This controller is used to handle external
memory devices such as NAND and SRAM.
source "drivers/memory/tegra/Kconfig"
endif

View File

@ -13,5 +13,6 @@ obj-$(CONFIG_FSL_CORENET_CF) += fsl-corenet-cf.o
obj-$(CONFIG_FSL_IFC) += fsl_ifc.o
obj-$(CONFIG_MVEBU_DEVBUS) += mvebu-devbus.o
obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o
obj-$(CONFIG_JZ4780_NEMC) += jz4780-nemc.o
obj-$(CONFIG_TEGRA_MC) += tegra/

View File

@ -0,0 +1,391 @@
/*
* JZ4780 NAND/external memory controller (NEMC)
*
* Copyright (c) 2015 Imagination Technologies
* Author: Alex Smith <alex@alex-smith.me.uk>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/init.h>
#include <linux/math64.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/jz4780-nemc.h>
#define NEMC_SMCRn(n) (0x14 + (((n) - 1) * 4))
#define NEMC_NFCSR 0x50
#define NEMC_SMCR_SMT BIT(0)
#define NEMC_SMCR_BW_SHIFT 6
#define NEMC_SMCR_BW_MASK (0x3 << NEMC_SMCR_BW_SHIFT)
#define NEMC_SMCR_BW_8 (0 << 6)
#define NEMC_SMCR_TAS_SHIFT 8
#define NEMC_SMCR_TAS_MASK (0xf << NEMC_SMCR_TAS_SHIFT)
#define NEMC_SMCR_TAH_SHIFT 12
#define NEMC_SMCR_TAH_MASK (0xf << NEMC_SMCR_TAH_SHIFT)
#define NEMC_SMCR_TBP_SHIFT 16
#define NEMC_SMCR_TBP_MASK (0xf << NEMC_SMCR_TBP_SHIFT)
#define NEMC_SMCR_TAW_SHIFT 20
#define NEMC_SMCR_TAW_MASK (0xf << NEMC_SMCR_TAW_SHIFT)
#define NEMC_SMCR_TSTRV_SHIFT 24
#define NEMC_SMCR_TSTRV_MASK (0x3f << NEMC_SMCR_TSTRV_SHIFT)
#define NEMC_NFCSR_NFEn(n) BIT(((n) - 1) << 1)
#define NEMC_NFCSR_NFCEn(n) BIT((((n) - 1) << 1) + 1)
#define NEMC_NFCSR_TNFEn(n) BIT(16 + (n) - 1)
struct jz4780_nemc {
spinlock_t lock;
struct device *dev;
void __iomem *base;
struct clk *clk;
uint32_t clk_period;
unsigned long banks_present;
};
/**
* jz4780_nemc_num_banks() - count the number of banks referenced by a device
* @dev: device to count banks for, must be a child of the NEMC.
*
* Return: The number of unique NEMC banks referred to by the specified NEMC
* child device. Unique here means that a device that references the same bank
* multiple times in its "reg" property will only count once.
*/
unsigned int jz4780_nemc_num_banks(struct device *dev)
{
const __be32 *prop;
unsigned int bank, count = 0;
unsigned long referenced = 0;
int i = 0;
while ((prop = of_get_address(dev->of_node, i++, NULL, NULL))) {
bank = of_read_number(prop, 1);
if (!(referenced & BIT(bank))) {
referenced |= BIT(bank);
count++;
}
}
return count;
}
EXPORT_SYMBOL(jz4780_nemc_num_banks);
/**
* jz4780_nemc_set_type() - set the type of device connected to a bank
* @dev: child device of the NEMC.
* @bank: bank number to configure.
* @type: type of device connected to the bank.
*/
void jz4780_nemc_set_type(struct device *dev, unsigned int bank,
enum jz4780_nemc_bank_type type)
{
struct jz4780_nemc *nemc = dev_get_drvdata(dev->parent);
uint32_t nfcsr;
nfcsr = readl(nemc->base + NEMC_NFCSR);
/* TODO: Support toggle NAND devices. */
switch (type) {
case JZ4780_NEMC_BANK_SRAM:
nfcsr &= ~(NEMC_NFCSR_TNFEn(bank) | NEMC_NFCSR_NFEn(bank));
break;
case JZ4780_NEMC_BANK_NAND:
nfcsr &= ~NEMC_NFCSR_TNFEn(bank);
nfcsr |= NEMC_NFCSR_NFEn(bank);
break;
}
writel(nfcsr, nemc->base + NEMC_NFCSR);
}
EXPORT_SYMBOL(jz4780_nemc_set_type);
/**
* jz4780_nemc_assert() - (de-)assert a NAND device's chip enable pin
* @dev: child device of the NEMC.
* @bank: bank number of device.
* @assert: whether the chip enable pin should be asserted.
*
* (De-)asserts the chip enable pin for the NAND device connected to the
* specified bank.
*/
void jz4780_nemc_assert(struct device *dev, unsigned int bank, bool assert)
{
struct jz4780_nemc *nemc = dev_get_drvdata(dev->parent);
uint32_t nfcsr;
nfcsr = readl(nemc->base + NEMC_NFCSR);
if (assert)
nfcsr |= NEMC_NFCSR_NFCEn(bank);
else
nfcsr &= ~NEMC_NFCSR_NFCEn(bank);
writel(nfcsr, nemc->base + NEMC_NFCSR);
}
EXPORT_SYMBOL(jz4780_nemc_assert);
static uint32_t jz4780_nemc_clk_period(struct jz4780_nemc *nemc)
{
unsigned long rate;
rate = clk_get_rate(nemc->clk);
if (!rate)
return 0;
/* Return in picoseconds. */
return div64_ul(1000000000000ull, rate);
}
static uint32_t jz4780_nemc_ns_to_cycles(struct jz4780_nemc *nemc, uint32_t ns)
{
return ((ns * 1000) + nemc->clk_period - 1) / nemc->clk_period;
}
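/*
 * Worked example, assuming a hypothetical 200 MHz NEMC clock
 * (clk_period = 5000 ps): a 12 ns constraint becomes
 * (12 * 1000 + 4999) / 5000 = 3 cycles, i.e. nanosecond timings are
 * rounded up to the next whole clock cycle.
 */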
static bool jz4780_nemc_configure_bank(struct jz4780_nemc *nemc,
unsigned int bank,
struct device_node *node)
{
uint32_t smcr, val, cycles;
/*
* Conversion of tBP and tAW cycle counts to values supported by the
* hardware (round up to the next supported value).
*/
static const uint32_t convert_tBP_tAW[] = {
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
/* 11 - 12 -> 12 cycles */
11, 11,
/* 13 - 15 -> 15 cycles */
12, 12, 12,
/* 16 - 20 -> 20 cycles */
13, 13, 13, 13, 13,
/* 21 - 25 -> 25 cycles */
14, 14, 14, 14, 14,
/* 26 - 31 -> 31 cycles */
15, 15, 15, 15, 15, 15
};
smcr = readl(nemc->base + NEMC_SMCRn(bank));
smcr &= ~NEMC_SMCR_SMT;
if (!of_property_read_u32(node, "ingenic,nemc-bus-width", &val)) {
smcr &= ~NEMC_SMCR_BW_MASK;
switch (val) {
case 8:
smcr |= NEMC_SMCR_BW_8;
break;
default:
/*
* Earlier SoCs support a 16-bit bus width (the JZ4780
* does not); until those are properly supported, report an error.
*/
dev_err(nemc->dev, "unsupported bus width: %u\n", val);
return false;
}
}
if (of_property_read_u32(node, "ingenic,nemc-tAS", &val) == 0) {
smcr &= ~NEMC_SMCR_TAS_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 15) {
dev_err(nemc->dev, "tAS %u is too high (%u cycles)\n",
val, cycles);
return false;
}
smcr |= cycles << NEMC_SMCR_TAS_SHIFT;
}
if (of_property_read_u32(node, "ingenic,nemc-tAH", &val) == 0) {
smcr &= ~NEMC_SMCR_TAH_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 15) {
dev_err(nemc->dev, "tAH %u is too high (%u cycles)\n",
val, cycles);
return false;
}
smcr |= cycles << NEMC_SMCR_TAH_SHIFT;
}
if (of_property_read_u32(node, "ingenic,nemc-tBP", &val) == 0) {
smcr &= ~NEMC_SMCR_TBP_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 31) {
dev_err(nemc->dev, "tBP %u is too high (%u cycles)\n",
val, cycles);
return false;
}
smcr |= convert_tBP_tAW[cycles] << NEMC_SMCR_TBP_SHIFT;
}
if (of_property_read_u32(node, "ingenic,nemc-tAW", &val) == 0) {
smcr &= ~NEMC_SMCR_TAW_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 31) {
dev_err(nemc->dev, "tAW %u is too high (%u cycles)\n",
val, cycles);
return false;
}
smcr |= convert_tBP_tAW[cycles] << NEMC_SMCR_TAW_SHIFT;
}
if (of_property_read_u32(node, "ingenic,nemc-tSTRV", &val) == 0) {
smcr &= ~NEMC_SMCR_TSTRV_MASK;
cycles = jz4780_nemc_ns_to_cycles(nemc, val);
if (cycles > 63) {
dev_err(nemc->dev, "tSTRV %u is too high (%u cycles)\n",
val, cycles);
return false;
}
smcr |= cycles << NEMC_SMCR_TSTRV_SHIFT;
}
writel(smcr, nemc->base + NEMC_SMCRn(bank));
return true;
}
static int jz4780_nemc_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct jz4780_nemc *nemc;
struct resource *res;
struct device_node *child;
const __be32 *prop;
unsigned int bank;
unsigned long referenced;
int i, ret;
nemc = devm_kzalloc(dev, sizeof(*nemc), GFP_KERNEL);
if (!nemc)
return -ENOMEM;
spin_lock_init(&nemc->lock);
nemc->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
nemc->base = devm_ioremap_resource(dev, res);
if (IS_ERR(nemc->base)) {
dev_err(dev, "failed to get I/O memory\n");
return PTR_ERR(nemc->base);
}
writel(0, nemc->base + NEMC_NFCSR);
nemc->clk = devm_clk_get(dev, NULL);
if (IS_ERR(nemc->clk)) {
dev_err(dev, "failed to get clock\n");
return PTR_ERR(nemc->clk);
}
ret = clk_prepare_enable(nemc->clk);
if (ret) {
dev_err(dev, "failed to enable clock: %d\n", ret);
return ret;
}
nemc->clk_period = jz4780_nemc_clk_period(nemc);
if (!nemc->clk_period) {
dev_err(dev, "failed to calculate clock period\n");
clk_disable_unprepare(nemc->clk);
return -EINVAL;
}
/*
* Iterate over child devices, check that they do not conflict with
* each other, and register child devices for them. If a child device
* has invalid properties, it is ignored and no platform device is
* registered for it.
*/
for_each_child_of_node(nemc->dev->of_node, child) {
referenced = 0;
i = 0;
while ((prop = of_get_address(child, i++, NULL, NULL))) {
bank = of_read_number(prop, 1);
if (bank < 1 || bank >= JZ4780_NEMC_NUM_BANKS) {
dev_err(nemc->dev,
"%s requests invalid bank %u\n",
child->full_name, bank);
/* Will continue the outer loop below. */
referenced = 0;
break;
}
referenced |= BIT(bank);
}
if (!referenced) {
dev_err(nemc->dev, "%s has no addresses\n",
child->full_name);
continue;
} else if (nemc->banks_present & referenced) {
dev_err(nemc->dev, "%s conflicts with another node\n",
child->full_name);
continue;
}
/* Configure bank parameters. */
for_each_set_bit(bank, &referenced, JZ4780_NEMC_NUM_BANKS) {
if (!jz4780_nemc_configure_bank(nemc, bank, child)) {
referenced = 0;
break;
}
}
if (referenced) {
if (of_platform_device_create(child, NULL, nemc->dev))
nemc->banks_present |= referenced;
}
}
platform_set_drvdata(pdev, nemc);
dev_info(dev, "JZ4780 NEMC initialised\n");
return 0;
}
static int jz4780_nemc_remove(struct platform_device *pdev)
{
struct jz4780_nemc *nemc = platform_get_drvdata(pdev);
clk_disable_unprepare(nemc->clk);
return 0;
}
static const struct of_device_id jz4780_nemc_dt_match[] = {
{ .compatible = "ingenic,jz4780-nemc" },
{},
};
static struct platform_driver jz4780_nemc_driver = {
.probe = jz4780_nemc_probe,
.remove = jz4780_nemc_remove,
.driver = {
.name = "jz4780-nemc",
.of_match_table = of_match_ptr(jz4780_nemc_dt_match),
},
};
static int __init jz4780_nemc_init(void)
{
return platform_driver_register(&jz4780_nemc_driver);
}
subsys_initcall(jz4780_nemc_init);


@ -230,6 +230,8 @@ static const struct i2c_device_id bh1780_id[] = {
{ },
};
MODULE_DEVICE_TABLE(i2c, bh1780_id);
#ifdef CONFIG_OF
static const struct of_device_id of_bh1780_match[] = {
{ .compatible = "rohm,bh1780gli", },


@ -479,6 +479,7 @@ static int fpga_program_block(struct fpga_dev *priv, void *buf, size_t count)
static noinline int fpga_program_cpu(struct fpga_dev *priv)
{
int ret;
unsigned long timeout;
/* Disable the programmer */
fpga_programmer_disable(priv);
@ -497,8 +498,8 @@ static noinline int fpga_program_cpu(struct fpga_dev *priv)
goto out_disable_controller;
/* Wait for the interrupt handler to signal that programming finished */
ret = wait_for_completion_timeout(&priv->completion, 2 * HZ);
if (!ret) {
timeout = wait_for_completion_timeout(&priv->completion, 2 * HZ);
if (!timeout) {
dev_err(priv->dev, "Timed out waiting for completion\n");
ret = -ETIMEDOUT;
goto out_disable_controller;
@ -536,6 +537,7 @@ static noinline int fpga_program_dma(struct fpga_dev *priv)
struct sg_table table;
dma_cookie_t cookie;
int ret, i;
unsigned long timeout;
/* Disable the programmer */
fpga_programmer_disable(priv);
@ -623,8 +625,8 @@ static noinline int fpga_program_dma(struct fpga_dev *priv)
dev_dbg(priv->dev, "enabled the controller\n");
/* Wait for the interrupt handler to signal that programming finished */
ret = wait_for_completion_timeout(&priv->completion, 2 * HZ);
if (!ret) {
timeout = wait_for_completion_timeout(&priv->completion, 2 * HZ);
if (!timeout) {
dev_err(priv->dev, "Timed out waiting for completion\n");
ret = -ETIMEDOUT;
goto out_disable_controller;
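/*
 * Background for the change above: wait_for_completion_timeout() returns
 * an unsigned long, zero on timeout and the remaining jiffies otherwise,
 * so storing it in the int 'ret' conflated the timeout result with the
 * error code. A minimal sketch of the corrected pattern, with hypothetical
 * names:
 */
static int example_wait(struct completion *done)
{
	unsigned long time_left;

	time_left = wait_for_completion_timeout(done, 2 * HZ);
	if (!time_left)
		return -ETIMEDOUT;	/* the full 2 s elapsed */
	return 0;			/* completed in time */
}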
@ -1142,7 +1144,7 @@ out_return:
return ret;
}
static struct of_device_id fpga_of_match[] = {
static const struct of_device_id fpga_of_match[] = {
{ .compatible = "carma,fpga-programmer", },
{},
};


@ -1486,7 +1486,7 @@ static int data_of_remove(struct platform_device *op)
return 0;
}
static struct of_device_id data_of_match[] = {
static const struct of_device_id data_of_match[] = {
{ .compatible = "carma,carma-fpga", },
{},
};
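/*
 * The match tables above are only ever read, so marking them const lets
 * them live in rodata. A minimal sketch of the pattern with a hypothetical
 * compatible string:
 */
static const struct of_device_id example_of_match[] = {
	{ .compatible = "vendor,example-device", },
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, example_of_match);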


@ -950,6 +950,7 @@ int lis3lv02d_init_dt(struct lis3lv02d *lis3)
struct lis3lv02d_platform_data *pdata;
struct device_node *np = lis3->of_node;
u32 val;
s32 sval;
if (!lis3->of_node)
return 0;
@ -1031,6 +1032,23 @@ int lis3lv02d_init_dt(struct lis3lv02d *lis3)
pdata->wakeup_flags |= LIS3_WAKEUP_Z_LO;
if (of_get_property(np, "st,wakeup-z-hi", NULL))
pdata->wakeup_flags |= LIS3_WAKEUP_Z_HI;
if (of_get_property(np, "st,wakeup-threshold", &val))
pdata->wakeup_thresh = val;
if (of_get_property(np, "st,wakeup2-x-lo", NULL))
pdata->wakeup_flags2 |= LIS3_WAKEUP_X_LO;
if (of_get_property(np, "st,wakeup2-x-hi", NULL))
pdata->wakeup_flags2 |= LIS3_WAKEUP_X_HI;
if (of_get_property(np, "st,wakeup2-y-lo", NULL))
pdata->wakeup_flags2 |= LIS3_WAKEUP_Y_LO;
if (of_get_property(np, "st,wakeup2-y-hi", NULL))
pdata->wakeup_flags2 |= LIS3_WAKEUP_Y_HI;
if (of_get_property(np, "st,wakeup2-z-lo", NULL))
pdata->wakeup_flags2 |= LIS3_WAKEUP_Z_LO;
if (of_get_property(np, "st,wakeup2-z-hi", NULL))
pdata->wakeup_flags2 |= LIS3_WAKEUP_Z_HI;
if (of_get_property(np, "st,wakeup2-threshold", &val))
pdata->wakeup_thresh2 = val;
if (!of_property_read_u32(np, "st,highpass-cutoff-hz", &val)) {
switch (val) {
@ -1054,29 +1072,29 @@ int lis3lv02d_init_dt(struct lis3lv02d *lis3)
if (of_get_property(np, "st,hipass2-disable", NULL))
pdata->hipass_ctrl |= LIS3_HIPASS2_DISABLE;
if (of_get_property(np, "st,axis-x", &val))
pdata->axis_x = val;
if (of_get_property(np, "st,axis-y", &val))
pdata->axis_y = val;
if (of_get_property(np, "st,axis-z", &val))
pdata->axis_z = val;
if (of_property_read_s32(np, "st,axis-x", &sval) == 0)
pdata->axis_x = sval;
if (of_property_read_s32(np, "st,axis-y", &sval) == 0)
pdata->axis_y = sval;
if (of_property_read_s32(np, "st,axis-z", &sval) == 0)
pdata->axis_z = sval;
if (of_get_property(np, "st,default-rate", NULL))
pdata->default_rate = val;
if (of_get_property(np, "st,min-limit-x", &val))
pdata->st_min_limits[0] = val;
if (of_get_property(np, "st,min-limit-y", &val))
pdata->st_min_limits[1] = val;
if (of_get_property(np, "st,min-limit-z", &val))
pdata->st_min_limits[2] = val;
if (of_property_read_s32(np, "st,min-limit-x", &sval) == 0)
pdata->st_min_limits[0] = sval;
if (of_property_read_s32(np, "st,min-limit-y", &sval) == 0)
pdata->st_min_limits[1] = sval;
if (of_property_read_s32(np, "st,min-limit-z", &sval) == 0)
pdata->st_min_limits[2] = sval;
if (of_get_property(np, "st,max-limit-x", &val))
pdata->st_max_limits[0] = val;
if (of_get_property(np, "st,max-limit-y", &val))
pdata->st_max_limits[1] = val;
if (of_get_property(np, "st,max-limit-z", &val))
pdata->st_max_limits[2] = val;
if (of_property_read_s32(np, "st,max-limit-x", &sval) == 0)
pdata->st_max_limits[0] = sval;
if (of_property_read_s32(np, "st,max-limit-y", &sval) == 0)
pdata->st_max_limits[1] = sval;
if (of_property_read_s32(np, "st,max-limit-z", &sval) == 0)
pdata->st_max_limits[2] = sval;
lis3->pdata = pdata;
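/*
 * Rationale sketch: of_get_property() returns a pointer to the raw
 * big-endian cell, and its last argument receives the property length,
 * not the value, so signed values were never read correctly that way.
 * of_property_read_s32() converts in place. The property name below is
 * hypothetical.
 */
static void example_read_signed(struct device_node *np)
{
	const __be32 *p;
	s32 v;

	/* Old style: raw pointer, manual endian conversion needed. */
	p = of_get_property(np, "vendor,example", NULL);
	if (p)
		v = (s32)be32_to_cpup(p);

	/* New style: converted in place, negative values preserved. */
	if (of_property_read_s32(np, "vendor,example", &v) == 0)
		pr_debug("vendor,example = %d\n", v);
}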


@ -106,7 +106,7 @@ static union axis_conversion lis3lv02d_axis_map =
{ .as_array = { LIS3_DEV_X, LIS3_DEV_Y, LIS3_DEV_Z } };
#ifdef CONFIG_OF
static struct of_device_id lis3lv02d_i2c_dt_ids[] = {
static const struct of_device_id lis3lv02d_i2c_dt_ids[] = {
{ .compatible = "st,lis3lv02d" },
{}
};


@ -61,7 +61,7 @@ static union axis_conversion lis3lv02d_axis_normal =
{ .as_array = { 1, 2, 3 } };
#ifdef CONFIG_OF
static struct of_device_id lis302dl_spi_dt_ids[] = {
static const struct of_device_id lis302dl_spi_dt_ids[] = {
{ .compatible = "st,lis302dl-spi" },
{}
};


@ -21,3 +21,6 @@ mei-me-objs += hw-me.o
obj-$(CONFIG_INTEL_MEI_TXE) += mei-txe.o
mei-txe-objs := pci-txe.o
mei-txe-objs += hw-txe.o
mei-$(CONFIG_EVENT_TRACING) += mei-trace.o
CFLAGS_mei-trace.o = -I$(src)


@ -48,10 +48,7 @@ void mei_amthif_reset_params(struct mei_device *dev)
{
/* reset iamthif parameters. */
dev->iamthif_current_cb = NULL;
dev->iamthif_msg_buf_size = 0;
dev->iamthif_msg_buf_index = 0;
dev->iamthif_canceled = false;
dev->iamthif_ioctl = false;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
dev->iamthif_stall_timer = 0;
@ -69,7 +66,6 @@ int mei_amthif_host_init(struct mei_device *dev)
{
struct mei_cl *cl = &dev->iamthif_cl;
struct mei_me_client *me_cl;
unsigned char *msg_buf;
int ret;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
@ -90,18 +86,6 @@ int mei_amthif_host_init(struct mei_device *dev)
dev->iamthif_mtu = me_cl->props.max_msg_length;
dev_dbg(dev->dev, "IAMTHIF_MTU = %d\n", dev->iamthif_mtu);
kfree(dev->iamthif_msg_buf);
dev->iamthif_msg_buf = NULL;
/* allocate storage for ME message buffer */
msg_buf = kcalloc(dev->iamthif_mtu,
sizeof(unsigned char), GFP_KERNEL);
if (!msg_buf) {
ret = -ENOMEM;
goto out;
}
dev->iamthif_msg_buf = msg_buf;
ret = mei_cl_link(cl, MEI_IAMTHIF_HOST_CLIENT_ID);
if (ret < 0) {
@ -194,30 +178,33 @@ int mei_amthif_read(struct mei_device *dev, struct file *file,
dev_dbg(dev->dev, "woke up from sleep\n");
}
if (cb->status) {
rets = cb->status;
dev_dbg(dev->dev, "read operation failed %d\n", rets);
goto free;
}
dev_dbg(dev->dev, "Got amthif data\n");
dev->iamthif_timer = 0;
if (cb) {
timeout = cb->read_time +
mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
dev_dbg(dev->dev, "amthif timeout = %lud\n",
timeout);
timeout = cb->read_time +
mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
dev_dbg(dev->dev, "amthif timeout = %lud\n",
timeout);
if (time_after(jiffies, timeout)) {
dev_dbg(dev->dev, "amthif Time out\n");
/* the 15 sec timeout for this message has expired */
list_del(&cb->list);
rets = -ETIME;
goto free;
}
if (time_after(jiffies, timeout)) {
dev_dbg(dev->dev, "amthif Time out\n");
/* the 15 sec timeout for this message has expired */
list_del_init(&cb->list);
rets = -ETIME;
goto free;
}
/* if the whole message will fit remove it from the list */
if (cb->buf_idx >= *offset && length >= (cb->buf_idx - *offset))
list_del(&cb->list);
list_del_init(&cb->list);
else if (cb->buf_idx > 0 && cb->buf_idx <= *offset) {
/* end of the message has been reached */
list_del(&cb->list);
list_del_init(&cb->list);
rets = 0;
goto free;
}
@ -225,15 +212,15 @@ int mei_amthif_read(struct mei_device *dev, struct file *file,
* remove message from deletion list
*/
dev_dbg(dev->dev, "amthif cb->response_buffer size - %d\n",
cb->response_buffer.size);
dev_dbg(dev->dev, "amthif cb->buf size - %d\n",
cb->buf.size);
dev_dbg(dev->dev, "amthif cb->buf_idx - %lu\n", cb->buf_idx);
/* length is being truncated to PAGE_SIZE, however,
* the buf_idx may point beyond */
length = min_t(size_t, length, (cb->buf_idx - *offset));
if (copy_to_user(ubuf, cb->response_buffer.data + *offset, length)) {
if (copy_to_user(ubuf, cb->buf.data + *offset, length)) {
dev_dbg(dev->dev, "failed to copy data to userland\n");
rets = -EFAULT;
} else {
@ -251,127 +238,89 @@ out:
return rets;
}
/**
* mei_amthif_read_start - queue message for sending read credential
*
* @cl: host client
* @file: file pointer of message recipient
*
* Return: 0 on success, <0 on failure.
*/
static int mei_amthif_read_start(struct mei_cl *cl, struct file *file)
{
struct mei_device *dev = cl->dev;
struct mei_cl_cb *cb;
size_t length = dev->iamthif_mtu;
int rets;
cb = mei_io_cb_init(cl, MEI_FOP_READ, file);
if (!cb) {
rets = -ENOMEM;
goto err;
}
rets = mei_io_cb_alloc_buf(cb, length);
if (rets)
goto err;
list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
dev->iamthif_state = MEI_IAMTHIF_READING;
dev->iamthif_file_object = cb->file_object;
dev->iamthif_current_cb = cb;
return 0;
err:
mei_io_cb_free(cb);
return rets;
}
/**
* mei_amthif_send_cmd - send amthif command to the ME
*
* @dev: the device structure
* @cl: the host client
* @cb: mei call back struct
*
* Return: 0 on success, <0 on failure.
*
*/
static int mei_amthif_send_cmd(struct mei_device *dev, struct mei_cl_cb *cb)
static int mei_amthif_send_cmd(struct mei_cl *cl, struct mei_cl_cb *cb)
{
struct mei_msg_hdr mei_hdr;
struct mei_cl *cl;
struct mei_device *dev;
int ret;
if (!dev || !cb)
if (!cl->dev || !cb)
return -ENODEV;
dev_dbg(dev->dev, "write data to amthif client.\n");
dev = cl->dev;
dev->iamthif_state = MEI_IAMTHIF_WRITING;
dev->iamthif_current_cb = cb;
dev->iamthif_file_object = cb->file_object;
dev->iamthif_canceled = false;
dev->iamthif_ioctl = true;
dev->iamthif_msg_buf_size = cb->request_buffer.size;
memcpy(dev->iamthif_msg_buf, cb->request_buffer.data,
cb->request_buffer.size);
cl = &dev->iamthif_cl;
ret = mei_cl_flow_ctrl_creds(cl);
ret = mei_cl_write(cl, cb, false);
if (ret < 0)
return ret;
if (ret && mei_hbuf_acquire(dev)) {
ret = 0;
if (cb->request_buffer.size > mei_hbuf_max_len(dev)) {
mei_hdr.length = mei_hbuf_max_len(dev);
mei_hdr.msg_complete = 0;
} else {
mei_hdr.length = cb->request_buffer.size;
mei_hdr.msg_complete = 1;
}
if (cb->completed)
cb->status = mei_amthif_read_start(cl, cb->file_object);
mei_hdr.host_addr = cl->host_client_id;
mei_hdr.me_addr = cl->me_client_id;
mei_hdr.reserved = 0;
mei_hdr.internal = 0;
dev->iamthif_msg_buf_index += mei_hdr.length;
ret = mei_write_message(dev, &mei_hdr, dev->iamthif_msg_buf);
if (ret)
return ret;
if (mei_hdr.msg_complete) {
if (mei_cl_flow_ctrl_reduce(cl))
return -EIO;
dev->iamthif_flow_control_pending = true;
dev->iamthif_state = MEI_IAMTHIF_FLOW_CONTROL;
dev_dbg(dev->dev, "add amthif cb to write waiting list\n");
dev->iamthif_current_cb = cb;
dev->iamthif_file_object = cb->file_object;
list_add_tail(&cb->list, &dev->write_waiting_list.list);
} else {
dev_dbg(dev->dev, "message does not complete, so add amthif cb to write list.\n");
list_add_tail(&cb->list, &dev->write_list.list);
}
} else {
list_add_tail(&cb->list, &dev->write_list.list);
}
return 0;
}
/**
* mei_amthif_write - write amthif data to amthif client
*
* @dev: the device structure
* @cb: mei call back struct
*
* Return: 0 on success, <0 on failure.
*
*/
int mei_amthif_write(struct mei_device *dev, struct mei_cl_cb *cb)
{
int ret;
if (!dev || !cb)
return -ENODEV;
ret = mei_io_cb_alloc_resp_buf(cb, dev->iamthif_mtu);
if (ret)
return ret;
cb->fop_type = MEI_FOP_WRITE;
if (!list_empty(&dev->amthif_cmd_list.list) ||
dev->iamthif_state != MEI_IAMTHIF_IDLE) {
dev_dbg(dev->dev,
"amthif state = %d\n", dev->iamthif_state);
dev_dbg(dev->dev, "AMTHIF: add cb to the wait list\n");
list_add_tail(&cb->list, &dev->amthif_cmd_list.list);
return 0;
}
return mei_amthif_send_cmd(dev, cb);
}
/**
* mei_amthif_run_next_cmd - send next amt command from queue
*
* @dev: the device structure
*
* Return: 0 on success, <0 on failure.
*/
void mei_amthif_run_next_cmd(struct mei_device *dev)
int mei_amthif_run_next_cmd(struct mei_device *dev)
{
struct mei_cl *cl = &dev->iamthif_cl;
struct mei_cl_cb *cb;
int ret;
if (!dev)
return;
dev->iamthif_msg_buf_size = 0;
dev->iamthif_msg_buf_index = 0;
dev->iamthif_canceled = false;
dev->iamthif_ioctl = true;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
dev->iamthif_file_object = NULL;
@ -381,13 +330,48 @@ void mei_amthif_run_next_cmd(struct mei_device *dev)
cb = list_first_entry_or_null(&dev->amthif_cmd_list.list,
typeof(*cb), list);
if (!cb)
return;
list_del(&cb->list);
ret = mei_amthif_send_cmd(dev, cb);
if (ret)
dev_warn(dev->dev, "amthif write failed status = %d\n", ret);
return 0;
list_del_init(&cb->list);
return mei_amthif_send_cmd(cl, cb);
}
/**
* mei_amthif_write - write amthif data to amthif client
*
* @cl: host client
* @cb: mei call back struct
*
* Return: 0 on success, <0 on failure.
*/
int mei_amthif_write(struct mei_cl *cl, struct mei_cl_cb *cb)
{
struct mei_device *dev;
if (WARN_ON(!cl || !cl->dev))
return -ENODEV;
if (WARN_ON(!cb))
return -EINVAL;
dev = cl->dev;
list_add_tail(&cb->list, &dev->amthif_cmd_list.list);
return mei_amthif_run_next_cmd(dev);
}
/**
* mei_amthif_poll - the amthif poll function
*
* @dev: the device structure
* @file: pointer to file structure
* @wait: pointer to poll_table structure
*
* Return: poll mask
*
* Locking: called under "dev->device_lock" lock
*/
unsigned int mei_amthif_poll(struct mei_device *dev,
struct file *file, poll_table *wait)
@ -396,19 +380,12 @@ unsigned int mei_amthif_poll(struct mei_device *dev,
poll_wait(file, &dev->iamthif_cl.wait, wait);
mutex_lock(&dev->device_lock);
if (!mei_cl_is_connected(&dev->iamthif_cl)) {
if (dev->iamthif_state == MEI_IAMTHIF_READ_COMPLETE &&
dev->iamthif_file_object == file) {
mask = POLLERR;
} else if (dev->iamthif_state == MEI_IAMTHIF_READ_COMPLETE &&
dev->iamthif_file_object == file) {
mask |= (POLLIN | POLLRDNORM);
dev_dbg(dev->dev, "run next amthif cb\n");
mask |= POLLIN | POLLRDNORM;
mei_amthif_run_next_cmd(dev);
}
mutex_unlock(&dev->device_lock);
return mask;
}
@ -427,71 +404,14 @@ unsigned int mei_amthif_poll(struct mei_device *dev,
int mei_amthif_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list)
{
struct mei_device *dev = cl->dev;
struct mei_msg_hdr mei_hdr;
size_t len = dev->iamthif_msg_buf_size - dev->iamthif_msg_buf_index;
u32 msg_slots = mei_data2slots(len);
int slots;
int rets;
int ret;
rets = mei_cl_flow_ctrl_creds(cl);
if (rets < 0)
return rets;
if (rets == 0) {
cl_dbg(dev, cl, "No flow control credentials: not sending.\n");
return 0;
}
mei_hdr.host_addr = cl->host_client_id;
mei_hdr.me_addr = cl->me_client_id;
mei_hdr.reserved = 0;
mei_hdr.internal = 0;
slots = mei_hbuf_empty_slots(dev);
if (slots >= msg_slots) {
mei_hdr.length = len;
mei_hdr.msg_complete = 1;
/* Split the message only if we can write the whole host buffer */
} else if (slots == dev->hbuf_depth) {
msg_slots = slots;
len = (slots * sizeof(u32)) - sizeof(struct mei_msg_hdr);
mei_hdr.length = len;
mei_hdr.msg_complete = 0;
} else {
/* wait for next time the host buffer is empty */
return 0;
}
dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(&mei_hdr));
rets = mei_write_message(dev, &mei_hdr,
dev->iamthif_msg_buf + dev->iamthif_msg_buf_index);
if (rets) {
dev->iamthif_state = MEI_IAMTHIF_IDLE;
cl->status = rets;
list_del(&cb->list);
return rets;
}
if (mei_cl_flow_ctrl_reduce(cl))
return -EIO;
dev->iamthif_msg_buf_index += mei_hdr.length;
cl->status = 0;
if (mei_hdr.msg_complete) {
dev->iamthif_state = MEI_IAMTHIF_FLOW_CONTROL;
dev->iamthif_flow_control_pending = true;
/* save iamthif cb sent to amthif client */
cb->buf_idx = dev->iamthif_msg_buf_index;
dev->iamthif_current_cb = cb;
list_move_tail(&cb->list, &dev->write_waiting_list.list);
}
ret = mei_cl_irq_write(cl, cb, cmpl_list);
if (ret)
return ret;
if (cb->completed)
cb->status = mei_amthif_read_start(cl, cb->file_object);
return 0;
}
@ -500,83 +420,35 @@ int mei_amthif_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
* mei_amthif_irq_read_msg - read routine after ISR to
* handle the read amthif message
*
* @dev: the device structure
* @cl: mei client
* @mei_hdr: header of amthif message
* @complete_list: An instance of our list structure
* @cmpl_list: completed callbacks list
*
* Return: 0 on success, <0 on failure.
* Return: -ENODEV if cb is NULL, 0 otherwise; any error is carried in cb->status
*/
int mei_amthif_irq_read_msg(struct mei_device *dev,
int mei_amthif_irq_read_msg(struct mei_cl *cl,
struct mei_msg_hdr *mei_hdr,
struct mei_cl_cb *complete_list)
struct mei_cl_cb *cmpl_list)
{
struct mei_cl_cb *cb;
unsigned char *buffer;
struct mei_device *dev;
int ret;
BUG_ON(mei_hdr->me_addr != dev->iamthif_cl.me_client_id);
BUG_ON(dev->iamthif_state != MEI_IAMTHIF_READING);
dev = cl->dev;
buffer = dev->iamthif_msg_buf + dev->iamthif_msg_buf_index;
BUG_ON(dev->iamthif_mtu < dev->iamthif_msg_buf_index + mei_hdr->length);
if (dev->iamthif_state != MEI_IAMTHIF_READING)
return 0;
mei_read_slots(dev, buffer, mei_hdr->length);
dev->iamthif_msg_buf_index += mei_hdr->length;
ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list);
if (ret)
return ret;
if (!mei_hdr->msg_complete)
return 0;
dev_dbg(dev->dev, "amthif_message_buffer_index =%d\n",
mei_hdr->length);
dev_dbg(dev->dev, "completed amthif read.\n ");
if (!dev->iamthif_current_cb)
return -ENODEV;
cb = dev->iamthif_current_cb;
dev->iamthif_current_cb = NULL;
dev->iamthif_stall_timer = 0;
cb->buf_idx = dev->iamthif_msg_buf_index;
cb->read_time = jiffies;
if (dev->iamthif_ioctl) {
/* found the iamthif cb */
dev_dbg(dev->dev, "complete the amthif read cb.\n ");
dev_dbg(dev->dev, "add the amthif read cb to complete.\n ");
list_add_tail(&cb->list, &complete_list->list);
}
return 0;
}
/**
* mei_amthif_irq_read - prepares to read amthif data.
*
* @dev: the device structure.
* @slots: free slots.
*
* Return: 0, OK; otherwise, error.
*/
int mei_amthif_irq_read(struct mei_device *dev, s32 *slots)
{
u32 msg_slots = mei_data2slots(sizeof(struct hbm_flow_control));
if (*slots < msg_slots)
return -EMSGSIZE;
*slots -= msg_slots;
if (mei_hbm_cl_flow_control_req(dev, &dev->iamthif_cl)) {
dev_dbg(dev->dev, "iamthif flow control failed\n");
return -EIO;
}
dev_dbg(dev->dev, "iamthif flow control success\n");
dev->iamthif_state = MEI_IAMTHIF_READING;
dev->iamthif_flow_control_pending = false;
dev->iamthif_msg_buf_index = 0;
dev->iamthif_msg_buf_size = 0;
dev->iamthif_stall_timer = MEI_IAMTHIF_STALL_TIMER;
dev->hbuf_is_ready = mei_hbuf_is_ready(dev);
return 0;
}
@ -588,17 +460,30 @@ int mei_amthif_irq_read(struct mei_device *dev, s32 *slots)
*/
void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb)
{
if (cb->fop_type == MEI_FOP_WRITE) {
if (!cb->status) {
dev->iamthif_stall_timer = MEI_IAMTHIF_STALL_TIMER;
mei_io_cb_free(cb);
return;
}
/*
* in case of error enqueue the write cb to the read completion list
* so it can be propagated to the reader
*/
list_add_tail(&cb->list, &dev->amthif_rd_complete_list.list);
wake_up_interruptible(&dev->iamthif_cl.wait);
return;
}
if (dev->iamthif_canceled != 1) {
dev->iamthif_state = MEI_IAMTHIF_READ_COMPLETE;
dev->iamthif_stall_timer = 0;
memcpy(cb->response_buffer.data,
dev->iamthif_msg_buf,
dev->iamthif_msg_buf_index);
list_add_tail(&cb->list, &dev->amthif_rd_complete_list.list);
dev_dbg(dev->dev, "amthif read completed\n");
dev->iamthif_timer = jiffies;
dev_dbg(dev->dev, "dev->iamthif_timer = %ld\n",
dev->iamthif_timer);
dev->iamthif_timer);
} else {
mei_amthif_run_next_cmd(dev);
}
@ -623,26 +508,22 @@ void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb)
static bool mei_clear_list(struct mei_device *dev,
const struct file *file, struct list_head *mei_cb_list)
{
struct mei_cl_cb *cb_pos = NULL;
struct mei_cl_cb *cb_next = NULL;
struct mei_cl *cl = &dev->iamthif_cl;
struct mei_cl_cb *cb, *next;
bool removed = false;
/* list all list member */
list_for_each_entry_safe(cb_pos, cb_next, mei_cb_list, list) {
list_for_each_entry_safe(cb, next, mei_cb_list, list) {
/* check if list member associated with a file */
if (file == cb_pos->file_object) {
/* remove member from the list */
list_del(&cb_pos->list);
if (file == cb->file_object) {
/* check if cb equal to current iamthif cb */
if (dev->iamthif_current_cb == cb_pos) {
if (dev->iamthif_current_cb == cb) {
dev->iamthif_current_cb = NULL;
/* send flow control to iamthif client */
mei_hbm_cl_flow_control_req(dev,
&dev->iamthif_cl);
mei_hbm_cl_flow_control_req(dev, cl);
}
/* free all allocated buffers */
mei_io_cb_free(cb_pos);
cb_pos = NULL;
mei_io_cb_free(cb);
removed = true;
}
}


@ -238,7 +238,7 @@ static ssize_t ___mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
dev = cl->dev;
mutex_lock(&dev->device_lock);
if (cl->state != MEI_FILE_CONNECTED) {
if (!mei_cl_is_connected(cl)) {
rets = -ENODEV;
goto out;
}
@ -255,17 +255,13 @@ static ssize_t ___mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
goto out;
}
cb = mei_io_cb_init(cl, NULL);
cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, NULL);
if (!cb) {
rets = -ENOMEM;
goto out;
}
rets = mei_io_cb_alloc_req_buf(cb, length);
if (rets < 0)
goto out;
memcpy(cb->request_buffer.data, buf, length);
memcpy(cb->buf.data, buf, length);
rets = mei_cl_write(cl, cb, blocking);
@ -292,20 +288,21 @@ ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length)
mutex_lock(&dev->device_lock);
if (!cl->read_cb) {
rets = mei_cl_read_start(cl, length);
if (rets < 0)
goto out;
}
cb = mei_cl_read_cb(cl, NULL);
if (cb)
goto copy;
if (cl->reading_state != MEI_READ_COMPLETE &&
!waitqueue_active(&cl->rx_wait)) {
rets = mei_cl_read_start(cl, length, NULL);
if (rets && rets != -EBUSY)
goto out;
if (list_empty(&cl->rd_completed) && !waitqueue_active(&cl->rx_wait)) {
mutex_unlock(&dev->device_lock);
if (wait_event_interruptible(cl->rx_wait,
cl->reading_state == MEI_READ_COMPLETE ||
mei_cl_is_transitioning(cl))) {
(!list_empty(&cl->rd_completed)) ||
(!mei_cl_is_connected(cl)))) {
if (signal_pending(current))
return -EINTR;
@ -313,23 +310,31 @@ ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length)
}
mutex_lock(&dev->device_lock);
if (!mei_cl_is_connected(cl)) {
rets = -EBUSY;
goto out;
}
}
cb = cl->read_cb;
if (cl->reading_state != MEI_READ_COMPLETE) {
cb = mei_cl_read_cb(cl, NULL);
if (!cb) {
rets = 0;
goto out;
}
copy:
if (cb->status) {
rets = cb->status;
goto free;
}
r_length = min_t(size_t, length, cb->buf_idx);
memcpy(buf, cb->response_buffer.data, r_length);
memcpy(buf, cb->buf.data, r_length);
rets = r_length;
free:
mei_io_cb_free(cb);
cl->reading_state = MEI_IDLE;
cl->read_cb = NULL;
out:
mutex_unlock(&dev->device_lock);
@ -386,7 +391,7 @@ static void mei_bus_event_work(struct work_struct *work)
device->events = 0;
/* Prepare for the next read */
mei_cl_read_start(device->cl, 0);
mei_cl_read_start(device->cl, 0, NULL);
}
int mei_cl_register_event_cb(struct mei_cl_device *device,
@ -400,7 +405,7 @@ int mei_cl_register_event_cb(struct mei_cl_device *device,
device->event_context = context;
INIT_WORK(&device->event_work, mei_bus_event_work);
mei_cl_read_start(device->cl, 0);
mei_cl_read_start(device->cl, 0, NULL);
return 0;
}
@ -441,8 +446,8 @@ int mei_cl_enable_device(struct mei_cl_device *device)
mutex_unlock(&dev->device_lock);
if (device->event_cb && !cl->read_cb)
mei_cl_read_start(device->cl, 0);
if (device->event_cb)
mei_cl_read_start(device->cl, 0, NULL);
if (!device->ops || !device->ops->enable)
return 0;
@ -462,54 +467,34 @@ int mei_cl_disable_device(struct mei_cl_device *device)
dev = cl->dev;
if (device->ops && device->ops->disable)
device->ops->disable(device);
device->event_cb = NULL;
mutex_lock(&dev->device_lock);
if (cl->state != MEI_FILE_CONNECTED) {
mutex_unlock(&dev->device_lock);
if (!mei_cl_is_connected(cl)) {
dev_err(dev->dev, "Already disconnected");
return 0;
err = 0;
goto out;
}
cl->state = MEI_FILE_DISCONNECTING;
err = mei_cl_disconnect(cl);
if (err < 0) {
mutex_unlock(&dev->device_lock);
dev_err(dev->dev,
"Could not disconnect from the ME client");
return err;
dev_err(dev->dev, "Could not disconnect from the ME client");
goto out;
}
/* Flush queues and remove any pending read */
mei_cl_flush_queues(cl);
if (cl->read_cb) {
struct mei_cl_cb *cb = NULL;
cb = mei_cl_find_read_cb(cl);
/* Remove entry from read list */
if (cb)
list_del(&cb->list);
cb = cl->read_cb;
cl->read_cb = NULL;
if (cb) {
mei_io_cb_free(cb);
cb = NULL;
}
}
device->event_cb = NULL;
mei_cl_flush_queues(cl, NULL);
out:
mutex_unlock(&dev->device_lock);
return err;
if (!device->ops || !device->ops->disable)
return 0;
return device->ops->disable(device);
}
EXPORT_SYMBOL_GPL(mei_cl_disable_device);


@ -48,14 +48,14 @@ void mei_me_cl_init(struct mei_me_client *me_cl)
*/
struct mei_me_client *mei_me_cl_get(struct mei_me_client *me_cl)
{
if (me_cl)
kref_get(&me_cl->refcnt);
if (me_cl && kref_get_unless_zero(&me_cl->refcnt))
return me_cl;
return me_cl;
return NULL;
}
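/*
 * Why kref_get_unless_zero(): if a lookup races with the final
 * mei_me_cl_put(), a plain kref_get() would resurrect an object whose
 * release is already running. A minimal sketch of the safe lookup
 * pattern, with hypothetical names, list lock held by the caller:
 */
struct example_obj {
	struct kref refcnt;
	struct list_head list;
};

static struct example_obj *example_find_get(struct list_head *head)
{
	struct example_obj *obj;

	list_for_each_entry(obj, head, list) {
		if (kref_get_unless_zero(&obj->refcnt))
			return obj;	/* reference taken, object pinned */
	}
	return NULL;			/* any entries found were dying */
}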
/**
* mei_me_cl_release - unlink and free me client
* mei_me_cl_release - free me client
*
* Locking: called under "dev->device_lock" lock
*
@ -65,9 +65,10 @@ static void mei_me_cl_release(struct kref *ref)
{
struct mei_me_client *me_cl =
container_of(ref, struct mei_me_client, refcnt);
list_del(&me_cl->list);
kfree(me_cl);
}
/**
* mei_me_cl_put - decrease me client refcount and free client if necessary
*
@ -81,6 +82,65 @@ void mei_me_cl_put(struct mei_me_client *me_cl)
kref_put(&me_cl->refcnt, mei_me_cl_release);
}
/**
* __mei_me_cl_del - delete me client from the list and decrease
* reference counter
*
* @dev: mei device
* @me_cl: me client
*
* Locking: dev->me_clients_rwsem
*/
static void __mei_me_cl_del(struct mei_device *dev, struct mei_me_client *me_cl)
{
if (!me_cl)
return;
list_del(&me_cl->list);
mei_me_cl_put(me_cl);
}
/**
* mei_me_cl_add - add me client to the list
*
* @dev: mei device
* @me_cl: me client
*/
void mei_me_cl_add(struct mei_device *dev, struct mei_me_client *me_cl)
{
down_write(&dev->me_clients_rwsem);
list_add(&me_cl->list, &dev->me_clients);
up_write(&dev->me_clients_rwsem);
}
/**
* __mei_me_cl_by_uuid - locate me client by uuid
* increases ref count
*
* @dev: mei device
* @uuid: me client uuid
*
* Return: me client or NULL if not found
*
* Locking: dev->me_clients_rwsem
*/
static struct mei_me_client *__mei_me_cl_by_uuid(struct mei_device *dev,
const uuid_le *uuid)
{
struct mei_me_client *me_cl;
const uuid_le *pn;
WARN_ON(!rwsem_is_locked(&dev->me_clients_rwsem));
list_for_each_entry(me_cl, &dev->me_clients, list) {
pn = &me_cl->props.protocol_name;
if (uuid_le_cmp(*uuid, *pn) == 0)
return mei_me_cl_get(me_cl);
}
return NULL;
}
/**
* mei_me_cl_by_uuid - locate me client by uuid
* increases ref count
@ -88,20 +148,20 @@ void mei_me_cl_put(struct mei_me_client *me_cl)
* @dev: mei device
* @uuid: me client uuid
*
* Locking: called under "dev->device_lock" lock
*
* Return: me client or NULL if not found
*
* Locking: dev->me_clients_rwsem
*/
struct mei_me_client *mei_me_cl_by_uuid(const struct mei_device *dev,
struct mei_me_client *mei_me_cl_by_uuid(struct mei_device *dev,
const uuid_le *uuid)
{
struct mei_me_client *me_cl;
list_for_each_entry(me_cl, &dev->me_clients, list)
if (uuid_le_cmp(*uuid, me_cl->props.protocol_name) == 0)
return mei_me_cl_get(me_cl);
down_read(&dev->me_clients_rwsem);
me_cl = __mei_me_cl_by_uuid(dev, uuid);
up_read(&dev->me_clients_rwsem);
return NULL;
return me_cl;
}
/**
@ -111,22 +171,58 @@ struct mei_me_client *mei_me_cl_by_uuid(const struct mei_device *dev,
* @dev: the device structure
* @client_id: me client id
*
* Locking: called under "dev->device_lock" lock
*
* Return: me client or NULL if not found
*
* Locking: dev->me_clients_rwsem
*/
struct mei_me_client *mei_me_cl_by_id(struct mei_device *dev, u8 client_id)
{
struct mei_me_client *me_cl;
struct mei_me_client *__me_cl, *me_cl = NULL;
list_for_each_entry(me_cl, &dev->me_clients, list)
if (me_cl->client_id == client_id)
down_read(&dev->me_clients_rwsem);
list_for_each_entry(__me_cl, &dev->me_clients, list) {
if (__me_cl->client_id == client_id) {
me_cl = mei_me_cl_get(__me_cl);
break;
}
}
up_read(&dev->me_clients_rwsem);
return me_cl;
}
/**
* __mei_me_cl_by_uuid_id - locate me client by client id and uuid
* increases ref count
*
* @dev: the device structure
* @uuid: me client uuid
* @client_id: me client id
*
* Return: me client or null if not found
*
* Locking: dev->me_clients_rwsem
*/
static struct mei_me_client *__mei_me_cl_by_uuid_id(struct mei_device *dev,
const uuid_le *uuid, u8 client_id)
{
struct mei_me_client *me_cl;
const uuid_le *pn;
WARN_ON(!rwsem_is_locked(&dev->me_clients_rwsem));
list_for_each_entry(me_cl, &dev->me_clients, list) {
pn = &me_cl->props.protocol_name;
if (uuid_le_cmp(*uuid, *pn) == 0 &&
me_cl->client_id == client_id)
return mei_me_cl_get(me_cl);
}
return NULL;
}
/**
* mei_me_cl_by_uuid_id - locate me client by client id and uuid
* increases ref count
@ -135,21 +231,18 @@ struct mei_me_client *mei_me_cl_by_id(struct mei_device *dev, u8 client_id)
* @uuid: me client uuid
* @client_id: me client id
*
* Locking: called under "dev->device_lock" lock
*
* Return: me client or NULL if not found
* Return: me client or null if not found
*/
struct mei_me_client *mei_me_cl_by_uuid_id(struct mei_device *dev,
const uuid_le *uuid, u8 client_id)
{
struct mei_me_client *me_cl;
list_for_each_entry(me_cl, &dev->me_clients, list)
if (uuid_le_cmp(*uuid, me_cl->props.protocol_name) == 0 &&
me_cl->client_id == client_id)
return mei_me_cl_get(me_cl);
down_read(&dev->me_clients_rwsem);
me_cl = __mei_me_cl_by_uuid_id(dev, uuid, client_id);
up_read(&dev->me_clients_rwsem);
return NULL;
return me_cl;
}
/**
@ -162,12 +255,14 @@ struct mei_me_client *mei_me_cl_by_uuid_id(struct mei_device *dev,
*/
void mei_me_cl_rm_by_uuid(struct mei_device *dev, const uuid_le *uuid)
{
struct mei_me_client *me_cl, *next;
struct mei_me_client *me_cl;
dev_dbg(dev->dev, "remove %pUl\n", uuid);
list_for_each_entry_safe(me_cl, next, &dev->me_clients, list)
if (uuid_le_cmp(*uuid, me_cl->props.protocol_name) == 0)
mei_me_cl_put(me_cl);
down_write(&dev->me_clients_rwsem);
me_cl = __mei_me_cl_by_uuid(dev, uuid);
__mei_me_cl_del(dev, me_cl);
up_write(&dev->me_clients_rwsem);
}
/**
@ -181,15 +276,14 @@ void mei_me_cl_rm_by_uuid(struct mei_device *dev, const uuid_le *uuid)
*/
void mei_me_cl_rm_by_uuid_id(struct mei_device *dev, const uuid_le *uuid, u8 id)
{
struct mei_me_client *me_cl, *next;
const uuid_le *pn;
struct mei_me_client *me_cl;
dev_dbg(dev->dev, "remove %pUl %d\n", uuid, id);
list_for_each_entry_safe(me_cl, next, &dev->me_clients, list) {
pn = &me_cl->props.protocol_name;
if (me_cl->client_id == id && uuid_le_cmp(*uuid, *pn) == 0)
mei_me_cl_put(me_cl);
}
down_write(&dev->me_clients_rwsem);
me_cl = __mei_me_cl_by_uuid_id(dev, uuid, id);
__mei_me_cl_del(dev, me_cl);
up_write(&dev->me_clients_rwsem);
}
/**
@ -203,12 +297,12 @@ void mei_me_cl_rm_all(struct mei_device *dev)
{
struct mei_me_client *me_cl, *next;
down_write(&dev->me_clients_rwsem);
list_for_each_entry_safe(me_cl, next, &dev->me_clients, list)
mei_me_cl_put(me_cl);
__mei_me_cl_del(dev, me_cl);
up_write(&dev->me_clients_rwsem);
}
/**
* mei_cl_cmp_id - tells if the clients are the same
*
@ -227,7 +321,48 @@ static inline bool mei_cl_cmp_id(const struct mei_cl *cl1,
}
/**
* mei_io_list_flush - removes cbs belonging to cl.
* mei_io_cb_free - free mei_cb_private related memory
*
* @cb: mei callback struct
*/
void mei_io_cb_free(struct mei_cl_cb *cb)
{
if (cb == NULL)
return;
list_del(&cb->list);
kfree(cb->buf.data);
kfree(cb);
}
/**
* mei_io_cb_init - allocate and initialize io callback
*
* @cl: mei client
* @type: operation type
* @fp: pointer to file structure
*
* Return: mei_cl_cb pointer or NULL;
*/
struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, enum mei_cb_file_ops type,
struct file *fp)
{
struct mei_cl_cb *cb;
cb = kzalloc(sizeof(struct mei_cl_cb), GFP_KERNEL);
if (!cb)
return NULL;
INIT_LIST_HEAD(&cb->list);
cb->file_object = fp;
cb->cl = cl;
cb->buf_idx = 0;
cb->fop_type = type;
return cb;
}
/**
* __mei_io_list_flush - removes and frees cbs belonging to cl.
*
* @list: an instance of our list structure
* @cl: host client, can be NULL for flushing the whole list
@ -236,13 +371,12 @@ static inline bool mei_cl_cmp_id(const struct mei_cl *cl1,
static void __mei_io_list_flush(struct mei_cl_cb *list,
struct mei_cl *cl, bool free)
{
struct mei_cl_cb *cb;
struct mei_cl_cb *next;
struct mei_cl_cb *cb, *next;
/* enable removing everything if no cl is specified */
list_for_each_entry_safe(cb, next, &list->list, list) {
if (!cl || mei_cl_cmp_id(cl, cb->cl)) {
list_del(&cb->list);
list_del_init(&cb->list);
if (free)
mei_io_cb_free(cb);
}
@ -260,7 +394,6 @@ void mei_io_list_flush(struct mei_cl_cb *list, struct mei_cl *cl)
__mei_io_list_flush(list, cl, false);
}
/**
* mei_io_list_free - removes cb belonging to cl and free them
*
@ -273,103 +406,107 @@ static inline void mei_io_list_free(struct mei_cl_cb *list, struct mei_cl *cl)
}
/**
* mei_io_cb_free - free mei_cb_private related memory
* mei_io_cb_alloc_buf - allocate callback buffer
*
* @cb: mei callback struct
* @cb: io callback structure
* @length: size of the buffer
*
* Return: 0 on success
* -EINVAL if cb is NULL
* -ENOMEM if allocation failed
*/
void mei_io_cb_free(struct mei_cl_cb *cb)
int mei_io_cb_alloc_buf(struct mei_cl_cb *cb, size_t length)
{
if (cb == NULL)
return;
if (!cb)
return -EINVAL;
kfree(cb->request_buffer.data);
kfree(cb->response_buffer.data);
kfree(cb);
if (length == 0)
return 0;
cb->buf.data = kmalloc(length, GFP_KERNEL);
if (!cb->buf.data)
return -ENOMEM;
cb->buf.size = length;
return 0;
}
/**
* mei_io_cb_init - allocate and initialize io callback
* mei_cl_alloc_cb - a convenient wrapper for allocating read cb
*
* @cl: mei client
* @fp: pointer to file structure
* @cl: host client
* @length: size of the buffer
* @type: operation type
* @fp: associated file pointer (might be NULL)
*
* Return: mei_cl_cb pointer or NULL;
* Return: cb on success and NULL on failure
*/
struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, struct file *fp)
struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
enum mei_cb_file_ops type, struct file *fp)
{
struct mei_cl_cb *cb;
cb = kzalloc(sizeof(struct mei_cl_cb), GFP_KERNEL);
cb = mei_io_cb_init(cl, type, fp);
if (!cb)
return NULL;
mei_io_list_init(cb);
if (mei_io_cb_alloc_buf(cb, length)) {
mei_io_cb_free(cb);
return NULL;
}
cb->file_object = fp;
cb->cl = cl;
cb->buf_idx = 0;
return cb;
}
/**
* mei_io_cb_alloc_req_buf - allocate request buffer
* mei_cl_read_cb - find this cl's callback in the read list
* for a specific file
*
* @cb: io callback structure
* @length: size of the buffer
* @cl: host client
* @fp: file pointer (matching cb file object), may be NULL
*
* Return: 0 on success
* -EINVAL if cb is NULL
* -ENOMEM if allocation failed
* Return: cb on success, NULL if cb is not found
*/
int mei_io_cb_alloc_req_buf(struct mei_cl_cb *cb, size_t length)
struct mei_cl_cb *mei_cl_read_cb(const struct mei_cl *cl, const struct file *fp)
{
if (!cb)
return -EINVAL;
struct mei_cl_cb *cb;
if (length == 0)
return 0;
list_for_each_entry(cb, &cl->rd_completed, list)
if (!fp || fp == cb->file_object)
return cb;
cb->request_buffer.data = kmalloc(length, GFP_KERNEL);
if (!cb->request_buffer.data)
return -ENOMEM;
cb->request_buffer.size = length;
return 0;
return NULL;
}
/**
* mei_io_cb_alloc_resp_buf - allocate response buffer
* mei_cl_read_cb_flush - free client's read pending and completed cbs
* for a specific file
*
* @cb: io callback structure
* @length: size of the buffer
*
* Return: 0 on success
* -EINVAL if cb is NULL
* -ENOMEM if allocation failed
* @cl: host client
* @fp: file pointer (matching cb file object), may be NULL
*/
int mei_io_cb_alloc_resp_buf(struct mei_cl_cb *cb, size_t length)
void mei_cl_read_cb_flush(const struct mei_cl *cl, const struct file *fp)
{
if (!cb)
return -EINVAL;
struct mei_cl_cb *cb, *next;
if (length == 0)
return 0;
list_for_each_entry_safe(cb, next, &cl->rd_completed, list)
if (!fp || fp == cb->file_object)
mei_io_cb_free(cb);
cb->response_buffer.data = kmalloc(length, GFP_KERNEL);
if (!cb->response_buffer.data)
return -ENOMEM;
cb->response_buffer.size = length;
return 0;
list_for_each_entry_safe(cb, next, &cl->rd_pending, list)
if (!fp || fp == cb->file_object)
mei_io_cb_free(cb);
}
/**
* mei_cl_flush_queues - flushes queue lists belonging to cl.
*
* @cl: host client
* @fp: file pointer (matching cb file object), may be NULL
*
* Return: 0 on success, -EINVAL if cl or cl->dev is NULL.
*/
int mei_cl_flush_queues(struct mei_cl *cl)
int mei_cl_flush_queues(struct mei_cl *cl, const struct file *fp)
{
struct mei_device *dev;
@ -379,13 +516,15 @@ int mei_cl_flush_queues(struct mei_cl *cl)
dev = cl->dev;
cl_dbg(dev, cl, "remove list entry belonging to cl\n");
mei_io_list_flush(&cl->dev->read_list, cl);
mei_io_list_free(&cl->dev->write_list, cl);
mei_io_list_free(&cl->dev->write_waiting_list, cl);
mei_io_list_flush(&cl->dev->ctrl_wr_list, cl);
mei_io_list_flush(&cl->dev->ctrl_rd_list, cl);
mei_io_list_flush(&cl->dev->amthif_cmd_list, cl);
mei_io_list_flush(&cl->dev->amthif_rd_complete_list, cl);
mei_cl_read_cb_flush(cl, fp);
return 0;
}
@ -402,9 +541,10 @@ void mei_cl_init(struct mei_cl *cl, struct mei_device *dev)
init_waitqueue_head(&cl->wait);
init_waitqueue_head(&cl->rx_wait);
init_waitqueue_head(&cl->tx_wait);
INIT_LIST_HEAD(&cl->rd_completed);
INIT_LIST_HEAD(&cl->rd_pending);
INIT_LIST_HEAD(&cl->link);
INIT_LIST_HEAD(&cl->device_link);
cl->reading_state = MEI_IDLE;
cl->writing_state = MEI_IDLE;
cl->dev = dev;
}
@ -429,31 +569,14 @@ struct mei_cl *mei_cl_allocate(struct mei_device *dev)
}
/**
* mei_cl_find_read_cb - find this cl's callback in the read list
* mei_cl_link - allocate host id in the host map
*
* @cl: host client
*
* Return: cb on success, NULL on error
*/
struct mei_cl_cb *mei_cl_find_read_cb(struct mei_cl *cl)
{
struct mei_device *dev = cl->dev;
struct mei_cl_cb *cb;
list_for_each_entry(cb, &dev->read_list.list, list)
if (mei_cl_cmp_id(cl, cb->cl))
return cb;
return NULL;
}
/** mei_cl_link: allocate host id in the host map
*
* @cl - host client
* @id - fixed host id or -1 for generic one
* @id: fixed host id or MEI_HOST_CLIENT_ID_ANY (-1) for generic one
*
* Return: 0 on success
* -EINVAL on incorrect values
* -ENONET if client not found
* -EMFILE if open count exceeded.
*/
int mei_cl_link(struct mei_cl *cl, int id)
{
@ -535,28 +658,31 @@ int mei_cl_unlink(struct mei_cl *cl)
void mei_host_client_init(struct work_struct *work)
{
struct mei_device *dev = container_of(work,
struct mei_device, init_work);
struct mei_device *dev =
container_of(work, struct mei_device, init_work);
struct mei_me_client *me_cl;
struct mei_client_properties *props;
mutex_lock(&dev->device_lock);
list_for_each_entry(me_cl, &dev->me_clients, list) {
props = &me_cl->props;
if (!uuid_le_cmp(props->protocol_name, mei_amthif_guid))
mei_amthif_host_init(dev);
else if (!uuid_le_cmp(props->protocol_name, mei_wd_guid))
mei_wd_host_init(dev);
else if (!uuid_le_cmp(props->protocol_name, mei_nfc_guid))
mei_nfc_host_init(dev);
me_cl = mei_me_cl_by_uuid(dev, &mei_amthif_guid);
if (me_cl)
mei_amthif_host_init(dev);
mei_me_cl_put(me_cl);
me_cl = mei_me_cl_by_uuid(dev, &mei_wd_guid);
if (me_cl)
mei_wd_host_init(dev);
mei_me_cl_put(me_cl);
me_cl = mei_me_cl_by_uuid(dev, &mei_nfc_guid);
if (me_cl)
mei_nfc_host_init(dev);
mei_me_cl_put(me_cl);
}
dev->dev_state = MEI_DEV_ENABLED;
dev->reset_count = 0;
mutex_unlock(&dev->device_lock);
pm_runtime_mark_last_busy(dev->dev);
@ -620,13 +746,10 @@ int mei_cl_disconnect(struct mei_cl *cl)
return rets;
}
cb = mei_io_cb_init(cl, NULL);
if (!cb) {
rets = -ENOMEM;
cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT, NULL);
rets = cb ? 0 : -ENOMEM;
if (rets)
goto free;
}
cb->fop_type = MEI_FOP_DISCONNECT;
if (mei_hbuf_acquire(dev)) {
if (mei_hbm_cl_disconnect_req(dev, cl)) {
@ -727,13 +850,10 @@ int mei_cl_connect(struct mei_cl *cl, struct file *file)
return rets;
}
cb = mei_io_cb_init(cl, file);
if (!cb) {
rets = -ENOMEM;
cb = mei_io_cb_init(cl, MEI_FOP_CONNECT, file);
rets = cb ? 0 : -ENOMEM;
if (rets)
goto out;
}
cb->fop_type = MEI_FOP_CONNECT;
/* run hbuf acquire last so we don't have to undo */
if (!mei_cl_is_other_connecting(cl) && mei_hbuf_acquire(dev)) {
@ -756,7 +876,7 @@ int mei_cl_connect(struct mei_cl *cl, struct file *file)
mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
mutex_lock(&dev->device_lock);
if (cl->state != MEI_FILE_CONNECTED) {
if (!mei_cl_is_connected(cl)) {
cl->state = MEI_FILE_DISCONNECTED;
/* something went really wrong */
if (!cl->status)
@ -777,6 +897,37 @@ out:
return rets;
}
/**
* mei_cl_alloc_linked - allocate and link host client
*
* @dev: the device structure
* @id: fixed host id or MEI_HOST_CLIENT_ID_ANY (-1) for generic one
*
* Return: cl on success ERR_PTR on failure
*/
struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev, int id)
{
struct mei_cl *cl;
int ret;
cl = mei_cl_allocate(dev);
if (!cl) {
ret = -ENOMEM;
goto err;
}
ret = mei_cl_link(cl, id);
if (ret)
goto err;
return cl;
err:
kfree(cl);
return ERR_PTR(ret);
}
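/*
 * Usage sketch (hypothetical caller): the ERR_PTR convention lets callers
 * propagate the exact failure, e.g.
 *
 *	cl = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
 *	if (IS_ERR(cl))
 *		return PTR_ERR(cl);
 */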
/**
* mei_cl_flow_ctrl_creds - checks flow_control credits for cl.
*
@ -866,10 +1017,11 @@ out:
*
* @cl: host client
* @length: number of bytes to read
* @fp: pointer to file structure
*
* Return: 0 on success, <0 on failure.
*/
int mei_cl_read_start(struct mei_cl *cl, size_t length)
int mei_cl_read_start(struct mei_cl *cl, size_t length, struct file *fp)
{
struct mei_device *dev;
struct mei_cl_cb *cb;
@ -884,10 +1036,10 @@ int mei_cl_read_start(struct mei_cl *cl, size_t length)
if (!mei_cl_is_connected(cl))
return -ENODEV;
if (cl->read_cb) {
cl_dbg(dev, cl, "read is pending.\n");
/* HW currently supports only one pending read */
if (!list_empty(&cl->rd_pending))
return -EBUSY;
}
me_cl = mei_me_cl_by_uuid_id(dev, &cl->cl_uuid, cl->me_client_id);
if (!me_cl) {
cl_err(dev, cl, "no such me client %d\n", cl->me_client_id);
@ -904,29 +1056,21 @@ int mei_cl_read_start(struct mei_cl *cl, size_t length)
return rets;
}
cb = mei_io_cb_init(cl, NULL);
if (!cb) {
rets = -ENOMEM;
goto out;
}
rets = mei_io_cb_alloc_resp_buf(cb, length);
cb = mei_cl_alloc_cb(cl, length, MEI_FOP_READ, fp);
rets = cb ? 0 : -ENOMEM;
if (rets)
goto out;
cb->fop_type = MEI_FOP_READ;
if (mei_hbuf_acquire(dev)) {
rets = mei_hbm_cl_flow_control_req(dev, cl);
if (rets < 0)
goto out;
list_add_tail(&cb->list, &dev->read_list.list);
list_add_tail(&cb->list, &cl->rd_pending);
} else {
list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
}
cl->read_cb = cb;
out:
cl_dbg(dev, cl, "rpm: autosuspend\n");
pm_runtime_mark_last_busy(dev->dev);
@ -964,7 +1108,7 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
dev = cl->dev;
buf = &cb->request_buffer;
buf = &cb->buf;
rets = mei_cl_flow_ctrl_creds(cl);
if (rets < 0)
@ -999,7 +1143,7 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
}
cl_dbg(dev, cl, "buf: size = %d idx = %lu\n",
cb->request_buffer.size, cb->buf_idx);
cb->buf.size, cb->buf_idx);
rets = mei_write_message(dev, &mei_hdr, buf->data + cb->buf_idx);
if (rets) {
@ -1011,6 +1155,7 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
cl->status = 0;
cl->writing_state = MEI_WRITING;
cb->buf_idx += mei_hdr.length;
cb->completed = mei_hdr.msg_complete == 1;
if (mei_hdr.msg_complete) {
if (mei_cl_flow_ctrl_reduce(cl))
@ -1048,7 +1193,7 @@ int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking)
dev = cl->dev;
buf = &cb->request_buffer;
buf = &cb->buf;
cl_dbg(dev, cl, "size=%d\n", buf->size);
@ -1059,7 +1204,6 @@ int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking)
return rets;
}
cb->fop_type = MEI_FOP_WRITE;
cb->buf_idx = 0;
cl->writing_state = MEI_IDLE;
@ -1099,6 +1243,7 @@ int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking)
cl->writing_state = MEI_WRITING;
cb->buf_idx = mei_hdr.length;
cb->completed = mei_hdr.msg_complete == 1;
out:
if (mei_hdr.msg_complete) {
@ -1151,11 +1296,10 @@ void mei_cl_complete(struct mei_cl *cl, struct mei_cl_cb *cb)
if (waitqueue_active(&cl->tx_wait))
wake_up_interruptible(&cl->tx_wait);
} else if (cb->fop_type == MEI_FOP_READ &&
MEI_READING == cl->reading_state) {
cl->reading_state = MEI_READ_COMPLETE;
} else if (cb->fop_type == MEI_FOP_READ) {
list_add_tail(&cb->list, &cl->rd_completed);
if (waitqueue_active(&cl->rx_wait))
wake_up_interruptible(&cl->rx_wait);
wake_up_interruptible_all(&cl->rx_wait);
else
mei_cl_bus_rx_event(cl);


@ -31,7 +31,10 @@ void mei_me_cl_init(struct mei_me_client *me_cl);
void mei_me_cl_put(struct mei_me_client *me_cl);
struct mei_me_client *mei_me_cl_get(struct mei_me_client *me_cl);
struct mei_me_client *mei_me_cl_by_uuid(const struct mei_device *dev,
void mei_me_cl_add(struct mei_device *dev, struct mei_me_client *me_cl);
void mei_me_cl_del(struct mei_device *dev, struct mei_me_client *me_cl);
struct mei_me_client *mei_me_cl_by_uuid(struct mei_device *dev,
const uuid_le *uuid);
struct mei_me_client *mei_me_cl_by_id(struct mei_device *dev, u8 client_id);
struct mei_me_client *mei_me_cl_by_uuid_id(struct mei_device *dev,
@ -44,10 +47,10 @@ void mei_me_cl_rm_all(struct mei_device *dev);
/*
* MEI IO Functions
*/
struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, struct file *fp);
struct mei_cl_cb *mei_io_cb_init(struct mei_cl *cl, enum mei_cb_file_ops type,
struct file *fp);
void mei_io_cb_free(struct mei_cl_cb *priv_cb);
int mei_io_cb_alloc_req_buf(struct mei_cl_cb *cb, size_t length);
int mei_io_cb_alloc_resp_buf(struct mei_cl_cb *cb, size_t length);
int mei_io_cb_alloc_buf(struct mei_cl_cb *cb, size_t length);
/**
@ -72,9 +75,14 @@ void mei_cl_init(struct mei_cl *cl, struct mei_device *dev);
int mei_cl_link(struct mei_cl *cl, int id);
int mei_cl_unlink(struct mei_cl *cl);
int mei_cl_flush_queues(struct mei_cl *cl);
struct mei_cl_cb *mei_cl_find_read_cb(struct mei_cl *cl);
struct mei_cl *mei_cl_alloc_linked(struct mei_device *dev, int id);
struct mei_cl_cb *mei_cl_read_cb(const struct mei_cl *cl,
const struct file *fp);
void mei_cl_read_cb_flush(const struct mei_cl *cl, const struct file *fp);
struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
enum mei_cb_file_ops type, struct file *fp);
int mei_cl_flush_queues(struct mei_cl *cl, const struct file *fp);
int mei_cl_flow_ctrl_creds(struct mei_cl *cl);
@ -82,23 +90,25 @@ int mei_cl_flow_ctrl_reduce(struct mei_cl *cl);
/*
* MEI input output function prototype
*/
/**
* mei_cl_is_connected - host client is connected
*
* @cl: host client
*
* Return: true if the host client is connected
*/
static inline bool mei_cl_is_connected(struct mei_cl *cl)
{
return cl->dev &&
cl->dev->dev_state == MEI_DEV_ENABLED &&
cl->state == MEI_FILE_CONNECTED;
}
static inline bool mei_cl_is_transitioning(struct mei_cl *cl)
{
return MEI_FILE_INITIALIZING == cl->state ||
MEI_FILE_DISCONNECTED == cl->state ||
MEI_FILE_DISCONNECTING == cl->state;
return cl->state == MEI_FILE_CONNECTED;
}
bool mei_cl_is_other_connecting(struct mei_cl *cl);
int mei_cl_disconnect(struct mei_cl *cl);
int mei_cl_connect(struct mei_cl *cl, struct file *file);
int mei_cl_read_start(struct mei_cl *cl, size_t length);
int mei_cl_read_start(struct mei_cl *cl, size_t length, struct file *fp);
int mei_cl_irq_read_msg(struct mei_cl *cl, struct mei_msg_hdr *hdr,
struct mei_cl_cb *cmpl_list);
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking);
int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);


@ -28,7 +28,7 @@ static ssize_t mei_dbgfs_read_meclients(struct file *fp, char __user *ubuf,
size_t cnt, loff_t *ppos)
{
struct mei_device *dev = fp->private_data;
struct mei_me_client *me_cl, *n;
struct mei_me_client *me_cl;
size_t bufsz = 1;
char *buf;
int i = 0;
@ -38,15 +38,14 @@ static ssize_t mei_dbgfs_read_meclients(struct file *fp, char __user *ubuf,
#define HDR \
" |id|fix| UUID |con|msg len|sb|refc|\n"
mutex_lock(&dev->device_lock);
down_read(&dev->me_clients_rwsem);
list_for_each_entry(me_cl, &dev->me_clients, list)
bufsz++;
bufsz *= sizeof(HDR) + 1;
buf = kzalloc(bufsz, GFP_KERNEL);
if (!buf) {
mutex_unlock(&dev->device_lock);
up_read(&dev->me_clients_rwsem);
return -ENOMEM;
}
@ -56,10 +55,9 @@ static ssize_t mei_dbgfs_read_meclients(struct file *fp, char __user *ubuf,
if (dev->dev_state != MEI_DEV_ENABLED)
goto out;
list_for_each_entry_safe(me_cl, n, &dev->me_clients, list) {
list_for_each_entry(me_cl, &dev->me_clients, list) {
me_cl = mei_me_cl_get(me_cl);
if (me_cl) {
if (mei_me_cl_get(me_cl)) {
pos += scnprintf(buf + pos, bufsz - pos,
"%2d|%2d|%3d|%pUl|%3d|%7d|%2d|%4d|\n",
i++, me_cl->client_id,
@ -69,12 +67,13 @@ static ssize_t mei_dbgfs_read_meclients(struct file *fp, char __user *ubuf,
me_cl->props.max_msg_length,
me_cl->props.single_recv_buf,
atomic_read(&me_cl->refcnt.refcount));
}
mei_me_cl_put(me_cl);
mei_me_cl_put(me_cl);
}
}
out:
mutex_unlock(&dev->device_lock);
up_read(&dev->me_clients_rwsem);
ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, pos);
kfree(buf);
return ret;
@ -118,7 +117,7 @@ static ssize_t mei_dbgfs_read_active(struct file *fp, char __user *ubuf,
pos += scnprintf(buf + pos, bufsz - pos,
"%2d|%2d|%4d|%5d|%2d|%2d|\n",
i, cl->me_client_id, cl->host_client_id, cl->state,
cl->reading_state, cl->writing_state);
!list_empty(&cl->rd_completed), cl->writing_state);
i++;
}
out:


@ -338,7 +338,8 @@ static int mei_hbm_me_cl_add(struct mei_device *dev,
me_cl->client_id = res->me_addr;
me_cl->mei_flow_ctrl_creds = 0;
list_add(&me_cl->list, &dev->me_clients);
mei_me_cl_add(dev, me_cl);
return 0;
}
@ -638,7 +639,7 @@ static void mei_hbm_cl_res(struct mei_device *dev,
continue;
if (mei_hbm_cl_addr_equal(cl, rs)) {
list_del(&cb->list);
list_del_init(&cb->list);
break;
}
}
@ -683,10 +684,9 @@ static int mei_hbm_fw_disconnect_req(struct mei_device *dev,
cl->state = MEI_FILE_DISCONNECTED;
cl->timer_count = 0;
cb = mei_io_cb_init(cl, NULL);
cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT_RSP, NULL);
if (!cb)
return -ENOMEM;
cb->fop_type = MEI_FOP_DISCONNECT_RSP;
cl_dbg(dev, cl, "add disconnect response as first\n");
list_add(&cb->list, &dev->ctrl_wr_list.list);
}


@ -25,6 +25,8 @@
#include "hw-me.h"
#include "hw-me-regs.h"
#include "mei-trace.h"
/**
* mei_me_reg_read - Reads 32bit data from the mei device
*
@ -61,45 +63,79 @@ static inline void mei_me_reg_write(const struct mei_me_hw *hw,
*
* Return: ME_CB_RW register value (u32)
*/
static u32 mei_me_mecbrw_read(const struct mei_device *dev)
static inline u32 mei_me_mecbrw_read(const struct mei_device *dev)
{
return mei_me_reg_read(to_me_hw(dev), ME_CB_RW);
}
/**
* mei_me_hcbww_write - write 32bit data to the host circular buffer
*
* @dev: the device structure
* @data: 32bit data to be written to the host circular buffer
*/
static inline void mei_me_hcbww_write(struct mei_device *dev, u32 data)
{
mei_me_reg_write(to_me_hw(dev), H_CB_WW, data);
}
/**
* mei_me_mecsr_read - Reads 32bit data from the ME CSR
*
* @hw: the me hardware structure
* @dev: the device structure
*
* Return: ME_CSR_HA register value (u32)
*/
static inline u32 mei_me_mecsr_read(const struct mei_me_hw *hw)
static inline u32 mei_me_mecsr_read(const struct mei_device *dev)
{
return mei_me_reg_read(hw, ME_CSR_HA);
u32 reg;
reg = mei_me_reg_read(to_me_hw(dev), ME_CSR_HA);
trace_mei_reg_read(dev->dev, "ME_CSR_HA", ME_CSR_HA, reg);
return reg;
}
/**
* mei_hcsr_read - Reads 32bit data from the host CSR
*
* @hw: the me hardware structure
* @dev: the device structure
*
* Return: H_CSR register value (u32)
*/
static inline u32 mei_hcsr_read(const struct mei_me_hw *hw)
static inline u32 mei_hcsr_read(const struct mei_device *dev)
{
return mei_me_reg_read(hw, H_CSR);
u32 reg;
reg = mei_me_reg_read(to_me_hw(dev), H_CSR);
trace_mei_reg_read(dev->dev, "H_CSR", H_CSR, reg);
return reg;
}
/**
* mei_hcsr_write - writes H_CSR register to the mei device
*
* @dev: the device structure
* @reg: new register value
*/
static inline void mei_hcsr_write(struct mei_device *dev, u32 reg)
{
trace_mei_reg_write(dev->dev, "H_CSR", H_CSR, reg);
mei_me_reg_write(to_me_hw(dev), H_CSR, reg);
}
/**
* mei_hcsr_set - writes H_CSR register to the mei device,
* and ignores the H_IS bit for it is write-one-to-zero.
*
* @hw: the me hardware structure
* @hcsr: new register value
* @dev: the device structure
* @reg: new register value
*/
static inline void mei_hcsr_set(struct mei_me_hw *hw, u32 hcsr)
static inline void mei_hcsr_set(struct mei_device *dev, u32 reg)
{
hcsr &= ~H_IS;
mei_me_reg_write(hw, H_CSR, hcsr);
reg &= ~H_IS;
mei_hcsr_write(dev, reg);
}
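
Because H_IS is a write-one-to-zero status bit, any read-modify-write of H_CSR that passes it back unmasked would silently acknowledge a pending interrupt. A hypothetical helper (demo_hcsr_update is not in the driver) illustrating why mei_hcsr_set() masks it before writing:

static inline void demo_hcsr_update(struct mei_device *dev, u32 set, u32 clear)
{
	u32 hcsr = mei_hcsr_read(dev);

	hcsr |= set;
	hcsr &= ~clear;
	hcsr &= ~H_IS;		/* writing 1 here would clear the event */
	mei_hcsr_write(dev, hcsr);
}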
/**
@ -141,7 +177,7 @@ static int mei_me_fw_status(struct mei_device *dev,
static void mei_me_hw_config(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(to_me_hw(dev));
u32 hcsr = mei_hcsr_read(dev);
/* Doesn't change in runtime */
dev->hbuf_depth = (hcsr & H_CBD) >> 24;
@ -170,11 +206,10 @@ static inline enum mei_pg_state mei_me_pg_state(struct mei_device *dev)
*/
static void mei_me_intr_clear(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
if ((hcsr & H_IS) == H_IS)
mei_me_reg_write(hw, H_CSR, hcsr);
mei_hcsr_write(dev, hcsr);
}
/**
* mei_me_intr_enable - enables mei device interrupts
@ -183,11 +218,10 @@ static void mei_me_intr_clear(struct mei_device *dev)
*/
static void mei_me_intr_enable(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
hcsr |= H_IE;
mei_hcsr_set(hw, hcsr);
mei_hcsr_set(dev, hcsr);
}
/**
@ -197,11 +231,10 @@ static void mei_me_intr_enable(struct mei_device *dev)
*/
static void mei_me_intr_disable(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
hcsr &= ~H_IE;
mei_hcsr_set(hw, hcsr);
mei_hcsr_set(dev, hcsr);
}
/**
@ -211,12 +244,11 @@ static void mei_me_intr_disable(struct mei_device *dev)
*/
static void mei_me_hw_reset_release(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
hcsr |= H_IG;
hcsr &= ~H_RST;
mei_hcsr_set(hw, hcsr);
mei_hcsr_set(dev, hcsr);
/* complete this write before we set host ready on another CPU */
mmiowb();
@ -231,8 +263,7 @@ static void mei_me_hw_reset_release(struct mei_device *dev)
*/
static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
/* H_RST may be found lit before reset is started,
* for example if preceding reset flow hasn't completed.
@ -242,8 +273,8 @@ static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
if ((hcsr & H_RST) == H_RST) {
dev_warn(dev->dev, "H_RST is set = 0x%08X", hcsr);
hcsr &= ~H_RST;
mei_hcsr_set(hw, hcsr);
hcsr = mei_hcsr_read(hw);
mei_hcsr_set(dev, hcsr);
hcsr = mei_hcsr_read(dev);
}
hcsr |= H_RST | H_IG | H_IS;
@ -254,13 +285,13 @@ static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
hcsr &= ~H_IE;
dev->recvd_hw_ready = false;
mei_me_reg_write(hw, H_CSR, hcsr);
mei_hcsr_write(dev, hcsr);
/*
* Host reads the H_CSR once to ensure that the
* posted write to H_CSR completes.
*/
hcsr = mei_hcsr_read(hw);
hcsr = mei_hcsr_read(dev);
if ((hcsr & H_RST) == 0)
dev_warn(dev->dev, "H_RST is not set = 0x%08X", hcsr);
@ -281,11 +312,10 @@ static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
*/
static void mei_me_host_set_ready(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
hcsr |= H_IE | H_IG | H_RDY;
mei_hcsr_set(hw, hcsr);
mei_hcsr_set(dev, hcsr);
}
/**
@ -296,8 +326,7 @@ static void mei_me_host_set_ready(struct mei_device *dev)
*/
static bool mei_me_host_is_ready(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
return (hcsr & H_RDY) == H_RDY;
}
@ -310,8 +339,7 @@ static bool mei_me_host_is_ready(struct mei_device *dev)
*/
static bool mei_me_hw_is_ready(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 mecsr = mei_me_mecsr_read(hw);
u32 mecsr = mei_me_mecsr_read(dev);
return (mecsr & ME_RDY_HRA) == ME_RDY_HRA;
}
@ -368,11 +396,10 @@ static int mei_me_hw_start(struct mei_device *dev)
*/
static unsigned char mei_hbuf_filled_slots(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 hcsr;
char read_ptr, write_ptr;
hcsr = mei_hcsr_read(hw);
hcsr = mei_hcsr_read(dev);
read_ptr = (char) ((hcsr & H_CBRP) >> 8);
write_ptr = (char) ((hcsr & H_CBWP) >> 16);
@ -439,7 +466,6 @@ static int mei_me_write_message(struct mei_device *dev,
struct mei_msg_hdr *header,
unsigned char *buf)
{
struct mei_me_hw *hw = to_me_hw(dev);
unsigned long rem;
unsigned long length = header->length;
u32 *reg_buf = (u32 *)buf;
@ -457,21 +483,21 @@ static int mei_me_write_message(struct mei_device *dev,
if (empty_slots < 0 || dw_cnt > empty_slots)
return -EMSGSIZE;
mei_me_reg_write(hw, H_CB_WW, *((u32 *) header));
mei_me_hcbww_write(dev, *((u32 *) header));
for (i = 0; i < length / 4; i++)
mei_me_reg_write(hw, H_CB_WW, reg_buf[i]);
mei_me_hcbww_write(dev, reg_buf[i]);
rem = length & 0x3;
if (rem > 0) {
u32 reg = 0;
memcpy(&reg, &buf[length - rem], rem);
mei_me_reg_write(hw, H_CB_WW, reg);
mei_me_hcbww_write(dev, reg);
}
hcsr = mei_hcsr_read(hw) | H_IG;
mei_hcsr_set(hw, hcsr);
hcsr = mei_hcsr_read(dev) | H_IG;
mei_hcsr_set(dev, hcsr);
if (!mei_me_hw_is_ready(dev))
return -EIO;
@ -487,12 +513,11 @@ static int mei_me_write_message(struct mei_device *dev,
*/
static int mei_me_count_full_read_slots(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 me_csr;
char read_ptr, write_ptr;
unsigned char buffer_depth, filled_slots;
me_csr = mei_me_mecsr_read(hw);
me_csr = mei_me_mecsr_read(dev);
buffer_depth = (unsigned char)((me_csr & ME_CBD_HRA) >> 24);
read_ptr = (char) ((me_csr & ME_CBRP_HRA) >> 8);
write_ptr = (char) ((me_csr & ME_CBWP_HRA) >> 16);
@ -518,7 +543,6 @@ static int mei_me_count_full_read_slots(struct mei_device *dev)
static int mei_me_read_slots(struct mei_device *dev, unsigned char *buffer,
unsigned long buffer_length)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 *reg_buf = (u32 *)buffer;
u32 hcsr;
@ -531,49 +555,59 @@ static int mei_me_read_slots(struct mei_device *dev, unsigned char *buffer,
memcpy(reg_buf, &reg, buffer_length);
}
hcsr = mei_hcsr_read(hw) | H_IG;
mei_hcsr_set(hw, hcsr);
hcsr = mei_hcsr_read(dev) | H_IG;
mei_hcsr_set(dev, hcsr);
return 0;
}
/**
* mei_me_pg_enter - write pg enter register
* mei_me_pg_set - write pg enter register
*
* @dev: the device structure
*/
static void mei_me_pg_enter(struct mei_device *dev)
static void mei_me_pg_set(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 reg = mei_me_reg_read(hw, H_HPG_CSR);
u32 reg;
reg = mei_me_reg_read(hw, H_HPG_CSR);
trace_mei_reg_read(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
reg |= H_HPG_CSR_PGI;
trace_mei_reg_write(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
mei_me_reg_write(hw, H_HPG_CSR, reg);
}
/**
* mei_me_pg_exit - write pg exit register
* mei_me_pg_unset - write pg exit register
*
* @dev: the device structure
*/
static void mei_me_pg_exit(struct mei_device *dev)
static void mei_me_pg_unset(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 reg = mei_me_reg_read(hw, H_HPG_CSR);
u32 reg;
reg = mei_me_reg_read(hw, H_HPG_CSR);
trace_mei_reg_read(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
WARN(!(reg & H_HPG_CSR_PGI), "PGI is not set\n");
reg |= H_HPG_CSR_PGIHEXR;
trace_mei_reg_write(dev->dev, "H_HPG_CSR", H_HPG_CSR, reg);
mei_me_reg_write(hw, H_HPG_CSR, reg);
}
/**
* mei_me_pg_set_sync - perform pg entry procedure
* mei_me_pg_enter_sync - perform pg entry procedure
*
* @dev: the device structure
*
* Return: 0 on success, an error code otherwise
*/
int mei_me_pg_set_sync(struct mei_device *dev)
int mei_me_pg_enter_sync(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
unsigned long timeout = mei_secs_to_jiffies(MEI_PGI_TIMEOUT);
@ -591,7 +625,7 @@ int mei_me_pg_set_sync(struct mei_device *dev)
mutex_lock(&dev->device_lock);
if (dev->pg_event == MEI_PG_EVENT_RECEIVED) {
mei_me_pg_enter(dev);
mei_me_pg_set(dev);
ret = 0;
} else {
ret = -ETIME;
@ -604,13 +638,13 @@ int mei_me_pg_set_sync(struct mei_device *dev)
}
/**
* mei_me_pg_unset_sync - perform pg exit procedure
* mei_me_pg_exit_sync - perform pg exit procedure
*
* @dev: the device structure
*
* Return: 0 on success, an error code otherwise
*/
int mei_me_pg_unset_sync(struct mei_device *dev)
int mei_me_pg_exit_sync(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
unsigned long timeout = mei_secs_to_jiffies(MEI_PGI_TIMEOUT);
@ -621,7 +655,7 @@ int mei_me_pg_unset_sync(struct mei_device *dev)
dev->pg_event = MEI_PG_EVENT_WAIT;
mei_me_pg_exit(dev);
mei_me_pg_unset(dev);
mutex_unlock(&dev->device_lock);
wait_event_timeout(dev->wait_pg,
@ -649,8 +683,7 @@ reply:
*/
static bool mei_me_pg_is_enabled(struct mei_device *dev)
{
struct mei_me_hw *hw = to_me_hw(dev);
u32 reg = mei_me_reg_read(hw, ME_CSR_HA);
u32 reg = mei_me_mecsr_read(dev);
if ((reg & ME_PGIC_HRA) == 0)
goto notsupported;
@ -683,14 +716,13 @@ notsupported:
irqreturn_t mei_me_irq_quick_handler(int irq, void *dev_id)
{
struct mei_device *dev = (struct mei_device *) dev_id;
struct mei_me_hw *hw = to_me_hw(dev);
u32 csr_reg = mei_hcsr_read(hw);
u32 hcsr = mei_hcsr_read(dev);
if ((csr_reg & H_IS) != H_IS)
if ((hcsr & H_IS) != H_IS)
return IRQ_NONE;
/* clear H_IS bit in H_CSR */
mei_me_reg_write(hw, H_CSR, csr_reg);
mei_hcsr_write(dev, hcsr);
return IRQ_WAKE_THREAD;
}
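
The quick handler above only acknowledges H_IS and defers the real work by returning IRQ_WAKE_THREAD. A sketch of how the two handlers are presumably wired together at probe time (handler names are from this file; the flags and the demo_setup_irq wrapper are assumptions):

static int demo_setup_irq(struct pci_dev *pdev, struct mei_device *dev)
{
	/* hard handler acks the interrupt, thread handler does the work */
	return request_threaded_irq(pdev->irq,
				    mei_me_irq_quick_handler,
				    mei_me_irq_thread_handler,
				    IRQF_SHARED, KBUILD_MODNAME, dev);
}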

View File

@ -71,8 +71,8 @@ extern const struct mei_cfg mei_me_pch8_sps_cfg;
struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
const struct mei_cfg *cfg);
int mei_me_pg_set_sync(struct mei_device *dev);
int mei_me_pg_unset_sync(struct mei_device *dev);
int mei_me_pg_enter_sync(struct mei_device *dev);
int mei_me_pg_exit_sync(struct mei_device *dev);
irqreturn_t mei_me_irq_quick_handler(int irq, void *dev_id);
irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id);

View File

@ -412,7 +412,7 @@ static void mei_txe_intr_disable(struct mei_device *dev)
mei_txe_br_reg_write(hw, HIER_REG, 0);
}
/**
* mei_txe_intr_disable - enable all interrupts
* mei_txe_intr_enable - enable all interrupts
*
* @dev: the device structure
*/

View File

@ -389,6 +389,7 @@ void mei_device_init(struct mei_device *dev,
INIT_LIST_HEAD(&dev->device_list);
INIT_LIST_HEAD(&dev->me_clients);
mutex_init(&dev->device_lock);
init_rwsem(&dev->me_clients_rwsem);
init_waitqueue_head(&dev->wait_hw_ready);
init_waitqueue_head(&dev->wait_pg);
init_waitqueue_head(&dev->wait_hbm_start);
@ -396,7 +397,6 @@ void mei_device_init(struct mei_device *dev,
dev->dev_state = MEI_DEV_INITIALIZING;
dev->reset_count = 0;
mei_io_list_init(&dev->read_list);
mei_io_list_init(&dev->write_list);
mei_io_list_init(&dev->write_waiting_list);
mei_io_list_init(&dev->ctrl_wr_list);

View File

@ -43,7 +43,7 @@ void mei_irq_compl_handler(struct mei_device *dev, struct mei_cl_cb *compl_list)
list_for_each_entry_safe(cb, next, &compl_list->list, list) {
cl = cb->cl;
list_del(&cb->list);
list_del_init(&cb->list);
dev_dbg(dev->dev, "completing call back.\n");
if (cl == &dev->iamthif_cl)
@ -68,91 +68,91 @@ static inline int mei_cl_hbm_equal(struct mei_cl *cl,
return cl->host_client_id == mei_hdr->host_addr &&
cl->me_client_id == mei_hdr->me_addr;
}
/**
* mei_cl_is_reading - checks if the client
* is the one to read this message
* mei_irq_discard_msg - discard received message
*
* @cl: mei client
* @mei_hdr: header of mei message
*
* Return: true on match and false otherwise
* @dev: mei device
* @hdr: message header
*/
static bool mei_cl_is_reading(struct mei_cl *cl, struct mei_msg_hdr *mei_hdr)
static inline
void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr)
{
return mei_cl_hbm_equal(cl, mei_hdr) &&
cl->state == MEI_FILE_CONNECTED &&
cl->reading_state != MEI_READ_COMPLETE;
/*
* no need to check for size as it is guaranteed
* that length fits into rd_msg_buf
*/
mei_read_slots(dev, dev->rd_msg_buf, hdr->length);
dev_dbg(dev->dev, "discarding message " MEI_HDR_FMT "\n",
MEI_HDR_PRM(hdr));
}
/**
* mei_cl_irq_read_msg - process client message
*
* @dev: the device structure
* @cl: reading client
* @mei_hdr: header of mei client message
* @complete_list: An instance of our list structure
* @complete_list: completion list
*
* Return: 0 on success, <0 on failure.
* Return: always 0
*/
static int mei_cl_irq_read_msg(struct mei_device *dev,
struct mei_msg_hdr *mei_hdr,
struct mei_cl_cb *complete_list)
int mei_cl_irq_read_msg(struct mei_cl *cl,
struct mei_msg_hdr *mei_hdr,
struct mei_cl_cb *complete_list)
{
struct mei_cl *cl;
struct mei_cl_cb *cb, *next;
struct mei_device *dev = cl->dev;
struct mei_cl_cb *cb;
unsigned char *buffer = NULL;
list_for_each_entry_safe(cb, next, &dev->read_list.list, list) {
cl = cb->cl;
if (!mei_cl_is_reading(cl, mei_hdr))
continue;
cl->reading_state = MEI_READING;
if (cb->response_buffer.size == 0 ||
cb->response_buffer.data == NULL) {
cl_err(dev, cl, "response buffer is not allocated.\n");
list_del(&cb->list);
return -ENOMEM;
}
if (cb->response_buffer.size < mei_hdr->length + cb->buf_idx) {
cl_dbg(dev, cl, "message overflow. size %d len %d idx %ld\n",
cb->response_buffer.size,
mei_hdr->length, cb->buf_idx);
buffer = krealloc(cb->response_buffer.data,
mei_hdr->length + cb->buf_idx,
GFP_KERNEL);
if (!buffer) {
list_del(&cb->list);
return -ENOMEM;
}
cb->response_buffer.data = buffer;
cb->response_buffer.size =
mei_hdr->length + cb->buf_idx;
}
buffer = cb->response_buffer.data + cb->buf_idx;
mei_read_slots(dev, buffer, mei_hdr->length);
cb->buf_idx += mei_hdr->length;
if (mei_hdr->msg_complete) {
cl->status = 0;
list_del(&cb->list);
cl_dbg(dev, cl, "completed read length = %lu\n",
cb->buf_idx);
list_add_tail(&cb->list, &complete_list->list);
}
break;
cb = list_first_entry_or_null(&cl->rd_pending, struct mei_cl_cb, list);
if (!cb) {
cl_err(dev, cl, "pending read cb not found\n");
goto out;
}
dev_dbg(dev->dev, "message read\n");
if (!buffer) {
mei_read_slots(dev, dev->rd_msg_buf, mei_hdr->length);
dev_dbg(dev->dev, "discarding message " MEI_HDR_FMT "\n",
MEI_HDR_PRM(mei_hdr));
if (!mei_cl_is_connected(cl)) {
cl_dbg(dev, cl, "not connected\n");
cb->status = -ENODEV;
goto out;
}
if (cb->buf.size == 0 || cb->buf.data == NULL) {
cl_err(dev, cl, "response buffer is not allocated.\n");
list_move_tail(&cb->list, &complete_list->list);
cb->status = -ENOMEM;
goto out;
}
if (cb->buf.size < mei_hdr->length + cb->buf_idx) {
cl_dbg(dev, cl, "message overflow. size %d len %d idx %ld\n",
cb->buf.size, mei_hdr->length, cb->buf_idx);
buffer = krealloc(cb->buf.data, mei_hdr->length + cb->buf_idx,
GFP_KERNEL);
if (!buffer) {
cb->status = -ENOMEM;
list_move_tail(&cb->list, &complete_list->list);
goto out;
}
cb->buf.data = buffer;
cb->buf.size = mei_hdr->length + cb->buf_idx;
}
buffer = cb->buf.data + cb->buf_idx;
mei_read_slots(dev, buffer, mei_hdr->length);
cb->buf_idx += mei_hdr->length;
if (mei_hdr->msg_complete) {
cb->read_time = jiffies;
cl_dbg(dev, cl, "completed read length = %lu\n", cb->buf_idx);
list_move_tail(&cb->list, &complete_list->list);
}
out:
if (!buffer)
mei_irq_discard_msg(dev, mei_hdr);
return 0;
}
@ -183,7 +183,6 @@ static int mei_cl_irq_disconnect_rsp(struct mei_cl *cl, struct mei_cl_cb *cb,
cl->state = MEI_FILE_DISCONNECTED;
cl->status = 0;
list_del(&cb->list);
mei_io_cb_free(cb);
return ret;
@ -263,7 +262,7 @@ static int mei_cl_irq_read(struct mei_cl *cl, struct mei_cl_cb *cb,
return ret;
}
list_move_tail(&cb->list, &dev->read_list.list);
list_move_tail(&cb->list, &cl->rd_pending);
return 0;
}
@ -301,7 +300,7 @@ static int mei_cl_irq_connect(struct mei_cl *cl, struct mei_cl_cb *cb,
if (ret) {
cl->status = ret;
cb->buf_idx = 0;
list_del(&cb->list);
list_del_init(&cb->list);
return ret;
}
@ -378,25 +377,13 @@ int mei_irq_read_handler(struct mei_device *dev,
goto end;
}
if (mei_hdr->host_addr == dev->iamthif_cl.host_client_id &&
MEI_FILE_CONNECTED == dev->iamthif_cl.state &&
dev->iamthif_state == MEI_IAMTHIF_READING) {
ret = mei_amthif_irq_read_msg(dev, mei_hdr, cmpl_list);
if (ret) {
dev_err(dev->dev, "mei_amthif_irq_read_msg failed = %d\n",
ret);
goto end;
}
if (cl == &dev->iamthif_cl) {
ret = mei_amthif_irq_read_msg(cl, mei_hdr, cmpl_list);
} else {
ret = mei_cl_irq_read_msg(dev, mei_hdr, cmpl_list);
if (ret) {
dev_err(dev->dev, "mei_cl_irq_read_msg failed = %d\n",
ret);
goto end;
}
ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list);
}
reset_slots:
/* reset the number of slots and header */
*slots = mei_count_full_read_slots(dev);
@ -449,21 +436,9 @@ int mei_irq_write_handler(struct mei_device *dev, struct mei_cl_cb *cmpl_list)
cl = cb->cl;
cl->status = 0;
list_del(&cb->list);
if (cb->fop_type == MEI_FOP_WRITE &&
cl != &dev->iamthif_cl) {
cl_dbg(dev, cl, "MEI WRITE COMPLETE\n");
cl->writing_state = MEI_WRITE_COMPLETE;
list_add_tail(&cb->list, &cmpl_list->list);
}
if (cl == &dev->iamthif_cl) {
cl_dbg(dev, cl, "check iamthif flow control.\n");
if (dev->iamthif_flow_control_pending) {
ret = mei_amthif_irq_read(dev, &slots);
if (ret)
return ret;
}
}
cl_dbg(dev, cl, "MEI WRITE COMPLETE\n");
cl->writing_state = MEI_WRITE_COMPLETE;
list_move_tail(&cb->list, &cmpl_list->list);
}
if (dev->wd_state == MEI_WD_STOPPING) {
@ -587,10 +562,7 @@ void mei_timer(struct work_struct *work)
if (--dev->iamthif_stall_timer == 0) {
dev_err(dev->dev, "timer: amthif hanged.\n");
mei_reset(dev);
dev->iamthif_msg_buf_size = 0;
dev->iamthif_msg_buf_index = 0;
dev->iamthif_canceled = false;
dev->iamthif_ioctl = true;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
@ -636,4 +608,3 @@ out:
schedule_delayed_work(&dev->timer_work, 2 * HZ);
mutex_unlock(&dev->device_lock);
}

View File

@ -58,24 +58,18 @@ static int mei_open(struct inode *inode, struct file *file)
mutex_lock(&dev->device_lock);
cl = NULL;
err = -ENODEV;
if (dev->dev_state != MEI_DEV_ENABLED) {
dev_dbg(dev->dev, "dev_state != MEI_ENABLED dev_state = %s\n",
mei_dev_state_str(dev->dev_state));
err = -ENODEV;
goto err_unlock;
}
err = -ENOMEM;
cl = mei_cl_allocate(dev);
if (!cl)
goto err_unlock;
/* open_handle_count check is handled in the mei_cl_link */
err = mei_cl_link(cl, MEI_HOST_CLIENT_ID_ANY);
if (err)
cl = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
if (IS_ERR(cl)) {
err = PTR_ERR(cl);
goto err_unlock;
}
file->private_data = cl;
@ -85,7 +79,6 @@ static int mei_open(struct inode *inode, struct file *file)
err_unlock:
mutex_unlock(&dev->device_lock);
kfree(cl);
return err;
}
@ -100,7 +93,6 @@ err_unlock:
static int mei_release(struct inode *inode, struct file *file)
{
struct mei_cl *cl = file->private_data;
struct mei_cl_cb *cb;
struct mei_device *dev;
int rets = 0;
@ -114,33 +106,18 @@ static int mei_release(struct inode *inode, struct file *file)
rets = mei_amthif_release(dev, file);
goto out;
}
if (cl->state == MEI_FILE_CONNECTED) {
if (mei_cl_is_connected(cl)) {
cl->state = MEI_FILE_DISCONNECTING;
cl_dbg(dev, cl, "disconnecting\n");
rets = mei_cl_disconnect(cl);
}
mei_cl_flush_queues(cl);
mei_cl_flush_queues(cl, file);
cl_dbg(dev, cl, "removing\n");
mei_cl_unlink(cl);
/* free read cb */
cb = NULL;
if (cl->read_cb) {
cb = mei_cl_find_read_cb(cl);
/* Remove entry from read list */
if (cb)
list_del(&cb->list);
cb = cl->read_cb;
cl->read_cb = NULL;
}
file->private_data = NULL;
mei_io_cb_free(cb);
kfree(cl);
out:
mutex_unlock(&dev->device_lock);
@ -162,9 +139,8 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
size_t length, loff_t *offset)
{
struct mei_cl *cl = file->private_data;
struct mei_cl_cb *cb_pos = NULL;
struct mei_cl_cb *cb = NULL;
struct mei_device *dev;
struct mei_cl_cb *cb = NULL;
int rets;
int err;
@ -191,8 +167,8 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
goto out;
}
if (cl->read_cb) {
cb = cl->read_cb;
cb = mei_cl_read_cb(cl, file);
if (cb) {
/* read what's left */
if (cb->buf_idx > *offset)
goto copy_buffer;
@ -208,7 +184,7 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
*offset = 0;
}
err = mei_cl_read_start(cl, length);
err = mei_cl_read_start(cl, length, file);
if (err && err != -EBUSY) {
dev_dbg(dev->dev,
"mei start read failure with status = %d\n", err);
@ -216,8 +192,7 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
goto out;
}
if (MEI_READ_COMPLETE != cl->reading_state &&
!waitqueue_active(&cl->rx_wait)) {
if (list_empty(&cl->rd_completed) && !waitqueue_active(&cl->rx_wait)) {
if (file->f_flags & O_NONBLOCK) {
rets = -EAGAIN;
goto out;
@ -226,8 +201,8 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
mutex_unlock(&dev->device_lock);
if (wait_event_interruptible(cl->rx_wait,
MEI_READ_COMPLETE == cl->reading_state ||
mei_cl_is_transitioning(cl))) {
(!list_empty(&cl->rd_completed)) ||
(!mei_cl_is_connected(cl)))) {
if (signal_pending(current))
return -EINTR;
@ -235,26 +210,28 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
}
mutex_lock(&dev->device_lock);
if (mei_cl_is_transitioning(cl)) {
if (!mei_cl_is_connected(cl)) {
rets = -EBUSY;
goto out;
}
}
cb = cl->read_cb;
cb = mei_cl_read_cb(cl, file);
if (!cb) {
rets = -ENODEV;
goto out;
}
if (cl->reading_state != MEI_READ_COMPLETE) {
rets = 0;
goto out;
}
/* now copy the data to user space */
copy_buffer:
/* now copy the data to user space */
if (cb->status) {
rets = cb->status;
dev_dbg(dev->dev, "read operation failed %d\n", rets);
goto free;
}
dev_dbg(dev->dev, "buf.size = %d buf.idx= %ld\n",
cb->response_buffer.size, cb->buf_idx);
cb->buf.size, cb->buf_idx);
if (length == 0 || ubuf == NULL || *offset > cb->buf_idx) {
rets = -EMSGSIZE;
goto free;
@ -264,7 +241,7 @@ copy_buffer:
* however buf_idx may point beyond that */
length = min_t(size_t, length, cb->buf_idx - *offset);
if (copy_to_user(ubuf, cb->response_buffer.data + *offset, length)) {
if (copy_to_user(ubuf, cb->buf.data + *offset, length)) {
dev_dbg(dev->dev, "failed to copy data to userland\n");
rets = -EFAULT;
goto free;
@ -276,13 +253,8 @@ copy_buffer:
goto out;
free:
cb_pos = mei_cl_find_read_cb(cl);
/* Remove entry from read list */
if (cb_pos)
list_del(&cb_pos->list);
mei_io_cb_free(cb);
cl->reading_state = MEI_IDLE;
cl->read_cb = NULL;
out:
dev_dbg(dev->dev, "end mei read rets= %d\n", rets);
mutex_unlock(&dev->device_lock);
@ -336,9 +308,8 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
goto out;
}
if (cl->state != MEI_FILE_CONNECTED) {
dev_err(dev->dev, "host client = %d, is not connected to ME client = %d",
cl->host_client_id, cl->me_client_id);
if (!mei_cl_is_connected(cl)) {
cl_err(dev, cl, "is not connected");
rets = -ENODEV;
goto out;
}
@ -349,41 +320,22 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
timeout = write_cb->read_time +
mei_secs_to_jiffies(MEI_IAMTHIF_READ_TIMER);
if (time_after(jiffies, timeout) ||
cl->reading_state == MEI_READ_COMPLETE) {
if (time_after(jiffies, timeout)) {
*offset = 0;
list_del(&write_cb->list);
mei_io_cb_free(write_cb);
write_cb = NULL;
}
}
}
/* free entry used in read */
if (cl->reading_state == MEI_READ_COMPLETE) {
*offset = 0;
write_cb = mei_cl_find_read_cb(cl);
if (write_cb) {
list_del(&write_cb->list);
mei_io_cb_free(write_cb);
write_cb = NULL;
cl->reading_state = MEI_IDLE;
cl->read_cb = NULL;
}
} else if (cl->reading_state == MEI_IDLE)
*offset = 0;
write_cb = mei_io_cb_init(cl, file);
*offset = 0;
write_cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
if (!write_cb) {
rets = -ENOMEM;
goto out;
}
rets = mei_io_cb_alloc_req_buf(write_cb, length);
if (rets)
goto out;
rets = copy_from_user(write_cb->request_buffer.data, ubuf, length);
rets = copy_from_user(write_cb->buf.data, ubuf, length);
if (rets) {
dev_dbg(dev->dev, "failed to copy data from userland\n");
rets = -EFAULT;
@ -391,7 +343,7 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
}
if (cl == &dev->iamthif_cl) {
rets = mei_amthif_write(dev, write_cb);
rets = mei_amthif_write(cl, write_cb);
if (rets) {
dev_err(dev->dev,
@ -464,7 +416,7 @@ static int mei_ioctl_connect_client(struct file *file,
*/
if (uuid_le_cmp(data->in_client_uuid, mei_amthif_guid) == 0) {
dev_dbg(dev->dev, "FW Client is amthi\n");
if (dev->iamthif_cl.state != MEI_FILE_CONNECTED) {
if (!mei_cl_is_connected(&dev->iamthif_cl)) {
rets = -ENODEV;
goto end;
}
@ -588,6 +540,7 @@ static long mei_compat_ioctl(struct file *file,
*/
static unsigned int mei_poll(struct file *file, poll_table *wait)
{
unsigned long req_events = poll_requested_events(wait);
struct mei_cl *cl = file->private_data;
struct mei_device *dev;
unsigned int mask = 0;
@ -599,27 +552,26 @@ static unsigned int mei_poll(struct file *file, poll_table *wait)
mutex_lock(&dev->device_lock);
if (!mei_cl_is_connected(cl)) {
if (dev->dev_state != MEI_DEV_ENABLED ||
!mei_cl_is_connected(cl)) {
mask = POLLERR;
goto out;
}
mutex_unlock(&dev->device_lock);
if (cl == &dev->iamthif_cl)
return mei_amthif_poll(dev, file, wait);
poll_wait(file, &cl->tx_wait, wait);
mutex_lock(&dev->device_lock);
if (!mei_cl_is_connected(cl)) {
mask = POLLERR;
if (cl == &dev->iamthif_cl) {
mask = mei_amthif_poll(dev, file, wait);
goto out;
}
mask |= (POLLIN | POLLRDNORM);
if (req_events & (POLLIN | POLLRDNORM)) {
poll_wait(file, &cl->rx_wait, wait);
if (!list_empty(&cl->rd_completed))
mask |= POLLIN | POLLRDNORM;
else
mei_cl_read_start(cl, 0, file);
}
out:
mutex_unlock(&dev->device_lock);
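
The reworked mei_poll() consults poll_requested_events() so a caller polling only for writability no longer arms a read. A self-contained sketch of the same pattern for a hypothetical device (demo_rx_wait and demo_data_ready are stand-ins for real driver state):

#include <linux/poll.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(demo_rx_wait);	/* hypothetical */
static bool demo_data_ready;

static unsigned int demo_poll(struct file *file, poll_table *wait)
{
	unsigned long req_events = poll_requested_events(wait);
	unsigned int mask = 0;

	if (req_events & (POLLIN | POLLRDNORM)) {
		poll_wait(file, &demo_rx_wait, wait);
		if (demo_data_ready)
			mask |= POLLIN | POLLRDNORM;
		/* else: a driver may kick off an async read here,
		 * as mei_poll() does with mei_cl_read_start() */
	}
	return mask;
}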

View File

@ -0,0 +1,25 @@
/*
*
* Intel Management Engine Interface (Intel MEI) Linux driver
* Copyright (c) 2015, Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
*/
#include <linux/module.h>
/* sparse doesn't like tracepoint macros */
#ifndef __CHECKER__
#define CREATE_TRACE_POINTS
#include "mei-trace.h"
EXPORT_TRACEPOINT_SYMBOL(mei_reg_read);
EXPORT_TRACEPOINT_SYMBOL(mei_reg_write);
#endif /* __CHECKER__ */

View File

@ -0,0 +1,74 @@
/*
*
* Intel Management Engine Interface (Intel MEI) Linux driver
* Copyright (c) 2015, Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
*/
#if !defined(_MEI_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
#define _MEI_TRACE_H_
#include <linux/stringify.h>
#include <linux/types.h>
#include <linux/tracepoint.h>
#include <linux/device.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM mei
TRACE_EVENT(mei_reg_read,
TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val),
TP_ARGS(dev, reg, offs, val),
TP_STRUCT__entry(
__string(dev, dev_name(dev))
__field(const char *, reg)
__field(u32, offs)
__field(u32, val)
),
TP_fast_assign(
__assign_str(dev, dev_name(dev))
__entry->reg = reg;
__entry->offs = offs;
__entry->val = val;
),
TP_printk("[%s] read %s:[%#x] = %#x",
__get_str(dev), __entry->reg, __entry->offs, __entry->val)
);
TRACE_EVENT(mei_reg_write,
TP_PROTO(const struct device *dev, const char *reg, u32 offs, u32 val),
TP_ARGS(dev, reg, offs, val),
TP_STRUCT__entry(
__string(dev, dev_name(dev))
__field(const char *, reg)
__field(u32, offs)
__field(u32, val)
),
TP_fast_assign(
__assign_str(dev, dev_name(dev))
__entry->reg = reg;
__entry->offs = offs;
__entry->val = val;
),
TP_printk("[%s] write %s[%#x] = %#x)",
__get_str(dev), __entry->reg, __entry->offs, __entry->val)
);
#endif /* _MEI_TRACE_H_ */
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE mei-trace
#include <trace/define_trace.h>
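
Only one translation unit (mei-trace.c above) may define CREATE_TRACE_POINTS before including this header; every other file includes it plainly and calls the generated trace_* stubs. A hedged sketch of a second consumer (the demo names and the ioread32 access are assumptions, not driver code):

#include <linux/io.h>

#include "mei.h"
#include "mei-trace.h"	/* no CREATE_TRACE_POINTS here */

/* Hypothetical traced register read elsewhere in the driver. */
static u32 demo_traced_read(struct mei_device *dev, void __iomem *base,
			    u32 offs)
{
	u32 val = ioread32(base + offs);

	trace_mei_reg_read(dev->dev, "DEMO_REG", offs, val);
	return val;
}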

View File

@ -194,23 +194,25 @@ struct mei_cl;
* @list: link in callback queue
* @cl: file client who is running this operation
* @fop_type: file operation type
* @request_buffer: buffer to store request data
* @response_buffer: buffer to store response data
* @buf: buffer for data associated with the callback
* @buf_idx: last read index
* @read_time: last read operation time stamp (iamthif)
* @file_object: pointer to file structure
* @status: io status of the cb
* @internal: communication between driver and FW flag
* @completed: the transfer or reception has completed
*/
struct mei_cl_cb {
struct list_head list;
struct mei_cl *cl;
enum mei_cb_file_ops fop_type;
struct mei_msg_data request_buffer;
struct mei_msg_data response_buffer;
struct mei_msg_data buf;
unsigned long buf_idx;
unsigned long read_time;
struct file *file_object;
int status;
u32 internal:1;
u32 completed:1;
};
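
With request_buffer and response_buffer folded into a single buf plus a per-callback status, write setup reduces to one allocation and one copy. A sketch mirroring the mei_write() flow earlier in this diff (demo_queue_write is hypothetical; the final hand-off via mei_cl_write() is assumed to match what mei_write() does in this version):

#include <linux/uaccess.h>

static ssize_t demo_queue_write(struct mei_cl *cl, struct file *file,
				const char __user *ubuf, size_t length)
{
	struct mei_cl_cb *cb;

	cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file);
	if (!cb)
		return -ENOMEM;

	if (copy_from_user(cb->buf.data, ubuf, length)) {
		mei_io_cb_free(cb);
		return -EFAULT;
	}
	return mei_cl_write(cl, cb, false);
}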
/**
@ -229,9 +231,9 @@ struct mei_cl_cb {
* @me_client_id: me/fw id
* @mei_flow_ctrl_creds: transmit flow credentials
* @timer_count: watchdog timer for operation completion
* @reading_state: state of the rx
* @writing_state: state of the tx
* @read_cb: current pending reading callback
* @rd_pending: pending read callbacks
* @rd_completed: completed read callbacks
*
* @device: device on the mei client bus
* @device_link: link to bus clients
@ -249,9 +251,9 @@ struct mei_cl {
u8 me_client_id;
u8 mei_flow_ctrl_creds;
u8 timer_count;
enum mei_file_transaction_states reading_state;
enum mei_file_transaction_states writing_state;
struct mei_cl_cb *read_cb;
struct list_head rd_pending;
struct list_head rd_completed;
/* MEI CL bus data */
struct mei_cl_device *device;
@ -423,7 +425,6 @@ const char *mei_pg_state_str(enum mei_pg_state state);
* @cdev : character device
* @minor : minor number allocated for device
*
* @read_list : read completion list
* @write_list : write pending list
* @write_waiting_list : write completion list
* @ctrl_wr_list : pending control write list
@ -460,6 +461,7 @@ const char *mei_pg_state_str(enum mei_pg_state state);
* @version : HBM protocol version in use
* @hbm_f_pg_supported : hbm feature pgi protocol
*
* @me_clients_rwsem: rw lock over me_clients list
* @me_clients : list of FW clients
* @me_clients_map : FW clients bit map
* @host_clients_map : host clients id pool
@ -480,12 +482,7 @@ const char *mei_pg_state_str(enum mei_pg_state state);
* @iamthif_mtu : amthif client max message length
* @iamthif_timer : time stamp of current amthif command completion
* @iamthif_stall_timer : timer to detect amthif hang
* @iamthif_msg_buf : amthif current message buffer
* @iamthif_msg_buf_size : size of current amthif message request buffer
* @iamthif_msg_buf_index : current index in amthif message request buffer
* @iamthif_state : amthif processor state
* @iamthif_flow_control_pending: amthif waits for flow control
* @iamthif_ioctl : wait for completion if amthif control message
* @iamthif_canceled : current amthif command is canceled
*
* @init_work : work item for the device init
@ -503,7 +500,6 @@ struct mei_device {
struct cdev cdev;
int minor;
struct mei_cl_cb read_list;
struct mei_cl_cb write_list;
struct mei_cl_cb write_waiting_list;
struct mei_cl_cb ctrl_wr_list;
@ -556,6 +552,7 @@ struct mei_device {
struct hbm_version version;
unsigned int hbm_f_pg_supported:1;
struct rw_semaphore me_clients_rwsem;
struct list_head me_clients;
DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX);
@ -579,12 +576,7 @@ struct mei_device {
int iamthif_mtu;
unsigned long iamthif_timer;
u32 iamthif_stall_timer;
unsigned char *iamthif_msg_buf; /* Note: memory has to be allocated */
u32 iamthif_msg_buf_size;
u32 iamthif_msg_buf_index;
enum iamthif_states iamthif_state;
bool iamthif_flow_control_pending;
bool iamthif_ioctl;
bool iamthif_canceled;
struct work_struct init_work;
@ -662,8 +654,6 @@ void mei_amthif_reset_params(struct mei_device *dev);
int mei_amthif_host_init(struct mei_device *dev);
int mei_amthif_write(struct mei_device *dev, struct mei_cl_cb *priv_cb);
int mei_amthif_read(struct mei_device *dev, struct file *file,
char __user *ubuf, size_t length, loff_t *offset);
@ -675,13 +665,13 @@ int mei_amthif_release(struct mei_device *dev, struct file *file);
struct mei_cl_cb *mei_amthif_find_read_list_entry(struct mei_device *dev,
struct file *file);
void mei_amthif_run_next_cmd(struct mei_device *dev);
int mei_amthif_write(struct mei_cl *cl, struct mei_cl_cb *cb);
int mei_amthif_run_next_cmd(struct mei_device *dev);
int mei_amthif_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
struct mei_cl_cb *cmpl_list);
void mei_amthif_complete(struct mei_device *dev, struct mei_cl_cb *cb);
int mei_amthif_irq_read_msg(struct mei_device *dev,
int mei_amthif_irq_read_msg(struct mei_cl *cl,
struct mei_msg_hdr *mei_hdr,
struct mei_cl_cb *complete_list);
int mei_amthif_irq_read(struct mei_device *dev, s32 *slots);
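
The per-client rd_pending and rd_completed lists replace the single read_cb pointer and the reading_state enum, which lets several file objects queue reads on one client. A sketch of fetching a finished read for a given file, assumed to approximate the new mei_cl_read_cb() helper (demo_read_cb is a hypothetical name):

static struct mei_cl_cb *demo_read_cb(struct mei_cl *cl,
				      const struct file *fp)
{
	struct mei_cl_cb *cb;

	list_for_each_entry(cb, &cl->rd_completed, list)
		if (cb->file_object == fp)
			return cb;
	return NULL;
}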

View File

@ -482,8 +482,8 @@ err:
int mei_nfc_host_init(struct mei_device *dev)
{
struct mei_nfc_dev *ndev;
struct mei_cl *cl_info, *cl = NULL;
struct mei_me_client *me_cl;
struct mei_cl *cl_info, *cl;
struct mei_me_client *me_cl = NULL;
int ret;
@ -500,17 +500,6 @@ int mei_nfc_host_init(struct mei_device *dev)
goto err;
}
ndev->cl_info = mei_cl_allocate(dev);
ndev->cl = mei_cl_allocate(dev);
cl = ndev->cl;
cl_info = ndev->cl_info;
if (!cl || !cl_info) {
ret = -ENOMEM;
goto err;
}
/* check for valid client id */
me_cl = mei_me_cl_by_uuid(dev, &mei_nfc_info_guid);
if (!me_cl) {
@ -519,17 +508,21 @@ int mei_nfc_host_init(struct mei_device *dev)
goto err;
}
cl_info = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
if (IS_ERR(cl_info)) {
ret = PTR_ERR(cl_info);
goto err;
}
cl_info->me_client_id = me_cl->client_id;
cl_info->cl_uuid = me_cl->props.protocol_name;
mei_me_cl_put(me_cl);
ret = mei_cl_link(cl_info, MEI_HOST_CLIENT_ID_ANY);
if (ret)
goto err;
me_cl = NULL;
list_add_tail(&cl_info->device_link, &dev->device_list);
ndev->cl_info = cl_info;
/* check for valid client id */
me_cl = mei_me_cl_by_uuid(dev, &mei_nfc_guid);
if (!me_cl) {
@ -538,16 +531,21 @@ int mei_nfc_host_init(struct mei_device *dev)
goto err;
}
cl = mei_cl_alloc_linked(dev, MEI_HOST_CLIENT_ID_ANY);
if (IS_ERR(cl)) {
ret = PTR_ERR(cl);
goto err;
}
cl->me_client_id = me_cl->client_id;
cl->cl_uuid = me_cl->props.protocol_name;
mei_me_cl_put(me_cl);
ret = mei_cl_link(cl, MEI_HOST_CLIENT_ID_ANY);
if (ret)
goto err;
me_cl = NULL;
list_add_tail(&cl->device_link, &dev->device_list);
ndev->cl = cl;
ndev->req_id = 1;
INIT_WORK(&ndev->init_work, mei_nfc_init);
@ -557,6 +555,7 @@ int mei_nfc_host_init(struct mei_device *dev)
return 0;
err:
mei_me_cl_put(me_cl);
mei_nfc_free(ndev);
return ret;
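
mei_nfc_host_init() now leans on mei_cl_alloc_linked(), which reports failure through the pointer itself rather than NULL. A sketch of the ERR_PTR convention, assuming the helper composes allocate-plus-link roughly like this (demo_alloc_linked is a hypothetical name):

#include <linux/err.h>
#include <linux/slab.h>

static struct mei_cl *demo_alloc_linked(struct mei_device *dev, int id)
{
	struct mei_cl *cl;
	int ret;

	cl = mei_cl_allocate(dev);
	if (!cl)
		return ERR_PTR(-ENOMEM);

	ret = mei_cl_link(cl, id);
	if (ret) {
		kfree(cl);
		return ERR_PTR(ret);	/* errno encoded in the pointer */
	}
	return cl;
}

Callers then test with IS_ERR() and decode with PTR_ERR(), exactly as the NFC hunks above do.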

View File

@ -388,7 +388,7 @@ static int mei_me_pm_runtime_suspend(struct device *device)
mutex_lock(&dev->device_lock);
if (mei_write_is_idle(dev))
ret = mei_me_pg_set_sync(dev);
ret = mei_me_pg_enter_sync(dev);
else
ret = -EAGAIN;
@ -413,7 +413,7 @@ static int mei_me_pm_runtime_resume(struct device *device)
mutex_lock(&dev->device_lock);
ret = mei_me_pg_unset_sync(dev);
ret = mei_me_pg_exit_sync(dev);
mutex_unlock(&dev->device_lock);

View File

@ -63,7 +63,7 @@ static void mei_txe_pci_iounmap(struct pci_dev *pdev, struct mei_txe_hw *hw)
}
}
/**
* mei_probe - Device Initialization Routine
* mei_txe_probe - Device Initialization Routine
*
* @pdev: PCI device structure
* @ent: entry in mei_txe_pci_tbl
@ -193,7 +193,7 @@ end:
}
/**
* mei_remove - Device Removal Routine
* mei_txe_remove - Device Removal Routine
*
* @pdev: PCI device structure
*

View File

@ -160,9 +160,10 @@ int mei_wd_send(struct mei_device *dev)
*/
int mei_wd_stop(struct mei_device *dev)
{
struct mei_cl *cl = &dev->wd_cl;
int ret;
if (dev->wd_cl.state != MEI_FILE_CONNECTED ||
if (!mei_cl_is_connected(cl) ||
dev->wd_state != MEI_WD_RUNNING)
return 0;
@ -170,7 +171,7 @@ int mei_wd_stop(struct mei_device *dev)
dev->wd_state = MEI_WD_STOPPING;
ret = mei_cl_flow_ctrl_creds(&dev->wd_cl);
ret = mei_cl_flow_ctrl_creds(cl);
if (ret < 0)
goto err;
@ -202,22 +203,25 @@ err:
return ret;
}
/*
/**
* mei_wd_ops_start - wd start command from the watchdog core.
*
* @wd_dev - watchdog device struct
* @wd_dev: watchdog device struct
*
* Return: 0 on success, negative errno code for failure
*/
static int mei_wd_ops_start(struct watchdog_device *wd_dev)
{
int err = -ENODEV;
struct mei_device *dev;
struct mei_cl *cl;
int err = -ENODEV;
dev = watchdog_get_drvdata(wd_dev);
if (!dev)
return -ENODEV;
cl = &dev->wd_cl;
mutex_lock(&dev->device_lock);
if (dev->dev_state != MEI_DEV_ENABLED) {
@ -226,8 +230,8 @@ static int mei_wd_ops_start(struct watchdog_device *wd_dev)
goto end_unlock;
}
if (dev->wd_cl.state != MEI_FILE_CONNECTED) {
dev_dbg(dev->dev, "MEI Driver is not connected to Watchdog Client\n");
if (!mei_cl_is_connected(cl)) {
cl_dbg(dev, cl, "MEI Driver is not connected to Watchdog Client\n");
goto end_unlock;
}
@ -239,10 +243,10 @@ end_unlock:
return err;
}
/*
/**
* mei_wd_ops_stop - wd stop command from the watchdog core.
*
* @wd_dev - watchdog device struct
* @wd_dev: watchdog device struct
*
* Return: 0 on success, negative errno code for failure
*/
@ -261,10 +265,10 @@ static int mei_wd_ops_stop(struct watchdog_device *wd_dev)
return 0;
}
/*
/**
* mei_wd_ops_ping - wd ping command from the watchdog core.
*
* @wd_dev - watchdog device struct
* @wd_dev: watchdog device struct
*
* Return: 0 on success, negative errno code for failure
*/
@ -282,8 +286,8 @@ static int mei_wd_ops_ping(struct watchdog_device *wd_dev)
mutex_lock(&dev->device_lock);
if (cl->state != MEI_FILE_CONNECTED) {
dev_err(dev->dev, "wd: not connected.\n");
if (!mei_cl_is_connected(cl)) {
cl_err(dev, cl, "wd: not connected.\n");
ret = -ENODEV;
goto end;
}
@ -311,11 +315,11 @@ end:
return ret;
}
/*
/**
* mei_wd_ops_set_timeout - wd set timeout command from the watchdog core.
*
* @wd_dev - watchdog device struct
* @timeout - timeout value to set
* @wd_dev: watchdog device struct
* @timeout: timeout value to set
*
* Return: 0 on success, negative errno code for failure
*/

View File

@ -309,7 +309,7 @@ void mic_complete_resume(struct mic_device *mdev)
*/
void mic_prepare_suspend(struct mic_device *mdev)
{
int rc;
unsigned long timeout;
#define MIC_SUSPEND_TIMEOUT (60 * HZ)
@ -331,10 +331,10 @@ void mic_prepare_suspend(struct mic_device *mdev)
*/
mic_set_state(mdev, MIC_SUSPENDING);
mutex_unlock(&mdev->mic_mutex);
rc = wait_for_completion_timeout(&mdev->reset_wait,
MIC_SUSPEND_TIMEOUT);
timeout = wait_for_completion_timeout(&mdev->reset_wait,
MIC_SUSPEND_TIMEOUT);
/* Force reset the card if the shutdown completion timed out */
if (!rc) {
if (!timeout) {
mutex_lock(&mdev->mic_mutex);
mic_set_state(mdev, MIC_SUSPENDED);
mutex_unlock(&mdev->mic_mutex);
@ -348,10 +348,10 @@ void mic_prepare_suspend(struct mic_device *mdev)
*/
mic_set_state(mdev, MIC_SUSPENDED);
mutex_unlock(&mdev->mic_mutex);
rc = wait_for_completion_timeout(&mdev->reset_wait,
MIC_SUSPEND_TIMEOUT);
timeout = wait_for_completion_timeout(&mdev->reset_wait,
MIC_SUSPEND_TIMEOUT);
/* Force reset the card if the shutdown completion timed out */
if (!rc)
if (!timeout)
mic_stop(mdev, true);
break;
default:

View File

@ -363,8 +363,6 @@ static int mic_setup_intx(struct mic_device *mdev, struct pci_dev *pdev)
{
int rc;
pci_msi_off(pdev);
/* Enable intx */
pci_intx(pdev, 1);
rc = mic_setup_callbacks(mdev);

View File

@ -69,12 +69,23 @@ static int sram_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&reserve_list);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
virt_base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(virt_base))
return PTR_ERR(virt_base);
if (!res) {
dev_err(&pdev->dev, "found no memory resource\n");
return -EINVAL;
}
size = resource_size(res);
if (!devm_request_mem_region(&pdev->dev,
res->start, size, pdev->name)) {
dev_err(&pdev->dev, "could not request region for resource\n");
return -EBUSY;
}
virt_base = devm_ioremap_wc(&pdev->dev, res->start, size);
if (IS_ERR(virt_base))
return PTR_ERR(virt_base);
sram = devm_kzalloc(&pdev->dev, sizeof(*sram), GFP_KERNEL);
if (!sram)
return -ENOMEM;
@ -205,7 +216,7 @@ static int sram_remove(struct platform_device *pdev)
}
#ifdef CONFIG_OF
static struct of_device_id sram_dt_ids[] = {
static const struct of_device_id sram_dt_ids[] = {
{ .compatible = "mmio-sram" },
{}
};

View File

@ -236,6 +236,7 @@ static int tifm_7xx1_resume(struct pci_dev *dev)
{
struct tifm_adapter *fm = pci_get_drvdata(dev);
int rc;
unsigned long timeout;
unsigned int good_sockets = 0, bad_sockets = 0;
unsigned long flags;
unsigned char new_ids[fm->num_sockets];
@ -272,8 +273,8 @@ static int tifm_7xx1_resume(struct pci_dev *dev)
if (good_sockets) {
fm->finish_me = &finish_resume;
spin_unlock_irqrestore(&fm->lock, flags);
rc = wait_for_completion_timeout(&finish_resume, HZ);
dev_dbg(&dev->dev, "wait returned %d\n", rc);
timeout = wait_for_completion_timeout(&finish_resume, HZ);
dev_dbg(&dev->dev, "wait returned %lu\n", timeout);
writel(TIFM_IRQ_FIFOMASK(good_sockets)
| TIFM_IRQ_CARDMASK(good_sockets),
fm->addr + FM_CLEAR_INTERRUPT_ENABLE);
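
Both the mic and tifm hunks above exist because wait_for_completion_timeout() returns unsigned long (0 on timeout, otherwise the jiffies remaining), not an errno; storing the result in an int invites truncation and sign confusion. A minimal sketch of the corrected calling convention ("done" is a completion assumed to be signalled elsewhere):

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

static int demo_wait(struct completion *done)
{
	unsigned long left = wait_for_completion_timeout(done, 60 * HZ);

	return left ? 0 : -ETIMEDOUT;
}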

View File

@ -113,5 +113,5 @@ module_exit(vmci_drv_exit);
MODULE_AUTHOR("VMware, Inc.");
MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
MODULE_VERSION("1.1.1.0-k");
MODULE_VERSION("1.1.3.0-k");
MODULE_LICENSE("GPL v2");

View File

@ -395,6 +395,12 @@ static int vmci_host_do_send_datagram(struct vmci_host_dev *vmci_host_dev,
return -EFAULT;
}
if (VMCI_DG_SIZE(dg) != send_info.len) {
vmci_ioctl_err("datagram size mismatch\n");
kfree(dg);
return -EINVAL;
}
pr_devel("Datagram dst (handle=0x%x:0x%x) src (handle=0x%x:0x%x), payload (size=%llu bytes)\n",
dg->dst.context, dg->dst.resource,
dg->src.context, dg->src.resource,

View File

@ -295,12 +295,20 @@ static void *qp_alloc_queue(u64 size, u32 flags)
{
u64 i;
struct vmci_queue *queue;
const size_t num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
const size_t pas_size = num_pages * sizeof(*queue->kernel_if->u.g.pas);
const size_t vas_size = num_pages * sizeof(*queue->kernel_if->u.g.vas);
const size_t queue_size =
sizeof(*queue) + sizeof(*queue->kernel_if) +
pas_size + vas_size;
size_t pas_size;
size_t vas_size;
size_t queue_size = sizeof(*queue) + sizeof(*queue->kernel_if);
const u64 num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
if (num_pages >
(SIZE_MAX - queue_size) /
(sizeof(*queue->kernel_if->u.g.pas) +
sizeof(*queue->kernel_if->u.g.vas)))
return NULL;
pas_size = num_pages * sizeof(*queue->kernel_if->u.g.pas);
vas_size = num_pages * sizeof(*queue->kernel_if->u.g.vas);
queue_size += pas_size + vas_size;
queue = vmalloc(queue_size);
if (!queue)
@ -615,10 +623,15 @@ static int qp_memcpy_from_queue_iov(void *dest,
static struct vmci_queue *qp_host_alloc_queue(u64 size)
{
struct vmci_queue *queue;
const size_t num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
size_t queue_page_size;
const u64 num_pages = DIV_ROUND_UP(size, PAGE_SIZE) + 1;
const size_t queue_size = sizeof(*queue) + sizeof(*(queue->kernel_if));
const size_t queue_page_size =
num_pages * sizeof(*queue->kernel_if->u.h.page);
if (num_pages > (SIZE_MAX - queue_size) /
sizeof(*queue->kernel_if->u.h.page))
return NULL;
queue_page_size = num_pages * sizeof(*queue->kernel_if->u.h.page);
queue = kzalloc(queue_size + queue_page_size, GFP_KERNEL);
if (queue) {
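
Both qp_alloc_queue() and qp_host_alloc_queue() now bound num_pages before multiplying, since a hostile size could make the per-page array sizes wrap and yield an undersized allocation. The guard in isolation, as a hypothetical helper (demo_alloc_hdr_array is not driver code):

#include <linux/kernel.h>
#include <linux/slab.h>

static void *demo_alloc_hdr_array(size_t hdr, size_t count, size_t elem)
{
	if (count > (SIZE_MAX - hdr) / elem)
		return NULL;		/* hdr + count * elem would wrap */
	return kzalloc(hdr + count * elem, GFP_KERNEL);
}

Dividing SIZE_MAX rather than multiplying count * elem is the point: the check itself cannot overflow.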
@ -737,7 +750,8 @@ static int qp_host_get_user_memory(u64 produce_uva,
produce_q->kernel_if->num_pages, 1,
produce_q->kernel_if->u.h.header_page);
if (retval < produce_q->kernel_if->num_pages) {
pr_warn("get_user_pages(produce) failed (retval=%d)", retval);
pr_debug("get_user_pages_fast(produce) failed (retval=%d)",
retval);
qp_release_pages(produce_q->kernel_if->u.h.header_page,
retval, false);
err = VMCI_ERROR_NO_MEM;
@ -748,7 +762,8 @@ static int qp_host_get_user_memory(u64 produce_uva,
consume_q->kernel_if->num_pages, 1,
consume_q->kernel_if->u.h.header_page);
if (retval < consume_q->kernel_if->num_pages) {
pr_warn("get_user_pages(consume) failed (retval=%d)", retval);
pr_debug("get_user_pages_fast(consume) failed (retval=%d)",
retval);
qp_release_pages(consume_q->kernel_if->u.h.header_page,
retval, false);
qp_release_pages(produce_q->kernel_if->u.h.header_page,

View File

@ -220,9 +220,7 @@ static int __init omap_cf_probe(struct platform_device *pdev)
cf = kzalloc(sizeof *cf, GFP_KERNEL);
if (!cf)
return -ENOMEM;
init_timer(&cf->timer);
cf->timer.function = omap_cf_timer;
cf->timer.data = (unsigned long) cf;
setup_timer(&cf->timer, omap_cf_timer, (unsigned long)cf);
cf->pdev = pdev;
platform_set_drvdata(pdev, cf);

View File

@ -707,11 +707,9 @@ static int pd6729_pci_probe(struct pci_dev *dev,
}
} else {
/* poll Card status change */
init_timer(&socket->poll_timer);
socket->poll_timer.function = pd6729_interrupt_wrapper;
socket->poll_timer.data = (unsigned long)socket;
socket->poll_timer.expires = jiffies + HZ;
add_timer(&socket->poll_timer);
setup_timer(&socket->poll_timer, pd6729_interrupt_wrapper,
(unsigned long)socket);
mod_timer(&socket->poll_timer, jiffies + HZ);
}
for (i = 0; i < MAX_SOCKETS; i++) {

View File

@ -726,9 +726,8 @@ int soc_pcmcia_add_one(struct soc_pcmcia_socket *skt)
{
int ret;
init_timer(&skt->poll_timer);
skt->poll_timer.function = soc_common_pcmcia_poll_event;
skt->poll_timer.data = (unsigned long)skt;
setup_timer(&skt->poll_timer, soc_common_pcmcia_poll_event,
(unsigned long)skt);
skt->poll_timer.expires = jiffies + SOC_PCMCIA_POLL_PERIOD;
ret = request_resource(&iomem_resource, &skt->res_skt);
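
The three pcmcia hunks above collapse the open-coded init_timer()/function/data assignments into setup_timer(), the canonical form on this era's (pre-timer_setup) timer API. A sketch of the full pattern with hypothetical names:

#include <linux/jiffies.h>
#include <linux/timer.h>

struct demo_dev {
	struct timer_list poll_timer;	/* hypothetical device */
};

static void demo_poll(unsigned long data)
{
	struct demo_dev *dd = (struct demo_dev *)data;

	/* ... poll the hardware ..., then re-arm: */
	mod_timer(&dd->poll_timer, jiffies + HZ);
}

static void demo_init(struct demo_dev *dd)
{
	/* one call replaces init_timer() plus two field assignments */
	setup_timer(&dd->poll_timer, demo_poll, (unsigned long)dd);
	mod_timer(&dd->poll_timer, jiffies + HZ);
}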

Some files were not shown because too many files have changed in this diff.