media updates for v6.7-rc1

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE+QmuaPwR3wnBdVwACF8+vY7k4RUFAmVF2z0ACgkQCF8+vY7k
 4RUyHBAAhO7ArWtie5SZZ2lYzeoQ2KWZJsiRUdl7ER+lXeKr5HIa23CqVG5+D3hA
 2VQAn/+2wJHMhfSZUcgS889iKGJMhdEj77JBehakTA0122wq/0NNMfbwN0ebHoIZ
 B5FqhXkU8NvQn+8MVyRSnmC7lzlZq7lUlDxbpjCkqOqm5t1TXuMCD81briZxuKWR
 N+STu3rsQ1Vq+HudAqLHcuQKCJjzqo5x2/MOk7DlI+FHtKPLn50CfizmZNiMIn/2
 lVfp6PoZhtBCJAlQFx3VHjYIir5ENvcmdj0ehsocVe4vYFFfBh0NrN8/ixcWyl0i
 z4BSC9/AQTJuAt2mxj2g/OE9ipFqGkhHspSy87GWqCSzIKIKuZYFRHB55e6h6/kA
 11MceDQ+VNmO6dkU4G6/dChaeXt+5omU9mlEaugzmtb/G0HbvYW5jJJuvVMmmGde
 Gy2F2SazGJsfLLBS+I7yKJRDhn5+m+9Q0gCsiKDbEcDoRLrwsi5zraRRVrsKI9q7
 CAFMrU5MCzniMh1UpJxdETPbuxjc54/Uwp3k3ieg7klIyx2rxvL6MzED9O67qkay
 1m0A8hRNpvi+gqS5Zd+V9YafgOEHDziL/KDypp1je+6KbLBvlBsH3YFBps2APJou
 +VgVElxZnwzas4kAkUC+RbCEmRXc/T96oL1mH8w1q4O3iDoaMVc=
 =RO+Y
 -----END PGP SIGNATURE-----

Merge tag 'media/v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media

Pull media updates from Mauro Carvalho Chehab:

 - the old V4L2 core videobuf kAPI was finally removed. All media
   drivers should now be using VB2 kAPI

 - new automotive driver: mgb4

 - new platform video driver: npcm-video

 - new sensor driver: mt9m114

 - new TI driver, used in conjunction with the Cadence CSI2RX IP, to
   bridge the TI-specific parts

 - ir-rx51 was removed and the N900 DT binding was moved to the
   pwm-ir-tx generic driver

 - drop atomisp-specific ov5693, using the upstream driver instead

 - the camss driver has gained RDI3 support for VFE 17x

 - the atomisp driver now detects ISP2400 or ISP2401 at run time. No
   need to set it up at build time anymore

 - lots of driver fixes, cleanups and improvements

* tag 'media/v6.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (377 commits)
  media: nuvoton: VIDEO_NPCM_VCD_ECE should depend on ARCH_NPCM
  media: venus: Fix firmware path for resources
  media: venus: hfi_cmds: Replace one-element array with flex-array member and use __counted_by
  media: venus: hfi_parser: Add check to keep the number of codecs within range
  media: venus: hfi: add checks to handle capabilities from firmware
  media: venus: hfi: fix the check to handle session buffer requirement
  media: venus: hfi: add checks to perform sanity on queue pointers
  media: platform: cadence: select MIPI_DPHY dependency
  media: MAINTAINERS: Fix path for J721E CSI2RX bindings
  media: cec: meson: always include meson sub-directory in Makefile
  media: videobuf2: Fix IS_ERR checking in vb2_dc_put_userptr()
  media: platform: mtk-mdp3: fix uninitialized variable in mdp_path_config()
  media: mediatek: vcodec: using encoder device to alloc/free encoder memory
  media: imx-jpeg: notify source chagne event when the first picture parsed
  media: cx231xx: Use EP5_BUF_SIZE macro
  media: siano: Drop unnecessary error check for debugfs_create_dir/file()
  media: mediatek: vcodec: Handle invalid encoder vsi
  media: aspeed: Drop unnecessary error check for debugfs_create_file()
  Documentation: media: buffer.rst: fix V4L2_BUF_FLAG_PREPARED
  Documentation: media: gen-errors.rst: fix confusing ENOTTY description
  ...
This commit is contained in:
Linus Torvalds 2023-11-06 15:06:06 -08:00
commit be3ca57cfb
505 changed files with 17372 additions and 18179 deletions


@ -0,0 +1,374 @@
.. SPDX-License-Identifier: GPL-2.0
====================
mgb4 sysfs interface
====================
The mgb4 driver provides a sysfs interface that is used to configure video
stream related parameters (some of them must be set properly before the v4l2
device can be opened) and to obtain the video device/stream status.
There are two types of parameters: global (PCI card related) ones, found under
``/sys/class/video4linux/videoX/device``, and module-specific ones, found under
``/sys/class/video4linux/videoX``.
Global (PCI card) parameters
============================
**module_type** (R):
Module type.
| 0 - No module present
| 1 - FPDL3
| 2 - GMSL
**module_version** (R):
Module version number. Zero in case of a missing module.
**fw_type** (R):
Firmware type.
| 1 - FPDL3
| 2 - GMSL
**fw_version** (R):
Firmware version number.
**serial_number** (R):
Card serial number. The format is::
PRODUCT-REVISION-SERIES-SERIAL
where each component is an 8-bit number.
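As an illustration, a serial number in this format can be split into its
components with a small shell snippet. The sample value below is made up for
the example, not taken from real hardware; on a real card the value would come
from ``cat /sys/class/video4linux/videoX/device/serial_number``.

```shell
# Split a serial number of the form PRODUCT-REVISION-SERIES-SERIAL.
# The value below is an illustrative sample, not a real card's number.
serial="1-2-3-42"
IFS=- read -r product revision series number <<EOF
$serial
EOF
echo "product=$product revision=$revision series=$series number=$number"
```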
Common FPDL3/GMSL input parameters
==================================
**input_id** (R):
Input number ID, zero based.
**oldi_lane_width** (RW):
Number of deserializer output lanes.
| 0 - single
| 1 - dual (default)
**color_mapping** (RW):
Mapping of the incoming bits in the signal to the colour bits of the pixels.
| 0 - OLDI/JEIDA
| 1 - SPWG/VESA (default)
**link_status** (R):
Video link status. If the link is locked, the chips are properly connected and
communicating at the same speed and with the same protocol. The link can be
locked without an active video stream.
A value of 0 is equivalent to the V4L2_IN_ST_NO_SYNC flag of the V4L2
VIDIOC_ENUMINPUT status bits.
| 0 - unlocked
| 1 - locked
**stream_status** (R):
Video stream status. A stream is detected if the link is locked, the input
pixel clock is running and the DE signal is moving.
A value of 0 is equivalent to the V4L2_IN_ST_NO_SIGNAL flag of the V4L2
VIDIOC_ENUMINPUT status bits.
| 0 - not detected
| 1 - detected
**video_width** (R):
Video stream width. This is the actual width as detected by the HW.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in the width
field of the v4l2_bt_timings struct.
**video_height** (R):
Video stream height. This is the actual height as detected by the HW.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in the height
field of the v4l2_bt_timings struct.
**vsync_status** (R):
The type of VSYNC pulses as detected by the video format detector.
The value is equivalent to the flags returned by VIDIOC_QUERY_DV_TIMINGS in
the polarities field of the v4l2_bt_timings struct.
| 0 - active low
| 1 - active high
| 2 - not available
**hsync_status** (R):
The type of HSYNC pulses as detected by the video format detector.
The value is equivalent to the flags returned by VIDIOC_QUERY_DV_TIMINGS in
the polarities field of the v4l2_bt_timings struct.
| 0 - active low
| 1 - active high
| 2 - not available
**vsync_gap_length** (RW):
If the incoming video signal does not contain synchronization VSYNC and
HSYNC pulses, these must be generated internally in the FPGA to achieve
the correct frame ordering. This value indicates how many "empty" pixels
(pixels with deasserted Data Enable signal) are necessary to generate the
internal VSYNC pulse.
**hsync_gap_length** (RW):
If the incoming video signal does not contain synchronization VSYNC and
HSYNC pulses, these must be generated internally in the FPGA to achieve
the correct frame ordering. This value indicates how many "empty" pixels
(pixels with deasserted Data Enable signal) are necessary to generate the
internal HSYNC pulse. The value must be greater than 1 and smaller than
vsync_gap_length.
**pclk_frequency** (R):
Input pixel clock frequency in kHz.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the pixelclock field of the v4l2_bt_timings struct.
*Note: The frequency_range parameter must be set properly first to get
a valid frequency here.*
**hsync_width** (R):
Width of the HSYNC signal in PCLK clock ticks.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the hsync field of the v4l2_bt_timings struct.
**vsync_width** (R):
Width of the VSYNC signal in PCLK clock ticks.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the vsync field of the v4l2_bt_timings struct.
**hback_porch** (R):
Number of PCLK pulses between deassertion of the HSYNC signal and the first
valid pixel in the video line (marked by DE=1).
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the hbackporch field of the v4l2_bt_timings struct.
**hfront_porch** (R):
Number of PCLK pulses between the end of the last valid pixel in the video
line (marked by DE=1) and assertion of the HSYNC signal.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the hfrontporch field of the v4l2_bt_timings struct.
**vback_porch** (R):
Number of video lines between deassertion of the VSYNC signal and the video
line with the first valid pixel (marked by DE=1).
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the vbackporch field of the v4l2_bt_timings struct.
**vfront_porch** (R):
Number of video lines between the end of the last valid pixel line (marked
by DE=1) and assertion of the VSYNC signal.
The value is identical to what VIDIOC_QUERY_DV_TIMINGS returns in
the vfrontporch field of the v4l2_bt_timings struct.
**frequency_range** (RW):
PLL frequency range of the OLDI input clock generator. The PLL frequency is
derived from the Pixel Clock Frequency (PCLK) and is equal to PCLK if
oldi_lane_width is set to "single" and PCLK/2 if oldi_lane_width is set to
"dual".
| 0 - PLL < 50MHz (default)
| 1 - PLL >= 50MHz
*Note: This parameter can not be changed while the input v4l2 device is
open.*
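The PLL/PCLK relation above can be sketched as a tiny helper. This is only an
illustration (values in kHz, lane-width encoding as in oldi_lane_width); the
function name is made up for the example.

```shell
# Derive the OLDI PLL frequency from the pixel clock, per the rule above:
# PLL = PCLK in single-lane mode (0), PCLK/2 in dual-lane mode (1).
pll_freq() {
    pclk_khz=$1
    lane_width=$2
    if [ "$lane_width" -eq 0 ]; then
        echo "$pclk_khz"
    else
        echo $(( pclk_khz / 2 ))
    fi
}
pll_freq 70000 1   # dual-lane mode
```

A result of 50000 kHz or more means frequency_range should be set to 1,
otherwise to 0.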
Common FPDL3/GMSL output parameters
===================================
**output_id** (R):
Output number ID, zero based.
**video_source** (RW):
Output video source. If set to 0 or 1, the source is the corresponding card
input and the v4l2 output devices are disabled. If set to 2 or 3, the source
is the corresponding v4l2 video output device. The default is
the corresponding v4l2 output, i.e. 2 for OUT1 and 3 for OUT2.
| 0 - input 0
| 1 - input 1
| 2 - v4l2 output 0
| 3 - v4l2 output 1
*Note: This parameter can not be changed while ANY of the input/output v4l2
devices is open.*
**display_width** (RW):
Display width. There is no autodetection of the connected display, so the
proper value must be set before the start of streaming. The default width
is 1280.
*Note: This parameter can not be changed while the output v4l2 device is
open.*
**display_height** (RW):
Display height. There is no autodetection of the connected display, so the
proper value must be set before the start of streaming. The default height
is 640.
*Note: This parameter can not be changed while the output v4l2 device is
open.*
**frame_rate** (RW):
Output video frame rate in frames per second. The default frame rate is
60Hz.
**hsync_polarity** (RW):
HSYNC signal polarity.
| 0 - active low (default)
| 1 - active high
**vsync_polarity** (RW):
VSYNC signal polarity.
| 0 - active low (default)
| 1 - active high
**de_polarity** (RW):
DE signal polarity.
| 0 - active low
| 1 - active high (default)
**pclk_frequency** (RW):
Output pixel clock frequency. Allowed values are between 25000 and 190000 kHz,
with a non-linear stepping between two consecutive allowed frequencies.
The driver finds the nearest allowed frequency to the given
value and sets it. When reading this property, you get the exact
frequency set by the driver. The default frequency is 70000kHz.
*Note: This parameter can not be changed while the output v4l2 device is
open.*
**hsync_width** (RW):
Width of the HSYNC signal in pixels. The default value is 16.
**vsync_width** (RW):
Width of the VSYNC signal in video lines. The default value is 2.
**hback_porch** (RW):
Number of PCLK pulses between deassertion of the HSYNC signal and the first
valid pixel in the video line (marked by DE=1). The default value is 32.
**hfront_porch** (RW):
Number of PCLK pulses between the end of the last valid pixel in the video
line (marked by DE=1) and assertion of the HSYNC signal. The default value
is 32.
**vback_porch** (RW):
Number of video lines between deassertion of the VSYNC signal and the video
line with the first valid pixel (marked by DE=1). The default value is 2.
**vfront_porch** (RW):
Number of video lines between the end of the last valid pixel line (marked
by DE=1) and assertion of the VSYNC signal. The default value is 2.
FPDL3 specific input parameters
===============================
**fpdl3_input_width** (RW):
Number of deserializer input lines.
| 0 - auto (default)
| 1 - single
| 2 - dual
FPDL3 specific output parameters
================================
**fpdl3_output_width** (RW):
Number of serializer output lines.
| 0 - auto (default)
| 1 - single
| 2 - dual
GMSL specific input parameters
==============================
**gmsl_mode** (RW):
GMSL speed mode.
| 0 - 12Gb/s (default)
| 1 - 6Gb/s
| 2 - 3Gb/s
| 3 - 1.5Gb/s
**gmsl_stream_id** (RW):
The GMSL multi-stream contains up to four video streams. This parameter
selects which stream is captured by the video input. The value is the
zero-based index of the stream. The default stream id is 0.
*Note: This parameter can not be changed while the input v4l2 device is
open.*
**gmsl_fec** (RW):
GMSL Forward Error Correction (FEC).
| 0 - disabled
| 1 - enabled (default)
====================
mgb4 mtd partitions
====================
The mgb4 driver creates a MTD device with two partitions:
- mgb4-fw.X - FPGA firmware.
- mgb4-data.X - Factory settings, e.g. card serial number.
The *mgb4-fw* partition is writable and is used for FW updates, *mgb4-data* is
read-only. The *X* attached to the partition name represents the card number.
Depending on the CONFIG_MTD_PARTITIONED_MASTER kernel configuration, you may
also have a third partition named *mgb4-flash* available in the system. This
partition represents the card's whole, unpartitioned FLASH memory and one
should not fiddle with it.
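For example, the MTD device node holding the firmware partition can be located
by name in ``/proc/mtd``. The ``/proc/mtd`` content below is a fabricated
sample (sizes and indices vary between systems); on a real system you would
read ``/proc/mtd`` directly instead of the sample file.

```shell
# Fabricated /proc/mtd sample for a system with one mgb4 card.
cat > /tmp/sample_proc_mtd <<'EOF'
dev:    size   erasesize  name
mtd0: 00970000 00010000 "mgb4-fw.0"
mtd1: 00010000 00010000 "mgb4-data.0"
EOF
# Find the MTD device with the card-0 firmware partition:
awk '/"mgb4-fw\.0"/ { sub(":", "", $1); print $1 }' /tmp/sample_proc_mtd
```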
====================
mgb4 iio (triggers)
====================
The mgb4 driver creates an Industrial I/O (IIO) device that provides trigger and
signal level status capability. The following scan elements are available:
**activity**:
The trigger levels and pending status.
| bit 1 - trigger 1 pending
| bit 2 - trigger 2 pending
| bit 5 - trigger 1 level
| bit 6 - trigger 2 level
**timestamp**:
The trigger event timestamp.
The IIO device can operate either in "raw" mode, where you can fetch the signal
levels (activity bits 5 and 6) using sysfs access, or in triggered buffer mode.
In the triggered buffer mode you can follow the signal level changes (activity
bits 1 and 2) using the IIO device in /dev. If you enable the timestamps, you
will also get the exact trigger event time that can be matched to a video frame
(every mgb4 video frame has a timestamp with the same clock source).
*Note: Although the activity sample always contains all the status bits, it
makes no sense to read the pending bits in raw mode or the level bits in the
triggered buffer mode - the values do not represent valid data in that case.*
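As a sketch of how a reader might interpret a raw activity sample, the bit
layout listed above can be decoded as follows (the helper name and the sample
value are made up for the example):

```shell
# Decode the mgb4 IIO "activity" scan element bits:
# bit 1/2 - trigger 1/2 pending, bit 5/6 - trigger 1/2 level.
decode_activity() {
    v=$1
    [ $(( (v >> 1) & 1 )) -ne 0 ] && echo "trigger 1 pending"
    [ $(( (v >> 2) & 1 )) -ne 0 ] && echo "trigger 2 pending"
    [ $(( (v >> 5) & 1 )) -ne 0 ] && echo "trigger 1 level high"
    [ $(( (v >> 6) & 1 )) -ne 0 ] && echo "trigger 2 level high"
    return 0
}
decode_activity 66   # bits 1 and 6 set
```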


@ -77,6 +77,7 @@ ipu3-cio2 Intel ipu3-cio2 driver
ivtv Conexant cx23416/cx23415 MPEG encoder/decoder
ivtvfb Conexant cx23415 framebuffer
mantis MANTIS based cards
mgb4 Digiteq Automotive MGB4 frame grabber
mxb Siemens-Nixdorf 'Multimedia eXtension Board'
netup-unidvb NetUP Universal DVB card
ngene Micronas nGene


@ -17,6 +17,7 @@ Video4Linux (V4L) driver-specific documentation
imx7
ipu3
ivtv
mgb4
omap3isp
omap4_camera
philips


@ -78,7 +78,7 @@ The trace events are defined on a per-codec basis, e.g.:
.. code-block:: bash
$ ls /sys/kernel/debug/tracing/events/ | grep visl
$ ls /sys/kernel/tracing/events/ | grep visl
visl_fwht_controls
visl_h264_controls
visl_hevc_controls
@ -90,13 +90,13 @@ For example, in order to dump HEVC SPS data:
.. code-block:: bash
$ echo 1 > /sys/kernel/debug/tracing/events/visl_hevc_controls/v4l2_ctrl_hevc_sps/enable
$ echo 1 > /sys/kernel/tracing/events/visl_hevc_controls/v4l2_ctrl_hevc_sps/enable
The SPS data will be dumped to the trace buffer, i.e.:
.. code-block:: bash
$ cat /sys/kernel/debug/tracing/trace
$ cat /sys/kernel/tracing/trace
video_parameter_set_id 0
seq_parameter_set_id 0
pic_width_in_luma_samples 1920


@ -15,7 +15,10 @@ description:
properties:
compatible:
const: pwm-ir-tx
oneOf:
- const: pwm-ir-tx
- const: nokia,n900-ir
deprecated: true
pwms:
maxItems: 1


@ -19,6 +19,7 @@ properties:
- amlogic,meson6-ir
- amlogic,meson8b-ir
- amlogic,meson-gxbb-ir
- amlogic,meson-s4-ir
- items:
- const: amlogic,meson-gx-ir
- const: amlogic,meson-gxbb-ir


@ -18,6 +18,7 @@ properties:
items:
- enum:
- starfive,jh7110-csi2rx
- ti,j721e-csi2rx
- const: cdns,csi2rx
reg:


@ -14,6 +14,9 @@ description: |-
interface and CCI (I2C compatible) control bus. The output format
is raw Bayer.
allOf:
- $ref: /schemas/media/video-interface-devices.yaml#
properties:
compatible:
const: hynix,hi846
@ -86,7 +89,7 @@ required:
- vddd-supply
- port
additionalProperties: false
unevaluatedProperties: false
examples:
- |
@ -109,6 +112,8 @@ examples:
vddio-supply = <&reg_camera_vddio>;
reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
shutdown-gpios = <&gpio5 4 GPIO_ACTIVE_LOW>;
orientation = <0>;
rotation = <0>;
port {
camera_out: endpoint {


@ -0,0 +1,114 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/i2c/onnn,mt9m114.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: onsemi 1/6-inch 720p CMOS Digital Image Sensor
maintainers:
- Laurent Pinchart <laurent.pinchart@ideasonboard.com>
description: |-
The onsemi MT9M114 is a 1/6-inch 720p (1.26 Mp) CMOS digital image sensor
with an active pixel-array size of 1296H x 976V. It is programmable through
an I2C interface and outputs image data over an 8-bit parallel or 1-lane MIPI
CSI-2 connection.
properties:
compatible:
const: onnn,mt9m114
reg:
description: I2C device address
enum:
- 0x48
- 0x5d
clocks:
description: EXTCLK clock signal
maxItems: 1
vdd-supply:
description:
Core digital voltage supply, 1.8V
vddio-supply:
description:
I/O digital voltage supply, 1.8V or 2.8V
vaa-supply:
description:
Analog voltage supply, 2.8V
reset-gpios:
description: |-
Reference to the GPIO connected to the RESET_BAR pin, if any (active
low).
port:
$ref: /schemas/graph.yaml#/$defs/port-base
additionalProperties: false
properties:
endpoint:
$ref: /schemas/media/video-interfaces.yaml#
additionalProperties: false
properties:
bus-type:
enum: [4, 5, 6]
link-frequencies: true
remote-endpoint: true
# The number and mapping of lanes (for CSI-2), and the bus width and
# signal polarities (for parallel and BT.656) are fixed and must not
# be specified.
required:
- bus-type
- link-frequencies
required:
- compatible
- reg
- clocks
- vdd-supply
- vddio-supply
- vaa-supply
- port
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/media/video-interfaces.h>
i2c0 {
#address-cells = <1>;
#size-cells = <0>;
sensor@48 {
compatible = "onnn,mt9m114";
reg = <0x48>;
clocks = <&clk24m 0>;
reset-gpios = <&gpio5 21 GPIO_ACTIVE_LOW>;
vddio-supply = <&reg_cam_1v8>;
vdd-supply = <&reg_cam_1v8>;
vaa-supply = <&reg_2p8v>;
port {
endpoint {
bus-type = <MEDIA_BUS_TYPE_CSI2_DPHY>;
link-frequencies = /bits/ 64 <384000000>;
remote-endpoint = <&mipi_csi_in>;
};
};
};
};
...


@ -68,12 +68,6 @@ properties:
marked GPIO_ACTIVE_LOW.
maxItems: 1
rotation:
enum:
- 0 # Sensor Mounted Upright
- 180 # Sensor Mounted Upside Down
default: 0
port:
$ref: /schemas/graph.yaml#/$defs/port-base
additionalProperties: false
@ -114,7 +108,7 @@ required:
- reset-gpios
- port
additionalProperties: false
unevaluatedProperties: false
examples:
- |


@ -52,10 +52,6 @@ properties:
description:
GPIO connected to the reset pin (active low)
orientation: true
rotation: true
port:
$ref: /schemas/graph.yaml#/$defs/port-base
additionalProperties: false
@ -95,7 +91,7 @@ required:
- dvdd-supply
- port
additionalProperties: false
unevaluatedProperties: false
examples:
- |


@ -44,11 +44,6 @@ properties:
description: >
Reference to the GPIO connected to the reset pin, if any.
rotation:
enum:
- 0
- 180
port:
description: Digital Output Port
$ref: /schemas/graph.yaml#/$defs/port-base
@ -85,7 +80,7 @@ required:
- DOVDD-supply
- port
additionalProperties: false
unevaluatedProperties: false
examples:
- |


@ -0,0 +1,141 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/i2c/ovti,ov5642.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: OmniVision OV5642 Image Sensor
maintainers:
- Fabio Estevam <festevam@gmail.com>
allOf:
- $ref: /schemas/media/video-interface-devices.yaml#
properties:
compatible:
const: ovti,ov5642
reg:
maxItems: 1
clocks:
description: XCLK Input Clock
AVDD-supply:
description: Analog voltage supply, 2.8V.
DVDD-supply:
description: Digital core voltage supply, 1.5V.
DOVDD-supply:
description: Digital I/O voltage supply, 1.8V.
powerdown-gpios:
maxItems: 1
description: Reference to the GPIO connected to the powerdown pin, if any.
reset-gpios:
maxItems: 1
description: Reference to the GPIO connected to the reset pin, if any.
port:
$ref: /schemas/graph.yaml#/$defs/port-base
description: |
Video output port.
properties:
endpoint:
$ref: /schemas/media/video-interfaces.yaml#
unevaluatedProperties: false
properties:
bus-type:
enum: [5, 6]
bus-width:
enum: [8, 10]
default: 10
data-shift:
enum: [0, 2]
default: 0
hsync-active:
enum: [0, 1]
default: 1
vsync-active:
enum: [0, 1]
default: 1
pclk-sample:
enum: [0, 1]
default: 1
allOf:
- if:
properties:
bus-type:
const: 6
then:
properties:
hsync-active: false
vsync-active: false
- if:
properties:
bus-width:
const: 10
then:
properties:
data-shift:
const: 0
required:
- bus-type
additionalProperties: false
required:
- compatible
- reg
- clocks
- port
additionalProperties: false
examples:
- |
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/media/video-interfaces.h>
i2c {
#address-cells = <1>;
#size-cells = <0>;
camera@3c {
compatible = "ovti,ov5642";
reg = <0x3c>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_ov5642>;
clocks = <&clk_ext_camera>;
DOVDD-supply = <&vgen4_reg>;
AVDD-supply = <&vgen3_reg>;
DVDD-supply = <&vgen2_reg>;
powerdown-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>;
reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>;
port {
ov5642_to_parallel: endpoint {
bus-type = <MEDIA_BUS_TYPE_PARALLEL>;
remote-endpoint = <&parallel_from_ov5642>;
bus-width = <8>;
data-shift = <2>; /* lines 9:2 are used */
hsync-active = <0>;
vsync-active = <0>;
pclk-sample = <1>;
};
};
};
};


@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Omnivision OV5693/OV5695 CMOS Sensors
maintainers:
- Tommaso Merciai <tommaso.merciai@amarulasolutions.com>
- Tommaso Merciai <tomm.merciai@gmail.com>
description: |
The Omnivision OV5693/OV5695 are high performance, 1/4-inch, 5 megapixel, CMOS


@ -91,7 +91,7 @@ required:
- vddd-supply
- port
additionalProperties: false
unevaluatedProperties: false
examples:
- |


@ -44,14 +44,6 @@ properties:
description: Sensor reset (XCLR) GPIO
maxItems: 1
flash-leds: true
lens-focus: true
orientation: true
rotation: true
port:
$ref: /schemas/graph.yaml#/$defs/port-base
unevaluatedProperties: false
@ -89,7 +81,7 @@ required:
- ovdd-supply
- port
additionalProperties: false
unevaluatedProperties: false
examples:
- |


@ -1,20 +0,0 @@
Device-Tree bindings for LIRC TX driver for Nokia N900(RX51)
Required properties:
- compatible: should be "nokia,n900-ir".
- pwms: specifies PWM used for IR signal transmission.
Example node:
pwm9: dmtimer-pwm@9 {
compatible = "ti,omap-dmtimer-pwm";
ti,timers = <&timer9>;
ti,clock-source = <0x00>; /* timer_sys_ck */
#pwm-cells = <3>;
};
ir: n900-ir {
compatible = "nokia,n900-ir";
pwms = <&pwm9 0 26316 0>; /* 38000 Hz */
};


@ -0,0 +1,43 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/nuvoton,npcm-ece.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Nuvoton NPCM Encoding Compression Engine
maintainers:
- Joseph Liu <kwliu@nuvoton.com>
- Marvin Lin <kflin@nuvoton.com>
description: |
Video Encoding Compression Engine (ECE) present on Nuvoton NPCM SoCs.
properties:
compatible:
enum:
- nuvoton,npcm750-ece
- nuvoton,npcm845-ece
reg:
maxItems: 1
resets:
maxItems: 1
required:
- compatible
- reg
- resets
additionalProperties: false
examples:
- |
#include <dt-bindings/reset/nuvoton,npcm7xx-reset.h>
ece: video-codec@f0820000 {
compatible = "nuvoton,npcm750-ece";
reg = <0xf0820000 0x2000>;
resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_ECE>;
};


@ -0,0 +1,72 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/nuvoton,npcm-vcd.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Nuvoton NPCM Video Capture/Differentiation Engine
maintainers:
- Joseph Liu <kwliu@nuvoton.com>
- Marvin Lin <kflin@nuvoton.com>
description: |
Video Capture/Differentiation Engine (VCD) present on Nuvoton NPCM SoCs.
properties:
compatible:
enum:
- nuvoton,npcm750-vcd
- nuvoton,npcm845-vcd
reg:
maxItems: 1
interrupts:
maxItems: 1
resets:
maxItems: 1
nuvoton,sysgcr:
$ref: /schemas/types.yaml#/definitions/phandle
description: phandle to access GCR (Global Control Register) registers.
nuvoton,sysgfxi:
$ref: /schemas/types.yaml#/definitions/phandle
description: phandle to access GFXI (Graphics Core Information) registers.
nuvoton,ece:
$ref: /schemas/types.yaml#/definitions/phandle
description: phandle to access ECE (Encoding Compression Engine) registers.
memory-region:
maxItems: 1
description:
CMA pool to use for buffers allocation instead of the default CMA pool.
required:
- compatible
- reg
- interrupts
- resets
- nuvoton,sysgcr
- nuvoton,sysgfxi
- nuvoton,ece
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/nuvoton,npcm7xx-reset.h>
vcd: vcd@f0810000 {
compatible = "nuvoton,npcm750-vcd";
reg = <0xf0810000 0x10000>;
interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_VCD>;
nuvoton,sysgcr = <&gcr>;
nuvoton,sysgfxi = <&gfxi>;
nuvoton,ece = <&ece>;
};


@ -48,6 +48,14 @@ properties:
iommus:
maxItems: 2
interconnects:
maxItems: 2
interconnect-names:
items:
- const: video-mem
- const: cpu-cfg
operating-points-v2: true
opp-table:
type: object


@ -68,6 +68,13 @@ properties:
iommus:
maxItems: 1
resets:
items:
- description: AXI reset line
- description: AXI bus interface unit reset line
- description: APB reset line
- description: APB bus interface unit reset line
required:
- compatible
- reg


@ -75,13 +75,20 @@ properties:
power-domains:
maxItems: 1
samsung,pmu-syscon:
$ref: /schemas/types.yaml#/definitions/phandle
description:
Power Management Unit (PMU) system controller interface, used to
power/start the ISP.
patternProperties:
"^pmu@[0-9a-f]+$":
type: object
additionalProperties: false
deprecated: true
description:
Node representing the SoC's Power Management Unit (duplicated with the
correct PMU node in the SoC).
correct PMU node in the SoC). Deprecated, use samsung,pmu-syscon.
properties:
reg:
@ -131,6 +138,7 @@ required:
- clock-names
- interrupts
- ranges
- samsung,pmu-syscon
- '#size-cells'
additionalProperties: false
@ -179,15 +187,12 @@ examples:
<&sysmmu_fimc_fd>, <&sysmmu_fimc_mcuctl>;
iommu-names = "isp", "drc", "fd", "mcuctl";
power-domains = <&pd_isp>;
samsung,pmu-syscon = <&pmu_system_controller>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
pmu@10020000 {
reg = <0x10020000 0x3000>;
};
i2c-isp@12140000 {
compatible = "samsung,exynos4212-i2c-isp";
reg = <0x12140000 0x100>;


@ -118,7 +118,7 @@ examples:
#clock-cells = <1>;
#address-cells = <1>;
#size-cells = <1>;
ranges = <0x0 0x0 0x18000000>;
ranges = <0x0 0x0 0xba1000>;
clocks = <&clock CLK_SCLK_CAM0>, <&clock CLK_SCLK_CAM1>,
<&clock CLK_PIXELASYNCM0>, <&clock CLK_PIXELASYNCM1>;
@ -133,9 +133,9 @@ examples:
pinctrl-0 = <&cam_port_a_clk_active &cam_port_b_clk_active>;
pinctrl-names = "default";
fimc@11800000 {
fimc@0 {
compatible = "samsung,exynos4212-fimc";
reg = <0x11800000 0x1000>;
reg = <0x00000000 0x1000>;
interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clock CLK_FIMC0>,
<&clock CLK_SCLK_FIMC0>;
@ -152,9 +152,9 @@ examples:
/* ... FIMC 1-3 */
csis@11880000 {
csis@80000 {
compatible = "samsung,exynos4210-csis";
reg = <0x11880000 0x4000>;
reg = <0x00080000 0x4000>;
interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clock CLK_CSIS0>,
<&clock CLK_SCLK_CSIS0>;
@ -187,9 +187,9 @@ examples:
/* ... CSIS 1 */
fimc-lite@12390000 {
fimc-lite@b90000 {
compatible = "samsung,exynos4212-fimc-lite";
reg = <0x12390000 0x1000>;
reg = <0xb90000 0x1000>;
interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
power-domains = <&pd_isp>;
clocks = <&isp_clock CLK_ISP_FIMC_LITE0>;
@ -199,9 +199,9 @@ examples:
/* ... FIMC-LITE 1 */
fimc-is@12000000 {
fimc-is@800000 {
compatible = "samsung,exynos4212-fimc-is";
reg = <0x12000000 0x260000>;
reg = <0x00800000 0x260000>;
interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&isp_clock CLK_ISP_FIMC_LITE0>,
@ -237,18 +237,15 @@ examples:
<&sysmmu_fimc_fd>, <&sysmmu_fimc_mcuctl>;
iommu-names = "isp", "drc", "fd", "mcuctl";
power-domains = <&pd_isp>;
samsung,pmu-syscon = <&pmu_system_controller>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
pmu@10020000 {
reg = <0x10020000 0x3000>;
};
i2c-isp@12140000 {
i2c-isp@940000 {
compatible = "samsung,exynos4212-i2c-isp";
reg = <0x12140000 0x100>;
reg = <0x00940000 0x100>;
clocks = <&isp_clock CLK_ISP_I2C1_ISP>;
clock-names = "i2c_isp";
pinctrl-0 = <&fimc_is_i2c1>;


@ -0,0 +1,100 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/media/ti,j721e-csi2rx-shim.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI J721E CSI2RX Shim
description: |
The TI J721E CSI2RX Shim is a wrapper around the Cadence CSI2RX bridge that
enables sending captured frames to memory over PSI-L DMA. In the J721E
Technical Reference Manual (SPRUIL1B) it is referred to as "SHIM" under the
CSI_RX_IF section.
maintainers:
- Jai Luthra <j-luthra@ti.com>
properties:
compatible:
const: ti,j721e-csi2rx-shim
dmas:
maxItems: 1
dma-names:
items:
- const: rx0
reg:
maxItems: 1
power-domains:
maxItems: 1
ranges: true
"#address-cells": true
"#size-cells": true
patternProperties:
"^csi-bridge@":
type: object
description: CSI2 bridge node.
$ref: cdns,csi2rx.yaml#
required:
- compatible
- reg
- dmas
- dma-names
- power-domains
- ranges
- "#address-cells"
- "#size-cells"
additionalProperties: false
examples:
- |
#include <dt-bindings/soc/ti,sci_pm_domain.h>
ti_csi2rx0: ticsi2rx@4500000 {
compatible = "ti,j721e-csi2rx-shim";
dmas = <&main_udmap 0x4940>;
dma-names = "rx0";
reg = <0x4500000 0x1000>;
power-domains = <&k3_pds 26 TI_SCI_PD_EXCLUSIVE>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
cdns_csi2rx: csi-bridge@4504000 {
compatible = "ti,j721e-csi2rx", "cdns,csi2rx";
reg = <0x4504000 0x1000>;
clocks = <&k3_clks 26 2>, <&k3_clks 26 0>, <&k3_clks 26 2>,
<&k3_clks 26 2>, <&k3_clks 26 3>, <&k3_clks 26 3>;
clock-names = "sys_clk", "p_clk", "pixel_if0_clk",
"pixel_if1_clk", "pixel_if2_clk", "pixel_if3_clk";
phys = <&dphy0>;
phy-names = "dphy";
ports {
#address-cells = <1>;
#size-cells = <0>;
csi2_0: port@0 {
reg = <0>;
csi2rx0_in_sensor: endpoint {
remote-endpoint = <&csi2_cam0>;
bus-type = <4>; /* CSI2 DPHY. */
clock-lanes = <0>;
data-lanes = <1 2>;
};
};
};
};
};


@ -160,6 +160,7 @@ properties:
$ref: /schemas/types.yaml#/definitions/uint32-array
minItems: 1
maxItems: 8
uniqueItems: true
items:
# Assume up to 9 physical lane indices
maximum: 8


@ -0,0 +1,39 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/soc/nuvoton/nuvoton,gfxi.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Graphics Core Information block in Nuvoton SoCs
maintainers:
- Joseph Liu <kwliu@nuvoton.com>
- Marvin Lin <kflin@nuvoton.com>
description:
The Graphics Core Information (GFXI) is a block of registers in Nuvoton SoCs
that analyzes graphics core behavior and provides information in registers.
properties:
compatible:
items:
- enum:
- nuvoton,npcm750-gfxi
- nuvoton,npcm845-gfxi
- const: syscon
reg:
maxItems: 1
required:
- compatible
- reg
additionalProperties: false
examples:
- |
gfxi: gfxi@e000 {
compatible = "nuvoton,npcm750-gfxi", "syscon";
reg = <0xe000 0x100>;
};


@ -309,8 +309,6 @@ properties:
- nuvoton,w83773g
# OKI ML86V7667 video decoder
- oki,ml86v7667
# OV5642: Color CMOS QSXGA (5-megapixel) Image Sensor with OmniBSI and Embedded TrueFocus
- ovti,ov5642
# 48-Lane, 12-Port PCI Express Gen 2 (5.0 GT/s) Switch
- plx,pex8648
# Pulsedlight LIDAR range-finding sensor


@ -1,8 +1,14 @@
.. SPDX-License-Identifier: GPL-2.0
.. _media_writing_camera_sensor_drivers:
Writing camera sensor drivers
=============================
This document covers the in-kernel APIs only. For the best practices on
userspace API implementation in camera sensor drivers, please see
:ref:`media_using_camera_sensor_drivers`.
CSI-2 and parallel (BT.601 and BT.656) busses
---------------------------------------------
@ -13,7 +19,7 @@ Handling clocks
Camera sensors have an internal clock tree including a PLL and a number of
divisors. The clock tree is generally configured by the driver based on a few
input parameters that are specific to the hardware:: the external clock frequency
input parameters that are specific to the hardware: the external clock frequency
and the link frequency. The two parameters generally are obtained from system
firmware. **No other frequencies should be used in any circumstances.**
@ -32,110 +38,61 @@ can rely on this frequency being used.
Devicetree
~~~~~~~~~~
The currently preferred way to achieve this is using ``assigned-clocks``,
``assigned-clock-parents`` and ``assigned-clock-rates`` properties. See
``Documentation/devicetree/bindings/clock/clock-bindings.txt`` for more
information. The driver then gets the frequency using ``clk_get_rate()``.
The preferred way to achieve this is using ``assigned-clocks``,
``assigned-clock-parents`` and ``assigned-clock-rates`` properties. See the
`clock device tree bindings
<https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/clock/clock.yaml>`_
for more information. The driver then gets the frequency using
``clk_get_rate()``.
This approach has the drawback that there's no guarantee that the frequency
hasn't been modified directly or indirectly by another driver, or supported by
the board's clock tree to begin with. Changes to the Common Clock Framework API
are required to ensure reliability.
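A minimal sketch of the driver side of this check, in C: the stub below stands in for the kernel's ``clk_get_rate()``, and the function name and frequency table are hypothetical. The point is that the driver reads back the frequency the firmware assigned and refuses to probe with anything it does not support:

```c
/* Stub standing in for the kernel's clk_get_rate(); in a real driver the
 * rate comes from the clock configured via assigned-clock-rates. */
static unsigned long clk_get_rate_stub(void)
{
	return 19200000; /* pretend firmware assigned a 19.2 MHz external clock */
}

/* External clock frequencies this hypothetical sensor supports. */
static const unsigned long sensor_extclk_freqs[] = { 19200000, 24000000 };

/* Return 0 and store the frequency if it is supported, -1 otherwise. */
static int sensor_check_extclk(unsigned long *freq)
{
	unsigned long rate = clk_get_rate_stub();
	unsigned int i;

	for (i = 0; i < sizeof(sensor_extclk_freqs) / sizeof(sensor_extclk_freqs[0]); i++) {
		if (sensor_extclk_freqs[i] == rate) {
			*freq = rate;
			return 0;
		}
	}
	return -1; /* unsupported frequency: refuse to probe */
}
```

In a real driver the failure path would log the unsupported rate and return an error from probe.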
Frame size
----------
There are two distinct ways to configure the frame size produced by camera
sensors.
Freely configurable camera sensor drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Freely configurable camera sensor drivers expose the device's internal
processing pipeline as one or more sub-devices with different cropping and
scaling configurations. The output size of the device is the result of a series
of cropping and scaling operations from the device's pixel array's size.
An example of such a driver is the CCS driver (see ``drivers/media/i2c/ccs``).
Register list based drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Register list based drivers are generally not able to configure the device
they control based on arbitrary user requests. Instead, they are limited to a
number of preset configurations, each combining several parameters that are
independent at the hardware level. The driver picks such a configuration based
on the format set on a source pad at the end of the device's internal pipeline.
Most sensor drivers are implemented this way, see e.g.
``drivers/media/i2c/imx319.c`` for an example.
Frame interval configuration
----------------------------
There are two different methods for obtaining possibilities for different frame
intervals as well as configuring the frame interval. Which one to implement
depends on the type of the device.
Raw camera sensors
~~~~~~~~~~~~~~~~~~
Instead of being set directly as a high level parameter, the frame interval of
a raw camera sensor is the result of configuring a number of implementation
specific parameters. Luckily, these parameters tend to be the same for more or
less all modern raw camera sensors.
The frame interval is calculated using the following equation::
	frame interval = (analogue crop width + horizontal blanking) *
			 (analogue crop height + vertical blanking) / pixel rate
The formula is bus independent and is applicable to raw timing parameters on a
large variety of devices beyond camera sensors. Devices that have no analogue
crop use the full source image size, i.e. the pixel array size.
Horizontal and vertical blanking are specified by ``V4L2_CID_HBLANK`` and
``V4L2_CID_VBLANK``, respectively. The unit of the ``V4L2_CID_HBLANK`` control
is pixels and the unit of the ``V4L2_CID_VBLANK`` is lines. The pixel rate in
the sensor's **pixel array** is specified by ``V4L2_CID_PIXEL_RATE`` in the same
sub-device. The unit of that control is pixels per second.
Register list based drivers need to implement read-only sub-device nodes for the
purpose. Devices that are not register list based need these to configure the
device's internal processing pipeline.
The first entity in the linear pipeline is the pixel array. The pixel array may
be followed by other entities that are there to allow configuring binning,
skipping, scaling or digital crop :ref:`v4l2-subdev-selections`.
USB cameras etc. devices
~~~~~~~~~~~~~~~~~~~~~~~~
USB video class hardware, as well as many cameras offering a similar higher
level interface natively, generally use the concept of frame interval (or frame
rate) on device level in firmware or hardware. This means lower level controls
implemented by raw cameras may not be used on uAPI (or even kAPI) to control the
frame interval on these devices.
Power management
----------------
Always use runtime PM to manage the power states of your device. Camera sensor
drivers are in no way special in this respect: they are responsible for
controlling the power state of the device they otherwise control as well. In
general, the device must be powered on at least when its registers are being
accessed and when it is streaming.
Camera sensors are used in conjunction with other devices to form a camera
pipeline. They must obey the rules listed herein to ensure coherent power
management over the pipeline.
Existing camera sensor drivers may rely on the old
struct v4l2_subdev_core_ops->s_power() callback for bridge or ISP drivers to
manage their power state. This is however **deprecated**. If you feel you need
to begin calling an s_power from an ISP or a bridge driver, instead please add
runtime PM support to the sensor driver you are using. Likewise, new drivers
should not use s_power.
Camera sensor drivers are responsible for controlling the power state of the
device they otherwise control as well. They shall use runtime PM to manage
power states. Runtime PM shall be enabled at probe time and disabled at remove
time. Drivers should enable runtime PM autosuspend.
Please see examples in e.g. ``drivers/media/i2c/ov8856.c`` and
``drivers/media/i2c/ccs/ccs-core.c``. The two drivers work in both ACPI
and DT based systems.
The runtime PM handlers shall handle clocks, regulators, GPIOs, and other
system resources required to power the sensor up and down. For drivers that
don't use any of those resources (such as drivers that support ACPI systems
only), the runtime PM handlers may be left unimplemented.
In general, the device shall be powered on at least when its registers are
being accessed and when it is streaming. Drivers should use
``pm_runtime_resume_and_get()`` when starting streaming and
``pm_runtime_put()`` or ``pm_runtime_put_autosuspend()`` when stopping
streaming. They may power the device up at probe time (for example to read
identification registers), but should not keep it powered unconditionally after
probe.
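As a sketch of the call pattern (illustrative only: the stubs below stand in for the kernel's ``pm_runtime_resume_and_get()`` and ``pm_runtime_put()``, tracking a usage count so the intended pairing is visible; all names are hypothetical):

```c
/* Stubbed runtime PM usage counting. */
static int rpm_usage_count;

static int pm_runtime_resume_and_get_stub(void)
{
	rpm_usage_count++;	/* power the device up, take a usage reference */
	return 0;
}

static void pm_runtime_put_stub(void)
{
	rpm_usage_count--;	/* drop the reference, allowing autosuspend */
}

/* s_stream-style handler: hold a runtime PM reference exactly while
 * the sensor is streaming. */
static int sensor_s_stream_sketch(int enable)
{
	if (enable) {
		if (pm_runtime_resume_and_get_stub() < 0)
			return -1;
		/* program and start the sensor's streaming registers here */
	} else {
		/* stop streaming here */
		pm_runtime_put_stub();
	}
	return 0;
}
```

A real driver would typically use ``pm_runtime_put_autosuspend()`` on the stop path so the device lingers powered briefly in case streaming restarts.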
At system suspend time, the whole camera pipeline must stop streaming, and
restart when the system is resumed. This requires coordination between the
camera sensor and the rest of the camera pipeline. Bridge drivers are
responsible for this coordination, and instruct camera sensors to stop and
restart streaming by calling the appropriate subdev operations
(``.s_stream()``, ``.enable_streams()`` or ``.disable_streams()``). Camera
sensor drivers shall therefore **not** keep track of the streaming state to
stop streaming in the PM suspend handler and restart it in the resume handler.
Drivers should in general not implement the system PM handlers.
Camera sensor drivers shall **not** implement the subdev ``.s_power()``
operation, as it is deprecated. While this operation is implemented in some
existing drivers as they predate the deprecation, new drivers shall use runtime
PM instead. If you feel you need to begin calling ``.s_power()`` from an ISP or
a bridge driver, instead add runtime PM support to the sensor driver you are
using and drop its ``.s_power()`` handler.
Please also see :ref:`examples <media-camera-sensor-examples>`.
Control framework
~~~~~~~~~~~~~~~~~
@ -155,21 +112,36 @@ access the device.
Rotation, orientation and flipping
----------------------------------
Some systems have the camera sensor mounted upside down compared to its natural
mounting rotation. In such cases, drivers shall expose the information to
userspace with the :ref:`V4L2_CID_CAMERA_SENSOR_ROTATION
<v4l2-camera-sensor-rotation>` control.
Sensor drivers shall also report the sensor's mounting orientation with the
:ref:`V4L2_CID_CAMERA_SENSOR_ORIENTATION <v4l2-camera-sensor-orientation>`.
Use ``v4l2_fwnode_device_parse()`` to obtain rotation and orientation
information from system firmware and ``v4l2_ctrl_new_fwnode_properties()`` to
register the appropriate controls.
Sensor drivers that have any vertical or horizontal flips embedded in the
register programming sequences shall initialize the V4L2_CID_HFLIP and
V4L2_CID_VFLIP controls with the values programmed by the register sequences.
The default values of these controls shall be 0 (disabled). Especially these
controls shall not be inverted, independently of the sensor's mounting
rotation.
.. _media-camera-sensor-examples:
Example drivers
---------------
Features implemented by sensor drivers vary, and depending on the set of
supported features and other qualities, some drivers serve better as examples
than others. The following drivers are known to be good examples:
.. flat-table:: Example sensor drivers
    :header-rows: 0
    :widths: 1 1 1 2

    * - Driver name
      - File(s)
      - Driver type
      - Example topic
    * - CCS
      - ``drivers/media/i2c/ccs/``
      - Freely configurable
      - Power management (ACPI and DT), UAPI
    * - imx219
      - ``drivers/media/i2c/imx219.c``
      - Register list based
      - Power management (DT), UAPI, mode selection
    * - imx319
      - ``drivers/media/i2c/imx319.c``
      - Register list based
      - Power management (ACPI and DT)


@ -30,7 +30,7 @@ that purpose, selection target ``V4L2_SEL_TGT_COMPOSE`` is supported on the
sink pad (0).
Additionally, if a device has no scaler or digital crop functionality, the
source pad (1) expses another digital crop selection rectangle that can only
source pad (1) exposes another digital crop selection rectangle that can only
crop at the end of the lines and frames.
Scaler
@ -78,6 +78,14 @@ For SMIA (non-++) compliant devices the static data file name is
vvvv or vv denotes MIPI and SMIA manufacturer IDs respectively, mmmm model ID
and rrrr or rr revision number.
CCS tools
~~~~~~~~~
`CCS tools <https://github.com/MIPI-Alliance/ccs-tools/>`_ is a set of tools
for working with CCS static data files. It includes a definition of the
human-readable CCS static data YAML format and a program to convert it into
binary.
Register definition generator
-----------------------------


@ -13,7 +13,6 @@ Video4Linux devices
v4l2-subdev
v4l2-event
v4l2-controls
v4l2-videobuf
v4l2-videobuf2
v4l2-dv-timings
v4l2-flash-led-class


@ -157,14 +157,6 @@ changing the e.g. exposure of the webcam.
Of course, you can always do all the locking yourself by leaving both lock
pointers at ``NULL``.
If you use the old :ref:`videobuf framework <vb_framework>` then you must
pass the :c:type:`video_device`->lock to the videobuf queue initialize
function: if videobuf has to wait for a frame to arrive, then it will
temporarily unlock the lock and relock it afterwards. If your driver also
waits in the code, then you should do the same to allow other
processes to access the device node while the first process is waiting for
something.
In the case of :ref:`videobuf2 <vb2_framework>` you will need to implement the
``wait_prepare()`` and ``wait_finish()`` callbacks to unlock/lock if applicable.
If you use the ``queue->lock`` pointer, then you can use the helper functions


@ -1,403 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
.. _vb_framework:
Videobuf Framework
==================
Author: Jonathan Corbet <corbet@lwn.net>
Current as of 2.6.33
.. note::

    The videobuf framework was deprecated in favor of videobuf2. It should
    not be used in new drivers.
Introduction
------------
The videobuf layer functions as a sort of glue layer between a V4L2 driver
and user space. It handles the allocation and management of buffers for
the storage of video frames. There is a set of functions which can be used
to implement many of the standard POSIX I/O system calls, including read(),
poll(), and, happily, mmap(). Another set of functions can be used to
implement the bulk of the V4L2 ioctl() calls related to streaming I/O,
including buffer allocation, queueing and dequeueing, and streaming
control. Using videobuf imposes a few design decisions on the driver
author, but the payback comes in the form of reduced code in the driver and
a consistent implementation of the V4L2 user-space API.
Buffer types
------------
Not all video devices use the same kind of buffers. In fact, there are (at
least) three common variations:
- Buffers which are scattered in both the physical and (kernel) virtual
address spaces. (Almost) all user-space buffers are like this, but it
makes great sense to allocate kernel-space buffers this way as well when
it is possible. Unfortunately, it is not always possible; working with
this kind of buffer normally requires hardware which can do
scatter/gather DMA operations.
- Buffers which are physically scattered, but which are virtually
contiguous; buffers allocated with vmalloc(), in other words. These
buffers are just as hard to use for DMA operations, but they can be
useful in situations where DMA is not available but virtually-contiguous
buffers are convenient.
- Buffers which are physically contiguous. Allocation of this kind of
buffer can be unreliable on fragmented systems, but simpler DMA
controllers cannot deal with anything else.
Videobuf can work with all three types of buffers, but the driver author
must pick one at the outset and design the driver around that decision.
[It's worth noting that there's a fourth kind of buffer: "overlay" buffers
which are located within the system's video memory. The overlay
functionality is considered to be deprecated for most use, but it still
shows up occasionally in system-on-chip drivers where the performance
benefits merit the use of this technique. Overlay buffers can be handled
as a form of scattered buffer, but there are very few implementations in
the kernel and a description of this technique is currently beyond the
scope of this document.]
Data structures, callbacks, and initialization
----------------------------------------------
Depending on which type of buffers are being used, the driver should
include one of the following files:
.. code-block:: none
<media/videobuf-dma-sg.h> /* Physically scattered */
<media/videobuf-vmalloc.h> /* vmalloc() buffers */
<media/videobuf-dma-contig.h> /* Physically contiguous */
The driver's data structure describing a V4L2 device should include a
struct videobuf_queue instance for the management of the buffer queue,
along with a list_head for the queue of available buffers. There will also
need to be an interrupt-safe spinlock which is used to protect (at least)
the queue.
The next step is to write four simple callbacks to help videobuf deal with
the management of buffers:
.. code-block:: none
struct videobuf_queue_ops {
int (*buf_setup)(struct videobuf_queue *q,
unsigned int *count, unsigned int *size);
int (*buf_prepare)(struct videobuf_queue *q,
struct videobuf_buffer *vb,
enum v4l2_field field);
void (*buf_queue)(struct videobuf_queue *q,
struct videobuf_buffer *vb);
void (*buf_release)(struct videobuf_queue *q,
struct videobuf_buffer *vb);
};
buf_setup() is called early in the I/O process, when streaming is being
initiated; its purpose is to tell videobuf about the I/O stream. The count
parameter will be a suggested number of buffers to use; the driver should
check it for rationality and adjust it if need be. As a practical rule, a
minimum of two buffers are needed for proper streaming, and there is
usually a maximum (which cannot exceed 32) which makes sense for each
device. The size parameter should be set to the expected (maximum) size
for each frame of data.
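The count adjustment described above amounts to a simple clamp; the bounds here are illustrative (a driver picks its own maximum, which may not exceed 32), and the function name is hypothetical:

```c
/* Clamp a requested buffer count to sane bounds for this hypothetical
 * device: at least 2 for proper streaming, at most the videobuf limit of 32. */
static unsigned int buf_setup_clamp_count(unsigned int requested)
{
	if (requested < 2)
		return 2;
	if (requested > 32)
		return 32;
	return requested;
}
```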
Each buffer (in the form of a struct videobuf_buffer pointer) will be
passed to buf_prepare(), which should set the buffer's size, width, height,
and field fields properly. If the buffer's state field is
VIDEOBUF_NEEDS_INIT, the driver should pass it to:
.. code-block:: none
int videobuf_iolock(struct videobuf_queue* q, struct videobuf_buffer *vb,
struct v4l2_framebuffer *fbuf);
Among other things, this call will usually allocate memory for the buffer.
Finally, the buf_prepare() function should set the buffer's state to
VIDEOBUF_PREPARED.
When a buffer is queued for I/O, it is passed to buf_queue(), which should
put it onto the driver's list of available buffers and set its state to
VIDEOBUF_QUEUED. Note that this function is called with the queue spinlock
held; if it tries to acquire it as well things will come to a screeching
halt. Yes, this is the voice of experience. Note also that videobuf may
wait on the first buffer in the queue; placing other buffers in front of it
could again gum up the works. So use list_add_tail() to enqueue buffers.
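The tail-append requirement can be sketched like this; the minimal singly linked queue below stands in for the kernel's ``list_head`` and ``list_add_tail()``, and all names are hypothetical:

```c
#include <stddef.h>

struct fake_buffer {
	struct fake_buffer *next;
	int state;	/* 0 = idle, 1 = queued (stand-in for VIDEOBUF_QUEUED) */
};

static struct fake_buffer *queue_head, *queue_tail;

/* Sketch of a buf_queue() callback: append at the tail, never the head,
 * because videobuf may already be waiting on the first buffer in the queue. */
static void buf_queue_sketch(struct fake_buffer *vb)
{
	vb->next = NULL;
	if (queue_tail)
		queue_tail->next = vb;
	else
		queue_head = vb;
	queue_tail = vb;
	vb->state = 1;	/* VIDEOBUF_QUEUED */
}
```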
Finally, buf_release() is called when a buffer is no longer intended to be
used. The driver should ensure that there is no I/O active on the buffer,
then pass it to the appropriate free routine(s):
.. code-block:: none
/* Scatter/gather drivers */
int videobuf_dma_unmap(struct videobuf_queue *q,
struct videobuf_dmabuf *dma);
int videobuf_dma_free(struct videobuf_dmabuf *dma);
/* vmalloc drivers */
void videobuf_vmalloc_free (struct videobuf_buffer *buf);
/* Contiguous drivers */
void videobuf_dma_contig_free(struct videobuf_queue *q,
struct videobuf_buffer *buf);
One way to ensure that a buffer is no longer under I/O is to pass it to:
.. code-block:: none
int videobuf_waiton(struct videobuf_buffer *vb, int non_blocking, int intr);
Here, vb is the buffer, non_blocking indicates whether non-blocking I/O
should be used (it should be zero in the buf_release() case), and intr
controls whether an interruptible wait is used.
File operations
---------------
At this point, much of the work is done; much of the rest is slipping
videobuf calls into the implementation of the other driver callbacks. The
first step is in the open() function, which must initialize the
videobuf queue. The function to use depends on the type of buffer used:
.. code-block:: none
void videobuf_queue_sg_init(struct videobuf_queue *q,
struct videobuf_queue_ops *ops,
struct device *dev,
spinlock_t *irqlock,
enum v4l2_buf_type type,
enum v4l2_field field,
unsigned int msize,
void *priv);
void videobuf_queue_vmalloc_init(struct videobuf_queue *q,
struct videobuf_queue_ops *ops,
struct device *dev,
spinlock_t *irqlock,
enum v4l2_buf_type type,
enum v4l2_field field,
unsigned int msize,
void *priv);
void videobuf_queue_dma_contig_init(struct videobuf_queue *q,
struct videobuf_queue_ops *ops,
struct device *dev,
spinlock_t *irqlock,
enum v4l2_buf_type type,
enum v4l2_field field,
unsigned int msize,
void *priv);
In each case, the parameters are the same: q is the queue structure for the
device, ops is the set of callbacks as described above, dev is the device
structure for this video device, irqlock is an interrupt-safe spinlock to
protect access to the data structures, type is the buffer type used by the
device (cameras will use V4L2_BUF_TYPE_VIDEO_CAPTURE, for example), field
describes which field is being captured (often V4L2_FIELD_NONE for
progressive devices), msize is the size of any containing structure used
around struct videobuf_buffer, and priv is a private data pointer which
shows up in the priv_data field of struct videobuf_queue. Note that these
are void functions which, evidently, are immune to failure.
V4L2 capture drivers can be written to support either of two APIs: the
read() system call and the rather more complicated streaming mechanism. As
a general rule, it is necessary to support both to ensure that all
applications have a chance of working with the device. Videobuf makes it
easy to do that with the same code. To implement read(), the driver need
only make a call to one of:
.. code-block:: none
ssize_t videobuf_read_one(struct videobuf_queue *q,
char __user *data, size_t count,
loff_t *ppos, int nonblocking);
ssize_t videobuf_read_stream(struct videobuf_queue *q,
char __user *data, size_t count,
loff_t *ppos, int vbihack, int nonblocking);
Either one of these functions will read frame data into data, returning the
amount actually read; the difference is that videobuf_read_one() will only
read a single frame, while videobuf_read_stream() will read multiple frames
if they are needed to satisfy the count requested by the application. A
typical driver read() implementation will start the capture engine, call
one of the above functions, then stop the engine before returning (though a
smarter implementation might leave the engine running for a little while in
anticipation of another read() call happening in the near future).
The poll() function can usually be implemented with a direct call to:
.. code-block:: none
unsigned int videobuf_poll_stream(struct file *file,
struct videobuf_queue *q,
poll_table *wait);
Note that the actual wait queue eventually used will be the one associated
with the first available buffer.
When streaming I/O is done to kernel-space buffers, the driver must support
the mmap() system call to enable user space to access the data. In many
V4L2 drivers, the often-complex mmap() implementation simplifies to a
single call to:
.. code-block:: none
int videobuf_mmap_mapper(struct videobuf_queue *q,
struct vm_area_struct *vma);
Everything else is handled by the videobuf code.
The release() function requires two separate videobuf calls:
.. code-block:: none
void videobuf_stop(struct videobuf_queue *q);
int videobuf_mmap_free(struct videobuf_queue *q);
The call to videobuf_stop() terminates any I/O in progress - though it is
still up to the driver to stop the capture engine. The call to
videobuf_mmap_free() will ensure that all buffers have been unmapped; if
so, they will all be passed to the buf_release() callback. If buffers
remain mapped, videobuf_mmap_free() returns an error code instead. The
purpose is clearly to cause the closing of the file descriptor to fail if
buffers are still mapped, but every driver in the 2.6.32 kernel cheerfully
ignores its return value.
ioctl() operations
------------------
The V4L2 API includes a very long list of driver callbacks to respond to
the many ioctl() commands made available to user space. A number of these
- those associated with streaming I/O - turn almost directly into videobuf
calls. The relevant helper functions are:
.. code-block:: none
int videobuf_reqbufs(struct videobuf_queue *q,
struct v4l2_requestbuffers *req);
int videobuf_querybuf(struct videobuf_queue *q, struct v4l2_buffer *b);
int videobuf_qbuf(struct videobuf_queue *q, struct v4l2_buffer *b);
int videobuf_dqbuf(struct videobuf_queue *q, struct v4l2_buffer *b,
int nonblocking);
int videobuf_streamon(struct videobuf_queue *q);
int videobuf_streamoff(struct videobuf_queue *q);
So, for example, a VIDIOC_REQBUFS call turns into a call to the driver's
vidioc_reqbufs() callback which, in turn, usually only needs to locate the
proper struct videobuf_queue pointer and pass it to videobuf_reqbufs().
These support functions can replace a great deal of buffer management
boilerplate in a lot of V4L2 drivers.
The vidioc_streamon() and vidioc_streamoff() functions will be a bit more
complex, of course, since they will also need to deal with starting and
stopping the capture engine.
Buffer allocation
-----------------
Thus far, we have talked about buffers, but have not looked at how they are
allocated. The scatter/gather case is the most complex on this front. For
allocation, the driver can leave buffer allocation entirely up to the
videobuf layer; in this case, buffers will be allocated as anonymous
user-space pages and will be very scattered indeed. If the application is
using user-space buffers, no allocation is needed; the videobuf layer will
take care of calling get_user_pages() and filling in the scatterlist array.
If the driver needs to do its own memory allocation, it should be done in
the vidioc_reqbufs() function, *after* calling videobuf_reqbufs(). The
first step is a call to:
.. code-block:: none
struct videobuf_dmabuf *videobuf_to_dma(struct videobuf_buffer *buf);
The returned videobuf_dmabuf structure (defined in
<media/videobuf-dma-sg.h>) includes a couple of relevant fields:
.. code-block:: none
struct scatterlist *sglist;
int sglen;
The driver must allocate an appropriately-sized scatterlist array and
populate it with pointers to the pieces of the allocated buffer; sglen
should be set to the length of the array.
Drivers using the vmalloc() method need not (and cannot) concern themselves
with buffer allocation at all; videobuf will handle those details. The
same is normally true of contiguous-DMA drivers as well; videobuf will
allocate the buffers (with dma_alloc_coherent()) when it sees fit. That
means that these drivers may be trying to do high-order allocations at any
time, an operation which is not always guaranteed to work. Some drivers
play tricks by allocating DMA space at system boot time; videobuf does not
currently play well with those drivers.
As of 2.6.31, contiguous-DMA drivers can work with a user-supplied buffer,
as long as that buffer is physically contiguous. Normal user-space
allocations will not meet that criterion, but buffers obtained from other
kernel drivers, or those contained within huge pages, will work with these
drivers.
Filling the buffers
-------------------
The final part of a videobuf implementation has no direct callback - it's
the portion of the code which actually puts frame data into the buffers,
usually in response to interrupts from the device. For all types of
drivers, this process works approximately as follows:
- Obtain the next available buffer and make sure that somebody is actually
waiting for it.
- Get a pointer to the memory and put video data there.
- Mark the buffer as done and wake up the process waiting for it.
Step (1) above is done by looking at the driver-managed list_head structure
- the one which is filled in the buf_queue() callback. Because starting
the engine and enqueueing buffers are done in separate steps, it's possible
for the engine to be running without any buffers available - in the
vmalloc() case especially. So the driver should be prepared for the list
to be empty. It is equally possible that nobody is yet interested in the
buffer; the driver should not remove it from the list or fill it until a
process is waiting on it. That test can be done by examining the buffer's
done field (a wait_queue_head_t structure) with waitqueue_active().
A buffer's state should be set to VIDEOBUF_ACTIVE before being mapped for
DMA; that ensures that the videobuf layer will not try to do anything with
it while the device is transferring data.
For scatter/gather drivers, the needed memory pointers will be found in the
scatterlist structure described above. Drivers using the vmalloc() method
can get a memory pointer with:
.. code-block:: none
void *videobuf_to_vmalloc(struct videobuf_buffer *buf);
For contiguous DMA drivers, the function to use is:
.. code-block:: none
dma_addr_t videobuf_to_dma_contig(struct videobuf_buffer *buf);
The contiguous DMA API goes out of its way to hide the kernel-space address
of the DMA buffer from drivers.
The final step is to set the size field of the relevant videobuf_buffer
structure to the actual size of the captured image, set state to
VIDEOBUF_DONE, then call wake_up() on the done queue. At this point, the
buffer is owned by the videobuf layer and the driver should not touch it
again.
Developers who are interested in more information can go into the relevant
header files; there are a few low-level functions declared there which have
not been talked about here. Note also that all of these calls are exported
GPL-only, so they will not be available to non-GPL kernel modules.


@ -768,18 +768,6 @@ const char *video_device_node_name(struct video_device *vdev);
this function instead of accessing the video_device::num and
video_device::minor fields.

Video buffer helper functions
-----------------------------

The v4l2 core API provides a standard way (called "videobuf") to handle video
buffers. These methods allow drivers to implement read(), mmap() and overlay()
in a consistent way. Currently supported on devices are scatter/gather DMA
(videobuf-dma-sg), linear DMA (videobuf-dma-contig), and vmalloc-allocated
buffers, mostly used for USB devices (videobuf-vmalloc).

Please see Documentation/driver-api/media/v4l2-videobuf.rst for more
information on how to use the videobuf layer.

v4l2_fh structure
-----------------


@ -0,0 +1,104 @@
.. SPDX-License-Identifier: GPL-2.0
.. _media_using_camera_sensor_drivers:
Using camera sensor drivers
===========================
This section describes common practices for how the V4L2 sub-device interface is
used to control the camera sensor drivers.
You may also find :ref:`media_writing_camera_sensor_drivers` useful.
Frame size
----------
There are two distinct ways to configure the frame size produced by camera
sensors.
Freely configurable camera sensor drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Freely configurable camera sensor drivers expose the device's internal
processing pipeline as one or more sub-devices with different cropping and
scaling configurations. The output size of the device is the result of a series
of cropping and scaling operations from the device's pixel array's size.
An example of such a driver is the CCS driver.
Register list based drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Register list based drivers are generally not able to configure the device
they control based on arbitrary user requests. Instead, they are limited to a
number of preset configurations, each combining several parameters that are
independent at the hardware level. The driver picks such a configuration based
on the format set on a source pad at the end of the device's internal pipeline.
Most sensor drivers are implemented this way.
Frame interval configuration
----------------------------
There are two different methods for obtaining possibilities for different frame
intervals as well as configuring the frame interval. Which one to implement
depends on the type of the device.
Raw camera sensors
~~~~~~~~~~~~~~~~~~
Instead of being set directly as a high level parameter, the frame interval of
a raw camera sensor is the result of configuring a number of implementation
specific parameters. Luckily, these parameters tend to be the same for more or
less all modern raw camera sensors.
The frame interval is calculated using the following equation::
	frame interval = (analogue crop width + horizontal blanking) *
			 (analogue crop height + vertical blanking) / pixel rate
The formula is bus independent and is applicable to raw timing parameters on a
large variety of devices beyond camera sensors. Devices that have no analogue
crop use the full source image size, i.e. the pixel array size.
Horizontal and vertical blanking are specified by ``V4L2_CID_HBLANK`` and
``V4L2_CID_VBLANK``, respectively. The unit of the ``V4L2_CID_HBLANK`` control
is pixels and the unit of the ``V4L2_CID_VBLANK`` is lines. The pixel rate in
the sensor's **pixel array** is specified by ``V4L2_CID_PIXEL_RATE`` in the same
sub-device. The unit of that control is pixels per second.
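
As an illustration, the equation and the control units above can be combined in
a small helper. This is a sketch of ours, not part of the documented kAPI or
uAPI; the function name and the microsecond result unit are our choices::

```c
#include <stdint.h>

/*
 * Frame interval in microseconds, following the equation above:
 *
 *   frame interval = (analogue crop width + horizontal blanking) *
 *                    (analogue crop height + vertical blanking) / pixel rate
 *
 * hblank is in pixels (V4L2_CID_HBLANK), vblank in lines (V4L2_CID_VBLANK)
 * and pixel_rate in pixels per second (V4L2_CID_PIXEL_RATE).
 */
static uint64_t frame_interval_us(uint32_t crop_width, uint32_t hblank,
				  uint32_t crop_height, uint32_t vblank,
				  uint64_t pixel_rate)
{
	uint64_t line_length_pck = crop_width + hblank;
	uint64_t frame_length_lines = crop_height + vblank;

	/* Multiply first to avoid losing precision in the division. */
	return line_length_pck * frame_length_lines * 1000000ULL / pixel_rate;
}
```

For example, a 3840x2160 analogue crop with 800 pixels of horizontal blanking,
90 lines of vertical blanking and a 600 MHz pixel rate yields a 17.4 ms frame
interval, i.e. roughly 57 frames per second.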

Register list based drivers need to implement read-only sub-device nodes for
this purpose. Devices that are not register list based need these sub-device
nodes to configure the device's internal processing pipeline.

The first entity in the linear pipeline is the pixel array. The pixel array may
be followed by other entities that are there to allow configuring binning,
skipping, scaling or digital crop, see :ref:`VIDIOC_SUBDEV_G_SELECTION
<VIDIOC_SUBDEV_G_SELECTION>`.

USB cameras etc. devices
~~~~~~~~~~~~~~~~~~~~~~~~

USB video class hardware, as well as many cameras offering a similar higher
level interface natively, generally use the concept of frame interval (or frame
rate) at the device level in firmware or hardware. This means that the lower
level controls implemented by raw cameras may not be used in the uAPI (or even
the kAPI) to control the frame interval on these devices.

Rotation, orientation and flipping
----------------------------------

Some systems have the camera sensor mounted upside down compared to its natural
mounting rotation. In such cases, drivers shall expose the information to
userspace with the :ref:`V4L2_CID_CAMERA_SENSOR_ROTATION
<v4l2-camera-sensor-rotation>` control.

Sensor drivers shall also report the sensor's mounting orientation with the
:ref:`V4L2_CID_CAMERA_SENSOR_ORIENTATION <v4l2-camera-sensor-orientation>`
control.

Sensor drivers that have any vertical or horizontal flips embedded in the
register programming sequences shall initialize the :ref:`V4L2_CID_HFLIP
<v4l2-cid-hflip>` and :ref:`V4L2_CID_VFLIP <v4l2-cid-vflip>` controls with the
values programmed by the register sequences. The default values of these
controls shall be 0 (disabled). In particular, these controls shall not be
inverted, independently of the sensor's mounting rotation.

View File

@ -32,11 +32,13 @@ For more details see the file COPYING in the source distribution of Linux.
:numbered:
aspeed-video
camera-sensor
ccs
cx2341x-uapi
dw100
imx-uapi
max2175
npcm-video
omap3isp-uapi
st-vgxy61
uvcvideo

View File

@ -0,0 +1,66 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

NPCM video driver
=================

This driver is used to control the Video Capture/Differentiation (VCD) engine
and the Encoding Compression Engine (ECE) present on Nuvoton NPCM SoCs. The VCD
can capture a frame from digital video input and compare two frames in memory,
and the ECE can compress the frame data into HEXTILE format.

Driver-specific Controls
------------------------

V4L2_CID_NPCM_CAPTURE_MODE
~~~~~~~~~~~~~~~~~~~~~~~~~~

The VCD engine supports two modes:

- COMPLETE mode: capture the next complete frame into memory.

- DIFF mode: compare the incoming frame with the frame stored in memory, and
  update the differentiated frame in memory.

Applications can use the ``V4L2_CID_NPCM_CAPTURE_MODE`` control to set the VCD
mode with one of the following control values (enum v4l2_npcm_capture_mode):

- ``V4L2_NPCM_CAPTURE_MODE_COMPLETE``: set the VCD to COMPLETE mode.
- ``V4L2_NPCM_CAPTURE_MODE_DIFF``: set the VCD to DIFF mode.

V4L2_CID_NPCM_RECT_COUNT
~~~~~~~~~~~~~~~~~~~~~~~~

If the V4L2_PIX_FMT_HEXTILE format is used, the VCD will capture the frame data
and the ECE will compress the data into HEXTILE rectangles and store them in
the V4L2 video buffer with the layout defined in the Remote Framebuffer
Protocol (RFC 6143, https://www.rfc-editor.org/rfc/rfc6143.html#section-7.6.1)::

  +--------------+--------------+-------------------+
  | No. of bytes | Type [Value] | Description       |
  +--------------+--------------+-------------------+
  | 2            | U16          | x-position        |
  | 2            | U16          | y-position        |
  | 2            | U16          | width             |
  | 2            | U16          | height            |
  | 4            | S32          | encoding-type (5) |
  +--------------+--------------+-------------------+
  | HEXTILE rectangle data                          |
  +-------------------------------------------------+

Applications can get the video buffer through VIDIOC_DQBUF, and then use the
``V4L2_CID_NPCM_RECT_COUNT`` control to get the number of HEXTILE rectangles
in this buffer.
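
As a sketch, the 12-byte header preceding each rectangle's HEXTILE data could
be parsed as below. The struct and helper names are ours, and note that the
RFC 6143 fields are big-endian on the wire::

```c
#include <stdint.h>
#include <arpa/inet.h>	/* ntohs()/ntohl() for the big-endian fields */

/*
 * Header preceding each HEXTILE rectangle in the captured buffer,
 * following RFC 6143 section 7.6.1.
 */
struct rfb_rect_header {
	uint16_t x;		/* x-position */
	uint16_t y;		/* y-position */
	uint16_t width;
	uint16_t height;
	int32_t  encoding;	/* 5 == HEXTILE */
} __attribute__((packed));

/* Convert a header read from the buffer to host byte order. */
static struct rfb_rect_header
rfb_rect_header_to_host(const struct rfb_rect_header *raw)
{
	struct rfb_rect_header h = {
		.x = ntohs(raw->x),
		.y = ntohs(raw->y),
		.width = ntohs(raw->width),
		.height = ntohs(raw->height),
		.encoding = (int32_t)ntohl((uint32_t)raw->encoding),
	};
	return h;
}
```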

References
----------

include/uapi/linux/npcm-video.h

**Copyright** |copy| 2022 Nuvoton Technologies

View File

@ -59,9 +59,7 @@ Generic Error Codes
- - ``ENOTTY``
- The ioctl is not supported by the driver, actually meaning that
the required functionality is not available, or the file
descriptor is not for a media device.
- The ioctl is not supported by the file descriptor.
- - ``ENOSPC``

View File

@ -549,9 +549,9 @@ Buffer Flags
- 0x00000400
- The buffer has been prepared for I/O and can be queued by the
application. Drivers set or clear this flag when the
:ref:`VIDIOC_QUERYBUF`,
:ref:`VIDIOC_QUERYBUF <VIDIOC_QUERYBUF>`,
:ref:`VIDIOC_PREPARE_BUF <VIDIOC_QBUF>`,
:ref:`VIDIOC_QBUF` or
:ref:`VIDIOC_QBUF <VIDIOC_QBUF>` or
:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl is called.
* .. _`V4L2-BUF-FLAG-NO-CACHE-INVALIDATE`:

View File

@ -143,9 +143,13 @@ Control IDs
recognise the difference between digital and analogue gain use
controls ``V4L2_CID_DIGITAL_GAIN`` and ``V4L2_CID_ANALOGUE_GAIN``.
.. _v4l2-cid-hflip:
``V4L2_CID_HFLIP`` ``(boolean)``
Mirror the picture horizontally.
.. _v4l2-cid-vflip:
``V4L2_CID_VFLIP`` ``(boolean)``
Mirror the picture vertically.

View File

@ -579,20 +579,19 @@ is started.
There are three steps in configuring the streams:
1) Set up links. Connect the pads between sub-devices using the :ref:`Media
Controller API <media_controller>`
1. Set up links. Connect the pads between sub-devices using the
:ref:`Media Controller API <media_controller>`
2) Streams. Streams are declared and their routing is configured by
setting the routing table for the sub-device using
:ref:`VIDIOC_SUBDEV_S_ROUTING <VIDIOC_SUBDEV_G_ROUTING>` ioctl. Note that
setting the routing table will reset formats and selections in the
sub-device to default values.
2. Streams. Streams are declared and their routing is configured by setting the
routing table for the sub-device using :ref:`VIDIOC_SUBDEV_S_ROUTING
<VIDIOC_SUBDEV_G_ROUTING>` ioctl. Note that setting the routing table will
reset formats and selections in the sub-device to default values.
3) Configure formats and selections. Formats and selections of each stream
are configured separately as documented for plain sub-devices in
:ref:`format-propagation`. The stream ID is set to the same stream ID
associated with either sink or source pads of routes configured using the
:ref:`VIDIOC_SUBDEV_S_ROUTING <VIDIOC_SUBDEV_G_ROUTING>` ioctl.
3. Configure formats and selections. Formats and selections of each stream are
configured separately as documented for plain sub-devices in
:ref:`format-propagation`. The stream ID is set to the same stream ID
associated with either sink or source pads of routes configured using the
:ref:`VIDIOC_SUBDEV_S_ROUTING <VIDIOC_SUBDEV_G_ROUTING>` ioctl.
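
Step 2 above (declaring streams via a routing table) can be sketched as
follows. These are local mirror structures for illustration only; real code
uses struct v4l2_subdev_route from <linux/v4l2-subdev.h> and applies the table
with the VIDIOC_SUBDEV_S_ROUTING ioctl::

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for V4L2_SUBDEV_ROUTE_FL_ACTIVE. */
#define ROUTE_FL_ACTIVE (1U << 0)

/* Local mirror of the relevant struct v4l2_subdev_route fields. */
struct route {
	uint32_t sink_pad;
	uint32_t sink_stream;
	uint32_t source_pad;
	uint32_t source_stream;
	uint32_t flags;
};

/*
 * Build a bridge routing table: two sensor streams entering on sink
 * pads 0 and 1, both multiplexed onto source pad 2 as streams 0 and 1.
 * Returns the number of routes written.
 */
static size_t build_bridge_routes(struct route *routes)
{
	routes[0] = (struct route){
		.sink_pad = 0, .sink_stream = 0,
		.source_pad = 2, .source_stream = 0,
		.flags = ROUTE_FL_ACTIVE,
	};
	routes[1] = (struct route){
		.sink_pad = 1, .sink_stream = 0,
		.source_pad = 2, .source_stream = 1,
		.flags = ROUTE_FL_ACTIVE,
	};
	return 2;
}
```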
Multiplexed streams setup example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -618,11 +617,11 @@ modeled as V4L2 devices, exposed to userspace via /dev/videoX nodes.
To configure this pipeline, the userspace must take the following steps:
1) Set up media links between entities: connect the sensors to the bridge,
bridge to the receiver, and the receiver to the DMA engines. This step does
not differ from normal non-multiplexed media controller setup.
1. Set up media links between entities: connect the sensors to the bridge,
bridge to the receiver, and the receiver to the DMA engines. This step does
not differ from normal non-multiplexed media controller setup.
2) Configure routing
2. Configure routing
.. flat-table:: Bridge routing table
:header-rows: 1
@ -656,14 +655,14 @@ not differ from normal non-multiplexed media controller setup.
- V4L2_SUBDEV_ROUTE_FL_ACTIVE
- Pixel data stream from Sensor B
3) Configure formats and selections
3. Configure formats and selections
After configuring routing, the next step is configuring the formats and
selections for the streams. This is similar to performing this step without
streams, with just one exception: the ``stream`` field needs to be assigned
to the value of the stream ID.
After configuring routing, the next step is configuring the formats and
selections for the streams. This is similar to performing this step without
streams, with just one exception: the ``stream`` field needs to be assigned
to the value of the stream ID.
A common way to accomplish this is to start from the sensors and propagate the
configurations along the stream towards the receiver,
using :ref:`VIDIOC_SUBDEV_S_FMT <VIDIOC_SUBDEV_G_FMT>` ioctls to configure each
stream endpoint in each sub-device.
A common way to accomplish this is to start from the sensors and propagate
the configurations along the stream towards the receiver, using
:ref:`VIDIOC_SUBDEV_S_FMT <VIDIOC_SUBDEV_G_FMT>` ioctls to configure each
stream endpoint in each sub-device.

View File

@ -33,6 +33,27 @@ current DV timings they use the
the DV timings as seen by the video receiver applications use the
:ref:`VIDIOC_QUERY_DV_TIMINGS` ioctl.
When the hardware detects a video source change (e.g. the video
signal appears or disappears, or the video resolution changes), then
it will issue a `V4L2_EVENT_SOURCE_CHANGE` event. Use the
:ref:`ioctl VIDIOC_SUBSCRIBE_EVENT <VIDIOC_SUBSCRIBE_EVENT>` and the
:ref:`VIDIOC_DQEVENT` to check if this event was reported.
If the video signal changed, then the application has to stop
streaming, free all buffers, and call the :ref:`VIDIOC_QUERY_DV_TIMINGS`
to obtain the new video timings, and if they are valid, it can set
those by calling the :ref:`ioctl VIDIOC_S_DV_TIMINGS <VIDIOC_G_DV_TIMINGS>`.
This will also update the format, so use the :ref:`ioctl VIDIOC_G_FMT <VIDIOC_G_FMT>`
to obtain the new format. Now the application can allocate new buffers
and start streaming again.
The :ref:`VIDIOC_QUERY_DV_TIMINGS` will just report what the
hardware detects, it will never change the configuration. If the
currently set timings and the actually detected timings differ, then
typically this will mean that you will not be able to capture any
video. The correct approach is to rely on the `V4L2_EVENT_SOURCE_CHANGE`
event so you know when something changed.
Applications can make use of the :ref:`input-capabilities` and
:ref:`output-capabilities` flags to determine whether the digital
video ioctls can be used with the given input or output.

View File

@ -288,6 +288,13 @@ please make a proposal on the linux-media mailing list.
- 'MT2110R'
- This format is two-planar 10-Bit raster mode and having similitude with
``V4L2_PIX_FMT_MM21`` in term of alignment and tiling. Used for AVC.
* .. _V4L2-PIX-FMT-HEXTILE:
- ``V4L2_PIX_FMT_HEXTILE``
- 'HXTL'
- Compressed format used by Nuvoton NPCM video driver. This format is
defined in Remote Framebuffer Protocol (RFC 6143, chapter 7.7.4 Hextile
Encoding).
.. raw:: latex
\normalsize

View File

@ -60,7 +60,7 @@ Each cell is one byte.
G\ :sub:`10low`\ (bits 3--0)
- G\ :sub:`12high`
- R\ :sub:`13high`
- R\ :sub:`13low`\ (bits 3--2)
- R\ :sub:`13low`\ (bits 7--4)
G\ :sub:`12low`\ (bits 3--0)
- - start + 12:
@ -82,6 +82,6 @@ Each cell is one byte.
G\ :sub:`30low`\ (bits 3--0)
- G\ :sub:`32high`
- R\ :sub:`33high`
- R\ :sub:`33low`\ (bits 3--2)
- R\ :sub:`33low`\ (bits 7--4)
G\ :sub:`32low`\ (bits 3--0)
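
The corrected bit ranges above correspond to the packed 12-bit layout in which
two samples share a third byte of low-order nibbles. A hedged unpacking sketch
(the function name is ours)::

```c
#include <stdint.h>

/*
 * Unpack two 12-bit samples stored in 3 bytes: the first two bytes hold
 * the high 8 bits of each sample; the third byte holds the low 4 bits of
 * the second sample in bits 7--4 and of the first sample in bits 3--0.
 */
static void unpack_12bit_pair(const uint8_t src[3], uint16_t dst[2])
{
	dst[0] = ((uint16_t)src[0] << 4) | (src[2] & 0x0f);
	dst[1] = ((uint16_t)src[1] << 4) | (src[2] >> 4);
}
```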

View File

@ -2510,6 +2510,18 @@ F: drivers/rtc/rtc-nct3018y.c
F: include/dt-bindings/clock/nuvoton,npcm7xx-clock.h
F: include/dt-bindings/clock/nuvoton,npcm845-clk.h
ARM/NUVOTON NPCM VIDEO ENGINE DRIVER
M: Joseph Liu <kwliu@nuvoton.com>
M: Marvin Lin <kflin@nuvoton.com>
L: linux-media@vger.kernel.org
L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/media/nuvoton,npcm-ece.yaml
F: Documentation/devicetree/bindings/media/nuvoton,npcm-vcd.yaml
F: Documentation/userspace-api/media/drivers/npcm-video.rst
F: drivers/media/platform/nuvoton/
F: include/uapi/linux/npcm-video.h
ARM/NUVOTON WPCM450 ARCHITECTURE
M: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
@ -6143,6 +6155,13 @@ L: linux-gpio@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-gpio-mm.c
DIGITEQ AUTOMOTIVE MGB4 V4L2 DRIVER
M: Martin Tuma <martin.tuma@digiteqautomotive.com>
L: linux-media@vger.kernel.org
S: Maintained
F: Documentation/admin-guide/media/mgb4.rst
F: drivers/media/pci/mgb4/
DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <linux@roeck-us.net>
L: linux-i2c@vger.kernel.org
@ -14684,6 +14703,14 @@ L: linux-mtd@lists.infradead.org
S: Maintained
F: drivers/mtd/devices/docg3*
MT9M114 ONSEMI SENSOR DRIVER
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml
F: drivers/media/i2c/mt9m114.c
MT9P031 APTINA CAMERA SENSOR
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
L: linux-media@vger.kernel.org
@ -15926,7 +15953,7 @@ L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml
F: drivers/media/i2c/ov5647.c
F: drivers/media/i2c/ov4689.c
OMNIVISION OV5640 SENSOR DRIVER
M: Steve Longerbeam <slongerbeam@gmail.com>
@ -16016,8 +16043,7 @@ F: Documentation/devicetree/bindings/media/i2c/ovti,ov8858.yaml
F: drivers/media/i2c/ov8858.c
OMNIVISION OV9282 SENSOR DRIVER
M: Paul J. Murphy <paul.j.murphy@intel.com>
M: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
M: Dave Stevenson <dave.stevenson@raspberrypi.com>
L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
@ -18666,6 +18692,7 @@ F: sound/soc/rockchip/rockchip_i2s_tdm.*
ROCKCHIP ISP V1 DRIVER
M: Dafna Hirschfeld <dafna@fastmail.com>
M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
L: linux-media@vger.kernel.org
L: linux-rockchip@lists.infradead.org
S: Maintained
@ -20160,19 +20187,15 @@ T: git git://linuxtv.org/media_tree.git
F: drivers/media/i2c/imx319.c
SONY IMX334 SENSOR DRIVER
M: Paul J. Murphy <paul.j.murphy@intel.com>
M: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
L: linux-media@vger.kernel.org
S: Maintained
S: Orphan
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/sony,imx334.yaml
F: drivers/media/i2c/imx334.c
SONY IMX335 SENSOR DRIVER
M: Paul J. Murphy <paul.j.murphy@intel.com>
M: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
L: linux-media@vger.kernel.org
S: Maintained
S: Orphan
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/sony,imx335.yaml
F: drivers/media/i2c/imx335.c
@ -20185,10 +20208,8 @@ T: git git://linuxtv.org/media_tree.git
F: drivers/media/i2c/imx355.c
SONY IMX412 SENSOR DRIVER
M: Paul J. Murphy <paul.j.murphy@intel.com>
M: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
L: linux-media@vger.kernel.org
S: Maintained
S: Orphan
T: git git://linuxtv.org/media_tree.git
F: Documentation/devicetree/bindings/media/i2c/sony,imx412.yaml
F: drivers/media/i2c/imx412.c
@ -21752,6 +21773,13 @@ F: Documentation/devicetree/bindings/media/i2c/ti,ds90*
F: drivers/media/i2c/ds90*
F: include/media/i2c/ds90*
TI J721E CSI2RX DRIVER
M: Jai Luthra <j-luthra@ti.com>
L: linux-media@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/media/ti,j721e-csi2rx-shim.yaml
F: drivers/media/platform/ti/j721e-csi2rx/
TI KEYSTONE MULTICORE NAVIGATOR DRIVERS
M: Nishanth Menon <nm@ti.com>
M: Santosh Shilimkar <ssantosh@kernel.org>

View File

@ -477,7 +477,6 @@ CONFIG_LIRC=y
CONFIG_RC_DEVICES=y
CONFIG_IR_GPIO_TX=m
CONFIG_IR_PWM_TX=m
CONFIG_IR_RX51=m
CONFIG_IR_SPI=m
CONFIG_MEDIA_SUPPORT=m
CONFIG_V4L_PLATFORM_DRIVERS=y

View File

@ -6,7 +6,7 @@
# Please keep it in alphabetic order
obj-$(CONFIG_CEC_CROS_EC) += cros-ec/
obj-$(CONFIG_CEC_GPIO) += cec-gpio/
obj-$(CONFIG_CEC_MESON_AO) += meson/
obj-y += meson/
obj-$(CONFIG_CEC_SAMSUNG_S5P) += s5p/
obj-$(CONFIG_CEC_SECO) += seco/
obj-$(CONFIG_CEC_STI) += sti/

View File

@ -21,51 +21,125 @@
#define DRV_NAME "cros-ec-cec"
/**
* struct cros_ec_cec_port - Driver data for a single EC CEC port
*
* @port_num: port number
* @adap: CEC adapter
* @notify: CEC notifier pointer
* @rx_msg: storage for a received message
* @cros_ec_cec: pointer to the parent struct
*/
struct cros_ec_cec_port {
int port_num;
struct cec_adapter *adap;
struct cec_notifier *notify;
struct cec_msg rx_msg;
struct cros_ec_cec *cros_ec_cec;
};
/**
* struct cros_ec_cec - Driver data for EC CEC
*
* @cros_ec: Pointer to EC device
* @notifier: Notifier info for responding to EC events
* @adap: CEC adapter
* @notify: CEC notifier pointer
* @rx_msg: storage for a received message
* @write_cmd_version: Highest supported version of EC_CMD_CEC_WRITE_MSG.
* @num_ports: Number of CEC ports
* @ports: Array of ports
*/
struct cros_ec_cec {
struct cros_ec_device *cros_ec;
struct notifier_block notifier;
struct cec_adapter *adap;
struct cec_notifier *notify;
struct cec_msg rx_msg;
int write_cmd_version;
int num_ports;
struct cros_ec_cec_port *ports[EC_CEC_MAX_PORTS];
};
static void cros_ec_cec_received_message(struct cros_ec_cec_port *port,
uint8_t *msg, uint8_t len)
{
if (len > CEC_MAX_MSG_SIZE)
len = CEC_MAX_MSG_SIZE;
port->rx_msg.len = len;
memcpy(port->rx_msg.msg, msg, len);
cec_received_msg(port->adap, &port->rx_msg);
}
static void handle_cec_message(struct cros_ec_cec *cros_ec_cec)
{
struct cros_ec_device *cros_ec = cros_ec_cec->cros_ec;
uint8_t *cec_message = cros_ec->event_data.data.cec_message;
unsigned int len = cros_ec->event_size;
struct cros_ec_cec_port *port;
/*
* There are two ways of receiving CEC messages:
* 1. Old EC firmware which only supports one port sends the data in a
* cec_message MKBP event.
* 2. New EC firmware which supports multiple ports uses
* EC_MKBP_CEC_HAVE_DATA to notify that data is ready and
* EC_CMD_CEC_READ_MSG to read it.
* Check that the EC only has one CEC port, and then we can assume the
* message is from port 0.
*/
if (cros_ec_cec->num_ports != 1) {
dev_err(cros_ec->dev,
"received cec_message on device with %d ports\n",
cros_ec_cec->num_ports);
return;
}
port = cros_ec_cec->ports[0];
if (len > CEC_MAX_MSG_SIZE)
len = CEC_MAX_MSG_SIZE;
cros_ec_cec->rx_msg.len = len;
memcpy(cros_ec_cec->rx_msg.msg, cec_message, len);
cros_ec_cec_received_message(port, cec_message, len);
}
cec_received_msg(cros_ec_cec->adap, &cros_ec_cec->rx_msg);
static void cros_ec_cec_read_message(struct cros_ec_cec_port *port)
{
struct cros_ec_device *cros_ec = port->cros_ec_cec->cros_ec;
struct ec_params_cec_read params = {
.port = port->port_num,
};
struct ec_response_cec_read response;
int ret;
ret = cros_ec_cmd(cros_ec, 0, EC_CMD_CEC_READ_MSG, &params,
sizeof(params), &response, sizeof(response));
if (ret < 0) {
dev_err(cros_ec->dev,
"error reading CEC message on EC: %d\n", ret);
return;
}
cros_ec_cec_received_message(port, response.msg, response.msg_len);
}
static void handle_cec_event(struct cros_ec_cec *cros_ec_cec)
{
struct cros_ec_device *cros_ec = cros_ec_cec->cros_ec;
uint32_t events = cros_ec->event_data.data.cec_events;
uint32_t cec_events = cros_ec->event_data.data.cec_events;
uint32_t port_num = EC_MKBP_EVENT_CEC_GET_PORT(cec_events);
uint32_t events = EC_MKBP_EVENT_CEC_GET_EVENTS(cec_events);
struct cros_ec_cec_port *port;
if (port_num >= cros_ec_cec->num_ports) {
dev_err(cros_ec->dev,
"received CEC event for invalid port %d\n", port_num);
return;
}
port = cros_ec_cec->ports[port_num];
if (events & EC_MKBP_CEC_SEND_OK)
cec_transmit_attempt_done(cros_ec_cec->adap,
CEC_TX_STATUS_OK);
cec_transmit_attempt_done(port->adap, CEC_TX_STATUS_OK);
/* FW takes care of all retries, tell core to avoid more retries */
if (events & EC_MKBP_CEC_SEND_FAILED)
cec_transmit_attempt_done(cros_ec_cec->adap,
cec_transmit_attempt_done(port->adap,
CEC_TX_STATUS_MAX_RETRIES |
CEC_TX_STATUS_NACK);
if (events & EC_MKBP_CEC_HAVE_DATA)
cros_ec_cec_read_message(port);
}
static int cros_ec_cec_event(struct notifier_block *nb,
@ -93,20 +167,18 @@ static int cros_ec_cec_event(struct notifier_block *nb,
static int cros_ec_cec_set_log_addr(struct cec_adapter *adap, u8 logical_addr)
{
struct cros_ec_cec *cros_ec_cec = adap->priv;
struct cros_ec_cec_port *port = adap->priv;
struct cros_ec_cec *cros_ec_cec = port->cros_ec_cec;
struct cros_ec_device *cros_ec = cros_ec_cec->cros_ec;
struct {
struct cros_ec_command msg;
struct ec_params_cec_set data;
} __packed msg = {};
struct ec_params_cec_set params = {
.cmd = CEC_CMD_LOGICAL_ADDRESS,
.port = port->port_num,
.val = logical_addr,
};
int ret;
msg.msg.command = EC_CMD_CEC_SET;
msg.msg.outsize = sizeof(msg.data);
msg.data.cmd = CEC_CMD_LOGICAL_ADDRESS;
msg.data.val = logical_addr;
ret = cros_ec_cmd_xfer_status(cros_ec, &msg.msg);
ret = cros_ec_cmd(cros_ec, 0, EC_CMD_CEC_SET, &params, sizeof(params),
NULL, 0);
if (ret < 0) {
dev_err(cros_ec->dev,
"error setting CEC logical address on EC: %d\n", ret);
@ -119,19 +191,26 @@ static int cros_ec_cec_set_log_addr(struct cec_adapter *adap, u8 logical_addr)
static int cros_ec_cec_transmit(struct cec_adapter *adap, u8 attempts,
u32 signal_free_time, struct cec_msg *cec_msg)
{
struct cros_ec_cec *cros_ec_cec = adap->priv;
struct cros_ec_cec_port *port = adap->priv;
struct cros_ec_cec *cros_ec_cec = port->cros_ec_cec;
struct cros_ec_device *cros_ec = cros_ec_cec->cros_ec;
struct {
struct cros_ec_command msg;
struct ec_params_cec_write data;
} __packed msg = {};
struct ec_params_cec_write params;
struct ec_params_cec_write_v1 params_v1;
int ret;
msg.msg.command = EC_CMD_CEC_WRITE_MSG;
msg.msg.outsize = cec_msg->len;
memcpy(msg.data.msg, cec_msg->msg, cec_msg->len);
if (cros_ec_cec->write_cmd_version == 0) {
memcpy(params.msg, cec_msg->msg, cec_msg->len);
ret = cros_ec_cmd(cros_ec, 0, EC_CMD_CEC_WRITE_MSG, &params,
cec_msg->len, NULL, 0);
} else {
params_v1.port = port->port_num;
params_v1.msg_len = cec_msg->len;
memcpy(params_v1.msg, cec_msg->msg, cec_msg->len);
ret = cros_ec_cmd(cros_ec, cros_ec_cec->write_cmd_version,
EC_CMD_CEC_WRITE_MSG, &params_v1,
sizeof(params_v1), NULL, 0);
}
ret = cros_ec_cmd_xfer_status(cros_ec, &msg.msg);
if (ret < 0) {
dev_err(cros_ec->dev,
"error writing CEC msg on EC: %d\n", ret);
@ -143,20 +222,18 @@ static int cros_ec_cec_transmit(struct cec_adapter *adap, u8 attempts,
static int cros_ec_cec_adap_enable(struct cec_adapter *adap, bool enable)
{
struct cros_ec_cec *cros_ec_cec = adap->priv;
struct cros_ec_cec_port *port = adap->priv;
struct cros_ec_cec *cros_ec_cec = port->cros_ec_cec;
struct cros_ec_device *cros_ec = cros_ec_cec->cros_ec;
struct {
struct cros_ec_command msg;
struct ec_params_cec_set data;
} __packed msg = {};
struct ec_params_cec_set params = {
.cmd = CEC_CMD_ENABLE,
.port = port->port_num,
.val = enable,
};
int ret;
msg.msg.command = EC_CMD_CEC_SET;
msg.msg.outsize = sizeof(msg.data);
msg.data.cmd = CEC_CMD_ENABLE;
msg.data.val = enable;
ret = cros_ec_cmd_xfer_status(cros_ec, &msg.msg);
ret = cros_ec_cmd(cros_ec, 0, EC_CMD_CEC_SET, &params, sizeof(params),
NULL, 0);
if (ret < 0) {
dev_err(cros_ec->dev,
"error %sabling CEC on EC: %d\n",
@ -203,38 +280,54 @@ static SIMPLE_DEV_PM_OPS(cros_ec_cec_pm_ops,
#if IS_ENABLED(CONFIG_PCI) && IS_ENABLED(CONFIG_DMI)
/*
* The Firmware only handles a single CEC interface tied to a single HDMI
* connector we specify along with the DRM device name handling the HDMI output
* Specify the DRM device name handling the HDMI output and the HDMI connector
* corresponding to each CEC port. The order of connectors must match the order
* in the EC (first connector is EC port 0, ...), and the number of connectors
* must match the number of ports in the EC (which can be queried using the
* EC_CMD_CEC_PORT_COUNT host command).
*/
struct cec_dmi_match {
const char *sys_vendor;
const char *product_name;
const char *devname;
const char *conn;
const char *const *conns;
};
static const char *const port_b_conns[] = { "Port B", NULL };
static const char *const port_db_conns[] = { "Port D", "Port B", NULL };
static const char *const port_ba_conns[] = { "Port B", "Port A", NULL };
static const char *const port_d_conns[] = { "Port D", NULL };
static const struct cec_dmi_match cec_dmi_match_table[] = {
/* Google Fizz */
{ "Google", "Fizz", "0000:00:02.0", "Port B" },
{ "Google", "Fizz", "0000:00:02.0", port_b_conns },
/* Google Brask */
{ "Google", "Brask", "0000:00:02.0", "Port B" },
{ "Google", "Brask", "0000:00:02.0", port_b_conns },
/* Google Moli */
{ "Google", "Moli", "0000:00:02.0", "Port B" },
{ "Google", "Moli", "0000:00:02.0", port_b_conns },
/* Google Kinox */
{ "Google", "Kinox", "0000:00:02.0", "Port B" },
{ "Google", "Kinox", "0000:00:02.0", port_b_conns },
/* Google Kuldax */
{ "Google", "Kuldax", "0000:00:02.0", "Port B" },
{ "Google", "Kuldax", "0000:00:02.0", port_b_conns },
/* Google Aurash */
{ "Google", "Aurash", "0000:00:02.0", "Port B" },
{ "Google", "Aurash", "0000:00:02.0", port_b_conns },
/* Google Gladios */
{ "Google", "Gladios", "0000:00:02.0", "Port B" },
{ "Google", "Gladios", "0000:00:02.0", port_b_conns },
/* Google Lisbon */
{ "Google", "Lisbon", "0000:00:02.0", "Port B" },
{ "Google", "Lisbon", "0000:00:02.0", port_b_conns },
/* Google Dibbi */
{ "Google", "Dibbi", "0000:00:02.0", port_db_conns },
/* Google Constitution */
{ "Google", "Constitution", "0000:00:02.0", port_ba_conns },
/* Google Boxy */
{ "Google", "Boxy", "0000:00:02.0", port_d_conns },
/* Google Taranza */
{ "Google", "Taranza", "0000:00:02.0", port_db_conns },
};
static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
const char **conn)
const char * const **conns)
{
int i;
@ -251,7 +344,7 @@ static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
if (!d)
return ERR_PTR(-EPROBE_DEFER);
put_device(d);
*conn = m->conn;
*conns = m->conns;
return d;
}
}
@ -265,23 +358,137 @@ static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
#else
static struct device *cros_ec_cec_find_hdmi_dev(struct device *dev,
const char **conn)
const char * const **conns)
{
return ERR_PTR(-ENODEV);
}
#endif
static int cros_ec_cec_get_num_ports(struct cros_ec_cec *cros_ec_cec)
{
struct ec_response_cec_port_count response;
int ret;
ret = cros_ec_cmd(cros_ec_cec->cros_ec, 0, EC_CMD_CEC_PORT_COUNT, NULL,
0, &response, sizeof(response));
if (ret < 0) {
/*
* Old EC firmware only supports one port and does not support
* the port count command, so fall back to assuming one port.
*/
cros_ec_cec->num_ports = 1;
return 0;
}
if (response.port_count == 0) {
dev_err(cros_ec_cec->cros_ec->dev,
"EC reports 0 CEC ports\n");
return -ENODEV;
}
if (response.port_count > EC_CEC_MAX_PORTS) {
dev_err(cros_ec_cec->cros_ec->dev,
"EC reports too many ports: %d\n", response.port_count);
return -EINVAL;
}
cros_ec_cec->num_ports = response.port_count;
return 0;
}
static int cros_ec_cec_get_write_cmd_version(struct cros_ec_cec *cros_ec_cec)
{
struct cros_ec_device *cros_ec = cros_ec_cec->cros_ec;
struct ec_params_get_cmd_versions_v1 params = {
.cmd = EC_CMD_CEC_WRITE_MSG,
};
struct ec_response_get_cmd_versions response;
int ret;
ret = cros_ec_cmd(cros_ec, 1, EC_CMD_GET_CMD_VERSIONS, &params,
sizeof(params), &response, sizeof(response));
if (ret < 0) {
dev_err(cros_ec->dev,
"error getting CEC write command version: %d\n", ret);
return ret;
}
if (response.version_mask & EC_VER_MASK(1)) {
cros_ec_cec->write_cmd_version = 1;
} else {
if (cros_ec_cec->num_ports != 1) {
dev_err(cros_ec->dev,
"v0 write command only supports 1 port, %d reported\n",
cros_ec_cec->num_ports);
return -EINVAL;
}
cros_ec_cec->write_cmd_version = 0;
}
return 0;
}
static int cros_ec_cec_init_port(struct device *dev,
struct cros_ec_cec *cros_ec_cec,
int port_num, struct device *hdmi_dev,
const char * const *conns)
{
struct cros_ec_cec_port *port;
int ret;
port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
if (!port)
return -ENOMEM;
port->cros_ec_cec = cros_ec_cec;
port->port_num = port_num;
port->adap = cec_allocate_adapter(&cros_ec_cec_ops, port, DRV_NAME,
CEC_CAP_DEFAULTS |
CEC_CAP_CONNECTOR_INFO, 1);
if (IS_ERR(port->adap))
return PTR_ERR(port->adap);
if (!conns[port_num]) {
dev_err(dev, "no conn for port %d\n", port_num);
ret = -ENODEV;
goto out_probe_adapter;
}
port->notify = cec_notifier_cec_adap_register(hdmi_dev, conns[port_num],
port->adap);
if (!port->notify) {
ret = -ENOMEM;
goto out_probe_adapter;
}
ret = cec_register_adapter(port->adap, dev);
if (ret < 0)
goto out_probe_notify;
cros_ec_cec->ports[port_num] = port;
return 0;
out_probe_notify:
cec_notifier_cec_adap_unregister(port->notify, port->adap);
out_probe_adapter:
cec_delete_adapter(port->adap);
return ret;
}
static int cros_ec_cec_probe(struct platform_device *pdev)
{
struct cros_ec_dev *ec_dev = dev_get_drvdata(pdev->dev.parent);
struct cros_ec_device *cros_ec = ec_dev->ec_dev;
struct cros_ec_cec *cros_ec_cec;
struct cros_ec_cec_port *port;
struct device *hdmi_dev;
const char *conn = NULL;
const char * const *conns = NULL;
int ret;
hdmi_dev = cros_ec_cec_find_hdmi_dev(&pdev->dev, &conn);
hdmi_dev = cros_ec_cec_find_hdmi_dev(&pdev->dev, &conns);
if (IS_ERR(hdmi_dev))
return PTR_ERR(hdmi_dev);
@ -295,18 +502,19 @@ static int cros_ec_cec_probe(struct platform_device *pdev)
device_init_wakeup(&pdev->dev, 1);
cros_ec_cec->adap = cec_allocate_adapter(&cros_ec_cec_ops, cros_ec_cec,
DRV_NAME,
CEC_CAP_DEFAULTS |
CEC_CAP_CONNECTOR_INFO, 1);
if (IS_ERR(cros_ec_cec->adap))
return PTR_ERR(cros_ec_cec->adap);
ret = cros_ec_cec_get_num_ports(cros_ec_cec);
if (ret)
return ret;
cros_ec_cec->notify = cec_notifier_cec_adap_register(hdmi_dev, conn,
cros_ec_cec->adap);
if (!cros_ec_cec->notify) {
ret = -ENOMEM;
goto out_probe_adapter;
ret = cros_ec_cec_get_write_cmd_version(cros_ec_cec);
if (ret)
return ret;
for (int i = 0; i < cros_ec_cec->num_ports; i++) {
ret = cros_ec_cec_init_port(&pdev->dev, cros_ec_cec, i,
hdmi_dev, conns);
if (ret)
goto unregister_ports;
}
/* Get CEC events from the EC. */
@ -315,20 +523,24 @@ static int cros_ec_cec_probe(struct platform_device *pdev)
&cros_ec_cec->notifier);
if (ret) {
dev_err(&pdev->dev, "failed to register notifier\n");
goto out_probe_notify;
goto unregister_ports;
}
ret = cec_register_adapter(cros_ec_cec->adap, &pdev->dev);
if (ret < 0)
goto out_probe_notify;
return 0;
out_probe_notify:
cec_notifier_cec_adap_unregister(cros_ec_cec->notify,
cros_ec_cec->adap);
out_probe_adapter:
cec_delete_adapter(cros_ec_cec->adap);
unregister_ports:
/*
* Unregister any adapters which have been registered. We don't add the
* port to the array until the adapter has been registered successfully,
* so any non-NULL ports must have been registered.
*/
for (int i = 0; i < cros_ec_cec->num_ports; i++) {
port = cros_ec_cec->ports[i];
if (!port)
break;
cec_notifier_cec_adap_unregister(port->notify, port->adap);
cec_unregister_adapter(port->adap);
}
return ret;
}
@ -336,6 +548,7 @@ static void cros_ec_cec_remove(struct platform_device *pdev)
{
struct cros_ec_cec *cros_ec_cec = platform_get_drvdata(pdev);
struct device *dev = &pdev->dev;
struct cros_ec_cec_port *port;
int ret;
/*
@ -349,9 +562,11 @@ static void cros_ec_cec_remove(struct platform_device *pdev)
if (ret)
dev_err(dev, "failed to unregister notifier\n");
cec_notifier_cec_adap_unregister(cros_ec_cec->notify,
cros_ec_cec->adap);
cec_unregister_adapter(cros_ec_cec->adap);
for (int i = 0; i < cros_ec_cec->num_ports; i++) {
port = cros_ec_cec->ports[i];
cec_notifier_cec_adap_unregister(port->notify, port->adap);
cec_unregister_adapter(port->adap);
}
}
static struct platform_driver cros_ec_cec_driver = {

View File

@ -353,31 +353,21 @@ static const struct file_operations debugfs_stats_ops = {
int smsdvb_debugfs_create(struct smsdvb_client_t *client)
{
struct smscore_device_t *coredev = client->coredev;
struct dentry *d;
struct smsdvb_debugfs *debug_data;
if (!smsdvb_debugfs_usb_root || !coredev->is_usb_device)
return -ENODEV;
client->debugfs = debugfs_create_dir(coredev->devpath,
smsdvb_debugfs_usb_root);
if (IS_ERR_OR_NULL(client->debugfs)) {
pr_info("Unable to create debugfs %s directory.\n",
coredev->devpath);
return -ENODEV;
}
d = debugfs_create_file("stats", S_IRUGO | S_IWUSR, client->debugfs,
client, &debugfs_stats_ops);
if (!d) {
debugfs_remove(client->debugfs);
return -ENOMEM;
}
debug_data = kzalloc(sizeof(*client->debug_data), GFP_KERNEL);
if (!debug_data)
return -ENOMEM;
client->debugfs = debugfs_create_dir(coredev->devpath,
smsdvb_debugfs_usb_root);
debugfs_create_file("stats", S_IRUGO | S_IWUSR, client->debugfs,
client, &debugfs_stats_ops);
client->debug_data = debug_data;
client->prt_dvb_stats = smsdvb_print_dvb_stats;
client->prt_isdb_stats = smsdvb_print_isdb_stats;


@@ -159,7 +159,7 @@ EXPORT_SYMBOL(frame_vector_to_pfns);
struct frame_vector *frame_vector_create(unsigned int nr_frames)
{
struct frame_vector *vec;
int size = sizeof(struct frame_vector) + sizeof(void *) * nr_frames;
int size = struct_size(vec, ptrs, nr_frames);
if (WARN_ON_ONCE(nr_frames == 0))
return NULL;
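
The hunk above replaces open-coded `sizeof` arithmetic with `struct_size()`, which computes the allocation size of a struct ending in a flexible array member. A userspace sketch of the computation; the real kernel macro additionally saturates on overflow, and `STRUCT_SIZE` below is a simplified stand-in, not the kernel macro:

```c
#include <stddef.h>

/* Struct with a flexible array member, shaped like struct frame_vector. */
struct frame_vector_like {
        unsigned int nr_allocated;
        unsigned int nr_frames;
        void *ptrs[];   /* flexible array member */
};

/* Simplified analogue of the kernel's struct_size(): header size plus
 * n trailing elements. No overflow saturation in this sketch. */
#define STRUCT_SIZE(type, member, n) \
        (offsetof(type, member) + sizeof(((type *)0)->member[0]) * (size_t)(n))

size_t vector_bytes(unsigned int nr_frames)
{
        return STRUCT_SIZE(struct frame_vector_like, ptrs, nr_frames);
}
```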


@@ -2890,7 +2890,7 @@ static size_t __vb2_perform_fileio(struct vb2_queue *q, char __user *data, size_
if (copy_timestamp)
b->timestamp = ktime_get_ns();
ret = vb2_core_qbuf(q, index, NULL, NULL);
dprintk(q, 5, "vb2_dbuf result: %d\n", ret);
dprintk(q, 5, "vb2_qbuf result: %d\n", ret);
if (ret)
return ret;


@@ -542,13 +542,14 @@ static void vb2_dc_put_userptr(void *buf_priv)
*/
dma_unmap_sgtable(buf->dev, sgt, buf->dma_dir,
DMA_ATTR_SKIP_CPU_SYNC);
pages = frame_vector_pages(buf->vec);
/* sgt should exist only if vector contains pages... */
BUG_ON(IS_ERR(pages));
if (buf->dma_dir == DMA_FROM_DEVICE ||
buf->dma_dir == DMA_BIDIRECTIONAL)
for (i = 0; i < frame_vector_count(buf->vec); i++)
set_page_dirty_lock(pages[i]);
buf->dma_dir == DMA_BIDIRECTIONAL) {
pages = frame_vector_pages(buf->vec);
/* sgt should exist only if vector contains pages... */
if (!WARN_ON_ONCE(IS_ERR(pages)))
for (i = 0; i < frame_vector_count(buf->vec); i++)
set_page_dirty_lock(pages[i]);
}
sg_free_table(sgt);
kfree(sgt);
} else {
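
The hunk above downgrades a `BUG_ON()` (hard crash) to `WARN_ON_ONCE()` plus a skip: the bad state is reported once and only the dependent work (dirtying the pages) is omitted. A userspace sketch of that degrade-gracefully pattern, with illustrative names:

```c
#include <stdbool.h>
#include <stdio.h>

static int warn_count;

/* Report a bad condition at most once, then keep going. */
static bool warn_on_once(bool cond)
{
        static bool warned;

        if (cond && !warned) {
                warned = true;
                warn_count++;
                fprintf(stderr, "unexpected state, skipping page dirtying\n");
        }
        return cond;
}

/* Mirrors: if (!WARN_ON_ONCE(IS_ERR(pages))) set_page_dirty_lock(...).
 * Returns how many pages were dirtied. */
static int dirty_pages(void *pages, int n)
{
        int dirtied = 0;

        if (!warn_on_once(pages == NULL))
                dirtied = n;
        return dirtied;
}
```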


@@ -133,13 +133,15 @@ static void vb2_vmalloc_put_userptr(void *buf_priv)
if (!buf->vec->is_pfns) {
n_pages = frame_vector_count(buf->vec);
pages = frame_vector_pages(buf->vec);
if (vaddr)
vm_unmap_ram((void *)vaddr, n_pages);
if (buf->dma_dir == DMA_FROM_DEVICE ||
buf->dma_dir == DMA_BIDIRECTIONAL)
for (i = 0; i < n_pages; i++)
set_page_dirty_lock(pages[i]);
buf->dma_dir == DMA_BIDIRECTIONAL) {
pages = frame_vector_pages(buf->vec);
if (!WARN_ON_ONCE(IS_ERR(pages)))
for (i = 0; i < n_pages; i++)
set_page_dirty_lock(pages[i]);
}
} else {
iounmap((__force void __iomem *)buf->vaddr);
}


@@ -4779,8 +4779,8 @@ set_frequency(struct drx_demod_instance *demod,
bool image_to_select;
s32 fm_frequency_shift = 0;
rf_mirror = (ext_attr->mirror == DRX_MIRROR_YES) ? true : false;
tuner_mirror = demod->my_common_attr->mirror_freq_spect ? false : true;
rf_mirror = ext_attr->mirror == DRX_MIRROR_YES;
tuner_mirror = !demod->my_common_attr->mirror_freq_spect;
/*
Program frequency shifter
No need to account for mirroring on RF
@@ -8765,7 +8765,7 @@ static int qam_flip_spec(struct drx_demod_instance *demod, struct drx_channel *c
goto rw_error;
}
ext_attr->iqm_fs_rate_ofs = iqm_fs_rate_ofs;
ext_attr->pos_image = (ext_attr->pos_image) ? false : true;
ext_attr->pos_image = !ext_attr->pos_image;
/* freeze dq/fq updating */
rc = drxj_dap_read_reg16(dev_addr, QAM_DQ_MODE__A, &data, 0);
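
Both hunks above apply the same simplification: `cond ? true : false` is just `cond`, and `cond ? false : true` is `!cond`. A trivial demonstration with illustrative names:

```c
#include <stdbool.h>

/* Old style, as removed above. */
static bool is_mirrored_old(int mirror, int yes_value)
{
        return (mirror == yes_value) ? true : false;
}

/* New style: the comparison already yields a boolean. */
static bool is_mirrored_new(int mirror, int yes_value)
{
        return mirror == yes_value;
}

/* Likewise, `x ? false : true` is simply `!x`. */
static bool toggled_old(bool x) { return x ? false : true; }
static bool toggled_new(bool x) { return !x; }
```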


@@ -1920,8 +1920,7 @@ static void m88ds3103_remove(struct i2c_client *client)
dev_dbg(&client->dev, "\n");
if (dev->dt_client)
i2c_unregister_device(dev->dt_client);
i2c_unregister_device(dev->dt_client);
i2c_mux_del_adapters(dev->muxc);


@@ -99,6 +99,7 @@ config VIDEO_IMX214
config VIDEO_IMX219
tristate "Sony IMX219 sensor support"
select V4L2_CCI_I2C
help
This is a Video4Linux2 sensor driver for the Sony
IMX219 camera.
@@ -215,6 +216,16 @@ config VIDEO_MT9M111
This driver supports MT9M111, MT9M112 and MT9M131 cameras from
Micron/Aptina
config VIDEO_MT9M114
tristate "onsemi MT9M114 sensor support"
select V4L2_CCI_I2C
help
This is a Video4Linux2 sensor-level driver for the onsemi MT9M114
camera.
To compile this driver as a module, choose M here: the
module will be called mt9m114.
config VIDEO_MT9P031
tristate "Aptina MT9P031 support"
select VIDEO_APTINA_PLL


@@ -65,6 +65,7 @@ obj-$(CONFIG_VIDEO_ML86V7667) += ml86v7667.o
obj-$(CONFIG_VIDEO_MSP3400) += msp3400.o
obj-$(CONFIG_VIDEO_MT9M001) += mt9m001.o
obj-$(CONFIG_VIDEO_MT9M111) += mt9m111.o
obj-$(CONFIG_VIDEO_MT9M114) += mt9m114.o
obj-$(CONFIG_VIDEO_MT9P031) += mt9p031.o
obj-$(CONFIG_VIDEO_MT9T112) += mt9t112.o
obj-$(CONFIG_VIDEO_MT9V011) += mt9v011.o


@@ -411,43 +411,44 @@ static int adp1653_of_init(struct i2c_client *client,
struct device_node *node)
{
struct adp1653_platform_data *pd;
struct device_node *child;
struct device_node *node_indicator = NULL;
struct device_node *node_flash;
pd = devm_kzalloc(&client->dev, sizeof(*pd), GFP_KERNEL);
if (!pd)
return -ENOMEM;
flash->platform_data = pd;
child = of_get_child_by_name(node, "flash");
if (!child)
node_flash = of_get_child_by_name(node, "flash");
if (!node_flash)
return -EINVAL;
if (of_property_read_u32(child, "flash-timeout-us",
if (of_property_read_u32(node_flash, "flash-timeout-us",
&pd->max_flash_timeout))
goto err;
if (of_property_read_u32(child, "flash-max-microamp",
if (of_property_read_u32(node_flash, "flash-max-microamp",
&pd->max_flash_intensity))
goto err;
pd->max_flash_intensity /= 1000;
if (of_property_read_u32(child, "led-max-microamp",
if (of_property_read_u32(node_flash, "led-max-microamp",
&pd->max_torch_intensity))
goto err;
pd->max_torch_intensity /= 1000;
of_node_put(child);
child = of_get_child_by_name(node, "indicator");
if (!child)
return -EINVAL;
node_indicator = of_get_child_by_name(node, "indicator");
if (!node_indicator)
goto err;
if (of_property_read_u32(child, "led-max-microamp",
if (of_property_read_u32(node_indicator, "led-max-microamp",
&pd->max_indicator_intensity))
goto err;
of_node_put(child);
of_node_put(node_flash);
of_node_put(node_indicator);
pd->enable_gpio = devm_gpiod_get(&client->dev, "enable", GPIOD_OUT_LOW);
if (IS_ERR(pd->enable_gpio)) {
@@ -458,7 +459,8 @@ static int adp1653_of_init(struct i2c_client *client,
return 0;
err:
dev_err(&client->dev, "Required property not found\n");
of_node_put(child);
of_node_put(node_flash);
of_node_put(node_indicator);
return -EINVAL;
}
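
The adp1653 fix above is a reference-counting discipline: every successful `of_get_child_by_name()` must be balanced by `of_node_put()`, and because `of_node_put(NULL)` is a no-op, initializing `node_indicator` to NULL lets one shared error path drop both references safely. A toy userspace model of that discipline; the types and helpers are illustrative, not the OF API:

```c
#include <stddef.h>

struct node { int refcount; };

static struct node flash_node;    /* stands in for the "flash" DT child */

static struct node *node_get(struct node *n)
{
        if (n)
                n->refcount++;
        return n;
}

/* Like of_node_put(): NULL is a harmless no-op. */
static void node_put(struct node *n)
{
        if (n)
                n->refcount--;
}

static int parse(int fail_after_flash)
{
        struct node *node_flash;
        struct node *node_indicator = NULL;   /* NULL until acquired */

        node_flash = node_get(&flash_node);
        if (!node_flash)
                return -1;

        if (fail_after_flash)
                goto err;

        /* success path: drop both references */
        node_put(node_flash);
        node_put(node_indicator);
        return 0;
err:
        /* one error path suffices: node_put(NULL) does nothing */
        node_put(node_flash);
        node_put(node_indicator);
        return -1;
}
```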


@@ -5,6 +5,7 @@
* Copyright (C) 2013 Cogent Embedded, Inc.
* Copyright (C) 2013 Renesas Solutions Corp.
*/
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
@@ -1395,7 +1396,6 @@ out_unlock:
static int adv7180_probe(struct i2c_client *client)
{
const struct i2c_device_id *id = i2c_client_get_device_id(client);
struct device_node *np = client->dev.of_node;
struct adv7180_state *state;
struct v4l2_subdev *sd;
@@ -1411,7 +1411,7 @@ static int adv7180_probe(struct i2c_client *client)
state->client = client;
state->field = V4L2_FIELD_ALTERNATE;
state->chip_info = (struct adv7180_chip_info *)id->driver_data;
state->chip_info = i2c_get_match_data(client);
state->pwdn_gpio = devm_gpiod_get_optional(&client->dev, "powerdown",
GPIOD_OUT_HIGH);
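
The switch to `i2c_get_match_data()` above replaces the manual `id->driver_data` cast with a lookup that works for whichever match table applied (OF compatible or legacy I2C id), returning the per-chip data pointer. A toy model of that lookup; the table entries and `chip_info` fields below are illustrative stubs, not the driver's real data:

```c
#include <stddef.h>
#include <string.h>

struct chip_info { int has_csi; };

static const struct chip_info adv7180_info_stub = { .has_csi = 0 };
static const struct chip_info adv7280_m_info_stub = { .has_csi = 1 };

struct match_entry {
        const char *name;
        const struct chip_info *data;
};

/* One table keyed by device name, data pointer attached to each entry. */
static const struct match_entry match_table[] = {
        { "adv7180",   &adv7180_info_stub },
        { "adv7280-m", &adv7280_m_info_stub },
        { NULL, NULL },
};

/* Sketch of the match-data lookup: find the entry, hand back its data. */
static const struct chip_info *get_match_data(const char *name)
{
        const struct match_entry *m;

        for (m = match_table; m->name; m++)
                if (strcmp(m->name, name) == 0)
                        return m->data;
        return NULL;
}
```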
@@ -1536,22 +1536,6 @@ static void adv7180_remove(struct i2c_client *client)
mutex_destroy(&state->mutex);
}
static const struct i2c_device_id adv7180_id[] = {
{ "adv7180", (kernel_ulong_t)&adv7180_info },
{ "adv7180cp", (kernel_ulong_t)&adv7180_info },
{ "adv7180st", (kernel_ulong_t)&adv7180_info },
{ "adv7182", (kernel_ulong_t)&adv7182_info },
{ "adv7280", (kernel_ulong_t)&adv7280_info },
{ "adv7280-m", (kernel_ulong_t)&adv7280_m_info },
{ "adv7281", (kernel_ulong_t)&adv7281_info },
{ "adv7281-m", (kernel_ulong_t)&adv7281_m_info },
{ "adv7281-ma", (kernel_ulong_t)&adv7281_ma_info },
{ "adv7282", (kernel_ulong_t)&adv7282_info },
{ "adv7282-m", (kernel_ulong_t)&adv7282_m_info },
{},
};
MODULE_DEVICE_TABLE(i2c, adv7180_id);
#ifdef CONFIG_PM_SLEEP
static int adv7180_suspend(struct device *dev)
{
@@ -1585,30 +1569,43 @@ static SIMPLE_DEV_PM_OPS(adv7180_pm_ops, adv7180_suspend, adv7180_resume);
#define ADV7180_PM_OPS NULL
#endif
#ifdef CONFIG_OF
static const struct of_device_id adv7180_of_id[] = {
{ .compatible = "adi,adv7180", },
{ .compatible = "adi,adv7180cp", },
{ .compatible = "adi,adv7180st", },
{ .compatible = "adi,adv7182", },
{ .compatible = "adi,adv7280", },
{ .compatible = "adi,adv7280-m", },
{ .compatible = "adi,adv7281", },
{ .compatible = "adi,adv7281-m", },
{ .compatible = "adi,adv7281-ma", },
{ .compatible = "adi,adv7282", },
{ .compatible = "adi,adv7282-m", },
{ },
static const struct i2c_device_id adv7180_id[] = {
{ "adv7180", (kernel_ulong_t)&adv7180_info },
{ "adv7180cp", (kernel_ulong_t)&adv7180_info },
{ "adv7180st", (kernel_ulong_t)&adv7180_info },
{ "adv7182", (kernel_ulong_t)&adv7182_info },
{ "adv7280", (kernel_ulong_t)&adv7280_info },
{ "adv7280-m", (kernel_ulong_t)&adv7280_m_info },
{ "adv7281", (kernel_ulong_t)&adv7281_info },
{ "adv7281-m", (kernel_ulong_t)&adv7281_m_info },
{ "adv7281-ma", (kernel_ulong_t)&adv7281_ma_info },
{ "adv7282", (kernel_ulong_t)&adv7282_info },
{ "adv7282-m", (kernel_ulong_t)&adv7282_m_info },
{}
};
MODULE_DEVICE_TABLE(i2c, adv7180_id);
static const struct of_device_id adv7180_of_id[] = {
{ .compatible = "adi,adv7180", &adv7180_info },
{ .compatible = "adi,adv7180cp", &adv7180_info },
{ .compatible = "adi,adv7180st", &adv7180_info },
{ .compatible = "adi,adv7182", &adv7182_info },
{ .compatible = "adi,adv7280", &adv7280_info },
{ .compatible = "adi,adv7280-m", &adv7280_m_info },
{ .compatible = "adi,adv7281", &adv7281_info },
{ .compatible = "adi,adv7281-m", &adv7281_m_info },
{ .compatible = "adi,adv7281-ma", &adv7281_ma_info },
{ .compatible = "adi,adv7282", &adv7282_info },
{ .compatible = "adi,adv7282-m", &adv7282_m_info },
{}
};
MODULE_DEVICE_TABLE(of, adv7180_of_id);
#endif
static struct i2c_driver adv7180_driver = {
.driver = {
.name = KBUILD_MODNAME,
.pm = ADV7180_PM_OPS,
.of_match_table = of_match_ptr(adv7180_of_id),
.of_match_table = adv7180_of_id,
},
.probe = adv7180_probe,
.remove = adv7180_remove,


@@ -133,8 +133,6 @@ struct ar0521_dev {
u16 mult2;
u16 vt_pix;
} pll;
bool streaming;
};
static inline struct ar0521_dev *to_ar0521_dev(struct v4l2_subdev *sd)
@@ -991,12 +989,9 @@ static int ar0521_s_stream(struct v4l2_subdev *sd, int enable)
int ret;
mutex_lock(&sensor->lock);
ret = ar0521_set_stream(sensor, enable);
if (!ret)
sensor->streaming = enable;
mutex_unlock(&sensor->lock);
return ret;
}
@@ -1023,28 +1018,6 @@ static const struct v4l2_subdev_ops ar0521_subdev_ops = {
.pad = &ar0521_pad_ops,
};
static int __maybe_unused ar0521_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ar0521_dev *sensor = to_ar0521_dev(sd);
if (sensor->streaming)
ar0521_set_stream(sensor, 0);
return 0;
}
static int __maybe_unused ar0521_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ar0521_dev *sensor = to_ar0521_dev(sd);
if (sensor->streaming)
return ar0521_set_stream(sensor, 1);
return 0;
}
static int ar0521_probe(struct i2c_client *client)
{
struct v4l2_fwnode_endpoint ep = {
@@ -1183,7 +1156,6 @@ static void ar0521_remove(struct i2c_client *client)
}
static const struct dev_pm_ops ar0521_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ar0521_suspend, ar0521_resume)
SET_RUNTIME_PM_OPS(ar0521_power_off, ar0521_power_on, NULL)
};
static const struct of_device_id ar0521_dt_ids[] = {


@@ -508,9 +508,8 @@ static void __ccs_update_exposure_limits(struct ccs_sensor *sensor)
struct v4l2_ctrl *ctrl = sensor->exposure;
int max;
max = sensor->pixel_array->crop[CCS_PA_PAD_SRC].height
+ sensor->vblank->val
- CCS_LIM(sensor, COARSE_INTEGRATION_TIME_MAX_MARGIN);
max = sensor->pa_src.height + sensor->vblank->val -
CCS_LIM(sensor, COARSE_INTEGRATION_TIME_MAX_MARGIN);
__v4l2_ctrl_modify_range(ctrl, ctrl->minimum, max, ctrl->step, max);
}
@@ -728,15 +727,12 @@ static int ccs_set_ctrl(struct v4l2_ctrl *ctrl)
break;
case V4L2_CID_VBLANK:
rval = ccs_write(sensor, FRAME_LENGTH_LINES,
sensor->pixel_array->crop[
CCS_PA_PAD_SRC].height
+ ctrl->val);
sensor->pa_src.height + ctrl->val);
break;
case V4L2_CID_HBLANK:
rval = ccs_write(sensor, LINE_LENGTH_PCK,
sensor->pixel_array->crop[CCS_PA_PAD_SRC].width
+ ctrl->val);
sensor->pa_src.width + ctrl->val);
break;
case V4L2_CID_TEST_PATTERN:
@@ -1214,15 +1210,13 @@ static void ccs_update_blanking(struct ccs_sensor *sensor)
min = max_t(int,
CCS_LIM(sensor, MIN_FRAME_BLANKING_LINES),
min_fll - sensor->pixel_array->crop[CCS_PA_PAD_SRC].height);
max = max_fll - sensor->pixel_array->crop[CCS_PA_PAD_SRC].height;
min_fll - sensor->pa_src.height);
max = max_fll - sensor->pa_src.height;
__v4l2_ctrl_modify_range(vblank, min, max, vblank->step, min);
min = max_t(int,
min_llp - sensor->pixel_array->crop[CCS_PA_PAD_SRC].width,
min_lbp);
max = max_llp - sensor->pixel_array->crop[CCS_PA_PAD_SRC].width;
min = max_t(int, min_llp - sensor->pa_src.width, min_lbp);
max = max_llp - sensor->pa_src.width;
__v4l2_ctrl_modify_range(hblank, min, max, hblank->step, min);
@@ -1246,10 +1240,8 @@ static int ccs_pll_blanking_update(struct ccs_sensor *sensor)
dev_dbg(&client->dev, "real timeperframe\t100/%d\n",
sensor->pll.pixel_rate_pixel_array /
((sensor->pixel_array->crop[CCS_PA_PAD_SRC].width
+ sensor->hblank->val) *
(sensor->pixel_array->crop[CCS_PA_PAD_SRC].height
+ sensor->vblank->val) / 100));
((sensor->pa_src.width + sensor->hblank->val) *
(sensor->pa_src.height + sensor->vblank->val) / 100));
return 0;
}
@@ -1756,28 +1748,22 @@ static int ccs_start_streaming(struct ccs_sensor *sensor)
goto out;
/* Analog crop start coordinates */
rval = ccs_write(sensor, X_ADDR_START,
sensor->pixel_array->crop[CCS_PA_PAD_SRC].left);
rval = ccs_write(sensor, X_ADDR_START, sensor->pa_src.left);
if (rval < 0)
goto out;
rval = ccs_write(sensor, Y_ADDR_START,
sensor->pixel_array->crop[CCS_PA_PAD_SRC].top);
rval = ccs_write(sensor, Y_ADDR_START, sensor->pa_src.top);
if (rval < 0)
goto out;
/* Analog crop end coordinates */
rval = ccs_write(
sensor, X_ADDR_END,
sensor->pixel_array->crop[CCS_PA_PAD_SRC].left
+ sensor->pixel_array->crop[CCS_PA_PAD_SRC].width - 1);
rval = ccs_write(sensor, X_ADDR_END,
sensor->pa_src.left + sensor->pa_src.width - 1);
if (rval < 0)
goto out;
rval = ccs_write(
sensor, Y_ADDR_END,
sensor->pixel_array->crop[CCS_PA_PAD_SRC].top
+ sensor->pixel_array->crop[CCS_PA_PAD_SRC].height - 1);
rval = ccs_write(sensor, Y_ADDR_END,
sensor->pa_src.top + sensor->pa_src.height - 1);
if (rval < 0)
goto out;
@@ -1789,27 +1775,23 @@ static int ccs_start_streaming(struct ccs_sensor *sensor)
/* Digital crop */
if (CCS_LIM(sensor, DIGITAL_CROP_CAPABILITY)
== CCS_DIGITAL_CROP_CAPABILITY_INPUT_CROP) {
rval = ccs_write(
sensor, DIGITAL_CROP_X_OFFSET,
sensor->scaler->crop[CCS_PAD_SINK].left);
rval = ccs_write(sensor, DIGITAL_CROP_X_OFFSET,
sensor->scaler_sink.left);
if (rval < 0)
goto out;
rval = ccs_write(
sensor, DIGITAL_CROP_Y_OFFSET,
sensor->scaler->crop[CCS_PAD_SINK].top);
rval = ccs_write(sensor, DIGITAL_CROP_Y_OFFSET,
sensor->scaler_sink.top);
if (rval < 0)
goto out;
rval = ccs_write(
sensor, DIGITAL_CROP_IMAGE_WIDTH,
sensor->scaler->crop[CCS_PAD_SINK].width);
rval = ccs_write(sensor, DIGITAL_CROP_IMAGE_WIDTH,
sensor->scaler_sink.width);
if (rval < 0)
goto out;
rval = ccs_write(
sensor, DIGITAL_CROP_IMAGE_HEIGHT,
sensor->scaler->crop[CCS_PAD_SINK].height);
rval = ccs_write(sensor, DIGITAL_CROP_IMAGE_HEIGHT,
sensor->scaler_sink.height);
if (rval < 0)
goto out;
}
@@ -1827,12 +1809,10 @@ static int ccs_start_streaming(struct ccs_sensor *sensor)
}
/* Output size from sensor */
rval = ccs_write(sensor, X_OUTPUT_SIZE,
sensor->src->crop[CCS_PAD_SRC].width);
rval = ccs_write(sensor, X_OUTPUT_SIZE, sensor->src_src.width);
if (rval < 0)
goto out;
rval = ccs_write(sensor, Y_OUTPUT_SIZE,
sensor->src->crop[CCS_PAD_SRC].height);
rval = ccs_write(sensor, Y_OUTPUT_SIZE, sensor->src_src.height);
if (rval < 0)
goto out;
@@ -1923,9 +1903,6 @@ static int ccs_set_stream(struct v4l2_subdev *subdev, int enable)
struct i2c_client *client = v4l2_get_subdevdata(&sensor->src->sd);
int rval;
if (sensor->streaming == enable)
return 0;
if (!enable) {
ccs_stop_streaming(sensor);
sensor->streaming = false;
@@ -2053,24 +2030,8 @@ static int __ccs_get_format(struct v4l2_subdev *subdev,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
{
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
fmt->format = *v4l2_subdev_get_try_format(subdev, sd_state,
fmt->pad);
} else {
struct v4l2_rect *r;
if (fmt->pad == ssd->source_pad)
r = &ssd->crop[ssd->source_pad];
else
r = &ssd->sink_fmt;
fmt->format.code = __ccs_get_mbus_code(subdev, fmt->pad);
fmt->format.width = r->width;
fmt->format.height = r->height;
fmt->format.field = V4L2_FIELD_NONE;
}
fmt->format = *v4l2_subdev_get_pad_format(subdev, sd_state, fmt->pad);
fmt->format.code = __ccs_get_mbus_code(subdev, fmt->pad);
return 0;
}
@@ -2092,28 +2053,18 @@ static int ccs_get_format(struct v4l2_subdev *subdev,
static void ccs_get_crop_compose(struct v4l2_subdev *subdev,
struct v4l2_subdev_state *sd_state,
struct v4l2_rect **crops,
struct v4l2_rect **comps, int which)
struct v4l2_rect **comps)
{
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
unsigned int i;
if (which == V4L2_SUBDEV_FORMAT_ACTIVE) {
if (crops)
for (i = 0; i < subdev->entity.num_pads; i++)
crops[i] = &ssd->crop[i];
if (comps)
*comps = &ssd->compose;
} else {
if (crops) {
for (i = 0; i < subdev->entity.num_pads; i++)
crops[i] = v4l2_subdev_get_try_crop(subdev,
sd_state,
i);
}
if (comps)
*comps = v4l2_subdev_get_try_compose(subdev, sd_state,
CCS_PAD_SINK);
}
if (crops)
for (i = 0; i < subdev->entity.num_pads; i++)
crops[i] =
v4l2_subdev_get_pad_crop(subdev, sd_state, i);
if (comps)
*comps = v4l2_subdev_get_pad_compose(subdev, sd_state,
ssd->sink_pad);
}
/* Changes require propagation only on sink pad. */
@@ -2124,8 +2075,9 @@ static void ccs_propagate(struct v4l2_subdev *subdev,
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
struct v4l2_rect *comp, *crops[CCS_PADS];
struct v4l2_mbus_framefmt *fmt;
ccs_get_crop_compose(subdev, sd_state, crops, &comp, which);
ccs_get_crop_compose(subdev, sd_state, crops, &comp);
switch (target) {
case V4L2_SEL_TGT_CROP:
@@ -2136,6 +2088,7 @@
sensor->scale_m = CCS_LIM(sensor, SCALER_N_MIN);
sensor->scaling_mode =
CCS_SCALING_MODE_NO_SCALING;
sensor->scaler_sink = *comp;
} else if (ssd == sensor->binner) {
sensor->binning_horizontal = 1;
sensor->binning_vertical = 1;
@@ -2144,6 +2097,11 @@
fallthrough;
case V4L2_SEL_TGT_COMPOSE:
*crops[CCS_PAD_SRC] = *comp;
fmt = v4l2_subdev_get_pad_format(subdev, sd_state, CCS_PAD_SRC);
fmt->width = comp->width;
fmt->height = comp->height;
if (which == V4L2_SUBDEV_FORMAT_ACTIVE && ssd == sensor->src)
sensor->src_src = *crops[CCS_PAD_SRC];
break;
default:
WARN_ON_ONCE(1);
@@ -2252,14 +2210,12 @@ static int ccs_set_format(struct v4l2_subdev *subdev,
CCS_LIM(sensor, MIN_Y_OUTPUT_SIZE),
CCS_LIM(sensor, MAX_Y_OUTPUT_SIZE));
ccs_get_crop_compose(subdev, sd_state, crops, NULL, fmt->which);
ccs_get_crop_compose(subdev, sd_state, crops, NULL);
crops[ssd->sink_pad]->left = 0;
crops[ssd->sink_pad]->top = 0;
crops[ssd->sink_pad]->width = fmt->format.width;
crops[ssd->sink_pad]->height = fmt->format.height;
if (fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE)
ssd->sink_fmt = *crops[ssd->sink_pad];
ccs_propagate(subdev, sd_state, fmt->which, V4L2_SEL_TGT_CROP);
mutex_unlock(&sensor->mutex);
@@ -2482,7 +2438,7 @@ static int ccs_set_compose(struct v4l2_subdev *subdev,
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
struct v4l2_rect *comp, *crops[CCS_PADS];
ccs_get_crop_compose(subdev, sd_state, crops, &comp, sel->which);
ccs_get_crop_compose(subdev, sd_state, crops, &comp);
sel->r.top = 0;
sel->r.left = 0;
@@ -2501,8 +2457,8 @@
return 0;
}
static int __ccs_sel_supported(struct v4l2_subdev *subdev,
struct v4l2_subdev_selection *sel)
static int ccs_sel_supported(struct v4l2_subdev *subdev,
struct v4l2_subdev_selection *sel)
{
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
@@ -2545,33 +2501,18 @@ static int ccs_set_crop(struct v4l2_subdev *subdev,
{
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
struct v4l2_rect *src_size, *crops[CCS_PADS];
struct v4l2_rect _r;
struct v4l2_rect src_size = { 0 }, *crops[CCS_PADS], *comp;
ccs_get_crop_compose(subdev, sd_state, crops, NULL, sel->which);
ccs_get_crop_compose(subdev, sd_state, crops, &comp);
if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
if (sel->pad == ssd->sink_pad)
src_size = &ssd->sink_fmt;
else
src_size = &ssd->compose;
if (sel->pad == ssd->sink_pad) {
struct v4l2_mbus_framefmt *mfmt =
v4l2_subdev_get_pad_format(subdev, sd_state, sel->pad);
src_size.width = mfmt->width;
src_size.height = mfmt->height;
} else {
if (sel->pad == ssd->sink_pad) {
_r.left = 0;
_r.top = 0;
_r.width = v4l2_subdev_get_try_format(subdev,
sd_state,
sel->pad)
->width;
_r.height = v4l2_subdev_get_try_format(subdev,
sd_state,
sel->pad)
->height;
src_size = &_r;
} else {
src_size = v4l2_subdev_get_try_compose(
subdev, sd_state, ssd->sink_pad);
}
src_size = *comp;
}
if (ssd == sensor->src && sel->pad == CCS_PAD_SRC) {
@@ -2579,16 +2520,19 @@ static int ccs_set_crop(struct v4l2_subdev *subdev,
sel->r.top = 0;
}
sel->r.width = min(sel->r.width, src_size->width);
sel->r.height = min(sel->r.height, src_size->height);
sel->r.width = min(sel->r.width, src_size.width);
sel->r.height = min(sel->r.height, src_size.height);
sel->r.left = min_t(int, sel->r.left, src_size->width - sel->r.width);
sel->r.top = min_t(int, sel->r.top, src_size->height - sel->r.height);
sel->r.left = min_t(int, sel->r.left, src_size.width - sel->r.width);
sel->r.top = min_t(int, sel->r.top, src_size.height - sel->r.height);
*crops[sel->pad] = sel->r;
if (ssd != sensor->pixel_array && sel->pad == CCS_PAD_SINK)
ccs_propagate(subdev, sd_state, sel->which, V4L2_SEL_TGT_CROP);
else if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE &&
ssd == sensor->pixel_array)
sensor->pa_src = sel->r;
return 0;
}
@@ -2601,44 +2545,36 @@ static void ccs_get_native_size(struct ccs_subdev *ssd, struct v4l2_rect *r)
r->height = CCS_LIM(ssd->sensor, Y_ADDR_MAX) + 1;
}
static int __ccs_get_selection(struct v4l2_subdev *subdev,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_selection *sel)
static int ccs_get_selection(struct v4l2_subdev *subdev,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_selection *sel)
{
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
struct ccs_subdev *ssd = to_ccs_subdev(subdev);
struct v4l2_rect *comp, *crops[CCS_PADS];
struct v4l2_rect sink_fmt;
int ret;
ret = __ccs_sel_supported(subdev, sel);
ret = ccs_sel_supported(subdev, sel);
if (ret)
return ret;
ccs_get_crop_compose(subdev, sd_state, crops, &comp, sel->which);
if (sel->which == V4L2_SUBDEV_FORMAT_ACTIVE) {
sink_fmt = ssd->sink_fmt;
} else {
struct v4l2_mbus_framefmt *fmt =
v4l2_subdev_get_try_format(subdev, sd_state,
ssd->sink_pad);
sink_fmt.left = 0;
sink_fmt.top = 0;
sink_fmt.width = fmt->width;
sink_fmt.height = fmt->height;
}
ccs_get_crop_compose(subdev, sd_state, crops, &comp);
switch (sel->target) {
case V4L2_SEL_TGT_CROP_BOUNDS:
case V4L2_SEL_TGT_NATIVE_SIZE:
if (ssd == sensor->pixel_array)
if (ssd == sensor->pixel_array) {
ccs_get_native_size(ssd, &sel->r);
else if (sel->pad == ssd->sink_pad)
sel->r = sink_fmt;
else
} else if (sel->pad == ssd->sink_pad) {
struct v4l2_mbus_framefmt *sink_fmt =
v4l2_subdev_get_pad_format(subdev, sd_state,
ssd->sink_pad);
sel->r.top = sel->r.left = 0;
sel->r.width = sink_fmt->width;
sel->r.height = sink_fmt->height;
} else {
sel->r = *comp;
}
break;
case V4L2_SEL_TGT_CROP:
case V4L2_SEL_TGT_COMPOSE_BOUNDS:
@@ -2652,20 +2588,6 @@ static int __ccs_get_selection(struct v4l2_subdev *subdev,
return 0;
}
static int ccs_get_selection(struct v4l2_subdev *subdev,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_selection *sel)
{
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
int rval;
mutex_lock(&sensor->mutex);
rval = __ccs_get_selection(subdev, sd_state, sel);
mutex_unlock(&sensor->mutex);
return rval;
}
static int ccs_set_selection(struct v4l2_subdev *subdev,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_selection *sel)
@@ -2673,7 +2595,7 @@ static int ccs_set_selection(struct v4l2_subdev *subdev,
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
int ret;
ret = __ccs_sel_supported(subdev, sel);
ret = ccs_sel_supported(subdev, sel);
if (ret)
return ret;
@@ -2945,7 +2867,6 @@ static int ccs_identify_module(struct ccs_sensor *sensor)
}
static const struct v4l2_subdev_ops ccs_ops;
static const struct v4l2_subdev_internal_ops ccs_internal_ops;
static const struct media_entity_operations ccs_entity_ops;
static int ccs_register_subdev(struct ccs_sensor *sensor,
@@ -2959,12 +2880,6 @@ static int ccs_register_subdev(struct ccs_sensor *sensor,
if (!sink_ssd)
return 0;
rval = media_entity_pads_init(&ssd->sd.entity, ssd->npads, ssd->pads);
if (rval) {
dev_err(&client->dev, "media_entity_pads_init failed\n");
return rval;
}
rval = v4l2_device_register_subdev(sensor->src->sd.v4l2_dev, &ssd->sd);
if (rval) {
dev_err(&client->dev, "v4l2_device_register_subdev failed\n");
@@ -3025,6 +2940,12 @@ out_err:
static void ccs_cleanup(struct ccs_sensor *sensor)
{
struct i2c_client *client = v4l2_get_subdevdata(&sensor->src->sd);
unsigned int i;
for (i = 0; i < sensor->ssds_used; i++) {
v4l2_subdev_cleanup(&sensor->ssds[i].sd);
media_entity_cleanup(&sensor->ssds[i].sd.entity);
}
device_remove_file(&client->dev, &dev_attr_nvm);
device_remove_file(&client->dev, &dev_attr_ident);
@@ -3032,14 +2953,17 @@ static void ccs_cleanup(struct ccs_sensor *sensor)
ccs_free_controls(sensor);
}
static void ccs_create_subdev(struct ccs_sensor *sensor,
struct ccs_subdev *ssd, const char *name,
unsigned short num_pads, u32 function)
static int ccs_init_subdev(struct ccs_sensor *sensor,
struct ccs_subdev *ssd, const char *name,
unsigned short num_pads, u32 function,
const char *lock_name,
struct lock_class_key *lock_key)
{
struct i2c_client *client = v4l2_get_subdevdata(&sensor->src->sd);
int rval;
if (!ssd)
return;
return 0;
if (ssd != sensor->src)
v4l2_subdev_init(&ssd->sd, &ccs_ops);
@@ -3053,57 +2977,70 @@
v4l2_i2c_subdev_set_name(&ssd->sd, client, sensor->minfo.name, name);
ccs_get_native_size(ssd, &ssd->sink_fmt);
ssd->compose.width = ssd->sink_fmt.width;
ssd->compose.height = ssd->sink_fmt.height;
ssd->crop[ssd->source_pad] = ssd->compose;
ssd->pads[ssd->source_pad].flags = MEDIA_PAD_FL_SOURCE;
if (ssd != sensor->pixel_array) {
ssd->crop[ssd->sink_pad] = ssd->compose;
if (ssd != sensor->pixel_array)
ssd->pads[ssd->sink_pad].flags = MEDIA_PAD_FL_SINK;
}
ssd->sd.entity.ops = &ccs_entity_ops;
if (ssd == sensor->src)
return;
if (ssd != sensor->src) {
ssd->sd.owner = THIS_MODULE;
ssd->sd.dev = &client->dev;
v4l2_set_subdevdata(&ssd->sd, client);
}
ssd->sd.internal_ops = &ccs_internal_ops;
ssd->sd.owner = THIS_MODULE;
ssd->sd.dev = &client->dev;
v4l2_set_subdevdata(&ssd->sd, client);
rval = media_entity_pads_init(&ssd->sd.entity, ssd->npads, ssd->pads);
if (rval) {
dev_err(&client->dev, "media_entity_pads_init failed\n");
return rval;
}
rval = __v4l2_subdev_init_finalize(&ssd->sd, lock_name, lock_key);
if (rval) {
media_entity_cleanup(&ssd->sd.entity);
return rval;
}
return 0;
}
static int ccs_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
static int ccs_init_cfg(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state)
{
struct ccs_subdev *ssd = to_ccs_subdev(sd);
struct ccs_sensor *sensor = ssd->sensor;
unsigned int i;
unsigned int pad = ssd == sensor->pixel_array ?
CCS_PA_PAD_SRC : CCS_PAD_SINK;
struct v4l2_mbus_framefmt *fmt =
v4l2_subdev_get_pad_format(sd, sd_state, pad);
struct v4l2_rect *crop =
v4l2_subdev_get_pad_crop(sd, sd_state, pad);
bool is_active = !sd->active_state || sd->active_state == sd_state;
mutex_lock(&sensor->mutex);
for (i = 0; i < ssd->npads; i++) {
struct v4l2_mbus_framefmt *try_fmt =
v4l2_subdev_get_try_format(sd, fh->state, i);
struct v4l2_rect *try_crop =
v4l2_subdev_get_try_crop(sd, fh->state, i);
struct v4l2_rect *try_comp;
ccs_get_native_size(ssd, crop);
ccs_get_native_size(ssd, try_crop);
fmt->width = crop->width;
fmt->height = crop->height;
fmt->code = sensor->internal_csi_format->code;
fmt->field = V4L2_FIELD_NONE;
try_fmt->width = try_crop->width;
try_fmt->height = try_crop->height;
try_fmt->code = sensor->internal_csi_format->code;
try_fmt->field = V4L2_FIELD_NONE;
if (ssd == sensor->pixel_array) {
if (is_active)
sensor->pa_src = *crop;
if (ssd != sensor->pixel_array)
continue;
try_comp = v4l2_subdev_get_try_compose(sd, fh->state, i);
*try_comp = *try_crop;
mutex_unlock(&sensor->mutex);
return 0;
}
fmt = v4l2_subdev_get_pad_format(sd, sd_state, CCS_PAD_SRC);
fmt->code = ssd == sensor->src ?
sensor->csi_format->code : sensor->internal_csi_format->code;
fmt->field = V4L2_FIELD_NONE;
ccs_propagate(sd, sd_state, is_active, V4L2_SEL_TGT_CROP);
mutex_unlock(&sensor->mutex);
return 0;
@@ -3116,6 +3053,7 @@ static const struct v4l2_subdev_video_ops ccs_video_ops = {
};
static const struct v4l2_subdev_pad_ops ccs_pad_ops = {
.init_cfg = ccs_init_cfg,
.enum_mbus_code = ccs_enum_mbus_code,
.get_fmt = ccs_get_format,
.set_fmt = ccs_set_format,
@@ -3141,53 +3079,12 @@ static const struct media_entity_operations ccs_entity_ops = {
static const struct v4l2_subdev_internal_ops ccs_internal_src_ops = {
.registered = ccs_registered,
.unregistered = ccs_unregistered,
.open = ccs_open,
};
static const struct v4l2_subdev_internal_ops ccs_internal_ops = {
.open = ccs_open,
};
/* -----------------------------------------------------------------------------
* I2C Driver
*/
static int __maybe_unused ccs_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *subdev = i2c_get_clientdata(client);
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
bool streaming = sensor->streaming;
int rval;
rval = pm_runtime_resume_and_get(dev);
if (rval < 0)
return rval;
if (sensor->streaming)
ccs_stop_streaming(sensor);
/* save state for resume */
sensor->streaming = streaming;
return 0;
}
static int __maybe_unused ccs_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *subdev = i2c_get_clientdata(client);
struct ccs_sensor *sensor = to_ccs_sensor(subdev);
int rval = 0;
pm_runtime_put(dev);
if (sensor->streaming)
rval = ccs_start_streaming(sensor);
return rval;
}
static int ccs_get_hwconfig(struct ccs_sensor *sensor, struct device *dev)
{
struct ccs_hwconfig *hwcfg = &sensor->hwcfg;
@@ -3311,6 +3208,8 @@ static int ccs_firmware_name(struct i2c_client *client,
static int ccs_probe(struct i2c_client *client)
{
static struct lock_class_key pixel_array_lock_key, binner_lock_key,
scaler_lock_key;
const struct ccs_device *ccsdev = device_get_match_data(&client->dev);
struct ccs_sensor *sensor;
const struct firmware *fw;
@@ -3587,12 +3486,27 @@ static int ccs_probe(struct i2c_client *client)
sensor->pll.ext_clk_freq_hz = sensor->hwcfg.ext_clk;
sensor->pll.scale_n = CCS_LIM(sensor, SCALER_N_MIN);
ccs_create_subdev(sensor, sensor->scaler, " scaler", 2,
MEDIA_ENT_F_PROC_VIDEO_SCALER);
ccs_create_subdev(sensor, sensor->binner, " binner", 2,
MEDIA_ENT_F_PROC_VIDEO_SCALER);
ccs_create_subdev(sensor, sensor->pixel_array, " pixel_array", 1,
MEDIA_ENT_F_CAM_SENSOR);
rval = ccs_get_mbus_formats(sensor);
if (rval) {
rval = -ENODEV;
goto out_cleanup;
}
rval = ccs_init_subdev(sensor, sensor->scaler, " scaler", 2,
MEDIA_ENT_F_PROC_VIDEO_SCALER,
"ccs scaler mutex", &scaler_lock_key);
if (rval)
goto out_cleanup;
rval = ccs_init_subdev(sensor, sensor->binner, " binner", 2,
MEDIA_ENT_F_PROC_VIDEO_SCALER,
"ccs binner mutex", &binner_lock_key);
if (rval)
goto out_cleanup;
rval = ccs_init_subdev(sensor, sensor->pixel_array, " pixel_array", 1,
MEDIA_ENT_F_CAM_SENSOR, "ccs pixel array mutex",
&pixel_array_lock_key);
if (rval)
goto out_cleanup;
rval = ccs_init_controls(sensor);
if (rval < 0)
@@ -3602,12 +3516,6 @@ static int ccs_probe(struct i2c_client *client)
if (rval)
goto out_cleanup;
rval = ccs_get_mbus_formats(sensor);
if (rval) {
rval = -ENODEV;
goto out_cleanup;
}
rval = ccs_init_late_controls(sensor);
if (rval) {
rval = -ENODEV;
@@ -3625,14 +3533,9 @@ static int ccs_probe(struct i2c_client *client)
sensor->streaming = false;
sensor->dev_init_done = true;
rval = media_entity_pads_init(&sensor->src->sd.entity, 2,
sensor->src->pads);
if (rval < 0)
goto out_media_entity_cleanup;
rval = ccs_write_msr_regs(sensor);
if (rval)
goto out_media_entity_cleanup;
goto out_cleanup;
pm_runtime_set_active(&client->dev);
pm_runtime_get_noresume(&client->dev);
@@ -3652,9 +3555,6 @@ out_disable_runtime_pm:
pm_runtime_put_noidle(&client->dev);
pm_runtime_disable(&client->dev);
out_media_entity_cleanup:
media_entity_cleanup(&sensor->src->sd.entity);
out_cleanup:
ccs_cleanup(sensor);
@@ -3687,10 +3587,8 @@ static void ccs_remove(struct i2c_client *client)
ccs_power_off(&client->dev);
pm_runtime_set_suspended(&client->dev);
for (i = 0; i < sensor->ssds_used; i++) {
for (i = 0; i < sensor->ssds_used; i++)
v4l2_device_unregister_subdev(&sensor->ssds[i].sd);
media_entity_cleanup(&sensor->ssds[i].sd.entity);
}
ccs_cleanup(sensor);
mutex_destroy(&sensor->mutex);
kfree(sensor->ccs_limits);
@@ -3720,7 +3618,6 @@ static const struct of_device_id ccs_of_table[] = {
MODULE_DEVICE_TABLE(of, ccs_of_table);
static const struct dev_pm_ops ccs_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ccs_suspend, ccs_resume)
SET_RUNTIME_PM_OPS(ccs_power_off, ccs_power_on, NULL)
};


@@ -32,12 +32,10 @@ struct ccs_sensor;
* @reg: Pointer to the register to access
* @value: Register value, set by the caller on write, or
* by the quirk on read
*
* @flags: Quirk flags
*
* @return: 0 on success, -ENOIOCTLCMD if no register
* access may be done by the caller (default read
* value is zero), else negative error code on error
* @flags: Quirk flags
*/
struct ccs_quirk {
int (*limits)(struct ccs_sensor *sensor);

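The kernel-doc above defines a three-way return convention for quirk register access: 0, -ENOIOCTLCMD ("no register access may be done by the caller, default read value is zero"), or a negative error code. A hedged sketch of a caller honouring that convention — the quirk hook, register numbers, and the interpretation that 0 means "proceed with the normal access" are illustrative assumptions, not the CCS driver's actual code:

```c
#define TOY_ENOIOCTLCMD 515	/* stand-in for the kernel's ENOIOCTLCMD */
#define TOY_EIO 5

/* Hypothetical quirk hook: blocks access to one register, fails hard on
 * another, and passes everything else through. */
static int toy_quirk_reg_access(unsigned int reg)
{
	if (reg == 0x10)
		return -TOY_ENOIOCTLCMD;	/* caller must not touch the bus */
	if (reg == 0x20)
		return -TOY_EIO;		/* hard error */
	return 0;				/* proceed with the normal access */
}

/* Sketch of a read path honouring the convention from the kernel-doc. */
static int toy_read_reg(unsigned int reg, unsigned int *value)
{
	int ret = toy_quirk_reg_access(reg);

	if (ret == -TOY_ENOIOCTLCMD) {
		*value = 0;		/* default read value is zero */
		return 0;
	}
	if (ret < 0)
		return ret;
	*value = 0x55;			/* stand-in for the real bus read */
	return 0;
}
```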

@@ -182,9 +182,6 @@ struct ccs_binning_subtype {
struct ccs_subdev {
struct v4l2_subdev sd;
struct media_pad pads[CCS_PADS];
struct v4l2_rect sink_fmt;
struct v4l2_rect crop[CCS_PADS];
struct v4l2_rect compose; /* compose on sink */
unsigned short sink_pad;
unsigned short source_pad;
int npads;
@@ -220,6 +217,7 @@ struct ccs_sensor {
u32 mbus_frame_fmts;
const struct ccs_csi_data_format *csi_format;
const struct ccs_csi_data_format *internal_csi_format;
struct v4l2_rect pa_src, scaler_sink, src_src;
u32 default_mbus_frame_fmts;
int default_pixel_order;
struct ccs_data_container sdata, mdata;

File diff suppressed because it is too large.


@@ -362,8 +362,6 @@ static int ub913_get_frame_desc(struct v4l2_subdev *sd, unsigned int pad,
if (ret)
return ret;
memset(fd, 0, sizeof(*fd));
fd->type = V4L2_MBUS_FRAME_DESC_TYPE_PARALLEL;
state = v4l2_subdev_lock_and_get_active_state(sd);


@@ -499,8 +499,6 @@ static int ub953_get_frame_desc(struct v4l2_subdev *sd, unsigned int pad,
if (ret)
return ret;
memset(fd, 0, sizeof(*fd));
fd->type = V4L2_MBUS_FRAME_DESC_TYPE_CSI2;
state = v4l2_subdev_lock_and_get_active_state(sd);


@@ -2786,8 +2786,6 @@ static int ub960_get_frame_desc(struct v4l2_subdev *sd, unsigned int pad,
if (!ub960_pad_is_source(priv, pad))
return -EINVAL;
memset(fd, 0, sizeof(*fd));
fd->type = V4L2_MBUS_FRAME_DESC_TYPE_CSI2;
state = v4l2_subdev_lock_and_get_active_state(&priv->sd);


@@ -477,6 +477,50 @@ static const struct hi556_reg mode_1296x972_regs[] = {
{0x0958, 0xbb80},
};
static const struct hi556_reg mode_1296x722_regs[] = {
{0x0a00, 0x0000},
{0x0b0a, 0x8259},
{0x0f30, 0x5b15},
{0x0f32, 0x7167},
{0x004a, 0x0100},
{0x004c, 0x0000},
{0x004e, 0x0100},
{0x000c, 0x0122},
{0x0008, 0x0b00},
{0x005a, 0x0404},
{0x0012, 0x000c},
{0x0018, 0x0a33},
{0x0022, 0x0008},
{0x0028, 0x0017},
{0x0024, 0x0022},
{0x002a, 0x002b},
{0x0026, 0x012a},
{0x002c, 0x06cf},
{0x002e, 0x3311},
{0x0030, 0x3311},
{0x0032, 0x3311},
{0x0006, 0x0814},
{0x0a22, 0x0000},
{0x0a12, 0x0510},
{0x0a14, 0x02d2},
{0x003e, 0x0000},
{0x0074, 0x0812},
{0x0070, 0x0409},
{0x0804, 0x0308},
{0x0806, 0x0100},
{0x0a04, 0x016a},
{0x090c, 0x09c0},
{0x090e, 0x0010},
{0x0902, 0x4319},
{0x0914, 0xc106},
{0x0916, 0x040e},
{0x0918, 0x0304},
{0x091a, 0x0708},
{0x091c, 0x0e06},
{0x091e, 0x0300},
{0x0958, 0xbb80},
};
static const char * const hi556_test_pattern_menu[] = {
"Disabled",
"Solid Colour",
@@ -556,7 +600,25 @@ static const struct hi556_mode supported_modes[] = {
.regs = mode_1296x972_regs,
},
.link_freq_index = HI556_LINK_FREQ_437MHZ_INDEX,
}
},
{
.width = 1296,
.height = 722,
.crop = {
.left = HI556_PIXEL_ARRAY_LEFT,
.top = 250,
.width = HI556_PIXEL_ARRAY_WIDTH,
.height = 1444
},
.fll_def = HI556_FLL_30FPS,
.fll_min = HI556_FLL_30FPS_MIN,
.llp = 0x0b00,
.reg_list = {
.num_of_regs = ARRAY_SIZE(mode_1296x722_regs),
.regs = mode_1296x722_regs,
},
.link_freq_index = HI556_LINK_FREQ_437MHZ_INDEX,
},
};
struct hi556 {
@@ -577,9 +639,6 @@ struct hi556 {
/* To serialize asynchronous callbacks */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
/* True if the device has been identified */
bool identified;
};
@@ -976,9 +1035,6 @@ static int hi556_set_stream(struct v4l2_subdev *sd, int enable)
struct i2c_client *client = v4l2_get_subdevdata(sd);
int ret = 0;
if (hi556->streaming == enable)
return 0;
mutex_lock(&hi556->mutex);
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -998,50 +1054,11 @@ static int hi556_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
hi556->streaming = enable;
mutex_unlock(&hi556->mutex);
return ret;
}
static int __maybe_unused hi556_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct hi556 *hi556 = to_hi556(sd);
mutex_lock(&hi556->mutex);
if (hi556->streaming)
hi556_stop_streaming(hi556);
mutex_unlock(&hi556->mutex);
return 0;
}
static int __maybe_unused hi556_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct hi556 *hi556 = to_hi556(sd);
int ret;
mutex_lock(&hi556->mutex);
if (hi556->streaming) {
ret = hi556_start_streaming(hi556);
if (ret)
goto error;
}
mutex_unlock(&hi556->mutex);
return 0;
error:
hi556_stop_streaming(hi556);
hi556->streaming = 0;
mutex_unlock(&hi556->mutex);
return ret;
}
static int hi556_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
@@ -1331,10 +1348,6 @@ probe_error_v4l2_ctrl_handler_free:
return ret;
}
static const struct dev_pm_ops hi556_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(hi556_suspend, hi556_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id hi556_acpi_ids[] = {
{"INT3537"},
@@ -1347,7 +1360,6 @@ MODULE_DEVICE_TABLE(acpi, hi556_acpi_ids);
static struct i2c_driver hi556_i2c_driver = {
.driver = {
.name = "hi556",
.pm = &hi556_pm_ops,
.acpi_match_table = ACPI_PTR(hi556_acpi_ids),
},
.probe = hi556_probe,


@@ -1607,17 +1607,12 @@ static int hi846_set_stream(struct v4l2_subdev *sd, int enable)
struct i2c_client *client = v4l2_get_subdevdata(sd);
int ret = 0;
if (hi846->streaming == enable)
return 0;
mutex_lock(&hi846->mutex);
if (enable) {
ret = pm_runtime_get_sync(&client->dev);
if (ret < 0) {
pm_runtime_put_noidle(&client->dev);
ret = pm_runtime_resume_and_get(&client->dev);
if (ret)
goto out;
}
ret = hi846_start_streaming(hi846);
}
@@ -1680,9 +1675,6 @@ static int __maybe_unused hi846_suspend(struct device *dev)
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct hi846 *hi846 = to_hi846(sd);
if (hi846->streaming)
hi846_stop_streaming(hi846);
return hi846_power_off(hi846);
}
@@ -1691,26 +1683,8 @@ static int __maybe_unused hi846_resume(struct device *dev)
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct hi846 *hi846 = to_hi846(sd);
int ret;
ret = hi846_power_on(hi846);
if (ret)
return ret;
if (hi846->streaming) {
ret = hi846_start_streaming(hi846);
if (ret) {
dev_err(dev, "%s: start streaming failed: %d\n",
__func__, ret);
goto error;
}
}
return 0;
error:
hi846_power_off(hi846);
return ret;
return hi846_power_on(hi846);
}
static int hi846_set_format(struct v4l2_subdev *sd,
@@ -2173,8 +2147,6 @@ static void hi846_remove(struct i2c_client *client)
}
static const struct dev_pm_ops hi846_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
pm_runtime_force_resume)
SET_RUNTIME_PM_OPS(hi846_suspend, hi846_resume, NULL)
};
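hi846 now maps system sleep onto its runtime-PM callbacks via pm_runtime_force_suspend()/pm_runtime_force_resume(), dropping the bespoke streaming save/restore. A toy model of what those helpers do — heavily simplified; the real helpers also manage usage counts, child devices, and skip the resume when the device was runtime-suspended before sleep:

```c
struct sleep_dev {
	int powered;
	int runtime_suspended;
};

static int sleep_dev_runtime_suspend(struct sleep_dev *d)
{
	d->powered = 0;		/* stand-in for hi846_power_off() */
	return 0;
}

static int sleep_dev_runtime_resume(struct sleep_dev *d)
{
	d->powered = 1;		/* stand-in for hi846_power_on() */
	return 0;
}

/* Toy pm_runtime_force_suspend(): invoke the runtime suspend callback
 * unless the device is already runtime-suspended. */
static int toy_force_suspend(struct sleep_dev *d)
{
	if (!d->runtime_suspended) {
		int ret = sleep_dev_runtime_suspend(d);

		if (ret)
			return ret;
		d->runtime_suspended = 1;
	}
	return 0;
}

/* Toy pm_runtime_force_resume(): power back up on the way out of
 * system sleep. */
static int toy_force_resume(struct sleep_dev *d)
{
	int ret = sleep_dev_runtime_resume(d);

	if (!ret)
		d->runtime_suspended = 0;
	return ret;
}
```

The payoff is one power on/off path shared by runtime PM and system sleep instead of two.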


@@ -2184,9 +2184,6 @@ struct hi847 {
/* To serialize asynchronous callbacks */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
};
static u64 to_pixel_rate(u32 f_index)
@@ -2618,14 +2615,10 @@ static int hi847_set_stream(struct v4l2_subdev *sd, int enable)
struct i2c_client *client = v4l2_get_subdevdata(sd);
int ret = 0;
if (hi847->streaming == enable)
return 0;
mutex_lock(&hi847->mutex);
if (enable) {
ret = pm_runtime_get_sync(&client->dev);
if (ret < 0) {
pm_runtime_put_noidle(&client->dev);
ret = pm_runtime_resume_and_get(&client->dev);
if (ret) {
mutex_unlock(&hi847->mutex);
return ret;
}
@@ -2641,52 +2634,11 @@ static int hi847_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
hi847->streaming = enable;
mutex_unlock(&hi847->mutex);
return ret;
}
static int __maybe_unused hi847_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct hi847 *hi847 = to_hi847(sd);
mutex_lock(&hi847->mutex);
if (hi847->streaming)
hi847_stop_streaming(hi847);
mutex_unlock(&hi847->mutex);
return 0;
}
static int __maybe_unused hi847_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct hi847 *hi847 = to_hi847(sd);
int ret;
mutex_lock(&hi847->mutex);
if (hi847->streaming) {
ret = hi847_start_streaming(hi847);
if (ret)
goto error;
}
mutex_unlock(&hi847->mutex);
return 0;
error:
hi847_stop_streaming(hi847);
hi847->streaming = 0;
mutex_unlock(&hi847->mutex);
return ret;
}
static int hi847_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
@@ -2980,10 +2932,6 @@ probe_error_v4l2_ctrl_handler_free:
return ret;
}
static const struct dev_pm_ops hi847_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(hi847_suspend, hi847_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id hi847_acpi_ids[] = {
{"HYV0847"},
@@ -2996,7 +2944,6 @@ MODULE_DEVICE_TABLE(acpi, hi847_acpi_ids);
static struct i2c_driver hi847_i2c_driver = {
.driver = {
.name = "hi847",
.pm = &hi847_pm_ops,
.acpi_match_table = ACPI_PTR(hi847_acpi_ids),
},
.probe = hi847_probe,


@@ -290,9 +290,6 @@ struct imx208 {
*/
struct mutex imx208_mx;
/* Streaming on/off */
bool streaming;
/* OTP data */
bool otp_read;
char otp_data[IMX208_OTP_SIZE];
@@ -714,15 +711,13 @@ static int imx208_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&imx208->imx208_mx);
if (imx208->streaming == enable) {
mutex_unlock(&imx208->imx208_mx);
return 0;
}
if (enable) {
ret = pm_runtime_get_sync(&client->dev);
if (ret < 0)
goto err_rpm_put;
ret = pm_runtime_resume_and_get(&client->dev);
if (ret) {
mutex_unlock(&imx208->imx208_mx);
return ret;
}
/*
* Apply default & customized values
@@ -736,7 +731,6 @@ static int imx208_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
imx208->streaming = enable;
mutex_unlock(&imx208->imx208_mx);
/* vflip and hflip cannot change during streaming */
@@ -752,40 +746,6 @@ err_rpm_put:
return ret;
}
static int __maybe_unused imx208_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct imx208 *imx208 = to_imx208(sd);
if (imx208->streaming)
imx208_stop_streaming(imx208);
return 0;
}
static int __maybe_unused imx208_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct imx208 *imx208 = to_imx208(sd);
int ret;
if (imx208->streaming) {
ret = imx208_start_streaming(imx208);
if (ret)
goto error;
}
return 0;
error:
imx208_stop_streaming(imx208);
imx208->streaming = 0;
return ret;
}
/* Verify chip ID */
static const struct v4l2_subdev_video_ops imx208_video_ops = {
.s_stream = imx208_set_stream,
@@ -819,11 +779,9 @@ static int imx208_read_otp(struct imx208 *imx208)
if (imx208->otp_read)
goto out_unlock;
ret = pm_runtime_get_sync(&client->dev);
if (ret < 0) {
pm_runtime_put_noidle(&client->dev);
ret = pm_runtime_resume_and_get(&client->dev);
if (ret)
goto out_unlock;
}
ret = imx208_identify_module(imx208);
if (ret)
@@ -1081,10 +1039,6 @@ static void imx208_remove(struct i2c_client *client)
mutex_destroy(&imx208->imx208_mx);
}
static const struct dev_pm_ops imx208_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(imx208_suspend, imx208_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id imx208_acpi_ids[] = {
{ "INT3478" },
@@ -1097,7 +1051,6 @@ MODULE_DEVICE_TABLE(acpi, imx208_acpi_ids);
static struct i2c_driver imx208_i2c_driver = {
.driver = {
.name = "imx208",
.pm = &imx208_pm_ops,
.acpi_match_table = ACPI_PTR(imx208_acpi_ids),
},
.probe = imx208_probe,


@@ -58,8 +58,6 @@ struct imx214 {
* and start streaming.
*/
struct mutex mutex;
bool streaming;
};
struct reg_8 {
@@ -775,9 +773,6 @@ static int imx214_s_stream(struct v4l2_subdev *subdev, int enable)
struct imx214 *imx214 = to_imx214(subdev);
int ret;
if (imx214->streaming == enable)
return 0;
if (enable) {
ret = pm_runtime_resume_and_get(imx214->dev);
if (ret < 0)
@@ -793,7 +788,6 @@ static int imx214_s_stream(struct v4l2_subdev *subdev, int enable)
pm_runtime_put(imx214->dev);
}
imx214->streaming = enable;
return 0;
err_rpm_put:
@@ -909,39 +903,6 @@ done:
return ret;
}
static int __maybe_unused imx214_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct imx214 *imx214 = to_imx214(sd);
if (imx214->streaming)
imx214_stop_streaming(imx214);
return 0;
}
static int __maybe_unused imx214_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct imx214 *imx214 = to_imx214(sd);
int ret;
if (imx214->streaming) {
ret = imx214_start_streaming(imx214);
if (ret)
goto error;
}
return 0;
error:
imx214_stop_streaming(imx214);
imx214->streaming = 0;
return ret;
}
static int imx214_probe(struct i2c_client *client)
{
struct device *dev = &client->dev;
@@ -1102,7 +1063,6 @@ static const struct of_device_id imx214_of_match[] = {
MODULE_DEVICE_TABLE(of, imx214_of_match);
static const struct dev_pm_ops imx214_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(imx214_suspend, imx214_resume)
SET_RUNTIME_PM_OPS(imx214_power_off, imx214_power_on, NULL)
};

File diff suppressed because it is too large.


@@ -622,9 +622,6 @@ struct imx258 {
*/
struct mutex mutex;
/* Streaming on/off */
bool streaming;
struct clk *clk;
};
@@ -1035,10 +1032,6 @@ static int imx258_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&imx258->mutex);
if (imx258->streaming == enable) {
mutex_unlock(&imx258->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -1057,7 +1050,6 @@ static int imx258_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
imx258->streaming = enable;
mutex_unlock(&imx258->mutex);
return ret;
@@ -1070,37 +1062,6 @@ err_unlock:
return ret;
}
static int __maybe_unused imx258_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct imx258 *imx258 = to_imx258(sd);
if (imx258->streaming)
imx258_stop_streaming(imx258);
return 0;
}
static int __maybe_unused imx258_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct imx258 *imx258 = to_imx258(sd);
int ret;
if (imx258->streaming) {
ret = imx258_start_streaming(imx258);
if (ret)
goto error;
}
return 0;
error:
imx258_stop_streaming(imx258);
imx258->streaming = 0;
return ret;
}
/* Verify chip ID */
static int imx258_identify_module(struct imx258 *imx258)
{
@@ -1369,7 +1330,6 @@ static void imx258_remove(struct i2c_client *client)
}
static const struct dev_pm_ops imx258_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(imx258_suspend, imx258_resume)
SET_RUNTIME_PM_OPS(imx258_power_off, imx258_power_on, NULL)
};


@@ -201,8 +201,6 @@ struct imx296 {
const struct imx296_clk_params *clk_params;
bool mono;
bool streaming;
struct v4l2_subdev subdev;
struct media_pad pad;
@@ -321,7 +319,7 @@ static int imx296_s_ctrl(struct v4l2_ctrl *ctrl)
unsigned int vmax;
int ret = 0;
if (!sensor->streaming)
if (!pm_runtime_get_if_in_use(sensor->dev))
return 0;
state = v4l2_subdev_get_locked_active_state(&sensor->subdev);
@@ -376,6 +374,8 @@ static int imx296_s_ctrl(struct v4l2_ctrl *ctrl)
break;
}
pm_runtime_put(sensor->dev);
return ret;
}
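imx296 (and imx415 below) replace the driver-private streaming flag in s_ctrl with pm_runtime_get_if_in_use(): when the sensor is powered down there is nothing to write, and __v4l2_ctrl_handler_setup() replays the controls at stream-on. A toy usage-count model of that gating — names echo the kernel API but the behaviour is simplified for illustration:

```c
struct ctrl_dev {
	int usage_count;	/* toy runtime-PM usage count */
	int reg_writes;		/* control writes that reached the "hardware" */
};

/* Toy pm_runtime_get_if_in_use(): take a reference only when the device
 * is already powered; return 0 (no reference taken) when suspended. */
static int toy_get_if_in_use(struct ctrl_dev *d)
{
	if (d->usage_count == 0)
		return 0;
	d->usage_count++;
	return 1;
}

static void toy_put(struct ctrl_dev *d)
{
	d->usage_count--;
}

/* Sketch of the s_ctrl gating: skip the register write while the sensor
 * is powered down instead of tracking a separate streaming flag. */
static int toy_s_ctrl(struct ctrl_dev *d)
{
	if (!toy_get_if_in_use(d))
		return 0;	/* suspended: cache the value, write later */
	d->reg_writes++;	/* apply the control to the hardware */
	toy_put(d);
	return 0;
}
```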
@@ -607,8 +607,6 @@ static int imx296_s_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_mark_last_busy(sensor->dev);
pm_runtime_put_autosuspend(sensor->dev);
sensor->streaming = false;
goto unlock;
}
@@ -620,13 +618,6 @@ static int imx296_s_stream(struct v4l2_subdev *sd, int enable)
if (ret < 0)
goto err_pm;
/*
* Set streaming to true to ensure __v4l2_ctrl_handler_setup() will set
* the controls. The flag is reset to false further down if an error
* occurs.
*/
sensor->streaming = true;
ret = __v4l2_ctrl_handler_setup(&sensor->ctrls);
if (ret < 0)
goto err_pm;
@@ -646,7 +637,6 @@ err_pm:
* likely has no other chance to recover.
*/
pm_runtime_put_sync(sensor->dev);
sensor->streaming = false;
goto unlock;
}


@@ -138,8 +138,6 @@ struct imx319 {
*/
struct mutex mutex;
/* Streaming on/off */
bool streaming;
/* True if the device has been identified */
bool identified;
};
@@ -2166,10 +2164,6 @@ static int imx319_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&imx319->mutex);
if (imx319->streaming == enable) {
mutex_unlock(&imx319->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -2188,8 +2182,6 @@ static int imx319_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
imx319->streaming = enable;
/* vflip and hflip cannot change during streaming */
__v4l2_ctrl_grab(imx319->vflip, enable);
__v4l2_ctrl_grab(imx319->hflip, enable);
@@ -2206,37 +2198,6 @@ err_unlock:
return ret;
}
static int __maybe_unused imx319_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct imx319 *imx319 = to_imx319(sd);
if (imx319->streaming)
imx319_stop_streaming(imx319);
return 0;
}
static int __maybe_unused imx319_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct imx319 *imx319 = to_imx319(sd);
int ret;
if (imx319->streaming) {
ret = imx319_start_streaming(imx319);
if (ret)
goto error;
}
return 0;
error:
imx319_stop_streaming(imx319);
imx319->streaming = 0;
return ret;
}
static const struct v4l2_subdev_core_ops imx319_subdev_core_ops = {
.subscribe_event = v4l2_ctrl_subdev_subscribe_event,
.unsubscribe_event = v4l2_event_subdev_unsubscribe,
@@ -2542,10 +2503,6 @@ static void imx319_remove(struct i2c_client *client)
mutex_destroy(&imx319->mutex);
}
static const struct dev_pm_ops imx319_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(imx319_suspend, imx319_resume)
};
static const struct acpi_device_id imx319_acpi_ids[] __maybe_unused = {
{ "SONY319A" },
{ /* sentinel */ }
@@ -2555,7 +2512,6 @@ MODULE_DEVICE_TABLE(acpi, imx319_acpi_ids);
static struct i2c_driver imx319_i2c_driver = {
.driver = {
.name = "imx319",
.pm = &imx319_pm_ops,
.acpi_match_table = ACPI_PTR(imx319_acpi_ids),
},
.probe = imx319_probe,


@@ -56,6 +56,24 @@
#define IMX334_REG_MIN 0x00
#define IMX334_REG_MAX 0xfffff
/* Test Pattern Control */
#define IMX334_REG_TP 0x329e
#define IMX334_TP_COLOR_HBARS 0xA
#define IMX334_TP_COLOR_VBARS 0xB
#define IMX334_TPG_EN_DOUT 0x329c
#define IMX334_TP_ENABLE 0x1
#define IMX334_TP_DISABLE 0x0
#define IMX334_TPG_COLORW 0x32a0
#define IMX334_TPG_COLORW_120P 0x13
#define IMX334_TP_CLK_EN 0x3148
#define IMX334_TP_CLK_EN_VAL 0x10
#define IMX334_TP_CLK_DIS_VAL 0x0
#define IMX334_DIG_CLP_MODE 0x3280
/**
* struct imx334_reg - imx334 sensor register
* @address: Register address
@@ -120,7 +138,6 @@ struct imx334_mode {
* @mutex: Mutex for serializing sensor controls
* @menu_skip_mask: Menu skip mask for link_freq_ctrl
* @cur_code: current selected format code
* @streaming: Flag indicating streaming state
*/
struct imx334 {
struct device *dev;
@@ -143,7 +160,6 @@ struct imx334 {
struct mutex mutex;
unsigned long menu_skip_mask;
u32 cur_code;
bool streaming;
};
static const s64 link_freq[] = {
@@ -430,6 +446,18 @@ static const struct imx334_reg mode_3840x2160_regs[] = {
{0x3a29, 0x00},
};
static const char * const imx334_test_pattern_menu[] = {
"Disabled",
"Vertical Color Bars",
"Horizontal Color Bars",
};
static const int imx334_test_pattern_val[] = {
IMX334_TP_DISABLE,
IMX334_TP_COLOR_HBARS,
IMX334_TP_COLOR_VBARS,
};
static const struct imx334_reg raw10_framefmt_regs[] = {
{0x3050, 0x00},
{0x319d, 0x00},
@@ -716,6 +744,26 @@ static int imx334_set_ctrl(struct v4l2_ctrl *ctrl)
case V4L2_CID_HBLANK:
ret = 0;
break;
case V4L2_CID_TEST_PATTERN:
if (ctrl->val) {
imx334_write_reg(imx334, IMX334_TP_CLK_EN, 1,
IMX334_TP_CLK_EN_VAL);
imx334_write_reg(imx334, IMX334_DIG_CLP_MODE, 1, 0x0);
imx334_write_reg(imx334, IMX334_TPG_COLORW, 1,
IMX334_TPG_COLORW_120P);
imx334_write_reg(imx334, IMX334_REG_TP, 1,
imx334_test_pattern_val[ctrl->val]);
imx334_write_reg(imx334, IMX334_TPG_EN_DOUT, 1,
IMX334_TP_ENABLE);
} else {
imx334_write_reg(imx334, IMX334_DIG_CLP_MODE, 1, 0x1);
imx334_write_reg(imx334, IMX334_TP_CLK_EN, 1,
IMX334_TP_CLK_DIS_VAL);
imx334_write_reg(imx334, IMX334_TPG_EN_DOUT, 1,
IMX334_TP_DISABLE);
}
ret = 0;
break;
default:
dev_err(imx334->dev, "Invalid control %d", ctrl->id);
ret = -EINVAL;
@@ -1001,11 +1049,6 @@ static int imx334_set_stream(struct v4l2_subdev *sd, int enable)
mutex_lock(&imx334->mutex);
if (imx334->streaming == enable) {
mutex_unlock(&imx334->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(imx334->dev);
if (ret < 0)
@@ -1019,8 +1062,6 @@ static int imx334_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(imx334->dev);
}
imx334->streaming = enable;
mutex_unlock(&imx334->mutex);
return 0;
@@ -1222,7 +1263,7 @@ static int imx334_init_controls(struct imx334 *imx334)
u32 lpfr;
int ret;
ret = v4l2_ctrl_handler_init(ctrl_hdlr, 6);
ret = v4l2_ctrl_handler_init(ctrl_hdlr, 7);
if (ret)
return ret;
@@ -1282,6 +1323,11 @@ static int imx334_init_controls(struct imx334 *imx334)
if (imx334->hblank_ctrl)
imx334->hblank_ctrl->flags |= V4L2_CTRL_FLAG_READ_ONLY;
v4l2_ctrl_new_std_menu_items(ctrl_hdlr, &imx334_ctrl_ops,
V4L2_CID_TEST_PATTERN,
ARRAY_SIZE(imx334_test_pattern_menu) - 1,
0, 0, imx334_test_pattern_menu);
if (ctrl_hdlr->error) {
dev_err(imx334->dev, "control init failed: %d",
ctrl_hdlr->error);


@@ -119,7 +119,6 @@ struct imx335_mode {
* @vblank: Vertical blanking in lines
* @cur_mode: Pointer to current selected sensor mode
* @mutex: Mutex for serializing sensor controls
* @streaming: Flag indicating streaming state
*/
struct imx335 {
struct device *dev;
@@ -140,7 +139,6 @@ struct imx335 {
u32 vblank;
const struct imx335_mode *cur_mode;
struct mutex mutex;
bool streaming;
};
static const s64 link_freq[] = {
@@ -705,11 +703,6 @@ static int imx335_set_stream(struct v4l2_subdev *sd, int enable)
mutex_lock(&imx335->mutex);
if (imx335->streaming == enable) {
mutex_unlock(&imx335->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(imx335->dev);
if (ret)
@@ -723,8 +716,6 @@ static int imx335_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(imx335->dev);
}
imx335->streaming = enable;
mutex_unlock(&imx335->mutex);
return 0;


@@ -123,9 +123,6 @@ struct imx355 {
* Protect access to sensor v4l2 controls.
*/
struct mutex mutex;
/* Streaming on/off */
bool streaming;
};
static const struct imx355_reg imx355_global_regs[] = {
@@ -1436,10 +1433,6 @@ static int imx355_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&imx355->mutex);
if (imx355->streaming == enable) {
mutex_unlock(&imx355->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -1458,8 +1451,6 @@ static int imx355_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
imx355->streaming = enable;
/* vflip and hflip cannot change during streaming */
__v4l2_ctrl_grab(imx355->vflip, enable);
__v4l2_ctrl_grab(imx355->hflip, enable);
@@ -1476,37 +1467,6 @@ err_unlock:
return ret;
}
static int __maybe_unused imx355_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct imx355 *imx355 = to_imx355(sd);
if (imx355->streaming)
imx355_stop_streaming(imx355);
return 0;
}
static int __maybe_unused imx355_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct imx355 *imx355 = to_imx355(sd);
int ret;
if (imx355->streaming) {
ret = imx355_start_streaming(imx355);
if (ret)
goto error;
}
return 0;
error:
imx355_stop_streaming(imx355);
imx355->streaming = 0;
return ret;
}
/* Verify chip ID */
static int imx355_identify_module(struct imx355 *imx355)
{
@@ -1829,10 +1789,6 @@ static void imx355_remove(struct i2c_client *client)
mutex_destroy(&imx355->mutex);
}
static const struct dev_pm_ops imx355_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(imx355_suspend, imx355_resume)
};
static const struct acpi_device_id imx355_acpi_ids[] __maybe_unused = {
{ "SONY355A" },
{ /* sentinel */ }
@@ -1842,7 +1798,6 @@ MODULE_DEVICE_TABLE(acpi, imx355_acpi_ids);
static struct i2c_driver imx355_i2c_driver = {
.driver = {
.name = "imx355",
.pm = &imx355_pm_ops,
.acpi_match_table = ACPI_PTR(imx355_acpi_ids),
},
.probe = imx355_probe,


@@ -127,7 +127,6 @@ static const char * const imx412_supply_names[] = {
* @vblank: Vertical blanking in lines
* @cur_mode: Pointer to current selected sensor mode
* @mutex: Mutex for serializing sensor controls
* @streaming: Flag indicating streaming state
*/
struct imx412 {
struct device *dev;
@@ -149,7 +148,6 @@ struct imx412 {
u32 vblank;
const struct imx412_mode *cur_mode;
struct mutex mutex;
bool streaming;
};
static const s64 link_freq[] = {
@@ -857,11 +855,6 @@ static int imx412_set_stream(struct v4l2_subdev *sd, int enable)
mutex_lock(&imx412->mutex);
if (imx412->streaming == enable) {
mutex_unlock(&imx412->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(imx412->dev);
if (ret)
@@ -875,8 +868,6 @@ static int imx412_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(imx412->dev);
}
imx412->streaming = enable;
mutex_unlock(&imx412->mutex);
return 0;


@@ -353,8 +353,6 @@ struct imx415 {
const struct imx415_clk_params *clk_params;
bool streaming;
struct v4l2_subdev subdev;
struct media_pad pad;
@@ -542,8 +540,9 @@ static int imx415_s_ctrl(struct v4l2_ctrl *ctrl)
struct v4l2_subdev_state *state;
unsigned int vmax;
unsigned int flip;
int ret;
if (!sensor->streaming)
if (!pm_runtime_get_if_in_use(sensor->dev))
return 0;
state = v4l2_subdev_get_locked_active_state(&sensor->subdev);
@@ -554,24 +553,33 @@
/* clamp the exposure value to VMAX. */
vmax = format->height + sensor->vblank->cur.val;
ctrl->val = min_t(int, ctrl->val, vmax);
return imx415_write(sensor, IMX415_SHR0, vmax - ctrl->val);
ret = imx415_write(sensor, IMX415_SHR0, vmax - ctrl->val);
break;
case V4L2_CID_ANALOGUE_GAIN:
/* analogue gain in 0.3 dB step size */
return imx415_write(sensor, IMX415_GAIN_PCG_0, ctrl->val);
ret = imx415_write(sensor, IMX415_GAIN_PCG_0, ctrl->val);
break;
case V4L2_CID_HFLIP:
case V4L2_CID_VFLIP:
flip = (sensor->hflip->val << IMX415_HREVERSE_SHIFT) |
(sensor->vflip->val << IMX415_VREVERSE_SHIFT);
return imx415_write(sensor, IMX415_REVERSE, flip);
ret = imx415_write(sensor, IMX415_REVERSE, flip);
break;
case V4L2_CID_TEST_PATTERN:
return imx415_set_testpattern(sensor, ctrl->val);
ret = imx415_set_testpattern(sensor, ctrl->val);
break;
default:
return -EINVAL;
ret = -EINVAL;
break;
}
pm_runtime_put(sensor->dev);
return ret;
}
static const struct v4l2_ctrl_ops imx415_ctrl_ops = {
@@ -766,8 +774,6 @@ static int imx415_s_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_mark_last_busy(sensor->dev);
pm_runtime_put_autosuspend(sensor->dev);
sensor->streaming = false;
goto unlock;
}
@@ -779,13 +785,6 @@
if (ret)
goto err_pm;
/*
* Set streaming to true to ensure __v4l2_ctrl_handler_setup() will set
* the controls. The flag is reset to false further down if an error
* occurs.
*/
sensor->streaming = true;
ret = __v4l2_ctrl_handler_setup(&sensor->ctrls);
if (ret < 0)
goto err_pm;
@@ -807,7 +806,6 @@ err_pm:
* likely has no other chance to recover.
*/
pm_runtime_put_sync(sensor->dev);
sensor->streaming = false;
goto unlock;
}
@@ -842,15 +840,6 @@ static int imx415_enum_frame_size(struct v4l2_subdev *sd,
return 0;
}
static int imx415_get_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *state,
struct v4l2_subdev_format *fmt)
{
fmt->format = *v4l2_subdev_get_pad_format(sd, state, fmt->pad);
return 0;
}
static int imx415_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *state,
struct v4l2_subdev_format *fmt)
@@ -913,7 +902,7 @@ static const struct v4l2_subdev_video_ops imx415_subdev_video_ops = {
static const struct v4l2_subdev_pad_ops imx415_subdev_pad_ops = {
.enum_mbus_code = imx415_enum_mbus_code,
.enum_frame_size = imx415_enum_frame_size,
.get_fmt = imx415_get_format,
.get_fmt = v4l2_subdev_get_fmt,
.set_fmt = imx415_set_format,
.get_selection = imx415_get_selection,
.init_cfg = imx415_init_cfg,


@@ -1449,7 +1449,6 @@ static int max9286_parse_dt(struct max9286_priv *priv)
i2c_mux_mask |= BIT(id);
}
of_node_put(node);
of_node_put(i2c_mux);
/* Parse the endpoints */
@@ -1513,7 +1512,6 @@
priv->source_mask |= BIT(ep.port);
priv->nsources++;
}
of_node_put(node);
of_property_read_u32(dev->of_node, "maxim,bus-width", &priv->bus_width);
switch (priv->bus_width) {


@@ -561,7 +561,7 @@ static int msp_log_status(struct v4l2_subdev *sd)
struct msp_state *state = to_state(sd);
struct i2c_client *client = v4l2_get_subdevdata(sd);
const char *p;
char prefix[V4L2_SUBDEV_NAME_SIZE + 20];
char prefix[sizeof(sd->name) + 20];
if (state->opmode == OPMODE_AUTOSELECT)
msp_detect_stereo(client);


@@ -93,7 +93,6 @@ struct mt9m001 {
struct v4l2_ctrl *autoexposure;
struct v4l2_ctrl *exposure;
};
bool streaming;
struct mutex mutex;
struct v4l2_rect rect; /* Sensor window */
struct clk *clk;
@@ -213,9 +212,6 @@ static int mt9m001_s_stream(struct v4l2_subdev *sd, int enable)
mutex_lock(&mt9m001->mutex);
if (mt9m001->streaming == enable)
goto done;
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
if (ret < 0)
@@ -239,8 +235,6 @@ static int mt9m001_s_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
mt9m001->streaming = enable;
done:
mutex_unlock(&mt9m001->mutex);
return 0;


@@ -244,9 +244,7 @@ struct mt9m111 {
bool is_streaming;
/* user point of view - 0: falling 1: rising edge */
unsigned int pclk_sample:1;
#ifdef CONFIG_MEDIA_CONTROLLER
struct media_pad pad;
#endif
};
static const struct mt9m111_mode_info mt9m111_mode_data[MT9M111_NUM_MODES] = {
@@ -527,13 +525,9 @@ static int mt9m111_get_fmt(struct v4l2_subdev *sd,
return -EINVAL;
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
mf = v4l2_subdev_get_try_format(sd, sd_state, format->pad);
format->format = *mf;
return 0;
#else
return -EINVAL;
#endif
}
mf->width = mt9m111->width;
@@ -1120,7 +1114,6 @@ static int mt9m111_s_stream(struct v4l2_subdev *sd, int enable)
static int mt9m111_init_cfg(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state)
{
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
struct v4l2_mbus_framefmt *format =
v4l2_subdev_get_try_format(sd, sd_state, 0);
@@ -1132,7 +1125,7 @@ static int mt9m111_init_cfg(struct v4l2_subdev *sd,
format->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
format->quantization = V4L2_QUANTIZATION_DEFAULT;
format->xfer_func = V4L2_XFER_FUNC_DEFAULT;
#endif
return 0;
}
@@ -1315,13 +1308,11 @@ static int mt9m111_probe(struct i2c_client *client)
return ret;
}
#ifdef CONFIG_MEDIA_CONTROLLER
mt9m111->pad.flags = MEDIA_PAD_FL_SOURCE;
mt9m111->subdev.entity.function = MEDIA_ENT_F_CAM_SENSOR;
ret = media_entity_pads_init(&mt9m111->subdev.entity, 1, &mt9m111->pad);
if (ret < 0)
goto out_hdlfree;
#endif
mt9m111->current_mode = &mt9m111_mode_data[MT9M111_MODE_SXGA_15FPS];
mt9m111->frame_interval.numerator = 1;
@@ -1350,10 +1341,8 @@ static int mt9m111_probe(struct i2c_client *client)
return 0;
out_entityclean:
#ifdef CONFIG_MEDIA_CONTROLLER
media_entity_cleanup(&mt9m111->subdev.entity);
out_hdlfree:
#endif
v4l2_ctrl_handler_free(&mt9m111->hdl);
return ret;

drivers/media/i2c/mt9m114.c: new file, 2481 lines (diff suppressed because it is too large)


@@ -49,9 +49,7 @@ MODULE_PARM_DESC(debug, "Debug level (0-2)");
struct mt9v011 {
struct v4l2_subdev sd;
#ifdef CONFIG_MEDIA_CONTROLLER
struct media_pad pad;
#endif
struct v4l2_ctrl_handler ctrls;
unsigned width, height;
unsigned xtal;
@@ -483,9 +481,7 @@ static int mt9v011_probe(struct i2c_client *c)
u16 version;
struct mt9v011 *core;
struct v4l2_subdev *sd;
#ifdef CONFIG_MEDIA_CONTROLLER
int ret;
#endif
/* Check if the adapter supports the needed features */
if (!i2c_check_functionality(c->adapter,
@@ -499,14 +495,12 @@ static int mt9v011_probe(struct i2c_client *c)
sd = &core->sd;
v4l2_i2c_subdev_init(sd, c, &mt9v011_ops);
#ifdef CONFIG_MEDIA_CONTROLLER
core->pad.flags = MEDIA_PAD_FL_SOURCE;
sd->entity.function = MEDIA_ENT_F_CAM_SENSOR;
ret = media_entity_pads_init(&sd->entity, 1, &core->pad);
if (ret < 0)
return ret;
#endif
/* Check if the sensor is really a MT9V011 */
version = mt9v011_read(sd, R00_MT9V011_CHIP_VERSION);


@@ -14,6 +14,7 @@
#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/log2.h>
#include <linux/mod_devicetable.h>
#include <linux/mutex.h>
#include <linux/of.h>
#include <linux/of_graph.h>
@@ -1046,7 +1047,6 @@ done:
static int mt9v032_probe(struct i2c_client *client)
{
const struct i2c_device_id *did = i2c_client_get_device_id(client);
struct mt9v032_platform_data *pdata = mt9v032_get_pdata(client);
struct mt9v032 *mt9v032;
unsigned int i;
@@ -1076,7 +1076,7 @@ static int mt9v032_probe(struct i2c_client *client)
mutex_init(&mt9v032->power_lock);
mt9v032->pdata = pdata;
mt9v032->model = (const void *)did->driver_data;
mt9v032->model = i2c_get_match_data(client);
v4l2_ctrl_handler_init(&mt9v032->ctrls, 11 +
ARRAY_SIZE(mt9v032_aegc_controls));
@@ -1272,29 +1272,27 @@ static const struct i2c_device_id mt9v032_id[] = {
{ "mt9v032m", (kernel_ulong_t)&mt9v032_models[MT9V032_MODEL_V032_MONO] },
{ "mt9v034", (kernel_ulong_t)&mt9v032_models[MT9V032_MODEL_V034_COLOR] },
{ "mt9v034m", (kernel_ulong_t)&mt9v032_models[MT9V032_MODEL_V034_MONO] },
{ }
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(i2c, mt9v032_id);
#if IS_ENABLED(CONFIG_OF)
static const struct of_device_id mt9v032_of_match[] = {
{ .compatible = "aptina,mt9v022" },
{ .compatible = "aptina,mt9v022m" },
{ .compatible = "aptina,mt9v024" },
{ .compatible = "aptina,mt9v024m" },
{ .compatible = "aptina,mt9v032" },
{ .compatible = "aptina,mt9v032m" },
{ .compatible = "aptina,mt9v034" },
{ .compatible = "aptina,mt9v034m" },
{ .compatible = "aptina,mt9v022", .data = &mt9v032_models[MT9V032_MODEL_V022_COLOR] },
{ .compatible = "aptina,mt9v022m", .data = &mt9v032_models[MT9V032_MODEL_V022_MONO] },
{ .compatible = "aptina,mt9v024", .data = &mt9v032_models[MT9V032_MODEL_V024_COLOR] },
{ .compatible = "aptina,mt9v024m", .data = &mt9v032_models[MT9V032_MODEL_V024_MONO] },
{ .compatible = "aptina,mt9v032", .data = &mt9v032_models[MT9V032_MODEL_V032_COLOR] },
{ .compatible = "aptina,mt9v032m", .data = &mt9v032_models[MT9V032_MODEL_V032_MONO] },
{ .compatible = "aptina,mt9v034", .data = &mt9v032_models[MT9V032_MODEL_V034_COLOR] },
{ .compatible = "aptina,mt9v034m", .data = &mt9v032_models[MT9V032_MODEL_V034_MONO] },
{ /* Sentinel */ }
};
MODULE_DEVICE_TABLE(of, mt9v032_of_match);
#endif
static struct i2c_driver mt9v032_driver = {
.driver = {
.name = "mt9v032",
.of_match_table = of_match_ptr(mt9v032_of_match),
.of_match_table = mt9v032_of_match,
},
.probe = mt9v032_probe,
.remove = mt9v032_remove,


@@ -121,9 +121,7 @@ struct mt9v111_dev {
u8 addr_space;
struct v4l2_subdev sd;
#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)
struct media_pad pad;
#endif
struct v4l2_ctrl *auto_awb;
struct v4l2_ctrl *auto_exp;
@@ -797,11 +795,7 @@ static struct v4l2_mbus_framefmt *__mt9v111_get_pad_format(
{
switch (which) {
case V4L2_SUBDEV_FORMAT_TRY:
#if IS_ENABLED(CONFIG_VIDEO_V4L2_SUBDEV_API)
return v4l2_subdev_get_try_format(&mt9v111->sd, sd_state, pad);
#else
return &sd_state->pads->try_fmt;
#endif
case V4L2_SUBDEV_FORMAT_ACTIVE:
return &mt9v111->fmt;
default:
@@ -987,11 +981,9 @@ static const struct v4l2_subdev_ops mt9v111_ops = {
.pad = &mt9v111_pad_ops,
};
#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)
static const struct media_entity_operations mt9v111_subdev_entity_ops = {
.link_validate = v4l2_subdev_link_validate,
};
#endif
/* --- V4L2 ctrl --- */
static int mt9v111_s_ctrl(struct v4l2_ctrl *ctrl)
@@ -1203,7 +1195,6 @@ static int mt9v111_probe(struct i2c_client *client)
v4l2_i2c_subdev_init(&mt9v111->sd, client, &mt9v111_ops);
#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)
mt9v111->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
mt9v111->sd.entity.ops = &mt9v111_subdev_entity_ops;
mt9v111->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
@@ -1212,7 +1203,6 @@ static int mt9v111_probe(struct i2c_client *client)
ret = media_entity_pads_init(&mt9v111->sd.entity, 1, &mt9v111->pad);
if (ret)
goto error_free_entity;
#endif
ret = mt9v111_chip_probe(mt9v111);
if (ret)
@@ -1225,9 +1215,7 @@ static int mt9v111_probe(struct i2c_client *client)
return 0;
error_free_entity:
#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)
media_entity_cleanup(&mt9v111->sd.entity);
#endif
error_free_ctrls:
v4l2_ctrl_handler_free(&mt9v111->ctrls);
@@ -1245,9 +1233,7 @@ static void mt9v111_remove(struct i2c_client *client)
v4l2_async_unregister_subdev(sd);
#if IS_ENABLED(CONFIG_MEDIA_CONTROLLER)
media_entity_cleanup(&sd->entity);
#endif
v4l2_ctrl_handler_free(&mt9v111->ctrls);


@@ -434,9 +434,6 @@ struct og01a1b {
/* To serialize asynchronus callbacks */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
};
static u64 to_pixel_rate(u32 f_index)
@@ -732,14 +729,10 @@ static int og01a1b_set_stream(struct v4l2_subdev *sd, int enable)
struct i2c_client *client = v4l2_get_subdevdata(sd);
int ret = 0;
if (og01a1b->streaming == enable)
return 0;
mutex_lock(&og01a1b->mutex);
if (enable) {
ret = pm_runtime_get_sync(&client->dev);
if (ret < 0) {
pm_runtime_put_noidle(&client->dev);
ret = pm_runtime_resume_and_get(&client->dev);
if (ret) {
mutex_unlock(&og01a1b->mutex);
return ret;
}
@@ -755,50 +748,11 @@ static int og01a1b_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
og01a1b->streaming = enable;
mutex_unlock(&og01a1b->mutex);
return ret;
}
static int __maybe_unused og01a1b_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct og01a1b *og01a1b = to_og01a1b(sd);
mutex_lock(&og01a1b->mutex);
if (og01a1b->streaming)
og01a1b_stop_streaming(og01a1b);
mutex_unlock(&og01a1b->mutex);
return 0;
}
static int __maybe_unused og01a1b_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct og01a1b *og01a1b = to_og01a1b(sd);
int ret;
mutex_lock(&og01a1b->mutex);
if (og01a1b->streaming) {
ret = og01a1b_start_streaming(og01a1b);
if (ret) {
og01a1b->streaming = false;
og01a1b_stop_streaming(og01a1b);
mutex_unlock(&og01a1b->mutex);
return ret;
}
}
mutex_unlock(&og01a1b->mutex);
return 0;
}
static int og01a1b_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
@@ -1096,10 +1050,6 @@ probe_error_v4l2_ctrl_handler_free:
return ret;
}
static const struct dev_pm_ops og01a1b_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(og01a1b_suspend, og01a1b_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id og01a1b_acpi_ids[] = {
{"OVTI01AC"},
@@ -1112,7 +1062,6 @@ MODULE_DEVICE_TABLE(acpi, og01a1b_acpi_ids);
static struct i2c_driver og01a1b_i2c_driver = {
.driver = {
.name = "og01a1b",
.pm = &og01a1b_pm_ops,
.acpi_match_table = ACPI_PTR(og01a1b_acpi_ids),
},
.probe = og01a1b_probe,


@@ -287,9 +287,6 @@ struct ov01a10 {
struct v4l2_ctrl *exposure;
const struct ov01a10_mode *cur_mode;
/* streaming state */
bool streaming;
};
static inline struct ov01a10 *to_ov01a10(struct v4l2_subdev *subdev)
@@ -672,8 +669,6 @@ static int ov01a10_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
state = v4l2_subdev_lock_and_get_active_state(sd);
if (ov01a10->streaming == enable)
goto unlock;
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -685,60 +680,17 @@ static int ov01a10_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
goto unlock;
}
goto done;
} else {
ov01a10_stop_streaming(ov01a10);
pm_runtime_put(&client->dev);
}
ov01a10_stop_streaming(ov01a10);
pm_runtime_put(&client->dev);
done:
ov01a10->streaming = enable;
unlock:
v4l2_subdev_unlock_state(state);
return ret;
}
static int __maybe_unused ov01a10_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct ov01a10 *ov01a10 = to_ov01a10(sd);
struct v4l2_subdev_state *state;
state = v4l2_subdev_lock_and_get_active_state(sd);
if (ov01a10->streaming)
ov01a10_stop_streaming(ov01a10);
v4l2_subdev_unlock_state(state);
return 0;
}
static int __maybe_unused ov01a10_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct ov01a10 *ov01a10 = to_ov01a10(sd);
struct v4l2_subdev_state *state;
int ret = 0;
state = v4l2_subdev_lock_and_get_active_state(sd);
if (!ov01a10->streaming)
goto exit;
ret = ov01a10_start_streaming(ov01a10);
if (ret) {
ov01a10->streaming = false;
ov01a10_stop_streaming(ov01a10);
}
exit:
v4l2_subdev_unlock_state(state);
return ret;
}
static int ov01a10_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
@@ -973,10 +925,6 @@ err_handler_free:
return ret;
}
static const struct dev_pm_ops ov01a10_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ov01a10_suspend, ov01a10_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id ov01a10_acpi_ids[] = {
{ "OVTI01A0" },
@@ -989,7 +937,6 @@ MODULE_DEVICE_TABLE(acpi, ov01a10_acpi_ids);
static struct i2c_driver ov01a10_i2c_driver = {
.driver = {
.name = "ov01a10",
.pm = &ov01a10_pm_ops,
.acpi_match_table = ACPI_PTR(ov01a10_acpi_ids),
},
.probe = ov01a10_probe,


@@ -570,8 +570,6 @@ unlock_and_return:
}
static const struct dev_pm_ops ov02a10_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
pm_runtime_force_resume)
SET_RUNTIME_PM_OPS(ov02a10_power_off, ov02a10_power_on, NULL)
};


@@ -536,9 +536,6 @@ struct ov08d10 {
/* To serialize asynchronus callbacks */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
/* lanes index */
u8 nlanes;
@@ -1103,9 +1100,6 @@ static int ov08d10_set_stream(struct v4l2_subdev *sd, int enable)
struct i2c_client *client = v4l2_get_subdevdata(sd);
int ret = 0;
if (ov08d10->streaming == enable)
return 0;
mutex_lock(&ov08d10->mutex);
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -1125,8 +1119,6 @@ static int ov08d10_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
ov08d10->streaming = enable;
/* vflip and hflip cannot change during streaming */
__v4l2_ctrl_grab(ov08d10->vflip, enable);
__v4l2_ctrl_grab(ov08d10->hflip, enable);
@@ -1136,45 +1128,6 @@ static int ov08d10_set_stream(struct v4l2_subdev *sd, int enable)
return ret;
}
static int __maybe_unused ov08d10_suspend(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct ov08d10 *ov08d10 = to_ov08d10(sd);
mutex_lock(&ov08d10->mutex);
if (ov08d10->streaming)
ov08d10_stop_streaming(ov08d10);
mutex_unlock(&ov08d10->mutex);
return 0;
}
static int __maybe_unused ov08d10_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
struct v4l2_subdev *sd = i2c_get_clientdata(client);
struct ov08d10 *ov08d10 = to_ov08d10(sd);
int ret;
mutex_lock(&ov08d10->mutex);
if (ov08d10->streaming) {
ret = ov08d10_start_streaming(ov08d10);
if (ret) {
ov08d10->streaming = false;
ov08d10_stop_streaming(ov08d10);
mutex_unlock(&ov08d10->mutex);
return ret;
}
}
mutex_unlock(&ov08d10->mutex);
return 0;
}
static int ov08d10_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
@@ -1501,10 +1454,6 @@ probe_error_v4l2_ctrl_handler_free:
return ret;
}
static const struct dev_pm_ops ov08d10_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ov08d10_suspend, ov08d10_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id ov08d10_acpi_ids[] = {
{ "OVTI08D1" },
@@ -1517,7 +1466,6 @@ MODULE_DEVICE_TABLE(acpi, ov08d10_acpi_ids);
static struct i2c_driver ov08d10_i2c_driver = {
.driver = {
.name = "ov08d10",
.pm = &ov08d10_pm_ops,
.acpi_match_table = ACPI_PTR(ov08d10_acpi_ids),
},
.probe = ov08d10_probe,


@@ -2432,9 +2432,6 @@ struct ov08x40 {
/* Mutex for serialized access */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
};
#define to_ov08x40(_sd) container_of(_sd, struct ov08x40, sd)
@@ -2915,10 +2912,6 @@ static int ov08x40_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&ov08x->mutex);
if (ov08x->streaming == enable) {
mutex_unlock(&ov08x->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -2937,7 +2930,6 @@ static int ov08x40_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
ov08x->streaming = enable;
mutex_unlock(&ov08x->mutex);
return ret;
@@ -2950,37 +2942,6 @@ err_unlock:
return ret;
}
static int __maybe_unused ov08x40_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov08x40 *ov08x = to_ov08x40(sd);
if (ov08x->streaming)
ov08x40_stop_streaming(ov08x);
return 0;
}
static int __maybe_unused ov08x40_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov08x40 *ov08x = to_ov08x40(sd);
int ret;
if (ov08x->streaming) {
ret = ov08x40_start_streaming(ov08x);
if (ret)
goto error;
}
return 0;
error:
ov08x40_stop_streaming(ov08x);
ov08x->streaming = false;
return ret;
}
/* Verify chip ID */
static int ov08x40_identify_module(struct ov08x40 *ov08x)
{
@@ -3294,10 +3255,6 @@ static void ov08x40_remove(struct i2c_client *client)
pm_runtime_set_suspended(&client->dev);
}
static const struct dev_pm_ops ov08x40_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ov08x40_suspend, ov08x40_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id ov08x40_acpi_ids[] = {
{"OVTI08F4"},
@@ -3310,7 +3267,6 @@ MODULE_DEVICE_TABLE(acpi, ov08x40_acpi_ids);
static struct i2c_driver ov08x40_i2c_driver = {
.driver = {
.name = "ov08x40",
.pm = &ov08x40_pm_ops,
.acpi_match_table = ACPI_PTR(ov08x40_acpi_ids),
},
.probe = ov08x40_probe,


@@ -1044,9 +1044,6 @@ struct ov13858 {
/* Mutex for serialized access */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
};
#define to_ov13858(_sd) container_of(_sd, struct ov13858, sd)
@@ -1467,10 +1464,6 @@ static int ov13858_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&ov13858->mutex);
if (ov13858->streaming == enable) {
mutex_unlock(&ov13858->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -1489,7 +1482,6 @@ static int ov13858_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
ov13858->streaming = enable;
mutex_unlock(&ov13858->mutex);
return ret;
@@ -1502,37 +1494,6 @@ err_unlock:
return ret;
}
static int __maybe_unused ov13858_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov13858 *ov13858 = to_ov13858(sd);
if (ov13858->streaming)
ov13858_stop_streaming(ov13858);
return 0;
}
static int __maybe_unused ov13858_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov13858 *ov13858 = to_ov13858(sd);
int ret;
if (ov13858->streaming) {
ret = ov13858_start_streaming(ov13858);
if (ret)
goto error;
}
return 0;
error:
ov13858_stop_streaming(ov13858);
ov13858->streaming = false;
return ret;
}
/* Verify chip ID */
static int ov13858_identify_module(struct ov13858 *ov13858)
{
@@ -1787,10 +1748,6 @@ static const struct i2c_device_id ov13858_id_table[] = {
MODULE_DEVICE_TABLE(i2c, ov13858_id_table);
static const struct dev_pm_ops ov13858_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ov13858_suspend, ov13858_resume)
};
#ifdef CONFIG_ACPI
static const struct acpi_device_id ov13858_acpi_ids[] = {
{"OVTID858"},
@@ -1803,7 +1760,6 @@ MODULE_DEVICE_TABLE(acpi, ov13858_acpi_ids);
static struct i2c_driver ov13858_i2c_driver = {
.driver = {
.name = "ov13858",
.pm = &ov13858_pm_ops,
.acpi_match_table = ACPI_PTR(ov13858_acpi_ids),
},
.probe = ov13858_probe,


@@ -31,6 +31,7 @@
#define OV13B10_REG_VTS 0x380e
#define OV13B10_VTS_30FPS 0x0c7c
#define OV13B10_VTS_60FPS 0x063e
#define OV13B10_VTS_120FPS 0x0320
#define OV13B10_VTS_MAX 0x7fff
/* HBLANK control - read only */
@@ -468,6 +469,50 @@ static const struct ov13b10_reg mode_2080x1170_regs[] = {
{0x5001, 0x0d},
};
static const struct ov13b10_reg mode_1364x768_120fps_regs[] = {
{0x0305, 0xaf},
{0x3011, 0x7c},
{0x3501, 0x03},
{0x3502, 0x00},
{0x3662, 0x88},
{0x3714, 0x28},
{0x3739, 0x10},
{0x37c2, 0x14},
{0x37d9, 0x06},
{0x37e2, 0x0c},
{0x37e4, 0x00},
{0x3800, 0x02},
{0x3801, 0xe4},
{0x3802, 0x03},
{0x3803, 0x48},
{0x3804, 0x0d},
{0x3805, 0xab},
{0x3806, 0x09},
{0x3807, 0x60},
{0x3808, 0x05},
{0x3809, 0x54},
{0x380a, 0x03},
{0x380b, 0x00},
{0x380c, 0x04},
{0x380d, 0x8e},
{0x380e, 0x03},
{0x380f, 0x20},
{0x3811, 0x07},
{0x3813, 0x07},
{0x3814, 0x03},
{0x3816, 0x03},
{0x3820, 0x8b},
{0x3c8c, 0x18},
{0x4008, 0x00},
{0x4009, 0x05},
{0x4050, 0x00},
{0x4051, 0x05},
{0x4501, 0x08},
{0x4505, 0x04},
{0x5000, 0xfd},
{0x5001, 0x0d},
};
static const char * const ov13b10_test_pattern_menu[] = {
"Disabled",
"Vertical Color Bar Type 1",
@@ -568,7 +613,18 @@ static const struct ov13b10_mode supported_modes[] = {
.regs = mode_2080x1170_regs,
},
.link_freq_index = OV13B10_LINK_FREQ_INDEX_0,
}
},
{
.width = 1364,
.height = 768,
.vts_def = OV13B10_VTS_120FPS,
.vts_min = OV13B10_VTS_120FPS,
.link_freq_index = OV13B10_LINK_FREQ_INDEX_0,
.reg_list = {
.num_of_regs = ARRAY_SIZE(mode_1364x768_120fps_regs),
.regs = mode_1364x768_120fps_regs,
},
},
};
struct ov13b10 {
@@ -594,9 +650,6 @@ struct ov13b10 {
/* Mutex for serialized access */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
/* True if the device has been identified */
bool identified;
};
@@ -1161,10 +1214,6 @@ static int ov13b10_set_stream(struct v4l2_subdev *sd, int enable)
int ret = 0;
mutex_lock(&ov13b->mutex);
if (ov13b->streaming == enable) {
mutex_unlock(&ov13b->mutex);
return 0;
}
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
@@ -1183,7 +1232,6 @@ static int ov13b10_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
ov13b->streaming = enable;
mutex_unlock(&ov13b->mutex);
return ret;
@@ -1198,12 +1246,6 @@ err_unlock:
static int ov13b10_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov13b10 *ov13b = to_ov13b10(sd);
if (ov13b->streaming)
ov13b10_stop_streaming(ov13b);
ov13b10_power_off(dev);
return 0;
@@ -1211,29 +1253,7 @@ static int ov13b10_suspend(struct device *dev)
static int ov13b10_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov13b10 *ov13b = to_ov13b10(sd);
int ret;
ret = ov13b10_power_on(dev);
if (ret)
goto pm_fail;
if (ov13b->streaming) {
ret = ov13b10_start_streaming(ov13b);
if (ret)
goto stop_streaming;
}
return 0;
stop_streaming:
ov13b10_stop_streaming(ov13b);
ov13b10_power_off(dev);
pm_fail:
ov13b->streaming = false;
return ret;
return ov13b10_power_on(dev);
}
static const struct v4l2_subdev_video_ops ov13b10_video_ops = {
@@ -1501,7 +1521,7 @@ static int ov13b10_probe(struct i2c_client *client)
full_power = acpi_dev_state_d0(&client->dev);
if (full_power) {
ov13b10_power_on(&client->dev);
ret = ov13b10_power_on(&client->dev);
if (ret) {
dev_err(&client->dev, "failed to power on\n");
return ret;


@@ -293,9 +293,7 @@ struct ov2640_win_size {
struct ov2640_priv {
struct v4l2_subdev subdev;
#if defined(CONFIG_MEDIA_CONTROLLER)
struct media_pad pad;
#endif
struct v4l2_ctrl_handler hdl;
u32 cfmt_code;
struct clk *clk;
@@ -922,13 +920,9 @@ static int ov2640_get_fmt(struct v4l2_subdev *sd,
return -EINVAL;
if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
mf = v4l2_subdev_get_try_format(sd, sd_state, 0);
format->format = *mf;
return 0;
#else
return -EINVAL;
#endif
}
mf->width = priv->win->width;
@@ -1005,7 +999,6 @@ out:
static int ov2640_init_cfg(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state)
{
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
struct v4l2_mbus_framefmt *try_fmt =
v4l2_subdev_get_try_format(sd, sd_state, 0);
const struct ov2640_win_size *win =
@@ -1019,7 +1012,7 @@ static int ov2640_init_cfg(struct v4l2_subdev *sd,
try_fmt->ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
try_fmt->quantization = V4L2_QUANTIZATION_DEFAULT;
try_fmt->xfer_func = V4L2_XFER_FUNC_DEFAULT;
#endif
return 0;
}
@@ -1205,17 +1198,14 @@ static int ov2640_probe(struct i2c_client *client)
return -ENOMEM;
if (client->dev.of_node) {
priv->clk = devm_clk_get(&client->dev, "xvclk");
priv->clk = devm_clk_get_enabled(&client->dev, "xvclk");
if (IS_ERR(priv->clk))
return PTR_ERR(priv->clk);
ret = clk_prepare_enable(priv->clk);
if (ret)
return ret;
}
ret = ov2640_probe_dt(client, priv);
if (ret)
goto err_clk;
return ret;
priv->win = ov2640_select_win(SVGA_WIDTH, SVGA_HEIGHT);
priv->cfmt_code = MEDIA_BUS_FMT_UYVY8_2X8;
@@ -1239,13 +1229,11 @@ static int ov2640_probe(struct i2c_client *client)
ret = priv->hdl.error;
goto err_hdl;
}
#if defined(CONFIG_MEDIA_CONTROLLER)
priv->pad.flags = MEDIA_PAD_FL_SOURCE;
priv->subdev.entity.function = MEDIA_ENT_F_CAM_SENSOR;
ret = media_entity_pads_init(&priv->subdev.entity, 1, &priv->pad);
if (ret < 0)
goto err_hdl;
#endif
ret = ov2640_video_probe(client);
if (ret < 0)
@@ -1264,8 +1252,6 @@ err_videoprobe:
err_hdl:
v4l2_ctrl_handler_free(&priv->hdl);
mutex_destroy(&priv->lock);
err_clk:
clk_disable_unprepare(priv->clk);
return ret;
}
@@ -1278,7 +1264,6 @@ static void ov2640_remove(struct i2c_client *client)
mutex_destroy(&priv->lock);
media_entity_cleanup(&priv->subdev.entity);
v4l2_device_unregister_subdev(&priv->subdev);
clk_disable_unprepare(priv->clk);
}
static const struct i2c_device_id ov2640_id[] = {


@@ -1031,7 +1031,6 @@ static int ov2659_get_fmt(struct v4l2_subdev *sd,
dev_dbg(&client->dev, "ov2659_get_fmt\n");
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
struct v4l2_mbus_framefmt *mf;
mf = v4l2_subdev_get_try_format(sd, sd_state, 0);
@@ -1039,9 +1038,6 @@
fmt->format = *mf;
mutex_unlock(&ov2659->lock);
return 0;
#else
return -EINVAL;
#endif
}
mutex_lock(&ov2659->lock);
@@ -1113,10 +1109,8 @@ static int ov2659_set_fmt(struct v4l2_subdev *sd,
mutex_lock(&ov2659->lock);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
mf = v4l2_subdev_get_try_format(sd, sd_state, fmt->pad);
*mf = fmt->format;
#endif
} else {
s64 val;
@@ -1306,7 +1300,6 @@ static int ov2659_power_on(struct device *dev)
* V4L2 subdev internal operations
*/
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
static int ov2659_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
struct i2c_client *client = v4l2_get_subdevdata(sd);
@@ -1319,7 +1312,6 @@ static int ov2659_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
return 0;
}
#endif
static const struct v4l2_subdev_core_ops ov2659_subdev_core_ops = {
.log_status = v4l2_ctrl_subdev_log_status,
@@ -1338,7 +1330,6 @@ static const struct v4l2_subdev_pad_ops ov2659_subdev_pad_ops = {
.set_fmt = ov2659_set_fmt,
};
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
static const struct v4l2_subdev_ops ov2659_subdev_ops = {
.core = &ov2659_subdev_core_ops,
.video = &ov2659_subdev_video_ops,
@@ -1348,7 +1339,6 @@ static const struct v4l2_subdev_ops ov2659_subdev_ops = {
static const struct v4l2_subdev_internal_ops ov2659_subdev_internal_ops = {
.open = ov2659_open,
};
#endif
static int ov2659_detect(struct v4l2_subdev *sd)
{
@@ -1489,15 +1479,12 @@ static int ov2659_probe(struct i2c_client *client)
sd = &ov2659->sd;
client->flags |= I2C_CLIENT_SCCB;
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
v4l2_i2c_subdev_init(sd, client, &ov2659_subdev_ops);
v4l2_i2c_subdev_init(sd, client, &ov2659_subdev_ops);
sd->internal_ops = &ov2659_subdev_internal_ops;
sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE |
V4L2_SUBDEV_FL_HAS_EVENTS;
#endif
#if defined(CONFIG_MEDIA_CONTROLLER)
ov2659->pad.flags = MEDIA_PAD_FL_SOURCE;
sd->entity.function = MEDIA_ENT_F_CAM_SENSOR;
ret = media_entity_pads_init(&sd->entity, 1, &ov2659->pad);
@@ -1505,7 +1492,6 @@ static int ov2659_probe(struct i2c_client *client)
v4l2_ctrl_handler_free(&ov2659->ctrls);
return ret;
}
#endif
mutex_init(&ov2659->lock);


@@ -91,7 +91,6 @@ struct ov2685 {
struct gpio_desc *reset_gpio;
struct regulator_bulk_data supplies[OV2685_NUM_SUPPLIES];
bool streaming;
struct mutex mutex;
struct v4l2_subdev subdev;
struct media_pad pad;
@@ -513,10 +512,6 @@ static int ov2685_s_stream(struct v4l2_subdev *sd, int on)
mutex_lock(&ov2685->mutex);
on = !!on;
if (on == ov2685->streaming)
goto unlock_and_return;
if (on) {
ret = pm_runtime_resume_and_get(&ov2685->client->dev);
if (ret < 0)
@@ -539,15 +534,12 @@ static int ov2685_s_stream(struct v4l2_subdev *sd, int on)
pm_runtime_put(&ov2685->client->dev);
}
ov2685->streaming = on;
unlock_and_return:
mutex_unlock(&ov2685->mutex);
return ret;
}
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
static int ov2685_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
struct ov2685 *ov2685 = to_ov2685(sd);
@@ -563,7 +555,6 @@ static int ov2685_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
return 0;
}
#endif
static int __maybe_unused ov2685_runtime_resume(struct device *dev)
{
@@ -660,11 +651,9 @@ static const struct v4l2_subdev_ops ov2685_subdev_ops = {
.pad = &ov2685_pad_ops,
};
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
static const struct v4l2_subdev_internal_ops ov2685_internal_ops = {
.open = ov2685_open,
};
#endif
static const struct v4l2_ctrl_ops ov2685_ctrl_ops = {
.s_ctrl = ov2685_set_ctrl,
@@ -833,17 +822,13 @@ static int ov2685_probe(struct i2c_client *client)
if (ret)
goto err_power_off;
#ifdef CONFIG_VIDEO_V4L2_SUBDEV_API
ov2685->subdev.internal_ops = &ov2685_internal_ops;
ov2685->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
#endif
#if defined(CONFIG_MEDIA_CONTROLLER)
ov2685->pad.flags = MEDIA_PAD_FL_SOURCE;
ov2685->subdev.entity.function = MEDIA_ENT_F_CAM_SENSOR;
ret = media_entity_pads_init(&ov2685->subdev.entity, 1, &ov2685->pad);
if (ret < 0)
goto err_power_off;
#endif
ret = v4l2_async_register_subdev(&ov2685->subdev);
if (ret) {
@@ -858,9 +843,7 @@ static int ov2685_probe(struct i2c_client *client)
return 0;
err_clean_entity:
#if defined(CONFIG_MEDIA_CONTROLLER)
media_entity_cleanup(&ov2685->subdev.entity);
#endif
err_power_off:
__ov2685_power_off(ov2685);
err_free_handler:
@@ -877,9 +860,7 @@ static void ov2685_remove(struct i2c_client *client)
struct ov2685 *ov2685 = to_ov2685(sd);
v4l2_async_unregister_subdev(sd);
#if defined(CONFIG_MEDIA_CONTROLLER)
media_entity_cleanup(&sd->entity);
#endif
v4l2_ctrl_handler_free(&ov2685->ctrl_handler);
mutex_destroy(&ov2685->mutex);


@@ -336,12 +336,6 @@ struct ov2740 {
/* Current mode */
const struct ov2740_mode *cur_mode;
/* To serialize asynchronus callbacks */
struct mutex mutex;
/* Streaming on/off */
bool streaming;
/* NVM data inforamtion */
struct nvm_data *nvm;
@@ -582,7 +576,6 @@ static int ov2740_init_controls(struct ov2740 *ov2740)
if (ret)
return ret;
ctrl_hdlr->lock = &ov2740->mutex;
cur_mode = ov2740->cur_mode;
size = ARRAY_SIZE(link_freq_menu_items);
@@ -792,18 +785,15 @@ static int ov2740_set_stream(struct v4l2_subdev *sd, int enable)
{
struct ov2740 *ov2740 = to_ov2740(sd);
struct i2c_client *client = v4l2_get_subdevdata(sd);
struct v4l2_subdev_state *sd_state;
int ret = 0;
if (ov2740->streaming == enable)
return 0;
sd_state = v4l2_subdev_lock_and_get_active_state(&ov2740->sd);
mutex_lock(&ov2740->mutex);
if (enable) {
ret = pm_runtime_resume_and_get(&client->dev);
if (ret < 0) {
mutex_unlock(&ov2740->mutex);
return ret;
}
if (ret < 0)
goto out_unlock;
ret = ov2740_start_streaming(ov2740);
if (ret) {
@@ -816,47 +806,12 @@ static int ov2740_set_stream(struct v4l2_subdev *sd, int enable)
pm_runtime_put(&client->dev);
}
ov2740->streaming = enable;
mutex_unlock(&ov2740->mutex);
out_unlock:
v4l2_subdev_unlock_state(sd_state);
return ret;
}
static int ov2740_suspend(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov2740 *ov2740 = to_ov2740(sd);
mutex_lock(&ov2740->mutex);
if (ov2740->streaming)
ov2740_stop_streaming(ov2740);
mutex_unlock(&ov2740->mutex);
return 0;
}
static int ov2740_resume(struct device *dev)
{
struct v4l2_subdev *sd = dev_get_drvdata(dev);
struct ov2740 *ov2740 = to_ov2740(sd);
int ret = 0;
mutex_lock(&ov2740->mutex);
if (!ov2740->streaming)
goto exit;
ret = ov2740_start_streaming(ov2740);
if (ret) {
ov2740->streaming = false;
ov2740_stop_streaming(ov2740);
}
exit:
mutex_unlock(&ov2740->mutex);
return ret;
}
static int ov2740_set_format(struct v4l2_subdev *sd,
struct v4l2_subdev_state *sd_state,
struct v4l2_subdev_format *fmt)
@@ -870,48 +825,26 @@ static int ov2740_set_format(struct v4l2_subdev *sd,
height, fmt->format.width,
fmt->format.height);
mutex_lock(&ov2740->mutex);
ov2740_update_pad_format(mode, &fmt->format);
if (fmt->which == V4L2_SUBDEV_FORMAT_TRY) {
*v4l2_subdev_get_try_format(sd, sd_state, fmt->pad) = fmt->format;
} else {
ov2740->cur_mode = mode;
__v4l2_ctrl_s_ctrl(ov2740->link_freq, mode->link_freq_index);
__v4l2_ctrl_s_ctrl_int64(ov2740->pixel_rate,
to_pixel_rate(mode->link_freq_index));
*v4l2_subdev_get_pad_format(sd, sd_state, fmt->pad) = fmt->format;
/* Update limits and set FPS to default */
vblank_def = mode->vts_def - mode->height;
__v4l2_ctrl_modify_range(ov2740->vblank,
mode->vts_min - mode->height,
OV2740_VTS_MAX - mode->height, 1,
vblank_def);
__v4l2_ctrl_s_ctrl(ov2740->vblank, vblank_def);
h_blank = to_pixels_per_line(mode->hts, mode->link_freq_index) -
mode->width;
__v4l2_ctrl_modify_range(ov2740->hblank, h_blank, h_blank, 1,
h_blank);
}
mutex_unlock(&ov2740->mutex);
return 0;
}
static int ov2740_get_format(struct v4l2_subdev *sd,
			     struct v4l2_subdev_state *sd_state,
			     struct v4l2_subdev_format *fmt)
{
	struct ov2740 *ov2740 = to_ov2740(sd);

	mutex_lock(&ov2740->mutex);
	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
		fmt->format = *v4l2_subdev_get_try_format(&ov2740->sd,
							  sd_state,
							  fmt->pad);
	else
		ov2740_update_pad_format(ov2740->cur_mode, &fmt->format);
	return 0;
	mutex_unlock(&ov2740->mutex);

	ov2740->cur_mode = mode;
	__v4l2_ctrl_s_ctrl(ov2740->link_freq, mode->link_freq_index);
	__v4l2_ctrl_s_ctrl_int64(ov2740->pixel_rate,
				 to_pixel_rate(mode->link_freq_index));

	/* Update limits and set FPS to default */
	vblank_def = mode->vts_def - mode->height;
	__v4l2_ctrl_modify_range(ov2740->vblank,
				 mode->vts_min - mode->height,
				 OV2740_VTS_MAX - mode->height, 1, vblank_def);
	__v4l2_ctrl_s_ctrl(ov2740->vblank, vblank_def);
	h_blank = to_pixels_per_line(mode->hts, mode->link_freq_index) -
		  mode->width;
	__v4l2_ctrl_modify_range(ov2740->hblank, h_blank, h_blank, 1, h_blank);

	return 0;
}
@@ -946,14 +879,11 @@ static int ov2740_enum_frame_size(struct v4l2_subdev *sd,
	return 0;
}
static int ov2740_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
static int ov2740_init_cfg(struct v4l2_subdev *sd,
			   struct v4l2_subdev_state *sd_state)
{
	struct ov2740 *ov2740 = to_ov2740(sd);

	mutex_lock(&ov2740->mutex);
	ov2740_update_pad_format(&supported_modes[0],
				 v4l2_subdev_get_try_format(sd, fh->state, 0));
	mutex_unlock(&ov2740->mutex);
				 v4l2_subdev_get_pad_format(sd, sd_state, 0));

	return 0;
}
@@ -963,10 +893,11 @@ static const struct v4l2_subdev_video_ops ov2740_video_ops = {
};

static const struct v4l2_subdev_pad_ops ov2740_pad_ops = {
	.get_fmt = v4l2_subdev_get_fmt,
	.set_fmt = ov2740_set_format,
	.get_fmt = ov2740_get_format,
	.enum_mbus_code = ov2740_enum_mbus_code,
	.enum_frame_size = ov2740_enum_frame_size,
	.init_cfg = ov2740_init_cfg,
};
static const struct v4l2_subdev_ops ov2740_subdev_ops = {
@@ -978,10 +909,6 @@ static const struct media_entity_operations ov2740_subdev_entity_ops = {
	.link_validate = v4l2_subdev_link_validate,
};

static const struct v4l2_subdev_internal_ops ov2740_internal_ops = {
	.open = ov2740_open,
};
static int ov2740_check_hwcfg(struct device *dev)
{
	struct fwnode_handle *ep;
@@ -1004,7 +931,7 @@ static int ov2740_check_hwcfg(struct device *dev)
	ep = fwnode_graph_get_next_endpoint(fwnode, NULL);
	if (!ep)
		return -ENXIO;
		return -EPROBE_DEFER;

	ret = v4l2_fwnode_endpoint_alloc_parse(ep, &bus_cfg);
	fwnode_handle_put(ep);
@@ -1047,13 +974,12 @@ check_hwcfg_error:
static void ov2740_remove(struct i2c_client *client)
{
	struct v4l2_subdev *sd = i2c_get_clientdata(client);
	struct ov2740 *ov2740 = to_ov2740(sd);

	v4l2_async_unregister_subdev(sd);
	media_entity_cleanup(&sd->entity);
	v4l2_subdev_cleanup(sd);
	v4l2_ctrl_handler_free(sd->ctrl_handler);
	pm_runtime_disable(&client->dev);
	mutex_destroy(&ov2740->mutex);
}
static int ov2740_nvmem_read(void *priv, unsigned int off, void *val,
@@ -1062,9 +988,11 @@ static int ov2740_nvmem_read(void *priv, unsigned int off, void *val,
	struct nvm_data *nvm = priv;
	struct device *dev = regmap_get_device(nvm->regmap);
	struct ov2740 *ov2740 = to_ov2740(dev_get_drvdata(dev));
	struct v4l2_subdev_state *sd_state;
	int ret = 0;

	mutex_lock(&ov2740->mutex);
	/* Serialise sensor access */
	sd_state = v4l2_subdev_lock_and_get_active_state(&ov2740->sd);

	if (nvm->nvm_buffer) {
		memcpy(val, nvm->nvm_buffer + off, count);
@@ -1082,7 +1010,7 @@ static int ov2740_nvmem_read(void *priv, unsigned int off, void *val,
	pm_runtime_put(dev);
exit:
	mutex_unlock(&ov2740->mutex);
	v4l2_subdev_unlock_state(sd_state);
	return ret;
}
@@ -1153,7 +1081,6 @@ static int ov2740_probe(struct i2c_client *client)
		return dev_err_probe(dev, ret, "failed to find sensor\n");
	}

	mutex_init(&ov2740->mutex);
	ov2740->cur_mode = &supported_modes[0];
	ret = ov2740_init_controls(ov2740);
	if (ret) {
@@ -1161,7 +1088,7 @@ static int ov2740_probe(struct i2c_client *client)
		goto probe_error_v4l2_ctrl_handler_free;
	}

	ov2740->sd.internal_ops = &ov2740_internal_ops;
	ov2740->sd.state_lock = ov2740->ctrl_handler.lock;
	ov2740->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
	ov2740->sd.entity.ops = &ov2740_subdev_entity_ops;
	ov2740->sd.entity.function = MEDIA_ENT_F_CAM_SENSOR;
@@ -1172,15 +1099,9 @@ static int ov2740_probe(struct i2c_client *client)
		goto probe_error_v4l2_ctrl_handler_free;
	}

	ret = v4l2_async_register_subdev_sensor(&ov2740->sd);
	if (ret < 0) {
		dev_err_probe(dev, ret, "failed to register V4L2 subdev\n");
		goto probe_error_media_entity_cleanup;
	}

	ret = ov2740_register_nvmem(client, ov2740);
	ret = v4l2_subdev_init_finalize(&ov2740->sd);
	if (ret)
		dev_warn(&client->dev, "register nvmem failed, ret %d\n", ret);
		goto probe_error_media_entity_cleanup;

	/* Set the device's state to active if it's in D0 state. */
	if (full_power)
@@ -1188,20 +1109,32 @@ static int ov2740_probe(struct i2c_client *client)
	pm_runtime_enable(&client->dev);
	pm_runtime_idle(&client->dev);
	ret = v4l2_async_register_subdev_sensor(&ov2740->sd);
	if (ret < 0) {
		dev_err_probe(dev, ret, "failed to register V4L2 subdev\n");
		goto probe_error_v4l2_subdev_cleanup;
	}

	ret = ov2740_register_nvmem(client, ov2740);
	if (ret)
		dev_warn(&client->dev, "register nvmem failed, ret %d\n", ret);

	return 0;

probe_error_v4l2_subdev_cleanup:
	v4l2_subdev_cleanup(&ov2740->sd);

probe_error_media_entity_cleanup:
	media_entity_cleanup(&ov2740->sd.entity);
	pm_runtime_disable(&client->dev);
	pm_runtime_set_suspended(&client->dev);

probe_error_v4l2_ctrl_handler_free:
	v4l2_ctrl_handler_free(ov2740->sd.ctrl_handler);
	mutex_destroy(&ov2740->mutex);

	return ret;
}
static DEFINE_SIMPLE_DEV_PM_OPS(ov2740_pm_ops, ov2740_suspend, ov2740_resume);

static const struct acpi_device_id ov2740_acpi_ids[] = {
	{"INT3474"},
	{}
@@ -1212,7 +1145,6 @@ MODULE_DEVICE_TABLE(acpi, ov2740_acpi_ids);
static struct i2c_driver ov2740_i2c_driver = {
	.driver = {
		.name = "ov2740",
		.pm = pm_sleep_ptr(&ov2740_pm_ops),
		.acpi_match_table = ov2740_acpi_ids,
	},
	.probe = ov2740_probe,


@@ -99,8 +99,7 @@ struct ov4689 {
	u32 clock_rate;

	struct mutex mutex; /* lock to protect streaming, ctrls and cur_mode */
	bool streaming;
	struct mutex mutex; /* lock to protect ctrls and cur_mode */

	struct v4l2_ctrl_handler ctrl_handler;
	struct v4l2_ctrl *exposure;
@@ -468,10 +467,6 @@ static int ov4689_s_stream(struct v4l2_subdev *sd, int on)
	mutex_lock(&ov4689->mutex);

	on = !!on;
	if (on == ov4689->streaming)
		goto unlock_and_return;

	if (on) {
		ret = pm_runtime_resume_and_get(&client->dev);
		if (ret < 0)
@@ -504,8 +499,6 @@ static int ov4689_s_stream(struct v4l2_subdev *sd, int on)
		pm_runtime_put(&client->dev);
	}

	ov4689->streaming = on;
unlock_and_return:
	mutex_unlock(&ov4689->mutex);
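The ov2740 hunks above migrate from a driver-private `mutex` to the core-owned subdev state lock, taken via `v4l2_subdev_lock_and_get_active_state()` and released with `v4l2_subdev_unlock_state()`. As a rough userspace analogy only (this is not kernel code, and every identifier below is invented for the sketch), the pattern of handing lock ownership to a shared state object looks like this:

```c
/*
 * Userspace sketch of the state-lock pattern the patch adopts: the state
 * struct carries its own lock (cf. sd->state_lock), and callers access it
 * only through paired lock-and-get / unlock helpers instead of a separate
 * driver mutex. fake_state, lock_and_get_active_state() and unlock_state()
 * are invented names for this illustration.
 */
#include <pthread.h>

struct fake_state {
	pthread_mutex_t lock;	/* plays the role of the core-owned state lock */
	int width, height;	/* stand-in for the active pad format */
};

/* Take the state lock and hand the caller the active state. */
static struct fake_state *lock_and_get_active_state(struct fake_state *s)
{
	pthread_mutex_lock(&s->lock);
	return s;
}

/* Release the lock taken by lock_and_get_active_state(). */
static void unlock_state(struct fake_state *s)
{
	pthread_mutex_unlock(&s->lock);
}
```

With this shape, a set_format-style caller mutates the state only while holding the state's own lock, which is why the patch can delete the per-driver `mutex_init()`/`mutex_destroy()` bookkeeping and the `unlock_and_return` dance shown in the hunks above.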
