Merge tag 'drm-misc-next-2021-12-16' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.17:

UAPI Changes:

 * vmwgfx: Version bump to 2.20

Cross-subsystem Changes:

 * of: Create simple-framebuffer devices in of_platform_default_init()

Core Changes:

 * Replace include <linux/kernel.h> with more fine-grained includes
 * Document DRM_IOCTL_MODE_GETFB2
 * format-helper: Support XRGB2101010 source buffers

Driver Changes:

 * amdgpu: Fix runtime PM on some configs
 * ast: Fix I2C initialization
 * bridge: ti-sn65dsi86: Set regmap max_register
 * panel: Add Team Source Display TST043015CMHX plus DT bindings
 * simpledrm: Add support for Apple M1
 * sprd: Add various drivers plus DT bindings
 * vc4: Support 10-bit YUV 4:2:0 output; Fix clock-rate updates
 * vmwgfx: Implement GEM support; Implement GL 4.3 support

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/YbtOaZLvar+9hBOi@linux-uq9g.fritz.box

commit 8b70b5fee0 (Dave Airlie, 2021-12-17 16:06:14 +10:00)
89 changed files with 4314 additions and 2333 deletions

@@ -79,6 +79,14 @@ properties:
- port@0
- port@1
pclk-sample:
description:
Data sampling on rising or falling edge.
enum:
- 0 # Falling edge
- 1 # Rising edge
default: 0
powerdown-gpios:
description:
The GPIO used to control the power down line of this device.
@@ -102,6 +110,16 @@ then:
properties:
data-mapping: false
if:
not:
properties:
compatible:
contains:
const: lvds-encoder
then:
properties:
pclk-sample: false
required:
- compatible
- ports

@@ -290,6 +290,8 @@ properties:
- starry,kr070pe2t
# Starry 12.2" (1920x1200 pixels) TFT LCD panel
- starry,kr122ea0sra
# Team Source Display Technology TST043015CMHX 4.3" WQVGA TFT LCD panel
- team-source-display,tst043015cmhx
# Tianma Micro-electronics TM070JDHG30 7.0" WXGA TFT LCD panel
- tianma,tm070jdhg30
# Tianma Micro-electronics TM070JVHG33 7.0" WXGA TFT LCD panel

@@ -0,0 +1,64 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/sprd/sprd,display-subsystem.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Unisoc DRM master device
maintainers:
- Kevin Tang <kevin.tang@unisoc.com>
description: |
The Unisoc DRM master device is a virtual device needed to list all
DPU devices or other display interface nodes that comprise the
graphics subsystem.
Unisoc's display pipeline has several components, as described below:
multiple display controllers and their corresponding physical interfaces.
Depending on the display scenario, dpu0 and dpu1 may be bound to different
encoders.
E.g.:
dpu0 and dpu1 both bound to DSI for a dual MIPI-DSI display;
dpu0 bound to DSI for the primary display, and dpu1 bound to DP for an external display;
+-----------------------------------------+
| |
| +---------+ |
+----+ | +----+ +---------+ |DPHY/CPHY| | +------+
| +----->+dpu0+--->+MIPI|DSI +--->+Combo +----->+Panel0|
|AXI | | +----+ +---------+ +---------+ | +------+
| | | ^ |
| | | | |
| | | +-----------+ |
| | | | |
|APB | | +--+-+ +-----------+ +---+ | +------+
| +----->+dpu1+--->+DisplayPort+--->+PHY+--------->+Panel1|
| | | +----+ +-----------+ +---+ | +------+
+----+ | |
+-----------------------------------------+
properties:
compatible:
const: sprd,display-subsystem
ports:
$ref: /schemas/types.yaml#/definitions/phandle-array
description:
Should contain a list of phandles pointing to display interface port
of DPU devices.
required:
- compatible
- ports
additionalProperties: false
examples:
- |
display-subsystem {
compatible = "sprd,display-subsystem";
ports = <&dpu_out>;
};

@@ -0,0 +1,77 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/sprd/sprd,sharkl3-dpu.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Unisoc Sharkl3 Display Processor Unit (DPU)
maintainers:
- Kevin Tang <kevin.tang@unisoc.com>
description: |
DPU (Display Processor Unit) is the Display Controller for the Unisoc SoCs
which transfers the image data from a video memory buffer to an internal
LCD interface.
properties:
compatible:
const: sprd,sharkl3-dpu
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
minItems: 2
clock-names:
items:
- const: clk_src_128m
- const: clk_src_384m
power-domains:
maxItems: 1
iommus:
maxItems: 1
port:
type: object
description:
A port node with endpoint definitions as defined in
Documentation/devicetree/bindings/media/video-interfaces.txt.
That port should be the output endpoint, usually output to
the associated DSI.
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
- port
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/sprd,sc9860-clk.h>
dpu: dpu@63000000 {
compatible = "sprd,sharkl3-dpu";
reg = <0x63000000 0x1000>;
interrupts = <GIC_SPI 46 IRQ_TYPE_LEVEL_HIGH>;
clock-names = "clk_src_128m", "clk_src_384m";
clocks = <&pll CLK_TWPLL_128M>,
<&pll CLK_TWPLL_384M>;
dpu_port: port {
dpu_out: endpoint {
remote-endpoint = <&dsi_in>;
};
};
};

@@ -0,0 +1,88 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/sprd/sprd,sharkl3-dsi-host.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Unisoc MIPI DSI Controller
maintainers:
- Kevin Tang <kevin.tang@unisoc.com>
properties:
compatible:
const: sprd,sharkl3-dsi-host
reg:
maxItems: 1
interrupts:
maxItems: 2
clocks:
minItems: 1
clock-names:
items:
- const: clk_src_96m
power-domains:
maxItems: 1
ports:
type: object
properties:
"#address-cells":
const: 1
"#size-cells":
const: 0
port@0:
type: object
description:
A port node with endpoint definitions as defined in
Documentation/devicetree/bindings/media/video-interfaces.txt.
That port should be the input endpoint, usually coming from
the associated DPU.
required:
- "#address-cells"
- "#size-cells"
- port@0
additionalProperties: false
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
- ports
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/sprd,sc9860-clk.h>
dsi: dsi@63100000 {
compatible = "sprd,sharkl3-dsi-host";
reg = <0x63100000 0x1000>;
interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
clock-names = "clk_src_96m";
clocks = <&pll CLK_TWPLL_96M>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
dsi_in: endpoint {
remote-endpoint = <&dpu_out>;
};
};
};
};

@@ -1236,6 +1236,8 @@ patternProperties:
description: Truly Semiconductors Limited
"^visionox,.*":
description: Visionox
"^team-source-display,.*":
description: Shenzhen Team Source Display Technology Co., Ltd. (TSD)
"^tsd,.*":
description: Theobroma Systems Design und Consulting GmbH
"^tyan,.*":

@@ -388,6 +388,8 @@ source "drivers/gpu/drm/xlnx/Kconfig"
source "drivers/gpu/drm/gud/Kconfig"
source "drivers/gpu/drm/sprd/Kconfig"
config DRM_HYPERV
tristate "DRM Support for Hyper-V synthetic video device"
depends on DRM && PCI && MMU && HYPERV

@@ -134,3 +134,4 @@ obj-$(CONFIG_DRM_TIDSS) += tidss/
obj-y += xlnx/
obj-y += gud/
obj-$(CONFIG_DRM_HYPERV) += hyperv/
obj-$(CONFIG_DRM_SPRD) += sprd/

@@ -61,9 +61,6 @@ static int amdgpu_dma_buf_attach(struct dma_buf *dmabuf,
if (pci_p2pdma_distance_many(adev->pdev, &attach->dev, 1, true) < 0)
attach->peer2peer = false;
if (attach->dev->driver == adev->dev->driver)
return 0;
r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
if (r < 0)
goto out;

@@ -3,6 +3,6 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.
ast-y := ast_drv.o ast_main.o ast_mm.o ast_mode.o ast_post.o ast_dp501.o
ast-y := ast_drv.o ast_i2c.o ast_main.o ast_mm.o ast_mode.o ast_post.o ast_dp501.o
obj-$(CONFIG_DRM_AST) := ast.o

@@ -357,4 +357,7 @@ bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata);
u8 ast_get_dp501_max_clk(struct drm_device *dev);
void ast_init_3rdtx(struct drm_device *dev);
/* ast_i2c.c */
struct ast_i2c_chan *ast_i2c_create(struct drm_device *dev);
#endif

@@ -0,0 +1,152 @@
// SPDX-License-Identifier: MIT
/*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*/
#include <drm/drm_managed.h>
#include <drm/drm_print.h>
#include "ast_drv.h"
static void ast_i2c_setsda(void *i2c_priv, int data)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
int i;
u8 ujcrb7, jtemp;
for (i = 0; i < 0x10000; i++) {
ujcrb7 = ((data & 0x01) ? 0 : 1) << 2;
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf1, ujcrb7);
jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x04);
if (ujcrb7 == jtemp)
break;
}
}
static void ast_i2c_setscl(void *i2c_priv, int clock)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
int i;
u8 ujcrb7, jtemp;
for (i = 0; i < 0x10000; i++) {
ujcrb7 = ((clock & 0x01) ? 0 : 1);
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf4, ujcrb7);
jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x01);
if (ujcrb7 == jtemp)
break;
}
}
static int ast_i2c_getsda(void *i2c_priv)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
uint32_t val, val2, count, pass;
count = 0;
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
do {
val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
if (val == val2) {
pass++;
} else {
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
}
} while ((pass < 5) && (count++ < 0x10000));
return val & 1 ? 1 : 0;
}
static int ast_i2c_getscl(void *i2c_priv)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
uint32_t val, val2, count, pass;
count = 0;
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
do {
val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
if (val == val2) {
pass++;
} else {
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
}
} while ((pass < 5) && (count++ < 0x10000));
return val & 1 ? 1 : 0;
}
static void ast_i2c_release(struct drm_device *dev, void *res)
{
struct ast_i2c_chan *i2c = res;
i2c_del_adapter(&i2c->adapter);
kfree(i2c);
}
struct ast_i2c_chan *ast_i2c_create(struct drm_device *dev)
{
struct ast_i2c_chan *i2c;
int ret;
i2c = kzalloc(sizeof(struct ast_i2c_chan), GFP_KERNEL);
if (!i2c)
return NULL;
i2c->adapter.owner = THIS_MODULE;
i2c->adapter.class = I2C_CLASS_DDC;
i2c->adapter.dev.parent = dev->dev;
i2c->dev = dev;
i2c_set_adapdata(&i2c->adapter, i2c);
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"AST i2c bit bus");
i2c->adapter.algo_data = &i2c->bit;
i2c->bit.udelay = 20;
i2c->bit.timeout = 2;
i2c->bit.data = i2c;
i2c->bit.setsda = ast_i2c_setsda;
i2c->bit.setscl = ast_i2c_setscl;
i2c->bit.getsda = ast_i2c_getsda;
i2c->bit.getscl = ast_i2c_getscl;
ret = i2c_bit_add_bus(&i2c->adapter);
if (ret) {
drm_err(dev, "Failed to register bit i2c\n");
goto out_kfree;
}
ret = drmm_add_action_or_reset(dev, ast_i2c_release, i2c);
if (ret)
return NULL;
return i2c;
out_kfree:
kfree(i2c);
return NULL;
}

@@ -47,9 +47,6 @@
#include "ast_drv.h"
#include "ast_tables.h"
static struct ast_i2c_chan *ast_i2c_create(struct drm_device *dev);
static void ast_i2c_destroy(struct ast_i2c_chan *i2c);
static inline void ast_load_palette_index(struct ast_private *ast,
u8 index, u8 red, u8 green,
u8 blue)
@@ -1210,9 +1207,9 @@ static int ast_get_modes(struct drm_connector *connector)
{
struct ast_connector *ast_connector = to_ast_connector(connector);
struct ast_private *ast = to_ast_private(connector->dev);
struct edid *edid;
int ret;
struct edid *edid = NULL;
bool flags = false;
int ret;
if (ast->tx_chip_type == AST_TX_DP501) {
ast->dp501_maxclk = 0xff;
@@ -1226,7 +1223,7 @@ static int ast_get_modes(struct drm_connector *connector)
else
kfree(edid);
}
if (!flags)
if (!flags && ast_connector->i2c)
edid = drm_get_edid(connector, &ast_connector->i2c->adapter);
if (edid) {
drm_connector_update_edid_property(&ast_connector->base, edid);
@@ -1300,14 +1297,6 @@ static enum drm_mode_status ast_mode_valid(struct drm_connector *connector,
return flags;
}
static void ast_connector_destroy(struct drm_connector *connector)
{
struct ast_connector *ast_connector = to_ast_connector(connector);
ast_i2c_destroy(ast_connector->i2c);
drm_connector_cleanup(connector);
}
static const struct drm_connector_helper_funcs ast_connector_helper_funcs = {
.get_modes = ast_get_modes,
.mode_valid = ast_mode_valid,
@@ -1316,7 +1305,7 @@ static const struct drm_connector_helper_funcs ast_connector_helper_funcs = {
static const struct drm_connector_funcs ast_connector_funcs = {
.reset = drm_atomic_helper_connector_reset,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = ast_connector_destroy,
.destroy = drm_connector_cleanup,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
};
@@ -1332,10 +1321,13 @@ static int ast_connector_init(struct drm_device *dev)
if (!ast_connector->i2c)
drm_err(dev, "failed to add ddc bus for connector\n");
drm_connector_init_with_ddc(dev, connector,
&ast_connector_funcs,
DRM_MODE_CONNECTOR_VGA,
&ast_connector->i2c->adapter);
if (ast_connector->i2c)
drm_connector_init_with_ddc(dev, connector, &ast_connector_funcs,
DRM_MODE_CONNECTOR_VGA,
&ast_connector->i2c->adapter);
else
drm_connector_init(dev, connector, &ast_connector_funcs,
DRM_MODE_CONNECTOR_VGA);
drm_connector_helper_add(connector, &ast_connector_helper_funcs);
@@ -1413,124 +1405,3 @@ int ast_mode_config_init(struct ast_private *ast)
return 0;
}
static int get_clock(void *i2c_priv)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
uint32_t val, val2, count, pass;
count = 0;
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
do {
val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
if (val == val2) {
pass++;
} else {
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01;
}
} while ((pass < 5) && (count++ < 0x10000));
return val & 1 ? 1 : 0;
}
static int get_data(void *i2c_priv)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
uint32_t val, val2, count, pass;
count = 0;
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
do {
val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
if (val == val2) {
pass++;
} else {
pass = 0;
val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01;
}
} while ((pass < 5) && (count++ < 0x10000));
return val & 1 ? 1 : 0;
}
static void set_clock(void *i2c_priv, int clock)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
int i;
u8 ujcrb7, jtemp;
for (i = 0; i < 0x10000; i++) {
ujcrb7 = ((clock & 0x01) ? 0 : 1);
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf4, ujcrb7);
jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x01);
if (ujcrb7 == jtemp)
break;
}
}
static void set_data(void *i2c_priv, int data)
{
struct ast_i2c_chan *i2c = i2c_priv;
struct ast_private *ast = to_ast_private(i2c->dev);
int i;
u8 ujcrb7, jtemp;
for (i = 0; i < 0x10000; i++) {
ujcrb7 = ((data & 0x01) ? 0 : 1) << 2;
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf1, ujcrb7);
jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x04);
if (ujcrb7 == jtemp)
break;
}
}
static struct ast_i2c_chan *ast_i2c_create(struct drm_device *dev)
{
struct ast_i2c_chan *i2c;
int ret;
i2c = kzalloc(sizeof(struct ast_i2c_chan), GFP_KERNEL);
if (!i2c)
return NULL;
i2c->adapter.owner = THIS_MODULE;
i2c->adapter.class = I2C_CLASS_DDC;
i2c->adapter.dev.parent = dev->dev;
i2c->dev = dev;
i2c_set_adapdata(&i2c->adapter, i2c);
snprintf(i2c->adapter.name, sizeof(i2c->adapter.name),
"AST i2c bit bus");
i2c->adapter.algo_data = &i2c->bit;
i2c->bit.udelay = 20;
i2c->bit.timeout = 2;
i2c->bit.data = i2c;
i2c->bit.setsda = set_data;
i2c->bit.setscl = set_clock;
i2c->bit.getsda = get_data;
i2c->bit.getscl = get_clock;
ret = i2c_bit_add_bus(&i2c->adapter);
if (ret) {
drm_err(dev, "Failed to register bit i2c\n");
goto out_free;
}
return i2c;
out_free:
kfree(i2c);
return NULL;
}
static void ast_i2c_destroy(struct ast_i2c_chan *i2c)
{
if (!i2c)
return;
i2c_del_adapter(&i2c->adapter);
kfree(i2c);
}

@@ -21,6 +21,7 @@ struct lvds_codec {
struct device *dev;
struct drm_bridge bridge;
struct drm_bridge *panel_bridge;
struct drm_bridge_timings timings;
struct regulator *vcc;
struct gpio_desc *powerdown_gpio;
u32 connector_type;
@@ -119,6 +120,7 @@ static int lvds_codec_probe(struct platform_device *pdev)
struct device_node *bus_node;
struct drm_panel *panel;
struct lvds_codec *lvds_codec;
u32 val;
int ret;
lvds_codec = devm_kzalloc(dev, sizeof(*lvds_codec), GFP_KERNEL);
@@ -187,12 +189,25 @@ static int lvds_codec_probe(struct platform_device *pdev)
}
}
/*
* The encoder might sample data on a different clock edge than the display;
* for example, the OnSemi FIN3385 has a dedicated strapping pin to select
* the sampling edge.
*/
if (lvds_codec->connector_type == DRM_MODE_CONNECTOR_LVDS &&
!of_property_read_u32(dev->of_node, "pclk-sample", &val)) {
lvds_codec->timings.input_bus_flags = val ?
DRM_BUS_FLAG_PIXDATA_SAMPLE_POSEDGE :
DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE;
}
/*
* The panel_bridge bridge is attached to the panel's of_node,
* but we need a bridge attached to our of_node for our user
* to look up.
*/
lvds_codec->bridge.of_node = dev->of_node;
lvds_codec->bridge.timings = &lvds_codec->timings;
drm_bridge_add(&lvds_codec->bridge);
platform_set_drvdata(pdev, lvds_codec);

@@ -213,6 +213,7 @@ static const struct regmap_config ti_sn65dsi86_regmap_config = {
.val_bits = 8,
.volatile_table = &ti_sn_bridge_volatile_table,
.cache_type = REGCACHE_NONE,
.max_register = 0xFF,
};
static int __maybe_unused ti_sn65dsi86_read_u16(struct ti_sn65dsi86 *pdata,

@@ -409,6 +409,61 @@ void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch,
}
EXPORT_SYMBOL(drm_fb_xrgb8888_to_rgb888_toio);
static void drm_fb_xrgb8888_to_xrgb2101010_line(u32 *dbuf, const u32 *sbuf,
unsigned int pixels)
{
unsigned int x;
u32 val32;
for (x = 0; x < pixels; x++) {
val32 = ((sbuf[x] & 0x000000FF) << 2) |
((sbuf[x] & 0x0000FF00) << 4) |
((sbuf[x] & 0x00FF0000) << 6);
*dbuf++ = val32 | ((val32 >> 8) & 0x00300C03);
}
}
/**
* drm_fb_xrgb8888_to_xrgb2101010_toio - Convert XRGB8888 to XRGB2101010 clip
* buffer
* @dst: XRGB2101010 destination buffer (iomem)
* @dst_pitch: Number of bytes between two consecutive scanlines within dst
* @vaddr: XRGB8888 source buffer
* @fb: DRM framebuffer
* @clip: Clip rectangle area to copy
*
* Drivers can use this function for XRGB2101010 devices that don't natively
* support XRGB8888.
*/
void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst,
unsigned int dst_pitch, const void *vaddr,
const struct drm_framebuffer *fb,
const struct drm_rect *clip)
{
size_t linepixels = clip->x2 - clip->x1;
size_t dst_len = linepixels * sizeof(u32);
unsigned int y, lines = clip->y2 - clip->y1;
void *dbuf;
if (!dst_pitch)
dst_pitch = dst_len;
dbuf = kmalloc(dst_len, GFP_KERNEL);
if (!dbuf)
return;
vaddr += clip_offset(clip, fb->pitches[0], sizeof(u32));
for (y = 0; y < lines; y++) {
drm_fb_xrgb8888_to_xrgb2101010_line(dbuf, vaddr, linepixels);
memcpy_toio(dst, dbuf, dst_len);
vaddr += fb->pitches[0];
dst += dst_pitch;
}
kfree(dbuf);
}
EXPORT_SYMBOL(drm_fb_xrgb8888_to_xrgb2101010_toio);
/**
* drm_fb_xrgb8888_to_gray8 - Convert XRGB8888 to grayscale
* @dst: 8-bit grayscale destination buffer
@@ -500,6 +555,10 @@ int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_for
fb_format = DRM_FORMAT_XRGB8888;
if (dst_format == DRM_FORMAT_ARGB8888)
dst_format = DRM_FORMAT_XRGB8888;
if (fb_format == DRM_FORMAT_ARGB2101010)
fb_format = DRM_FORMAT_XRGB2101010;
if (dst_format == DRM_FORMAT_ARGB2101010)
dst_format = DRM_FORMAT_XRGB2101010;
if (dst_format == fb_format) {
drm_fb_memcpy_toio(dst, dst_pitch, vmap, fb, clip);
@@ -515,6 +574,11 @@
drm_fb_xrgb8888_to_rgb888_toio(dst, dst_pitch, vmap, fb, clip);
return 0;
}
} else if (dst_format == DRM_FORMAT_XRGB2101010) {
if (fb_format == DRM_FORMAT_XRGB8888) {
drm_fb_xrgb8888_to_xrgb2101010_toio(dst, dst_pitch, vmap, fb, clip);
return 0;
}
}
return -EINVAL;

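For context, a minimal sketch of how a driver might call the new conversion helper. This is illustrative only and not part of this series; the function name example_blit_damage and the screen_base/screen_pitch parameters are assumptions standing in for a device whose scanout buffer is XRGB2101010 in I/O memory while userspace renders XRGB8888.

#include <drm/drm_format_helper.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_rect.h>

/*
 * Illustrative sketch: blit the damaged region of an XRGB8888 shadow buffer
 * into XRGB2101010 I/O memory. screen_base and screen_pitch describe a
 * hypothetical device's scanout buffer.
 */
static void example_blit_damage(void __iomem *screen_base,
                                unsigned int screen_pitch, const void *vmap,
                                struct drm_framebuffer *fb, struct drm_rect *clip)
{
        /* Point at the first damaged pixel; XRGB2101010 uses 4 bytes per pixel. */
        void __iomem *dst = screen_base + clip->y1 * screen_pitch + clip->x1 * 4;

        /* The helper offsets the source by the clip itself and expands 8-bit
         * channels to 10 bits per drm_fb_xrgb8888_to_xrgb2101010_line(). */
        drm_fb_xrgb8888_to_xrgb2101010_toio(dst, screen_pitch, vmap, fb, clip);
}
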
@@ -269,6 +269,9 @@ const struct drm_format_info *__drm_format_info(u32 format)
.num_planes = 3, .char_per_block = { 2, 2, 2 },
.block_w = { 1, 1, 1 }, .block_h = { 1, 1, 1 }, .hsub = 0,
.vsub = 0, .is_yuv = true },
{ .format = DRM_FORMAT_P030, .depth = 0, .num_planes = 2,
.char_per_block = { 4, 8, 0 }, .block_w = { 3, 3, 0 }, .block_h = { 1, 1, 0 },
.hsub = 2, .vsub = 2, .is_yuv = true},
};
unsigned int i;

@@ -3252,6 +3252,33 @@ static const struct panel_desc starry_kr070pe2t = {
.connector_type = DRM_MODE_CONNECTOR_DPI,
};
static const struct display_timing tsd_tst043015cmhx_timing = {
.pixelclock = { 5000000, 9000000, 12000000 },
.hactive = { 480, 480, 480 },
.hfront_porch = { 4, 5, 65 },
.hback_porch = { 36, 40, 255 },
.hsync_len = { 1, 1, 1 },
.vactive = { 272, 272, 272 },
.vfront_porch = { 2, 8, 97 },
.vback_porch = { 3, 8, 31 },
.vsync_len = { 1, 1, 1 },
.flags = DISPLAY_FLAGS_HSYNC_LOW | DISPLAY_FLAGS_VSYNC_LOW |
DISPLAY_FLAGS_DE_HIGH | DISPLAY_FLAGS_PIXDATA_POSEDGE,
};
static const struct panel_desc tsd_tst043015cmhx = {
.timings = &tsd_tst043015cmhx_timing,
.num_timings = 1,
.bpc = 8,
.size = {
.width = 105,
.height = 67,
},
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_SAMPLE_NEGEDGE,
};
static const struct drm_display_mode tfc_s9700rtwv43tr_01b_mode = {
.clock = 30000,
.hdisplay = 800,
@@ -3928,6 +3955,9 @@ static const struct of_device_id platform_of_match[] = {
}, {
.compatible = "starry,kr070pe2t",
.data = &starry_kr070pe2t,
}, {
.compatible = "team-source-display,tst043015cmhx",
.data = &tsd_tst043015cmhx,
}, {
.compatible = "tfc,s9700rtwv43tr-01b",
.data = &tfc_s9700rtwv43tr_01b,

@@ -0,0 +1,13 @@
config DRM_SPRD
tristate "DRM Support for Unisoc SoCs Platform"
depends on ARCH_SPRD || COMPILE_TEST
depends on DRM && OF
select DRM_GEM_CMA_HELPER
select DRM_KMS_CMA_HELPER
select DRM_KMS_HELPER
select DRM_MIPI_DSI
select VIDEOMODE_HELPERS
help
Choose this option if you have a Unisoc chipset.
If M is selected the module will be called sprd_drm.

@@ -0,0 +1,8 @@
# SPDX-License-Identifier: GPL-2.0
sprd-drm-y := sprd_drm.o \
sprd_dpu.o \
sprd_dsi.o \
megacores_pll.o
obj-$(CONFIG_DRM_SPRD) += sprd-drm.o

@@ -0,0 +1,305 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020 Unisoc Inc.
*/
#include <asm/div64.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/regmap.h>
#include <linux/string.h>
#include "sprd_dsi.h"
#define L 0
#define H 1
#define CLK 0
#define DATA 1
#define INFINITY 0xffffffff
#define MIN_OUTPUT_FREQ (100)
#define AVERAGE(a, b) (min(a, b) + abs((b) - (a)) / 2)
/* sharkle */
#define VCO_BAND_LOW 750
#define VCO_BAND_MID 1100
#define VCO_BAND_HIGH 1500
#define PHY_REF_CLK 26000
static int dphy_calc_pll_param(struct dphy_pll *pll)
{
const u32 khz = 1000;
const u32 mhz = 1000000;
const unsigned long long factor = 100;
unsigned long long tmp;
int i;
pll->potential_fvco = pll->freq / khz;
pll->ref_clk = PHY_REF_CLK / khz;
for (i = 0; i < 4; ++i) {
if (pll->potential_fvco >= VCO_BAND_LOW &&
pll->potential_fvco <= VCO_BAND_HIGH) {
pll->fvco = pll->potential_fvco;
pll->out_sel = BIT(i);
break;
}
pll->potential_fvco <<= 1;
}
if (pll->fvco == 0)
return -EINVAL;
if (pll->fvco >= VCO_BAND_LOW && pll->fvco <= VCO_BAND_MID) {
/* vco band control */
pll->vco_band = 0x0;
/* low pass filter control */
pll->lpf_sel = 1;
} else if (pll->fvco > VCO_BAND_MID && pll->fvco <= VCO_BAND_HIGH) {
pll->vco_band = 0x1;
pll->lpf_sel = 0;
} else {
return -EINVAL;
}
pll->nint = pll->fvco / pll->ref_clk;
tmp = pll->fvco * factor * mhz;
do_div(tmp, pll->ref_clk);
tmp = tmp - pll->nint * factor * mhz;
tmp *= BIT(20);
do_div(tmp, 100000000);
pll->kint = (u32)tmp;
pll->refin = 3; /* pre-divider bypass */
pll->sdm_en = true; /* use fraction N PLL */
pll->fdk_s = 0x1; /* fraction */
pll->cp_s = 0x0;
pll->det_delay = 0x1;
return 0;
}
static void dphy_set_pll_reg(struct dphy_pll *pll, struct regmap *regmap)
{
u8 reg_val[9] = {0};
int i;
u8 reg_addr[] = {
0x03, 0x04, 0x06, 0x08, 0x09,
0x0a, 0x0b, 0x0e, 0x0f
};
reg_val[0] = 1 | (1 << 1) | (pll->lpf_sel << 2);
reg_val[1] = pll->div | (1 << 3) | (pll->cp_s << 5) | (pll->fdk_s << 7);
reg_val[2] = pll->nint;
reg_val[3] = pll->vco_band | (pll->sdm_en << 1) | (pll->refin << 2);
reg_val[4] = pll->kint >> 12;
reg_val[5] = pll->kint >> 4;
reg_val[6] = pll->out_sel | ((pll->kint << 4) & 0xf);
reg_val[7] = 1 << 4;
reg_val[8] = pll->det_delay;
for (i = 0; i < sizeof(reg_addr); ++i) {
regmap_write(regmap, reg_addr[i], reg_val[i]);
DRM_DEBUG("%02x: %02x\n", reg_addr[i], reg_val[i]);
}
}
int dphy_pll_config(struct dsi_context *ctx)
{
struct sprd_dsi *dsi = container_of(ctx, struct sprd_dsi, ctx);
struct regmap *regmap = ctx->regmap;
struct dphy_pll *pll = &ctx->pll;
int ret;
pll->freq = dsi->slave->hs_rate;
/* FREQ = 26M * (NINT + KINT / 2^20) / out_sel */
ret = dphy_calc_pll_param(pll);
if (ret) {
drm_err(dsi->drm, "failed to calculate dphy pll parameters\n");
return ret;
}
dphy_set_pll_reg(pll, regmap);
return 0;
}
static void dphy_set_timing_reg(struct regmap *regmap, int type, u8 val[])
{
switch (type) {
case REQUEST_TIME:
regmap_write(regmap, 0x31, val[CLK]);
regmap_write(regmap, 0x41, val[DATA]);
regmap_write(regmap, 0x51, val[DATA]);
regmap_write(regmap, 0x61, val[DATA]);
regmap_write(regmap, 0x71, val[DATA]);
regmap_write(regmap, 0x90, val[CLK]);
regmap_write(regmap, 0xa0, val[DATA]);
regmap_write(regmap, 0xb0, val[DATA]);
regmap_write(regmap, 0xc0, val[DATA]);
regmap_write(regmap, 0xd0, val[DATA]);
break;
case PREPARE_TIME:
regmap_write(regmap, 0x32, val[CLK]);
regmap_write(regmap, 0x42, val[DATA]);
regmap_write(regmap, 0x52, val[DATA]);
regmap_write(regmap, 0x62, val[DATA]);
regmap_write(regmap, 0x72, val[DATA]);
regmap_write(regmap, 0x91, val[CLK]);
regmap_write(regmap, 0xa1, val[DATA]);
regmap_write(regmap, 0xb1, val[DATA]);
regmap_write(regmap, 0xc1, val[DATA]);
regmap_write(regmap, 0xd1, val[DATA]);
break;
case ZERO_TIME:
regmap_write(regmap, 0x33, val[CLK]);
regmap_write(regmap, 0x43, val[DATA]);
regmap_write(regmap, 0x53, val[DATA]);
regmap_write(regmap, 0x63, val[DATA]);
regmap_write(regmap, 0x73, val[DATA]);
regmap_write(regmap, 0x92, val[CLK]);
regmap_write(regmap, 0xa2, val[DATA]);
regmap_write(regmap, 0xb2, val[DATA]);
regmap_write(regmap, 0xc2, val[DATA]);
regmap_write(regmap, 0xd2, val[DATA]);
break;
case TRAIL_TIME:
regmap_write(regmap, 0x34, val[CLK]);
regmap_write(regmap, 0x44, val[DATA]);
regmap_write(regmap, 0x54, val[DATA]);
regmap_write(regmap, 0x64, val[DATA]);
regmap_write(regmap, 0x74, val[DATA]);
regmap_write(regmap, 0x93, val[CLK]);
regmap_write(regmap, 0xa3, val[DATA]);
regmap_write(regmap, 0xb3, val[DATA]);
regmap_write(regmap, 0xc3, val[DATA]);
regmap_write(regmap, 0xd3, val[DATA]);
break;
case EXIT_TIME:
regmap_write(regmap, 0x36, val[CLK]);
regmap_write(regmap, 0x46, val[DATA]);
regmap_write(regmap, 0x56, val[DATA]);
regmap_write(regmap, 0x66, val[DATA]);
regmap_write(regmap, 0x76, val[DATA]);
regmap_write(regmap, 0x95, val[CLK]);
regmap_write(regmap, 0xA5, val[DATA]);
regmap_write(regmap, 0xB5, val[DATA]);
regmap_write(regmap, 0xc5, val[DATA]);
regmap_write(regmap, 0xd5, val[DATA]);
break;
case CLKPOST_TIME:
regmap_write(regmap, 0x35, val[CLK]);
regmap_write(regmap, 0x94, val[CLK]);
break;
/* the following just use default value */
case SETTLE_TIME:
fallthrough;
case TA_GET:
fallthrough;
case TA_GO:
fallthrough;
case TA_SURE:
fallthrough;
default:
break;
}
}
void dphy_timing_config(struct dsi_context *ctx)
{
struct regmap *regmap = ctx->regmap;
struct dphy_pll *pll = &ctx->pll;
const u32 factor = 2;
const u32 scale = 100;
u32 t_ui, t_byteck, t_half_byteck;
u32 range[2], constant;
u8 val[2];
u32 tmp = 0;
/* t_ui: 1 ui, byteck: 8 ui, half byteck: 4 ui */
t_ui = 1000 * scale / (pll->freq / 1000);
t_byteck = t_ui << 3;
t_half_byteck = t_ui << 2;
constant = t_ui << 1;
/* REQUEST_TIME: HS T-LPX: LP-01
* For T-LPX, the MIPI spec defines a minimum value of 50 ns,
* but it should not be made too small, because BTA,
* LP-10, LP-00 and LP-01 are all related to T-LPX.
*/
range[L] = 50 * scale;
range[H] = INFINITY;
val[CLK] = DIV_ROUND_UP(range[L] * (factor << 1), t_byteck) - 2;
val[DATA] = val[CLK];
dphy_set_timing_reg(regmap, REQUEST_TIME, val);
/* PREPARE_TIME: HS sequence: LP-00 */
range[L] = 38 * scale;
range[H] = 95 * scale;
tmp = AVERAGE(range[L], range[H]);
val[CLK] = DIV_ROUND_UP(AVERAGE(range[L], range[H]), t_half_byteck) - 1;
range[L] = 40 * scale + 4 * t_ui;
range[H] = 85 * scale + 6 * t_ui;
tmp |= AVERAGE(range[L], range[H]) << 16;
val[DATA] = DIV_ROUND_UP(AVERAGE(range[L], range[H]), t_half_byteck) - 1;
dphy_set_timing_reg(regmap, PREPARE_TIME, val);
/* ZERO_TIME: HS-ZERO */
range[L] = 300 * scale;
range[H] = INFINITY;
val[CLK] = DIV_ROUND_UP(range[L] * factor + (tmp & 0xffff)
- 525 * t_byteck / 100, t_byteck) - 2;
range[L] = 145 * scale + 10 * t_ui;
val[DATA] = DIV_ROUND_UP(range[L] * factor
+ ((tmp >> 16) & 0xffff) - 525 * t_byteck / 100,
t_byteck) - 2;
dphy_set_timing_reg(regmap, ZERO_TIME, val);
/* TRAIL_TIME: HS-TRAIL */
range[L] = 60 * scale;
range[H] = INFINITY;
val[CLK] = DIV_ROUND_UP(range[L] * factor - constant, t_half_byteck);
range[L] = max(8 * t_ui, 60 * scale + 4 * t_ui);
val[DATA] = DIV_ROUND_UP(range[L] * 3 / 2 - constant, t_half_byteck) - 2;
dphy_set_timing_reg(regmap, TRAIL_TIME, val);
/* EXIT_TIME: */
range[L] = 100 * scale;
range[H] = INFINITY;
val[CLK] = DIV_ROUND_UP(range[L] * factor, t_byteck) - 2;
val[DATA] = val[CLK];
dphy_set_timing_reg(regmap, EXIT_TIME, val);
/* CLKPOST_TIME: */
range[L] = 60 * scale + 52 * t_ui;
range[H] = INFINITY;
val[CLK] = DIV_ROUND_UP(range[L] * factor, t_byteck) - 2;
val[DATA] = val[CLK];
dphy_set_timing_reg(regmap, CLKPOST_TIME, val);
/* SETTLE_TIME:
* This time is used for receiver. So for transmitter,
* it can be ignored.
*/
/* TA_GO:
* transmitter drives bridge state(LP-00) before releasing control,
* reg 0x1f default value: 0x04, which is good.
*/
/* TA_SURE:
* After LP-10 state and before bridge state(LP-00),
* reg 0x20 default value: 0x01, which is good.
*/
/* TA_GET:
* receiver drives Bridge state(LP-00) before releasing control
* reg 0x21 default value: 0x03, which is good.
*/
}

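To make the fixed-point math in dphy_calc_pll_param() concrete, here is a small worked example of the FREQ = 26M * (NINT + KINT / 2^20) / out_sel relation. It is my own illustration, not part of this series, and it simply starts from a target VCO frequency of 1000 MHz that already falls inside the 750..1500 MHz band, so out_sel is 1.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Assumed units: fvco and ref_clk in MHz, matching the VCO band checks. */
        uint64_t fvco = 1000;
        uint64_t ref_clk = 26;                  /* PHY_REF_CLK / 1000 */
        uint64_t nint = fvco / ref_clk;         /* 38 */
        uint64_t tmp = fvco * 100 * 1000000ULL; /* scale by factor * mhz */

        tmp /= ref_clk;                         /* 3846153846 */
        tmp -= nint * 100 * 1000000ULL;         /* 46153846 */
        tmp = (tmp << 20) / 100000000ULL;       /* kint = 483958 */

        /* 26 MHz * (38 + 483958 / 2^20) is roughly 1000 MHz, the target rate. */
        printf("nint=%llu kint=%llu\n",
               (unsigned long long)nint, (unsigned long long)tmp);
        return 0;
}
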
@@ -0,0 +1,880 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020 Unisoc Inc.
*/
#include <linux/component.h>
#include <linux/delay.h>
#include <linux/dma-buf.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_graph.h>
#include <linux/of_irq.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_plane_helper.h>
#include "sprd_drm.h"
#include "sprd_dpu.h"
#include "sprd_dsi.h"
/* Global control registers */
#define REG_DPU_CTRL 0x04
#define REG_DPU_CFG0 0x08
#define REG_PANEL_SIZE 0x20
#define REG_BLEND_SIZE 0x24
#define REG_BG_COLOR 0x2C
/* Layer0 control registers */
#define REG_LAY_BASE_ADDR0 0x30
#define REG_LAY_BASE_ADDR1 0x34
#define REG_LAY_BASE_ADDR2 0x38
#define REG_LAY_CTRL 0x40
#define REG_LAY_SIZE 0x44
#define REG_LAY_PITCH 0x48
#define REG_LAY_POS 0x4C
#define REG_LAY_ALPHA 0x50
#define REG_LAY_CROP_START 0x5C
/* Interrupt control registers */
#define REG_DPU_INT_EN 0x1E0
#define REG_DPU_INT_CLR 0x1E4
#define REG_DPU_INT_STS 0x1E8
/* DPI control registers */
#define REG_DPI_CTRL 0x1F0
#define REG_DPI_H_TIMING 0x1F4
#define REG_DPI_V_TIMING 0x1F8
/* MMU control registers */
#define REG_MMU_EN 0x800
#define REG_MMU_VPN_RANGE 0x80C
#define REG_MMU_PPN1 0x83C
#define REG_MMU_RANGE1 0x840
#define REG_MMU_PPN2 0x844
#define REG_MMU_RANGE2 0x848
/* Global control bits */
#define BIT_DPU_RUN BIT(0)
#define BIT_DPU_STOP BIT(1)
#define BIT_DPU_REG_UPDATE BIT(2)
#define BIT_DPU_IF_EDPI BIT(0)
/* Layer control bits */
#define BIT_DPU_LAY_EN BIT(0)
#define BIT_DPU_LAY_LAYER_ALPHA (0x01 << 2)
#define BIT_DPU_LAY_COMBO_ALPHA (0x02 << 2)
#define BIT_DPU_LAY_FORMAT_YUV422_2PLANE (0x00 << 4)
#define BIT_DPU_LAY_FORMAT_YUV420_2PLANE (0x01 << 4)
#define BIT_DPU_LAY_FORMAT_YUV420_3PLANE (0x02 << 4)
#define BIT_DPU_LAY_FORMAT_ARGB8888 (0x03 << 4)
#define BIT_DPU_LAY_FORMAT_RGB565 (0x04 << 4)
#define BIT_DPU_LAY_DATA_ENDIAN_B0B1B2B3 (0x00 << 8)
#define BIT_DPU_LAY_DATA_ENDIAN_B3B2B1B0 (0x01 << 8)
#define BIT_DPU_LAY_NO_SWITCH (0x00 << 10)
#define BIT_DPU_LAY_RB_OR_UV_SWITCH (0x01 << 10)
#define BIT_DPU_LAY_MODE_BLEND_NORMAL (0x00 << 16)
#define BIT_DPU_LAY_MODE_BLEND_PREMULT (0x01 << 16)
#define BIT_DPU_LAY_ROTATION_0 (0x00 << 20)
#define BIT_DPU_LAY_ROTATION_90 (0x01 << 20)
#define BIT_DPU_LAY_ROTATION_180 (0x02 << 20)
#define BIT_DPU_LAY_ROTATION_270 (0x03 << 20)
#define BIT_DPU_LAY_ROTATION_0_M (0x04 << 20)
#define BIT_DPU_LAY_ROTATION_90_M (0x05 << 20)
#define BIT_DPU_LAY_ROTATION_180_M (0x06 << 20)
#define BIT_DPU_LAY_ROTATION_270_M (0x07 << 20)
/* Interrupt control & status bits */
#define BIT_DPU_INT_DONE BIT(0)
#define BIT_DPU_INT_TE BIT(1)
#define BIT_DPU_INT_ERR BIT(2)
#define BIT_DPU_INT_UPDATE_DONE BIT(4)
#define BIT_DPU_INT_VSYNC BIT(5)
/* DPI control bits */
#define BIT_DPU_EDPI_TE_EN BIT(8)
#define BIT_DPU_EDPI_FROM_EXTERNAL_PAD BIT(10)
#define BIT_DPU_DPI_HALT_EN BIT(16)
static const u32 layer_fmts[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_ARGB8888,
DRM_FORMAT_ABGR8888,
DRM_FORMAT_RGBA8888,
DRM_FORMAT_BGRA8888,
DRM_FORMAT_RGBX8888,
DRM_FORMAT_RGB565,
DRM_FORMAT_BGR565,
DRM_FORMAT_NV12,
DRM_FORMAT_NV21,
DRM_FORMAT_NV16,
DRM_FORMAT_NV61,
DRM_FORMAT_YUV420,
DRM_FORMAT_YVU420,
};
struct sprd_plane {
struct drm_plane base;
};
static int dpu_wait_stop_done(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
int rc;
if (ctx->stopped)
return 0;
rc = wait_event_interruptible_timeout(ctx->wait_queue, ctx->evt_stop,
msecs_to_jiffies(500));
ctx->evt_stop = false;
ctx->stopped = true;
if (!rc) {
drm_err(dpu->drm, "dpu wait for stop done time out!\n");
return -ETIMEDOUT;
}
return 0;
}
static int dpu_wait_update_done(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
int rc;
ctx->evt_update = false;
rc = wait_event_interruptible_timeout(ctx->wait_queue, ctx->evt_update,
msecs_to_jiffies(500));
if (!rc) {
drm_err(dpu->drm, "dpu wait for reg update done time out!\n");
return -ETIMEDOUT;
}
return 0;
}
static u32 drm_format_to_dpu(struct drm_framebuffer *fb)
{
u32 format = 0;
switch (fb->format->format) {
case DRM_FORMAT_BGRA8888:
/* BGRA8888 -> ARGB8888 */
format |= BIT_DPU_LAY_DATA_ENDIAN_B3B2B1B0;
format |= BIT_DPU_LAY_FORMAT_ARGB8888;
break;
case DRM_FORMAT_RGBX8888:
case DRM_FORMAT_RGBA8888:
/* RGBA8888 -> ABGR8888 */
format |= BIT_DPU_LAY_DATA_ENDIAN_B3B2B1B0;
fallthrough;
case DRM_FORMAT_ABGR8888:
/* RB switch */
format |= BIT_DPU_LAY_RB_OR_UV_SWITCH;
fallthrough;
case DRM_FORMAT_ARGB8888:
format |= BIT_DPU_LAY_FORMAT_ARGB8888;
break;
case DRM_FORMAT_XBGR8888:
/* RB switch */
format |= BIT_DPU_LAY_RB_OR_UV_SWITCH;
fallthrough;
case DRM_FORMAT_XRGB8888:
format |= BIT_DPU_LAY_FORMAT_ARGB8888;
break;
case DRM_FORMAT_BGR565:
/* RB switch */
format |= BIT_DPU_LAY_RB_OR_UV_SWITCH;
fallthrough;
case DRM_FORMAT_RGB565:
format |= BIT_DPU_LAY_FORMAT_RGB565;
break;
case DRM_FORMAT_NV12:
/* 2-Lane: Yuv420 */
format |= BIT_DPU_LAY_FORMAT_YUV420_2PLANE;
/* Y endian */
format |= BIT_DPU_LAY_DATA_ENDIAN_B0B1B2B3;
/* UV endian */
format |= BIT_DPU_LAY_NO_SWITCH;
break;
case DRM_FORMAT_NV21:
/* 2-Lane: Yuv420 */
format |= BIT_DPU_LAY_FORMAT_YUV420_2PLANE;
/* Y endian */
format |= BIT_DPU_LAY_DATA_ENDIAN_B0B1B2B3;
/* UV endian */
format |= BIT_DPU_LAY_RB_OR_UV_SWITCH;
break;
case DRM_FORMAT_NV16:
/* 2-Lane: Yuv422 */
format |= BIT_DPU_LAY_FORMAT_YUV422_2PLANE;
/* Y endian */
format |= BIT_DPU_LAY_DATA_ENDIAN_B3B2B1B0;
/* UV endian */
format |= BIT_DPU_LAY_RB_OR_UV_SWITCH;
break;
case DRM_FORMAT_NV61:
/* 2-Lane: Yuv422 */
format |= BIT_DPU_LAY_FORMAT_YUV422_2PLANE;
/* Y endian */
format |= BIT_DPU_LAY_DATA_ENDIAN_B0B1B2B3;
/* UV endian */
format |= BIT_DPU_LAY_NO_SWITCH;
break;
case DRM_FORMAT_YUV420:
format |= BIT_DPU_LAY_FORMAT_YUV420_3PLANE;
/* Y endian */
format |= BIT_DPU_LAY_DATA_ENDIAN_B0B1B2B3;
/* UV endian */
format |= BIT_DPU_LAY_NO_SWITCH;
break;
case DRM_FORMAT_YVU420:
format |= BIT_DPU_LAY_FORMAT_YUV420_3PLANE;
/* Y endian */
format |= BIT_DPU_LAY_DATA_ENDIAN_B0B1B2B3;
/* UV endian */
format |= BIT_DPU_LAY_RB_OR_UV_SWITCH;
break;
default:
break;
}
return format;
}
static u32 drm_rotation_to_dpu(struct drm_plane_state *state)
{
u32 rotation = 0;
switch (state->rotation) {
default:
case DRM_MODE_ROTATE_0:
rotation = BIT_DPU_LAY_ROTATION_0;
break;
case DRM_MODE_ROTATE_90:
rotation = BIT_DPU_LAY_ROTATION_90;
break;
case DRM_MODE_ROTATE_180:
rotation = BIT_DPU_LAY_ROTATION_180;
break;
case DRM_MODE_ROTATE_270:
rotation = BIT_DPU_LAY_ROTATION_270;
break;
case DRM_MODE_REFLECT_Y:
rotation = BIT_DPU_LAY_ROTATION_180_M;
break;
case (DRM_MODE_REFLECT_Y | DRM_MODE_ROTATE_90):
rotation = BIT_DPU_LAY_ROTATION_90_M;
break;
case DRM_MODE_REFLECT_X:
rotation = BIT_DPU_LAY_ROTATION_0_M;
break;
case (DRM_MODE_REFLECT_X | DRM_MODE_ROTATE_90):
rotation = BIT_DPU_LAY_ROTATION_270_M;
break;
}
return rotation;
}
static u32 drm_blend_to_dpu(struct drm_plane_state *state)
{
u32 blend = 0;
switch (state->pixel_blend_mode) {
case DRM_MODE_BLEND_COVERAGE:
/* alpha mode select - combo alpha */
blend |= BIT_DPU_LAY_COMBO_ALPHA;
/* Normal mode */
blend |= BIT_DPU_LAY_MODE_BLEND_NORMAL;
break;
case DRM_MODE_BLEND_PREMULTI:
/* alpha mode select - combo alpha */
blend |= BIT_DPU_LAY_COMBO_ALPHA;
/* Pre-mult mode */
blend |= BIT_DPU_LAY_MODE_BLEND_PREMULT;
break;
case DRM_MODE_BLEND_PIXEL_NONE:
default:
/* don't do blending, maybe RGBX */
/* alpha mode select - layer alpha */
blend |= BIT_DPU_LAY_LAYER_ALPHA;
break;
}
return blend;
}
static void sprd_dpu_layer(struct sprd_dpu *dpu, struct drm_plane_state *state)
{
struct dpu_context *ctx = &dpu->ctx;
struct drm_gem_cma_object *cma_obj;
struct drm_framebuffer *fb = state->fb;
u32 addr, size, offset, pitch, blend, format, rotation;
u32 src_x = state->src_x >> 16;
u32 src_y = state->src_y >> 16;
u32 src_w = state->src_w >> 16;
u32 src_h = state->src_h >> 16;
u32 dst_x = state->crtc_x;
u32 dst_y = state->crtc_y;
u32 alpha = state->alpha;
u32 index = state->zpos;
int i;
offset = (dst_x & 0xffff) | (dst_y << 16);
size = (src_w & 0xffff) | (src_h << 16);
for (i = 0; i < fb->format->num_planes; i++) {
cma_obj = drm_fb_cma_get_gem_obj(fb, i);
addr = cma_obj->paddr + fb->offsets[i];
if (i == 0)
layer_reg_wr(ctx, REG_LAY_BASE_ADDR0, addr, index);
else if (i == 1)
layer_reg_wr(ctx, REG_LAY_BASE_ADDR1, addr, index);
else
layer_reg_wr(ctx, REG_LAY_BASE_ADDR2, addr, index);
}
if (fb->format->num_planes == 3) {
/* UV pitch is 1/2 of Y pitch */
pitch = (fb->pitches[0] / fb->format->cpp[0]) |
(fb->pitches[0] / fb->format->cpp[0] << 15);
} else {
pitch = fb->pitches[0] / fb->format->cpp[0];
}
layer_reg_wr(ctx, REG_LAY_POS, offset, index);
layer_reg_wr(ctx, REG_LAY_SIZE, size, index);
layer_reg_wr(ctx, REG_LAY_CROP_START,
src_y << 16 | src_x, index);
layer_reg_wr(ctx, REG_LAY_ALPHA, alpha, index);
layer_reg_wr(ctx, REG_LAY_PITCH, pitch, index);
format = drm_format_to_dpu(fb);
blend = drm_blend_to_dpu(state);
rotation = drm_rotation_to_dpu(state);
layer_reg_wr(ctx, REG_LAY_CTRL, BIT_DPU_LAY_EN |
format |
blend |
rotation,
index);
}
static void sprd_dpu_flip(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
/*
* Make sure the dpu is in stop status. DPU has no shadow
* registers in EDPI mode. So the config registers can only be
* updated in the rising edge of DPU_RUN bit.
*/
if (ctx->if_type == SPRD_DPU_IF_EDPI)
dpu_wait_stop_done(dpu);
/* update trigger and wait */
if (ctx->if_type == SPRD_DPU_IF_DPI) {
if (!ctx->stopped) {
dpu_reg_set(ctx, REG_DPU_CTRL, BIT_DPU_REG_UPDATE);
dpu_wait_update_done(dpu);
}
dpu_reg_set(ctx, REG_DPU_INT_EN, BIT_DPU_INT_ERR);
} else if (ctx->if_type == SPRD_DPU_IF_EDPI) {
dpu_reg_set(ctx, REG_DPU_CTRL, BIT_DPU_RUN);
ctx->stopped = false;
}
}
static void sprd_dpu_init(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
u32 int_mask = 0;
writel(0x00, ctx->base + REG_BG_COLOR);
writel(0x00, ctx->base + REG_MMU_EN);
writel(0x00, ctx->base + REG_MMU_PPN1);
writel(0xffff, ctx->base + REG_MMU_RANGE1);
writel(0x00, ctx->base + REG_MMU_PPN2);
writel(0xffff, ctx->base + REG_MMU_RANGE2);
writel(0x1ffff, ctx->base + REG_MMU_VPN_RANGE);
if (ctx->if_type == SPRD_DPU_IF_DPI) {
/* use dpi as interface */
dpu_reg_clr(ctx, REG_DPU_CFG0, BIT_DPU_IF_EDPI);
/* disable Halt function for SPRD DSI */
dpu_reg_clr(ctx, REG_DPI_CTRL, BIT_DPU_DPI_HALT_EN);
/* select te from external pad */
dpu_reg_set(ctx, REG_DPI_CTRL, BIT_DPU_EDPI_FROM_EXTERNAL_PAD);
/* enable dpu update done INT */
int_mask |= BIT_DPU_INT_UPDATE_DONE;
/* enable dpu done INT */
int_mask |= BIT_DPU_INT_DONE;
/* enable dpu dpi vsync */
int_mask |= BIT_DPU_INT_VSYNC;
/* enable dpu TE INT */
int_mask |= BIT_DPU_INT_TE;
/* enable underflow err INT */
int_mask |= BIT_DPU_INT_ERR;
} else if (ctx->if_type == SPRD_DPU_IF_EDPI) {
/* use edpi as interface */
dpu_reg_set(ctx, REG_DPU_CFG0, BIT_DPU_IF_EDPI);
/* use external te */
dpu_reg_set(ctx, REG_DPI_CTRL, BIT_DPU_EDPI_FROM_EXTERNAL_PAD);
/* enable te */
dpu_reg_set(ctx, REG_DPI_CTRL, BIT_DPU_EDPI_TE_EN);
/* enable stop done INT */
int_mask |= BIT_DPU_INT_DONE;
/* enable TE INT */
int_mask |= BIT_DPU_INT_TE;
}
writel(int_mask, ctx->base + REG_DPU_INT_EN);
}
static void sprd_dpu_fini(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
writel(0x00, ctx->base + REG_DPU_INT_EN);
writel(0xff, ctx->base + REG_DPU_INT_CLR);
}
static void sprd_dpi_init(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
u32 reg_val;
u32 size;
size = (ctx->vm.vactive << 16) | ctx->vm.hactive;
writel(size, ctx->base + REG_PANEL_SIZE);
writel(size, ctx->base + REG_BLEND_SIZE);
if (ctx->if_type == SPRD_DPU_IF_DPI) {
/* set dpi timing */
reg_val = ctx->vm.hsync_len << 0 |
ctx->vm.hback_porch << 8 |
ctx->vm.hfront_porch << 20;
writel(reg_val, ctx->base + REG_DPI_H_TIMING);
reg_val = ctx->vm.vsync_len << 0 |
ctx->vm.vback_porch << 8 |
ctx->vm.vfront_porch << 20;
writel(reg_val, ctx->base + REG_DPI_V_TIMING);
}
}
void sprd_dpu_run(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
dpu_reg_set(ctx, REG_DPU_CTRL, BIT_DPU_RUN);
ctx->stopped = false;
}
void sprd_dpu_stop(struct sprd_dpu *dpu)
{
struct dpu_context *ctx = &dpu->ctx;
if (ctx->if_type == SPRD_DPU_IF_DPI)
dpu_reg_set(ctx, REG_DPU_CTRL, BIT_DPU_STOP);
dpu_wait_stop_done(dpu);
}
static int sprd_plane_atomic_check(struct drm_plane *plane,
struct drm_atomic_state *state)
{
struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state,
plane);
struct drm_crtc_state *crtc_state;
u32 fmt;
if (!plane_state->fb || !plane_state->crtc)
return 0;
fmt = drm_format_to_dpu(plane_state->fb);
if (!fmt)
return -EINVAL;
crtc_state = drm_atomic_get_crtc_state(plane_state->state, plane_state->crtc);
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
return drm_atomic_helper_check_plane_state(plane_state, crtc_state,
DRM_PLANE_HELPER_NO_SCALING,
DRM_PLANE_HELPER_NO_SCALING,
true, true);
}
static void sprd_plane_atomic_update(struct drm_plane *drm_plane,
struct drm_atomic_state *state)
{
struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
drm_plane);
struct sprd_dpu *dpu = to_sprd_crtc(new_state->crtc);
/* start configure dpu layers */
sprd_dpu_layer(dpu, new_state);
}
static void sprd_plane_atomic_disable(struct drm_plane *drm_plane,
struct drm_atomic_state *state)
{
struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
drm_plane);
struct sprd_dpu *dpu = to_sprd_crtc(old_state->crtc);
layer_reg_wr(&dpu->ctx, REG_LAY_CTRL, 0x00, old_state->zpos);
}
static void sprd_plane_create_properties(struct sprd_plane *plane, int index)
{
unsigned int supported_modes = BIT(DRM_MODE_BLEND_PIXEL_NONE) |
BIT(DRM_MODE_BLEND_PREMULTI) |
BIT(DRM_MODE_BLEND_COVERAGE);
/* create rotation property */
drm_plane_create_rotation_property(&plane->base,
DRM_MODE_ROTATE_0,
DRM_MODE_ROTATE_MASK |
DRM_MODE_REFLECT_MASK);
/* create alpha property */
drm_plane_create_alpha_property(&plane->base);
/* create blend mode property */
drm_plane_create_blend_mode_property(&plane->base, supported_modes);
/* create zpos property */
drm_plane_create_zpos_immutable_property(&plane->base, index);
}
static const struct drm_plane_helper_funcs sprd_plane_helper_funcs = {
.atomic_check = sprd_plane_atomic_check,
.atomic_update = sprd_plane_atomic_update,
.atomic_disable = sprd_plane_atomic_disable,
};
static const struct drm_plane_funcs sprd_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = drm_plane_cleanup,
.reset = drm_atomic_helper_plane_reset,
.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
};
static struct sprd_plane *sprd_planes_init(struct drm_device *drm)
{
struct sprd_plane *plane, *primary;
enum drm_plane_type plane_type;
int i;
for (i = 0; i < 6; i++) {
plane_type = (i == 0) ? DRM_PLANE_TYPE_PRIMARY :
DRM_PLANE_TYPE_OVERLAY;
plane = drmm_universal_plane_alloc(drm, struct sprd_plane, base,
1, &sprd_plane_funcs,
layer_fmts, ARRAY_SIZE(layer_fmts),
NULL, plane_type, NULL);
if (IS_ERR(plane)) {
drm_err(drm, "failed to init drm plane: %d\n", i);
return plane;
}
drm_plane_helper_add(&plane->base, &sprd_plane_helper_funcs);
sprd_plane_create_properties(plane, i);
if (i == 0)
primary = plane;
}
return primary;
}
static void sprd_crtc_mode_set_nofb(struct drm_crtc *crtc)
{
struct sprd_dpu *dpu = to_sprd_crtc(crtc);
struct drm_display_mode *mode = &crtc->state->adjusted_mode;
struct drm_encoder *encoder;
struct sprd_dsi *dsi;
drm_display_mode_to_videomode(mode, &dpu->ctx.vm);
drm_for_each_encoder_mask(encoder, crtc->dev,
crtc->state->encoder_mask) {
dsi = encoder_to_dsi(encoder);
if (dsi->slave->mode_flags & MIPI_DSI_MODE_VIDEO)
dpu->ctx.if_type = SPRD_DPU_IF_DPI;
else
dpu->ctx.if_type = SPRD_DPU_IF_EDPI;
}
sprd_dpi_init(dpu);
}
static void sprd_crtc_atomic_enable(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
struct sprd_dpu *dpu = to_sprd_crtc(crtc);
sprd_dpu_init(dpu);
drm_crtc_vblank_on(&dpu->base);
}
static void sprd_crtc_atomic_disable(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
struct sprd_dpu *dpu = to_sprd_crtc(crtc);
struct drm_device *drm = dpu->base.dev;
drm_crtc_vblank_off(&dpu->base);
sprd_dpu_fini(dpu);
spin_lock_irq(&drm->event_lock);
if (crtc->state->event) {
drm_crtc_send_vblank_event(crtc, crtc->state->event);
crtc->state->event = NULL;
}
spin_unlock_irq(&drm->event_lock);
}
static void sprd_crtc_atomic_flush(struct drm_crtc *crtc,
struct drm_atomic_state *state)
{
struct sprd_dpu *dpu = to_sprd_crtc(crtc);
struct drm_device *drm = dpu->base.dev;
sprd_dpu_flip(dpu);
spin_lock_irq(&drm->event_lock);
if (crtc->state->event) {
drm_crtc_send_vblank_event(crtc, crtc->state->event);
crtc->state->event = NULL;
}
spin_unlock_irq(&drm->event_lock);
}
static int sprd_crtc_enable_vblank(struct drm_crtc *crtc)
{
struct sprd_dpu *dpu = to_sprd_crtc(crtc);
dpu_reg_set(&dpu->ctx, REG_DPU_INT_EN, BIT_DPU_INT_VSYNC);
return 0;
}
static void sprd_crtc_disable_vblank(struct drm_crtc *crtc)
{
struct sprd_dpu *dpu = to_sprd_crtc(crtc);
dpu_reg_clr(&dpu->ctx, REG_DPU_INT_EN, BIT_DPU_INT_VSYNC);
}
static const struct drm_crtc_helper_funcs sprd_crtc_helper_funcs = {
.mode_set_nofb = sprd_crtc_mode_set_nofb,
.atomic_flush = sprd_crtc_atomic_flush,
.atomic_enable = sprd_crtc_atomic_enable,
.atomic_disable = sprd_crtc_atomic_disable,
};
static const struct drm_crtc_funcs sprd_crtc_funcs = {
.destroy = drm_crtc_cleanup,
.set_config = drm_atomic_helper_set_config,
.page_flip = drm_atomic_helper_page_flip,
.reset = drm_atomic_helper_crtc_reset,
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
.enable_vblank = sprd_crtc_enable_vblank,
.disable_vblank = sprd_crtc_disable_vblank,
};
static struct sprd_dpu *sprd_crtc_init(struct drm_device *drm,
struct drm_plane *primary, struct device *dev)
{
struct device_node *port;
struct sprd_dpu *dpu;
dpu = drmm_crtc_alloc_with_planes(drm, struct sprd_dpu, base,
primary, NULL,
&sprd_crtc_funcs, NULL);
if (IS_ERR(dpu)) {
drm_err(drm, "failed to init crtc\n");
return dpu;
}
drm_crtc_helper_add(&dpu->base, &sprd_crtc_helper_funcs);
/*
* set crtc port so that drm_of_find_possible_crtcs call works
*/
port = of_graph_get_port_by_id(dev->of_node, 0);
if (!port) {
drm_err(drm, "failed to found crtc output port for %s\n",
dev->of_node->full_name);
return ERR_PTR(-EINVAL);
}
dpu->base.port = port;
of_node_put(port);
return dpu;
}
static irqreturn_t sprd_dpu_isr(int irq, void *data)
{
struct sprd_dpu *dpu = data;
struct dpu_context *ctx = &dpu->ctx;
u32 reg_val, int_mask = 0;
reg_val = readl(ctx->base + REG_DPU_INT_STS);
/* disable err interrupt */
if (reg_val & BIT_DPU_INT_ERR) {
int_mask |= BIT_DPU_INT_ERR;
drm_warn(dpu->drm, "Warning: dpu underflow!\n");
}
/* dpu update done isr */
if (reg_val & BIT_DPU_INT_UPDATE_DONE) {
ctx->evt_update = true;
wake_up_interruptible_all(&ctx->wait_queue);
}
/* dpu stop done isr */
if (reg_val & BIT_DPU_INT_DONE) {
ctx->evt_stop = true;
wake_up_interruptible_all(&ctx->wait_queue);
}
if (reg_val & BIT_DPU_INT_VSYNC)
drm_crtc_handle_vblank(&dpu->base);
writel(reg_val, ctx->base + REG_DPU_INT_CLR);
dpu_reg_clr(ctx, REG_DPU_INT_EN, int_mask);
return IRQ_HANDLED;
}
static int sprd_dpu_context_init(struct sprd_dpu *dpu,
struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct dpu_context *ctx = &dpu->ctx;
struct resource *res;
int ret;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ctx->base = devm_ioremap(dev, res->start, resource_size(res));
if (!ctx->base) {
dev_err(dev, "failed to map dpu registers\n");
return -EFAULT;
}
ctx->irq = platform_get_irq(pdev, 0);
if (ctx->irq < 0) {
dev_err(dev, "failed to get dpu irq\n");
return ctx->irq;
}
/* disable and clear interrupts before register dpu IRQ. */
writel(0x00, ctx->base + REG_DPU_INT_EN);
writel(0xff, ctx->base + REG_DPU_INT_CLR);
ret = devm_request_irq(dev, ctx->irq, sprd_dpu_isr,
IRQF_TRIGGER_NONE, "DPU", dpu);
if (ret) {
dev_err(dev, "failed to register dpu irq handler\n");
return ret;
}
init_waitqueue_head(&ctx->wait_queue);
return 0;
}
static int sprd_dpu_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm = data;
struct sprd_dpu *dpu;
struct sprd_plane *plane;
int ret;
plane = sprd_planes_init(drm);
if (IS_ERR(plane))
return PTR_ERR(plane);
dpu = sprd_crtc_init(drm, &plane->base, dev);
if (IS_ERR(dpu))
return PTR_ERR(dpu);
dpu->drm = drm;
dev_set_drvdata(dev, dpu);
ret = sprd_dpu_context_init(dpu, dev);
if (ret)
return ret;
return 0;
}
static const struct component_ops dpu_component_ops = {
.bind = sprd_dpu_bind,
};
static const struct of_device_id dpu_match_table[] = {
{ .compatible = "sprd,sharkl3-dpu" },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, dpu_match_table);
static int sprd_dpu_probe(struct platform_device *pdev)
{
return component_add(&pdev->dev, &dpu_component_ops);
}
static int sprd_dpu_remove(struct platform_device *pdev)
{
component_del(&pdev->dev, &dpu_component_ops);
return 0;
}
struct platform_driver sprd_dpu_driver = {
.probe = sprd_dpu_probe,
.remove = sprd_dpu_remove,
.driver = {
.name = "sprd-dpu-drv",
.of_match_table = dpu_match_table,
},
};
MODULE_AUTHOR("Leon He <leon.he@unisoc.com>");
MODULE_AUTHOR("Kevin Tang <kevin.tang@unisoc.com>");
MODULE_DESCRIPTION("Unisoc Display Controller Driver");
MODULE_LICENSE("GPL v2");

@@ -0,0 +1,109 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020 Unisoc Inc.
*/
#ifndef __SPRD_DPU_H__
#define __SPRD_DPU_H__
#include <linux/bug.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/string.h>
#include <video/videomode.h>
#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_print.h>
#include <drm/drm_vblank.h>
#include <uapi/drm/drm_mode.h>
/* DPU Layer registers offset */
#define DPU_LAY_REG_OFFSET 0x30
enum {
SPRD_DPU_IF_DPI,
SPRD_DPU_IF_EDPI,
SPRD_DPU_IF_LIMIT
};
/**
* Sprd DPU context structure
*
* @base: DPU controller base address
* @irq: IRQ number to install the handler for
* @if_type: The type of DPI interface, default is DPI mode.
* @vm: videomode structure to use for DPU and DPI initialization
* @stopped: indicates whether the DPU is stopped
* @wait_queue: wait queue used to wait for the DPU shadow-register update-done
* and DPU stop-done interrupt signals
* @evt_update: wait queue condition for DPU shadow register
* @evt_stop: wait queue condition for DPU stop register
*/
struct dpu_context {
void __iomem *base;
int irq;
u8 if_type;
struct videomode vm;
bool stopped;
wait_queue_head_t wait_queue;
bool evt_update;
bool evt_stop;
};
/**
* Sprd DPU device structure
*
* @base: crtc object
* @drm: A pointer to the drm device
* @ctx: DPU's implementation specific context object
*/
struct sprd_dpu {
struct drm_crtc base;
struct drm_device *drm;
struct dpu_context ctx;
};
static inline struct sprd_dpu *to_sprd_crtc(struct drm_crtc *crtc)
{
return container_of(crtc, struct sprd_dpu, base);
}
static inline void
dpu_reg_set(struct dpu_context *ctx, u32 offset, u32 set_bits)
{
u32 bits = readl_relaxed(ctx->base + offset);
writel(bits | set_bits, ctx->base + offset);
}
static inline void
dpu_reg_clr(struct dpu_context *ctx, u32 offset, u32 clr_bits)
{
u32 bits = readl_relaxed(ctx->base + offset);
writel(bits & ~clr_bits, ctx->base + offset);
}
static inline u32
layer_reg_rd(struct dpu_context *ctx, u32 offset, int index)
{
u32 layer_offset = offset + index * DPU_LAY_REG_OFFSET;
return readl(ctx->base + layer_offset);
}
static inline void
layer_reg_wr(struct dpu_context *ctx, u32 offset, u32 cfg_bits, int index)
{
u32 layer_offset = offset + index * DPU_LAY_REG_OFFSET;
writel(cfg_bits, ctx->base + layer_offset);
}
void sprd_dpu_run(struct sprd_dpu *dpu);
void sprd_dpu_stop(struct sprd_dpu *dpu);
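/*
 * Usage sketch (illustration only): enable one interrupt source and program
 * a per-layer register, assuming a hypothetical BIT_DPU_INT_DONE bit and
 * REG_LAY_CTRL offset (REG_DPU_INT_EN is the real interrupt-enable register
 * used elsewhere in the driver):
 *
 *	dpu_reg_set(ctx, REG_DPU_INT_EN, BIT_DPU_INT_DONE);
 *	layer_reg_wr(ctx, REG_LAY_CTRL, cfg, 2);
 *
 * layer_reg_wr() adds 2 * DPU_LAY_REG_OFFSET to the offset, so each layer's
 * register block sits DPU_LAY_REG_OFFSET bytes after the previous one.
 */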
#endif


@ -0,0 +1,205 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2020 Unisoc Inc.
*/
#include <linux/component.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_graph.h>
#include <linux/of_platform.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_of.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
#include "sprd_drm.h"
#define DRIVER_NAME "sprd"
#define DRIVER_DESC "Spreadtrum SoCs' DRM Driver"
#define DRIVER_DATE "20200201"
#define DRIVER_MAJOR 1
#define DRIVER_MINOR 0
static const struct drm_mode_config_helper_funcs sprd_drm_mode_config_helper = {
.atomic_commit_tail = drm_atomic_helper_commit_tail_rpm,
};
static const struct drm_mode_config_funcs sprd_drm_mode_config_funcs = {
.fb_create = drm_gem_fb_create,
.atomic_check = drm_atomic_helper_check,
.atomic_commit = drm_atomic_helper_commit,
};
static void sprd_drm_mode_config_init(struct drm_device *drm)
{
drm->mode_config.min_width = 0;
drm->mode_config.min_height = 0;
drm->mode_config.max_width = 8192;
drm->mode_config.max_height = 8192;
drm->mode_config.allow_fb_modifiers = true;
drm->mode_config.funcs = &sprd_drm_mode_config_funcs;
drm->mode_config.helper_private = &sprd_drm_mode_config_helper;
}
DEFINE_DRM_GEM_CMA_FOPS(sprd_drm_fops);
static struct drm_driver sprd_drm_drv = {
.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
.fops = &sprd_drm_fops,
/* GEM Operations */
DRM_GEM_CMA_DRIVER_OPS,
.name = DRIVER_NAME,
.desc = DRIVER_DESC,
.date = DRIVER_DATE,
.major = DRIVER_MAJOR,
.minor = DRIVER_MINOR,
};
static int sprd_drm_bind(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct drm_device *drm;
struct sprd_drm *sprd;
int ret;
sprd = devm_drm_dev_alloc(dev, &sprd_drm_drv, struct sprd_drm, drm);
if (IS_ERR(sprd))
return PTR_ERR(sprd);
drm = &sprd->drm;
platform_set_drvdata(pdev, drm);
ret = drmm_mode_config_init(drm);
if (ret)
return ret;
sprd_drm_mode_config_init(drm);
/* bind and init sub drivers */
ret = component_bind_all(drm->dev, drm);
if (ret) {
drm_err(drm, "failed to bind all components.\n");
return ret;
}
/* vblank init */
ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
if (ret) {
drm_err(drm, "failed to initialize vblank.\n");
goto err_unbind_all;
}
/* reset all the states of crtc/plane/encoder/connector */
drm_mode_config_reset(drm);
/* init kms poll for handling hpd */
drm_kms_helper_poll_init(drm);
ret = drm_dev_register(drm, 0);
if (ret < 0)
goto err_kms_helper_poll_fini;
return 0;
err_kms_helper_poll_fini:
drm_kms_helper_poll_fini(drm);
err_unbind_all:
component_unbind_all(drm->dev, drm);
return ret;
}
static void sprd_drm_unbind(struct device *dev)
{
struct drm_device *drm = dev_get_drvdata(dev);
drm_dev_unregister(drm);
drm_kms_helper_poll_fini(drm);
component_unbind_all(drm->dev, drm);
}
static const struct component_master_ops drm_component_ops = {
.bind = sprd_drm_bind,
.unbind = sprd_drm_unbind,
};
static int compare_of(struct device *dev, void *data)
{
return dev->of_node == data;
}
static int sprd_drm_probe(struct platform_device *pdev)
{
return drm_of_component_probe(&pdev->dev, compare_of, &drm_component_ops);
}
static int sprd_drm_remove(struct platform_device *pdev)
{
component_master_del(&pdev->dev, &drm_component_ops);
return 0;
}
static void sprd_drm_shutdown(struct platform_device *pdev)
{
struct drm_device *drm = platform_get_drvdata(pdev);
if (!drm) {
dev_warn(&pdev->dev, "drm device is not available, no shutdown\n");
return;
}
drm_atomic_helper_shutdown(drm);
}
static const struct of_device_id drm_match_table[] = {
{ .compatible = "sprd,display-subsystem", },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, drm_match_table);
static struct platform_driver sprd_drm_driver = {
.probe = sprd_drm_probe,
.remove = sprd_drm_remove,
.shutdown = sprd_drm_shutdown,
.driver = {
.name = "sprd-drm-drv",
.of_match_table = drm_match_table,
},
};
static struct platform_driver *sprd_drm_drivers[] = {
&sprd_drm_driver,
&sprd_dpu_driver,
&sprd_dsi_driver,
};
static int __init sprd_drm_init(void)
{
return platform_register_drivers(sprd_drm_drivers,
ARRAY_SIZE(sprd_drm_drivers));
}
static void __exit sprd_drm_exit(void)
{
platform_unregister_drivers(sprd_drm_drivers,
ARRAY_SIZE(sprd_drm_drivers));
}
module_init(sprd_drm_init);
module_exit(sprd_drm_exit);
MODULE_AUTHOR("Leon He <leon.he@unisoc.com>");
MODULE_AUTHOR("Kevin Tang <kevin.tang@unisoc.com>");
MODULE_DESCRIPTION("Unisoc DRM KMS Master Driver");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,19 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020 Unisoc Inc.
*/
#ifndef _SPRD_DRM_H_
#define _SPRD_DRM_H_
#include <drm/drm_atomic.h>
#include <drm/drm_print.h>
struct sprd_drm {
struct drm_device drm;
};
extern struct platform_driver sprd_dpu_driver;
extern struct platform_driver sprd_dsi_driver;
#endif /* _SPRD_DRM_H_ */

File diff suppressed because it is too large.


@ -0,0 +1,126 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020 Unisoc Inc.
*/
#ifndef __SPRD_DSI_H__
#define __SPRD_DSI_H__
#include <linux/of.h>
#include <linux/device.h>
#include <linux/regmap.h>
#include <video/videomode.h>
#include <drm/drm_bridge.h>
#include <drm/drm_connector.h>
#include <drm/drm_encoder.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_print.h>
#include <drm/drm_panel.h>
#define encoder_to_dsi(encoder) \
container_of(encoder, struct sprd_dsi, encoder)
enum dsi_work_mode {
DSI_MODE_CMD = 0,
DSI_MODE_VIDEO
};
enum video_burst_mode {
VIDEO_NON_BURST_WITH_SYNC_PULSES = 0,
VIDEO_NON_BURST_WITH_SYNC_EVENTS,
VIDEO_BURST_WITH_SYNC_PULSES
};
enum dsi_color_coding {
COLOR_CODE_16BIT_CONFIG1 = 0,
COLOR_CODE_16BIT_CONFIG2,
COLOR_CODE_16BIT_CONFIG3,
COLOR_CODE_18BIT_CONFIG1,
COLOR_CODE_18BIT_CONFIG2,
COLOR_CODE_24BIT,
COLOR_CODE_20BIT_YCC422_LOOSELY,
COLOR_CODE_24BIT_YCC422,
COLOR_CODE_16BIT_YCC422,
COLOR_CODE_30BIT,
COLOR_CODE_36BIT,
COLOR_CODE_12BIT_YCC420,
COLOR_CODE_COMPRESSTION,
COLOR_CODE_MAX
};
enum pll_timing {
NONE,
REQUEST_TIME,
PREPARE_TIME,
SETTLE_TIME,
ZERO_TIME,
TRAIL_TIME,
EXIT_TIME,
CLKPOST_TIME,
TA_GET,
TA_GO,
TA_SURE,
TA_WAIT,
};
struct dphy_pll {
u8 refin; /* Pre-divider control signal */
u8 cp_s; /* 00: SDM_EN=1, 10: SDM_EN=0 */
u8 fdk_s; /* PLL mode control: integer or fraction */
u8 sdm_en;
u8 div;
u8 int_n; /* integer N PLL */
u32 ref_clk; /* dphy reference clock, unit: MHz */
u32 freq; /* panel config, unit: KHz */
u32 fvco;
u32 potential_fvco;
u32 nint; /* sigma delta modulator NINT control */
u32 kint; /* sigma delta modulator KINT control */
u8 lpf_sel; /* low pass filter control */
u8 out_sel; /* post divider control */
u8 vco_band; /* vco range */
u8 det_delay;
};
struct dsi_context {
void __iomem *base;
struct regmap *regmap;
struct dphy_pll pll;
struct videomode vm;
bool enabled;
u8 work_mode;
u8 burst_mode;
u32 int0_mask;
u32 int1_mask;
/* maximum time (ns) for data lanes from HS to LP */
u16 data_hs2lp;
/* maximum time (ns) for data lanes from LP to HS */
u16 data_lp2hs;
/* maximum time (ns) for clk lanes from HS to LP */
u16 clk_hs2lp;
/* maximum time (ns) for clk lanes from LP to HS */
u16 clk_lp2hs;
/* maximum time (ns) for BTA operation - REQUIRED */
u16 max_rd_time;
/* enable receiving frame ack packets - for video mode */
bool frame_ack_en;
/* enable receiving tear effect ack packets - for cmd mode */
bool te_ack_en;
};
struct sprd_dsi {
struct drm_device *drm;
struct mipi_dsi_host host;
struct mipi_dsi_device *slave;
struct drm_encoder encoder;
struct drm_bridge *panel_bridge;
struct dsi_context ctx;
};
int dphy_pll_config(struct dsi_context *ctx);
void dphy_timing_config(struct dsi_context *ctx);
#endif /* __SPRD_DSI_H__ */


@ -571,8 +571,8 @@ static const uint32_t simpledrm_default_formats[] = {
//DRM_FORMAT_XRGB1555,
//DRM_FORMAT_ARGB1555,
DRM_FORMAT_RGB888,
//DRM_FORMAT_XRGB2101010,
//DRM_FORMAT_ARGB2101010,
DRM_FORMAT_XRGB2101010,
DRM_FORMAT_ARGB2101010,
};
static const uint64_t simpledrm_format_modifiers[] = {


@ -391,7 +391,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
bo = kzalloc(sizeof(*bo), GFP_KERNEL);
if (!bo)
return NULL;
return ERR_PTR(-ENOMEM);
bo->madv = VC4_MADV_WILLNEED;
refcount_set(&bo->usecnt, 0);


@ -33,6 +33,7 @@ static const struct hvs_format {
u32 hvs; /* HVS_FORMAT_* */
u32 pixel_order;
u32 pixel_order_hvs5;
bool hvs5_only;
} hvs_formats[] = {
{
.drm = DRM_FORMAT_XRGB8888,
@ -128,6 +129,12 @@ static const struct hvs_format {
.hvs = HVS_PIXEL_FORMAT_YCBCR_YUV422_2PLANE,
.pixel_order = HVS_PIXEL_ORDER_XYCRCB,
},
{
.drm = DRM_FORMAT_P030,
.hvs = HVS_PIXEL_FORMAT_YCBCR_10BIT,
.pixel_order = HVS_PIXEL_ORDER_XYCBCR,
.hvs5_only = true,
},
};
static const struct hvs_format *vc4_get_hvs_format(u32 drm_format)
@ -616,6 +623,51 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
return 0;
}
/*
* The colorspace conversion matrices are held in 3 entries in the dlist.
* Create an array of them, with entries for each full and limited mode, and
* each supported colorspace.
*/
static const u32 colorspace_coeffs[2][DRM_COLOR_ENCODING_MAX][3] = {
{
/* Limited range */
{
/* BT601 */
SCALER_CSC0_ITR_R_601_5,
SCALER_CSC1_ITR_R_601_5,
SCALER_CSC2_ITR_R_601_5,
}, {
/* BT709 */
SCALER_CSC0_ITR_R_709_3,
SCALER_CSC1_ITR_R_709_3,
SCALER_CSC2_ITR_R_709_3,
}, {
/* BT2020 */
SCALER_CSC0_ITR_R_2020,
SCALER_CSC1_ITR_R_2020,
SCALER_CSC2_ITR_R_2020,
}
}, {
/* Full range */
{
/* JFIF */
SCALER_CSC0_JPEG_JFIF,
SCALER_CSC1_JPEG_JFIF,
SCALER_CSC2_JPEG_JFIF,
}, {
/* BT709 */
SCALER_CSC0_ITR_R_709_3_FR,
SCALER_CSC1_ITR_R_709_3_FR,
SCALER_CSC2_ITR_R_709_3_FR,
}, {
/* BT2020 */
SCALER_CSC0_ITR_R_2020_FR,
SCALER_CSC1_ITR_R_2020_FR,
SCALER_CSC2_ITR_R_2020_FR,
}
}
};
/* Writes out a full display list for an active plane to the plane's
* private dlist state.
*/
@ -762,47 +814,90 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
case DRM_FORMAT_MOD_BROADCOM_SAND128:
case DRM_FORMAT_MOD_BROADCOM_SAND256: {
uint32_t param = fourcc_mod_broadcom_param(fb->modifier);
u32 tile_w, tile, x_off, pix_per_tile;
hvs_format = HVS_PIXEL_FORMAT_H264;
switch (base_format_mod) {
case DRM_FORMAT_MOD_BROADCOM_SAND64:
tiling = SCALER_CTL0_TILING_64B;
tile_w = 64;
break;
case DRM_FORMAT_MOD_BROADCOM_SAND128:
tiling = SCALER_CTL0_TILING_128B;
tile_w = 128;
break;
case DRM_FORMAT_MOD_BROADCOM_SAND256:
tiling = SCALER_CTL0_TILING_256B_OR_T;
tile_w = 256;
break;
default:
break;
}
if (param > SCALER_TILE_HEIGHT_MASK) {
DRM_DEBUG_KMS("SAND height too large (%d)\n", param);
DRM_DEBUG_KMS("SAND height too large (%d)\n",
param);
return -EINVAL;
}
pix_per_tile = tile_w / fb->format->cpp[0];
tile = vc4_state->src_x / pix_per_tile;
x_off = vc4_state->src_x % pix_per_tile;
if (fb->format->format == DRM_FORMAT_P030) {
hvs_format = HVS_PIXEL_FORMAT_YCBCR_10BIT;
tiling = SCALER_CTL0_TILING_128B;
} else {
hvs_format = HVS_PIXEL_FORMAT_H264;
switch (base_format_mod) {
case DRM_FORMAT_MOD_BROADCOM_SAND64:
tiling = SCALER_CTL0_TILING_64B;
break;
case DRM_FORMAT_MOD_BROADCOM_SAND128:
tiling = SCALER_CTL0_TILING_128B;
break;
case DRM_FORMAT_MOD_BROADCOM_SAND256:
tiling = SCALER_CTL0_TILING_256B_OR_T;
break;
default:
return -EINVAL;
}
}
/* Adjust the base pointer to the first pixel to be scanned
* out.
*
* For P030, y_ptr [31:4] is the 128bit word for the start pixel
* y_ptr [3:0] is the pixel (0-11) contained within that 128bit
* word that should be taken as the first pixel.
* Ditto uv_ptr [31:4] vs [3:0], however [3:0] contains the
* element within the 128bit word, eg for pixel 3 the value
* should be 6.
*/
for (i = 0; i < num_planes; i++) {
u32 tile_w, tile, x_off, pix_per_tile;
if (fb->format->format == DRM_FORMAT_P030) {
/*
* Spec says: bits [31:4] of the given address
* should point to the 128-bit word containing
* the desired starting pixel, and bits[3:0]
* should be between 0 and 11, indicating which
* of the 12 pixels in that 128-bit word is the
* first pixel to be used
*/
u32 remaining_pixels = vc4_state->src_x % 96;
u32 aligned = remaining_pixels / 12;
u32 last_bits = remaining_pixels % 12;
x_off = aligned * 16 + last_bits;
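/*
 * Worked example (hypothetical values): for src_x = 100,
 * remaining_pixels = 100 % 96 = 4, aligned = 0 and last_bits = 4,
 * so x_off = 0 * 16 + 4 = 4; tile below becomes 100 / 96 = 1.
 */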
tile_w = 128;
pix_per_tile = 96;
} else {
switch (base_format_mod) {
case DRM_FORMAT_MOD_BROADCOM_SAND64:
tile_w = 64;
break;
case DRM_FORMAT_MOD_BROADCOM_SAND128:
tile_w = 128;
break;
case DRM_FORMAT_MOD_BROADCOM_SAND256:
tile_w = 256;
break;
default:
return -EINVAL;
}
pix_per_tile = tile_w / fb->format->cpp[0];
x_off = (vc4_state->src_x % pix_per_tile) /
(i ? h_subsample : 1) *
fb->format->cpp[i];
}
tile = vc4_state->src_x / pix_per_tile;
vc4_state->offsets[i] += param * tile_w * tile;
vc4_state->offsets[i] += src_y /
(i ? v_subsample : 1) *
tile_w;
vc4_state->offsets[i] += x_off /
(i ? h_subsample : 1) *
fb->format->cpp[i];
vc4_state->offsets[i] += x_off & ~(i ? 1 : 0);
}
pitch0 = VC4_SET_FIELD(param, SCALER_TILE_HEIGHT);
@ -955,7 +1050,8 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
/* Pitch word 1/2 */
for (i = 1; i < num_planes; i++) {
if (hvs_format != HVS_PIXEL_FORMAT_H264) {
if (hvs_format != HVS_PIXEL_FORMAT_H264 &&
hvs_format != HVS_PIXEL_FORMAT_YCBCR_10BIT) {
vc4_dlist_write(vc4_state,
VC4_SET_FIELD(fb->pitches[i],
SCALER_SRC_PITCH));
@ -966,9 +1062,20 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
/* Colorspace conversion words */
if (vc4_state->is_yuv) {
vc4_dlist_write(vc4_state, SCALER_CSC0_ITR_R_601_5);
vc4_dlist_write(vc4_state, SCALER_CSC1_ITR_R_601_5);
vc4_dlist_write(vc4_state, SCALER_CSC2_ITR_R_601_5);
enum drm_color_encoding color_encoding = state->color_encoding;
enum drm_color_range color_range = state->color_range;
const u32 *ccm;
if (color_encoding >= DRM_COLOR_ENCODING_MAX)
color_encoding = DRM_COLOR_YCBCR_BT601;
if (color_range >= DRM_COLOR_RANGE_MAX)
color_range = DRM_COLOR_YCBCR_LIMITED_RANGE;
ccm = colorspace_coeffs[color_range][color_encoding];
vc4_dlist_write(vc4_state, ccm[0]);
vc4_dlist_write(vc4_state, ccm[1]);
vc4_dlist_write(vc4_state, ccm[2]);
}
vc4_state->lbm_offset = 0;
@ -1315,6 +1422,13 @@ static bool vc4_format_mod_supported(struct drm_plane *plane,
default:
return false;
}
case DRM_FORMAT_P030:
switch (fourcc_mod_broadcom_mod(modifier)) {
case DRM_FORMAT_MOD_BROADCOM_SAND128:
return true;
default:
return false;
}
case DRM_FORMAT_RGBX1010102:
case DRM_FORMAT_BGRX1010102:
case DRM_FORMAT_RGBA1010102:
@ -1347,8 +1461,11 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
struct drm_plane *plane = NULL;
struct vc4_plane *vc4_plane;
u32 formats[ARRAY_SIZE(hvs_formats)];
int num_formats = 0;
int ret = 0;
unsigned i;
bool hvs5 = of_device_is_compatible(dev->dev->of_node,
"brcm,bcm2711-vc5");
static const uint64_t modifiers[] = {
DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED,
DRM_FORMAT_MOD_BROADCOM_SAND128,
@ -1363,13 +1480,17 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
if (!vc4_plane)
return ERR_PTR(-ENOMEM);
for (i = 0; i < ARRAY_SIZE(hvs_formats); i++)
formats[i] = hvs_formats[i].drm;
for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
if (!hvs_formats[i].hvs5_only || hvs5) {
formats[num_formats] = hvs_formats[i].drm;
num_formats++;
}
}
plane = &vc4_plane->base;
ret = drm_universal_plane_init(dev, plane, 0,
&vc4_plane_funcs,
formats, ARRAY_SIZE(formats),
formats, num_formats,
modifiers, type, NULL);
if (ret)
return ERR_PTR(ret);
@ -1383,6 +1504,15 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
DRM_MODE_REFLECT_X |
DRM_MODE_REFLECT_Y);
drm_plane_create_color_properties(plane,
BIT(DRM_COLOR_YCBCR_BT601) |
BIT(DRM_COLOR_YCBCR_BT709) |
BIT(DRM_COLOR_YCBCR_BT2020),
BIT(DRM_COLOR_YCBCR_LIMITED_RANGE) |
BIT(DRM_COLOR_YCBCR_FULL_RANGE),
DRM_COLOR_YCBCR_BT709,
DRM_COLOR_YCBCR_LIMITED_RANGE);
return plane;
}
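With the COLOR_ENCODING/COLOR_RANGE properties registered above, userspace picks which of the colorspace_coeffs entries the plane uses. A minimal libdrm sketch (error handling trimmed; the helper name and the plane_id lookup are assumptions, not part of this series) that selects an encoding/range pair by the standard property and enum names:

#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static int set_plane_enum_prop(int fd, uint32_t plane_id,
			       const char *prop_name, const char *enum_name)
{
	drmModeObjectProperties *props =
		drmModeObjectGetProperties(fd, plane_id, DRM_MODE_OBJECT_PLANE);
	int ret = -1;

	for (uint32_t i = 0; props && i < props->count_props; i++) {
		drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

		if (prop && !strcmp(prop->name, prop_name)) {
			for (int j = 0; j < prop->count_enums; j++) {
				if (!strcmp(prop->enums[j].name, enum_name))
					ret = drmModeObjectSetProperty(fd, plane_id,
								       DRM_MODE_OBJECT_PLANE,
								       prop->prop_id,
								       prop->enums[j].value);
			}
		}
		drmModeFreeProperty(prop);
	}
	drmModeFreeObjectProperties(props);
	return ret;
}

/* e.g. set_plane_enum_prop(fd, plane_id, "COLOR_ENCODING", "ITU-R BT.709 YCbCr");
 *      set_plane_enum_prop(fd, plane_id, "COLOR_RANGE", "YCbCr full range");
 */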


@ -975,7 +975,10 @@ enum hvs_pixel_format {
#define SCALER_CSC0_COEF_CR_OFS_SHIFT 0
#define SCALER_CSC0_ITR_R_601_5 0x00f00000
#define SCALER_CSC0_ITR_R_709_3 0x00f00000
#define SCALER_CSC0_ITR_R_2020 0x00f00000
#define SCALER_CSC0_JPEG_JFIF 0x00000000
#define SCALER_CSC0_ITR_R_709_3_FR 0x00000000
#define SCALER_CSC0_ITR_R_2020_FR 0x00000000
/* S2.8 contribution of Cb to Green */
#define SCALER_CSC1_COEF_CB_GRN_MASK VC4_MASK(31, 22)
@ -990,8 +993,11 @@ enum hvs_pixel_format {
#define SCALER_CSC1_COEF_CR_BLU_MASK VC4_MASK(1, 0)
#define SCALER_CSC1_COEF_CR_BLU_SHIFT 0
#define SCALER_CSC1_ITR_R_601_5 0xe73304a8
#define SCALER_CSC1_ITR_R_709_3 0xf2b784a8
#define SCALER_CSC1_JPEG_JFIF 0xea34a400
#define SCALER_CSC1_ITR_R_709_3 0xf27784a8
#define SCALER_CSC1_ITR_R_2020 0xf43594a8
#define SCALER_CSC1_JPEG_JFIF 0xea349400
#define SCALER_CSC1_ITR_R_709_3_FR 0xf4388400
#define SCALER_CSC1_ITR_R_2020_FR 0xf5b6d400
/* S2.8 contribution of Cb to Red */
#define SCALER_CSC2_COEF_CB_RED_MASK VC4_MASK(29, 20)
@ -1002,9 +1008,12 @@ enum hvs_pixel_format {
/* S2.8 contribution of Cb to Blue */
#define SCALER_CSC2_COEF_CB_BLU_MASK VC4_MASK(19, 10)
#define SCALER_CSC2_COEF_CB_BLU_SHIFT 10
#define SCALER_CSC2_ITR_R_601_5 0x00066204
#define SCALER_CSC2_ITR_R_709_3 0x00072a1c
#define SCALER_CSC2_JPEG_JFIF 0x000599c5
#define SCALER_CSC2_ITR_R_601_5 0x00066604
#define SCALER_CSC2_ITR_R_709_3 0x00072e1d
#define SCALER_CSC2_ITR_R_2020 0x0006b624
#define SCALER_CSC2_JPEG_JFIF 0x00059dc6
#define SCALER_CSC2_ITR_R_709_3_FR 0x00064ddb
#define SCALER_CSC2_ITR_R_2020_FR 0x0005e5e2
#define SCALER_TPZ0_VERT_RECALC BIT(31)
#define SCALER_TPZ0_SCALE_MASK VC4_MASK(28, 8)


@ -4,6 +4,7 @@ config DRM_VMWGFX
depends on DRM && PCI && MMU
depends on X86 || ARM64
select DRM_TTM
select DRM_TTM_HELPER
select MAPPING_DIRTY_HELPERS
# Only needed for the transitional use of drm_crtc_init - can be removed
# again once vmwgfx sets up the primary plane itself.


@ -9,7 +9,8 @@ vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_hashtab.o vmwgfx_kms.o vmwgfx_d
vmwgfx_cotable.o vmwgfx_so.o vmwgfx_binding.o vmwgfx_msg.o \
vmwgfx_simple_resource.o vmwgfx_va.o vmwgfx_blit.o \
vmwgfx_validation.o vmwgfx_page_dirty.o vmwgfx_streamoutput.o \
vmwgfx_devcaps.o ttm_object.o ttm_memory.o vmwgfx_system_manager.o
vmwgfx_devcaps.o ttm_object.o vmwgfx_system_manager.o \
vmwgfx_gem.o
vmwgfx-$(CONFIG_DRM_FBDEV_EMULATION) += vmwgfx_fb.o
vmwgfx-$(CONFIG_TRANSPARENT_HUGEPAGE) += vmwgfx_thp.o


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2012-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga3d_cmd.h --


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 1998-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga3d_devcaps.h --
@ -347,6 +347,10 @@ typedef uint32 SVGA3dDevCapIndex;
#define SVGA3D_DEVCAP_SM5 258
#define SVGA3D_DEVCAP_MULTISAMPLE_8X 259
#define SVGA3D_DEVCAP_MAX_FORCED_SAMPLE_COUNT 260
#define SVGA3D_DEVCAP_GL43 261
#define SVGA3D_DEVCAP_MAX 262
#define SVGA3D_DXFMT_SUPPORTED (1 << 0)


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2012-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga3d_dx.h --
@ -508,11 +508,11 @@ typedef struct SVGA3dCmdDXSetPredication {
#pragma pack(pop)
#pragma pack(push, 1)
typedef struct MKS3dDXSOState {
typedef struct SVGA3dDXSOState {
uint32 offset;
uint32 intOffset;
uint32 vertexCount;
uint32 dead;
uint32 dead1;
uint32 dead2;
} SVGA3dDXSOState;
#pragma pack(pop)


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2012-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga3d_limits.h --
@ -82,4 +82,6 @@
#define SVGA3D_MIN_SBX_DATA_SIZE (GBYTES_2_BYTES(1))
#define SVGA3D_MAX_SBX_DATA_SIZE (GBYTES_2_BYTES(4))
#define SVGA3D_MIN_SBX_DATA_SIZE_DVM (MBYTES_2_BYTES(900))
#define SVGA3D_MAX_SBX_DATA_SIZE_DVM (MBYTES_2_BYTES(910))
#endif


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 1998-2015 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga3d_reg.h --


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2012-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga3d_types.h --
@ -370,7 +370,6 @@ typedef enum SVGA3dSurfaceFormat {
#define SVGA3D_SURFACE_TRANSFER_FROM_BUFFER (CONST64U(1) << 30)
#define SVGA3D_SURFACE_RESERVED1 (CONST64U(1) << 31)
#define SVGA3D_SURFACE_VADECODE SVGA3D_SURFACE_RESERVED1
#define SVGA3D_SURFACE_MULTISAMPLE (CONST64U(1) << 32)


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2007,2020 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga_escape.h --


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2007-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga_overlay.h --


@ -1,6 +1,6 @@
/**********************************************************
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 1998-2021 VMware, Inc.
* SPDX-License-Identifier: GPL-2.0 OR MIT
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
@ -22,7 +22,7 @@
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
**********************************************************/
*/
/*
* svga_reg.h --
@ -442,6 +442,7 @@ typedef struct {
#define SVGA_CAP2_TRACE_FULL_FB 0x00002000
#define SVGA_CAP2_EXTRA_REGS 0x00004000
#define SVGA_CAP2_LO_STAGING 0x00008000
#define SVGA_CAP2_VIDEO_BLT 0x00010000
#define SVGA_CAP2_RESERVED 0x80000000
typedef enum {
@ -450,9 +451,10 @@ typedef enum {
SVGABackdoorCap3dHWVersion = 2,
SVGABackdoorCapDeviceCaps2 = 3,
SVGABackdoorCapDevelCaps = 4,
SVGABackdoorDevelRenderer = 5,
SVGABackdoorDevelUsingISB = 6,
SVGABackdoorCapMax = 7,
SVGABackdoorCapDevCaps = 5,
SVGABackdoorDevelRenderer = 6,
SVGABackdoorDevelUsingISB = 7,
SVGABackdoorCapMax = 8,
} SVGABackdoorCapType;
enum {


@ -1,586 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/**************************************************************************
*
* Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
**************************************************************************/
#define pr_fmt(fmt) "[TTM] " fmt
#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/ttm/ttm_device.h>
#include "ttm_memory.h"
#define TTM_MEMORY_ALLOC_RETRIES 4
struct ttm_mem_global ttm_mem_glob;
EXPORT_SYMBOL(ttm_mem_glob);
struct ttm_mem_zone {
struct kobject kobj;
struct ttm_mem_global *glob;
const char *name;
uint64_t zone_mem;
uint64_t emer_mem;
uint64_t max_mem;
uint64_t swap_limit;
uint64_t used_mem;
};
static struct attribute ttm_mem_sys = {
.name = "zone_memory",
.mode = S_IRUGO
};
static struct attribute ttm_mem_emer = {
.name = "emergency_memory",
.mode = S_IRUGO | S_IWUSR
};
static struct attribute ttm_mem_max = {
.name = "available_memory",
.mode = S_IRUGO | S_IWUSR
};
static struct attribute ttm_mem_swap = {
.name = "swap_limit",
.mode = S_IRUGO | S_IWUSR
};
static struct attribute ttm_mem_used = {
.name = "used_memory",
.mode = S_IRUGO
};
static void ttm_mem_zone_kobj_release(struct kobject *kobj)
{
struct ttm_mem_zone *zone =
container_of(kobj, struct ttm_mem_zone, kobj);
pr_info("Zone %7s: Used memory at exit: %llu KiB\n",
zone->name, (unsigned long long)zone->used_mem >> 10);
kfree(zone);
}
static ssize_t ttm_mem_zone_show(struct kobject *kobj,
struct attribute *attr,
char *buffer)
{
struct ttm_mem_zone *zone =
container_of(kobj, struct ttm_mem_zone, kobj);
uint64_t val = 0;
spin_lock(&zone->glob->lock);
if (attr == &ttm_mem_sys)
val = zone->zone_mem;
else if (attr == &ttm_mem_emer)
val = zone->emer_mem;
else if (attr == &ttm_mem_max)
val = zone->max_mem;
else if (attr == &ttm_mem_swap)
val = zone->swap_limit;
else if (attr == &ttm_mem_used)
val = zone->used_mem;
spin_unlock(&zone->glob->lock);
return snprintf(buffer, PAGE_SIZE, "%llu\n",
(unsigned long long) val >> 10);
}
static void ttm_check_swapping(struct ttm_mem_global *glob);
static ssize_t ttm_mem_zone_store(struct kobject *kobj,
struct attribute *attr,
const char *buffer,
size_t size)
{
struct ttm_mem_zone *zone =
container_of(kobj, struct ttm_mem_zone, kobj);
int chars;
unsigned long val;
uint64_t val64;
chars = sscanf(buffer, "%lu", &val);
if (chars == 0)
return size;
val64 = val;
val64 <<= 10;
spin_lock(&zone->glob->lock);
if (val64 > zone->zone_mem)
val64 = zone->zone_mem;
if (attr == &ttm_mem_emer) {
zone->emer_mem = val64;
if (zone->max_mem > val64)
zone->max_mem = val64;
} else if (attr == &ttm_mem_max) {
zone->max_mem = val64;
if (zone->emer_mem < val64)
zone->emer_mem = val64;
} else if (attr == &ttm_mem_swap)
zone->swap_limit = val64;
spin_unlock(&zone->glob->lock);
ttm_check_swapping(zone->glob);
return size;
}
static struct attribute *ttm_mem_zone_attrs[] = {
&ttm_mem_sys,
&ttm_mem_emer,
&ttm_mem_max,
&ttm_mem_swap,
&ttm_mem_used,
NULL
};
static const struct sysfs_ops ttm_mem_zone_ops = {
.show = &ttm_mem_zone_show,
.store = &ttm_mem_zone_store
};
static struct kobj_type ttm_mem_zone_kobj_type = {
.release = &ttm_mem_zone_kobj_release,
.sysfs_ops = &ttm_mem_zone_ops,
.default_attrs = ttm_mem_zone_attrs,
};
static struct kobj_type ttm_mem_glob_kobj_type = {0};
static bool ttm_zones_above_swap_target(struct ttm_mem_global *glob,
bool from_wq, uint64_t extra)
{
unsigned int i;
struct ttm_mem_zone *zone;
uint64_t target;
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
if (from_wq)
target = zone->swap_limit;
else if (capable(CAP_SYS_ADMIN))
target = zone->emer_mem;
else
target = zone->max_mem;
target = (extra > target) ? 0ULL : target;
if (zone->used_mem > target)
return true;
}
return false;
}
/*
* At this point we only support a single shrink callback.
* Extend this if needed, perhaps using a linked list of callbacks.
* Note that this function is reentrant:
* many threads may try to swap out at any given time.
*/
static void ttm_shrink(struct ttm_mem_global *glob, bool from_wq,
uint64_t extra, struct ttm_operation_ctx *ctx)
{
int ret;
spin_lock(&glob->lock);
while (ttm_zones_above_swap_target(glob, from_wq, extra)) {
spin_unlock(&glob->lock);
ret = ttm_global_swapout(ctx, GFP_KERNEL);
spin_lock(&glob->lock);
if (unlikely(ret <= 0))
break;
}
spin_unlock(&glob->lock);
}
static void ttm_shrink_work(struct work_struct *work)
{
struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
struct ttm_mem_global *glob =
container_of(work, struct ttm_mem_global, work);
ttm_shrink(glob, true, 0ULL, &ctx);
}
static int ttm_mem_init_kernel_zone(struct ttm_mem_global *glob,
const struct sysinfo *si)
{
struct ttm_mem_zone *zone = kzalloc(sizeof(*zone), GFP_KERNEL);
uint64_t mem;
int ret;
if (unlikely(!zone))
return -ENOMEM;
mem = si->totalram - si->totalhigh;
mem *= si->mem_unit;
zone->name = "kernel";
zone->zone_mem = mem;
zone->max_mem = mem >> 1;
zone->emer_mem = (mem >> 1) + (mem >> 2);
zone->swap_limit = zone->max_mem - (mem >> 3);
zone->used_mem = 0;
zone->glob = glob;
glob->zone_kernel = zone;
ret = kobject_init_and_add(
&zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, zone->name);
if (unlikely(ret != 0)) {
kobject_put(&zone->kobj);
return ret;
}
glob->zones[glob->num_zones++] = zone;
return 0;
}
#ifdef CONFIG_HIGHMEM
static int ttm_mem_init_highmem_zone(struct ttm_mem_global *glob,
const struct sysinfo *si)
{
struct ttm_mem_zone *zone;
uint64_t mem;
int ret;
if (si->totalhigh == 0)
return 0;
zone = kzalloc(sizeof(*zone), GFP_KERNEL);
if (unlikely(!zone))
return -ENOMEM;
mem = si->totalram;
mem *= si->mem_unit;
zone->name = "highmem";
zone->zone_mem = mem;
zone->max_mem = mem >> 1;
zone->emer_mem = (mem >> 1) + (mem >> 2);
zone->swap_limit = zone->max_mem - (mem >> 3);
zone->used_mem = 0;
zone->glob = glob;
glob->zone_highmem = zone;
ret = kobject_init_and_add(
&zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, "%s",
zone->name);
if (unlikely(ret != 0)) {
kobject_put(&zone->kobj);
return ret;
}
glob->zones[glob->num_zones++] = zone;
return 0;
}
#else
static int ttm_mem_init_dma32_zone(struct ttm_mem_global *glob,
const struct sysinfo *si)
{
struct ttm_mem_zone *zone = kzalloc(sizeof(*zone), GFP_KERNEL);
uint64_t mem;
int ret;
if (unlikely(!zone))
return -ENOMEM;
mem = si->totalram;
mem *= si->mem_unit;
/**
* No special dma32 zone needed.
*/
if (mem <= ((uint64_t) 1ULL << 32)) {
kfree(zone);
return 0;
}
/*
* Limit max dma32 memory to 4GB for now
* until we can figure out how big this
* zone really is.
*/
mem = ((uint64_t) 1ULL << 32);
zone->name = "dma32";
zone->zone_mem = mem;
zone->max_mem = mem >> 1;
zone->emer_mem = (mem >> 1) + (mem >> 2);
zone->swap_limit = zone->max_mem - (mem >> 3);
zone->used_mem = 0;
zone->glob = glob;
glob->zone_dma32 = zone;
ret = kobject_init_and_add(
&zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, zone->name);
if (unlikely(ret != 0)) {
kobject_put(&zone->kobj);
return ret;
}
glob->zones[glob->num_zones++] = zone;
return 0;
}
#endif
int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev)
{
struct sysinfo si;
int ret;
int i;
struct ttm_mem_zone *zone;
spin_lock_init(&glob->lock);
glob->swap_queue = create_singlethread_workqueue("ttm_swap");
INIT_WORK(&glob->work, ttm_shrink_work);
ret = kobject_init_and_add(&glob->kobj, &ttm_mem_glob_kobj_type,
&dev->kobj, "memory_accounting");
if (unlikely(ret != 0)) {
kobject_put(&glob->kobj);
return ret;
}
si_meminfo(&si);
ret = ttm_mem_init_kernel_zone(glob, &si);
if (unlikely(ret != 0))
goto out_no_zone;
#ifdef CONFIG_HIGHMEM
ret = ttm_mem_init_highmem_zone(glob, &si);
if (unlikely(ret != 0))
goto out_no_zone;
#else
ret = ttm_mem_init_dma32_zone(glob, &si);
if (unlikely(ret != 0))
goto out_no_zone;
#endif
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
pr_info("Zone %7s: Available graphics memory: %llu KiB\n",
zone->name, (unsigned long long)zone->max_mem >> 10);
}
return 0;
out_no_zone:
ttm_mem_global_release(glob);
return ret;
}
void ttm_mem_global_release(struct ttm_mem_global *glob)
{
struct ttm_mem_zone *zone;
unsigned int i;
destroy_workqueue(glob->swap_queue);
glob->swap_queue = NULL;
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
kobject_del(&zone->kobj);
kobject_put(&zone->kobj);
}
kobject_del(&glob->kobj);
kobject_put(&glob->kobj);
memset(glob, 0, sizeof(*glob));
}
static void ttm_check_swapping(struct ttm_mem_global *glob)
{
bool needs_swapping = false;
unsigned int i;
struct ttm_mem_zone *zone;
spin_lock(&glob->lock);
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
if (zone->used_mem > zone->swap_limit) {
needs_swapping = true;
break;
}
}
spin_unlock(&glob->lock);
if (unlikely(needs_swapping))
(void)queue_work(glob->swap_queue, &glob->work);
}
static void ttm_mem_global_free_zone(struct ttm_mem_global *glob,
struct ttm_mem_zone *single_zone,
uint64_t amount)
{
unsigned int i;
struct ttm_mem_zone *zone;
spin_lock(&glob->lock);
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
if (single_zone && zone != single_zone)
continue;
zone->used_mem -= amount;
}
spin_unlock(&glob->lock);
}
void ttm_mem_global_free(struct ttm_mem_global *glob,
uint64_t amount)
{
return ttm_mem_global_free_zone(glob, glob->zone_kernel, amount);
}
EXPORT_SYMBOL(ttm_mem_global_free);
static int ttm_mem_global_reserve(struct ttm_mem_global *glob,
struct ttm_mem_zone *single_zone,
uint64_t amount, bool reserve)
{
uint64_t limit;
int ret = -ENOMEM;
unsigned int i;
struct ttm_mem_zone *zone;
spin_lock(&glob->lock);
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
if (single_zone && zone != single_zone)
continue;
limit = (capable(CAP_SYS_ADMIN)) ?
zone->emer_mem : zone->max_mem;
if (zone->used_mem > limit)
goto out_unlock;
}
if (reserve) {
for (i = 0; i < glob->num_zones; ++i) {
zone = glob->zones[i];
if (single_zone && zone != single_zone)
continue;
zone->used_mem += amount;
}
}
ret = 0;
out_unlock:
spin_unlock(&glob->lock);
ttm_check_swapping(glob);
return ret;
}
static int ttm_mem_global_alloc_zone(struct ttm_mem_global *glob,
struct ttm_mem_zone *single_zone,
uint64_t memory,
struct ttm_operation_ctx *ctx)
{
int count = TTM_MEMORY_ALLOC_RETRIES;
while (unlikely(ttm_mem_global_reserve(glob,
single_zone,
memory, true)
!= 0)) {
if (ctx->no_wait_gpu)
return -ENOMEM;
if (unlikely(count-- == 0))
return -ENOMEM;
ttm_shrink(glob, false, memory + (memory >> 2) + 16, ctx);
}
return 0;
}
int ttm_mem_global_alloc(struct ttm_mem_global *glob, uint64_t memory,
struct ttm_operation_ctx *ctx)
{
/**
* Normal allocations of kernel memory are registered in
* the kernel zone.
*/
return ttm_mem_global_alloc_zone(glob, glob->zone_kernel, memory, ctx);
}
EXPORT_SYMBOL(ttm_mem_global_alloc);
int ttm_mem_global_alloc_page(struct ttm_mem_global *glob,
struct page *page, uint64_t size,
struct ttm_operation_ctx *ctx)
{
struct ttm_mem_zone *zone = NULL;
/**
* Page allocations may be registered in a single zone
* only if highmem or !dma32.
*/
#ifdef CONFIG_HIGHMEM
if (PageHighMem(page) && glob->zone_highmem != NULL)
zone = glob->zone_highmem;
#else
if (glob->zone_dma32 && page_to_pfn(page) > 0x00100000UL)
zone = glob->zone_kernel;
#endif
return ttm_mem_global_alloc_zone(glob, zone, size, ctx);
}
void ttm_mem_global_free_page(struct ttm_mem_global *glob, struct page *page,
uint64_t size)
{
struct ttm_mem_zone *zone = NULL;
#ifdef CONFIG_HIGHMEM
if (PageHighMem(page) && glob->zone_highmem != NULL)
zone = glob->zone_highmem;
#else
if (glob->zone_dma32 && page_to_pfn(page) > 0x00100000UL)
zone = glob->zone_kernel;
#endif
ttm_mem_global_free_zone(glob, zone, size);
}
size_t ttm_round_pot(size_t size)
{
if ((size & (size - 1)) == 0)
return size;
else if (size > PAGE_SIZE)
return PAGE_ALIGN(size);
else {
size_t tmp_size = 4;
while (tmp_size < size)
tmp_size <<= 1;
return tmp_size;
}
return 0;
}
EXPORT_SYMBOL(ttm_round_pot);


@ -1,92 +0,0 @@
/**************************************************************************
*
* Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
**************************************************************************/
#ifndef TTM_MEMORY_H
#define TTM_MEMORY_H
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/bug.h>
#include <linux/wait.h>
#include <linux/errno.h>
#include <linux/kobject.h>
#include <linux/mm.h>
#include <drm/ttm/ttm_bo_api.h>
/**
* struct ttm_mem_global - Global memory accounting structure.
*
* @shrink: A single callback to shrink TTM memory usage. Extend this
* to a linked list to be able to handle multiple callbacks when needed.
* @swap_queue: A workqueue to handle shrinking in low memory situations. We
* need a separate workqueue since it will spend a lot of time waiting
* for the GPU, and this will otherwise block other workqueue tasks(?)
* At this point we use only a single-threaded workqueue.
* @work: The workqueue callback for the shrink queue.
* @lock: Lock to protect the @shrink - and the memory accounting members,
* that is, essentially the whole structure with some exceptions.
* @zones: Array of pointers to accounting zones.
* @num_zones: Number of populated entries in the @zones array.
* @zone_kernel: Pointer to the kernel zone.
* @zone_highmem: Pointer to the highmem zone if there is one.
* @zone_dma32: Pointer to the dma32 zone if there is one.
*
* Note that this structure is not per device. It should be global for all
* graphics devices.
*/
#define TTM_MEM_MAX_ZONES 2
struct ttm_mem_zone;
extern struct ttm_mem_global {
struct kobject kobj;
struct workqueue_struct *swap_queue;
struct work_struct work;
spinlock_t lock;
struct ttm_mem_zone *zones[TTM_MEM_MAX_ZONES];
unsigned int num_zones;
struct ttm_mem_zone *zone_kernel;
#ifdef CONFIG_HIGHMEM
struct ttm_mem_zone *zone_highmem;
#else
struct ttm_mem_zone *zone_dma32;
#endif
} ttm_mem_glob;
int ttm_mem_global_init(struct ttm_mem_global *glob, struct device *dev);
void ttm_mem_global_release(struct ttm_mem_global *glob);
int ttm_mem_global_alloc(struct ttm_mem_global *glob, uint64_t memory,
struct ttm_operation_ctx *ctx);
void ttm_mem_global_free(struct ttm_mem_global *glob, uint64_t amount);
int ttm_mem_global_alloc_page(struct ttm_mem_global *glob,
struct page *page, uint64_t size,
struct ttm_operation_ctx *ctx);
void ttm_mem_global_free_page(struct ttm_mem_global *glob,
struct page *page, uint64_t size);
size_t ttm_round_pot(size_t size);
#endif


@ -50,6 +50,7 @@
#include <linux/atomic.h>
#include <linux/module.h>
#include "ttm_object.h"
#include "vmwgfx_drv.h"
MODULE_IMPORT_NS(DMA_BUF);
@ -73,7 +74,7 @@ struct ttm_object_file {
struct ttm_object_device *tdev;
spinlock_t lock;
struct list_head ref_list;
struct vmwgfx_open_hash ref_hash[TTM_REF_NUM];
struct vmwgfx_open_hash ref_hash;
struct kref refcount;
};
@ -93,10 +94,8 @@ struct ttm_object_device {
spinlock_t object_lock;
struct vmwgfx_open_hash object_hash;
atomic_t object_count;
struct ttm_mem_global *mem_glob;
struct dma_buf_ops ops;
void (*dmabuf_release)(struct dma_buf *dma_buf);
size_t dma_buf_size;
struct idr idr;
};
@ -126,7 +125,6 @@ struct ttm_ref_object {
struct vmwgfx_hash_item hash;
struct list_head head;
struct kref kref;
enum ttm_ref_type ref_type;
struct ttm_base_object *obj;
struct ttm_object_file *tfile;
};
@ -162,9 +160,7 @@ int ttm_base_object_init(struct ttm_object_file *tfile,
struct ttm_base_object *base,
bool shareable,
enum ttm_object_type object_type,
void (*refcount_release) (struct ttm_base_object **),
void (*ref_obj_release) (struct ttm_base_object *,
enum ttm_ref_type ref_type))
void (*refcount_release) (struct ttm_base_object **))
{
struct ttm_object_device *tdev = tfile->tdev;
int ret;
@ -172,7 +168,6 @@ int ttm_base_object_init(struct ttm_object_file *tfile,
base->shareable = shareable;
base->tfile = ttm_object_file_ref(tfile);
base->refcount_release = refcount_release;
base->ref_obj_release = ref_obj_release;
base->object_type = object_type;
kref_init(&base->refcount);
idr_preload(GFP_KERNEL);
@ -184,7 +179,7 @@ int ttm_base_object_init(struct ttm_object_file *tfile,
return ret;
base->handle = ret;
ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL, false);
ret = ttm_ref_object_add(tfile, base, NULL, false);
if (unlikely(ret != 0))
goto out_err1;
@ -248,7 +243,7 @@ struct ttm_base_object *
ttm_base_object_noref_lookup(struct ttm_object_file *tfile, uint32_t key)
{
struct vmwgfx_hash_item *hash;
struct vmwgfx_open_hash *ht = &tfile->ref_hash[TTM_REF_USAGE];
struct vmwgfx_open_hash *ht = &tfile->ref_hash;
int ret;
rcu_read_lock();
@ -268,7 +263,7 @@ struct ttm_base_object *ttm_base_object_lookup(struct ttm_object_file *tfile,
{
struct ttm_base_object *base = NULL;
struct vmwgfx_hash_item *hash;
struct vmwgfx_open_hash *ht = &tfile->ref_hash[TTM_REF_USAGE];
struct vmwgfx_open_hash *ht = &tfile->ref_hash;
int ret;
rcu_read_lock();
@ -299,64 +294,14 @@ ttm_base_object_lookup_for_ref(struct ttm_object_device *tdev, uint32_t key)
return base;
}
/**
* ttm_ref_object_exists - Check whether a caller has a valid ref object
* (has opened) a base object.
*
* @tfile: Pointer to a struct ttm_object_file identifying the caller.
* @base: Pointer to a struct base object.
*
* Checks whether the caller identified by @tfile has put a valid USAGE
* reference object on the base object identified by @base.
*/
bool ttm_ref_object_exists(struct ttm_object_file *tfile,
struct ttm_base_object *base)
{
struct vmwgfx_open_hash *ht = &tfile->ref_hash[TTM_REF_USAGE];
struct vmwgfx_hash_item *hash;
struct ttm_ref_object *ref;
rcu_read_lock();
if (unlikely(vmwgfx_ht_find_item_rcu(ht, base->handle, &hash) != 0))
goto out_false;
/*
* Verify that the ref object is really pointing to our base object.
* Our base object could actually be dead, and the ref object pointing
* to another base object with the same handle.
*/
ref = drm_hash_entry(hash, struct ttm_ref_object, hash);
if (unlikely(base != ref->obj))
goto out_false;
/*
* Verify that the ref->obj pointer was actually valid!
*/
rmb();
if (unlikely(kref_read(&ref->kref) == 0))
goto out_false;
rcu_read_unlock();
return true;
out_false:
rcu_read_unlock();
return false;
}
int ttm_ref_object_add(struct ttm_object_file *tfile,
struct ttm_base_object *base,
enum ttm_ref_type ref_type, bool *existed,
bool *existed,
bool require_existed)
{
struct vmwgfx_open_hash *ht = &tfile->ref_hash[ref_type];
struct vmwgfx_open_hash *ht = &tfile->ref_hash;
struct ttm_ref_object *ref;
struct vmwgfx_hash_item *hash;
struct ttm_mem_global *mem_glob = tfile->tdev->mem_glob;
struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
int ret = -EINVAL;
if (base->tfile != tfile && !base->shareable)
@ -381,20 +326,14 @@ int ttm_ref_object_add(struct ttm_object_file *tfile,
if (require_existed)
return -EPERM;
ret = ttm_mem_global_alloc(mem_glob, sizeof(*ref),
&ctx);
if (unlikely(ret != 0))
return ret;
ref = kmalloc(sizeof(*ref), GFP_KERNEL);
if (unlikely(ref == NULL)) {
ttm_mem_global_free(mem_glob, sizeof(*ref));
return -ENOMEM;
}
ref->hash.key = base->handle;
ref->obj = base;
ref->tfile = tfile;
ref->ref_type = ref_type;
kref_init(&ref->kref);
spin_lock(&tfile->lock);
@ -412,7 +351,6 @@ int ttm_ref_object_add(struct ttm_object_file *tfile,
spin_unlock(&tfile->lock);
BUG_ON(ret != -EINVAL);
ttm_mem_global_free(mem_glob, sizeof(*ref));
kfree(ref);
}
@ -424,29 +362,23 @@ ttm_ref_object_release(struct kref *kref)
{
struct ttm_ref_object *ref =
container_of(kref, struct ttm_ref_object, kref);
struct ttm_base_object *base = ref->obj;
struct ttm_object_file *tfile = ref->tfile;
struct vmwgfx_open_hash *ht;
struct ttm_mem_global *mem_glob = tfile->tdev->mem_glob;
ht = &tfile->ref_hash[ref->ref_type];
ht = &tfile->ref_hash;
(void)vmwgfx_ht_remove_item_rcu(ht, &ref->hash);
list_del(&ref->head);
spin_unlock(&tfile->lock);
if (ref->ref_type != TTM_REF_USAGE && base->ref_obj_release)
base->ref_obj_release(base, ref->ref_type);
ttm_base_object_unref(&ref->obj);
ttm_mem_global_free(mem_glob, sizeof(*ref));
kfree_rcu(ref, rcu_head);
spin_lock(&tfile->lock);
}
int ttm_ref_object_base_unref(struct ttm_object_file *tfile,
unsigned long key, enum ttm_ref_type ref_type)
unsigned long key)
{
struct vmwgfx_open_hash *ht = &tfile->ref_hash[ref_type];
struct vmwgfx_open_hash *ht = &tfile->ref_hash;
struct ttm_ref_object *ref;
struct vmwgfx_hash_item *hash;
int ret;
@ -467,7 +399,6 @@ void ttm_object_file_release(struct ttm_object_file **p_tfile)
{
struct ttm_ref_object *ref;
struct list_head *list;
unsigned int i;
struct ttm_object_file *tfile = *p_tfile;
*p_tfile = NULL;
@ -485,8 +416,7 @@ void ttm_object_file_release(struct ttm_object_file **p_tfile)
}
spin_unlock(&tfile->lock);
for (i = 0; i < TTM_REF_NUM; ++i)
vmwgfx_ht_remove(&tfile->ref_hash[i]);
vmwgfx_ht_remove(&tfile->ref_hash);
ttm_object_file_unref(&tfile);
}
@ -495,8 +425,6 @@ struct ttm_object_file *ttm_object_file_init(struct ttm_object_device *tdev,
unsigned int hash_order)
{
struct ttm_object_file *tfile = kmalloc(sizeof(*tfile), GFP_KERNEL);
unsigned int i;
unsigned int j = 0;
int ret;
if (unlikely(tfile == NULL))
@ -507,18 +435,13 @@ struct ttm_object_file *ttm_object_file_init(struct ttm_object_device *tdev,
kref_init(&tfile->refcount);
INIT_LIST_HEAD(&tfile->ref_list);
for (i = 0; i < TTM_REF_NUM; ++i) {
ret = vmwgfx_ht_create(&tfile->ref_hash[i], hash_order);
if (ret) {
j = i;
goto out_err;
}
}
ret = vmwgfx_ht_create(&tfile->ref_hash, hash_order);
if (ret)
goto out_err;
return tfile;
out_err:
for (i = 0; i < j; ++i)
vmwgfx_ht_remove(&tfile->ref_hash[i]);
vmwgfx_ht_remove(&tfile->ref_hash);
kfree(tfile);
@ -526,8 +449,7 @@ out_err:
}
struct ttm_object_device *
ttm_object_device_init(struct ttm_mem_global *mem_glob,
unsigned int hash_order,
ttm_object_device_init(unsigned int hash_order,
const struct dma_buf_ops *ops)
{
struct ttm_object_device *tdev = kmalloc(sizeof(*tdev), GFP_KERNEL);
@ -536,19 +458,24 @@ ttm_object_device_init(struct ttm_mem_global *mem_glob,
if (unlikely(tdev == NULL))
return NULL;
tdev->mem_glob = mem_glob;
spin_lock_init(&tdev->object_lock);
atomic_set(&tdev->object_count, 0);
ret = vmwgfx_ht_create(&tdev->object_hash, hash_order);
if (ret != 0)
goto out_no_object_hash;
idr_init_base(&tdev->idr, 1);
/*
* Our base is at VMWGFX_NUM_MOB + 1 because we want to create
* a separate namespace for GEM handles (which are
* 1..VMWGFX_NUM_MOB) and the surface handles. Some ioctl's
* can take either handle as an argument so we want to
* easily be able to tell whether the handle refers to a
* GEM buffer or a surface.
*/
idr_init_base(&tdev->idr, VMWGFX_NUM_MOB + 1);
tdev->ops = *ops;
tdev->dmabuf_release = tdev->ops.release;
tdev->ops.release = ttm_prime_dmabuf_release;
tdev->dma_buf_size = ttm_round_pot(sizeof(struct dma_buf)) +
ttm_round_pot(sizeof(struct file));
return tdev;
out_no_object_hash:
@ -633,7 +560,6 @@ static void ttm_prime_dmabuf_release(struct dma_buf *dma_buf)
if (prime->dma_buf == dma_buf)
prime->dma_buf = NULL;
mutex_unlock(&prime->mutex);
ttm_mem_global_free(tdev->mem_glob, tdev->dma_buf_size);
ttm_base_object_unref(&base);
}
@ -667,7 +593,7 @@ int ttm_prime_fd_to_handle(struct ttm_object_file *tfile,
prime = (struct ttm_prime_object *) dma_buf->priv;
base = &prime->base;
*handle = base->handle;
ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL, false);
ret = ttm_ref_object_add(tfile, base, NULL, false);
dma_buf_put(dma_buf);
@ -715,30 +641,18 @@ int ttm_prime_handle_to_fd(struct ttm_object_file *tfile,
dma_buf = prime->dma_buf;
if (!dma_buf || !get_dma_buf_unless_doomed(dma_buf)) {
DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
exp_info.ops = &tdev->ops;
exp_info.size = prime->size;
exp_info.flags = flags;
exp_info.priv = prime;
/*
* Need to create a new dma_buf, with memory accounting.
* Need to create a new dma_buf
*/
ret = ttm_mem_global_alloc(tdev->mem_glob, tdev->dma_buf_size,
&ctx);
if (unlikely(ret != 0)) {
mutex_unlock(&prime->mutex);
goto out_unref;
}
dma_buf = dma_buf_export(&exp_info);
if (IS_ERR(dma_buf)) {
ret = PTR_ERR(dma_buf);
ttm_mem_global_free(tdev->mem_glob,
tdev->dma_buf_size);
mutex_unlock(&prime->mutex);
goto out_unref;
}
@ -773,7 +687,6 @@ out_unref:
* @shareable: See ttm_base_object_init
* @type: See ttm_base_object_init
* @refcount_release: See ttm_base_object_init
* @ref_obj_release: See ttm_base_object_init
*
* Initializes an object which is compatible with the drm_prime model
* for data sharing between processes and devices.
@ -781,9 +694,7 @@ out_unref:
int ttm_prime_object_init(struct ttm_object_file *tfile, size_t size,
struct ttm_prime_object *prime, bool shareable,
enum ttm_object_type type,
void (*refcount_release) (struct ttm_base_object **),
void (*ref_obj_release) (struct ttm_base_object *,
enum ttm_ref_type ref_type))
void (*refcount_release) (struct ttm_base_object **))
{
mutex_init(&prime->mutex);
prime->size = PAGE_ALIGN(size);
@ -792,6 +703,5 @@ int ttm_prime_object_init(struct ttm_object_file *tfile, size_t size,
prime->refcount_release = refcount_release;
return ttm_base_object_init(tfile, &prime->base, shareable,
ttm_prime_type,
ttm_prime_refcount_release,
ref_obj_release);
ttm_prime_refcount_release);
}


@ -42,31 +42,8 @@
#include <linux/list.h>
#include <linux/rcupdate.h>
#include "ttm_memory.h"
#include "vmwgfx_hashtab.h"
/**
* enum ttm_ref_type
*
* Describes what type of reference a ref object holds.
*
* TTM_REF_USAGE is a simple refcount on a base object.
*
* TTM_REF_SYNCCPU_READ is a SYNCCPU_READ reference on a
* buffer object.
*
* TTM_REF_SYNCCPU_WRITE is a SYNCCPU_WRITE reference on a
* buffer object.
*
*/
enum ttm_ref_type {
TTM_REF_USAGE,
TTM_REF_SYNCCPU_READ,
TTM_REF_SYNCCPU_WRITE,
TTM_REF_NUM
};
/**
* enum ttm_object_type
*
@ -77,7 +54,6 @@ enum ttm_ref_type {
enum ttm_object_type {
ttm_fence_type,
ttm_buffer_type,
ttm_lock_type,
ttm_prime_type,
ttm_driver_type0 = 256,
@ -128,8 +104,6 @@ struct ttm_base_object {
struct ttm_object_file *tfile;
struct kref refcount;
void (*refcount_release) (struct ttm_base_object **base);
void (*ref_obj_release) (struct ttm_base_object *base,
enum ttm_ref_type ref_type);
u32 handle;
enum ttm_object_type object_type;
u32 shareable;
@ -178,11 +152,7 @@ extern int ttm_base_object_init(struct ttm_object_file *tfile,
bool shareable,
enum ttm_object_type type,
void (*refcount_release) (struct ttm_base_object
**),
void (*ref_obj_release) (struct ttm_base_object
*,
enum ttm_ref_type
ref_type));
**));
/**
* ttm_base_object_lookup
@ -246,12 +216,9 @@ extern void ttm_base_object_unref(struct ttm_base_object **p_base);
*/
extern int ttm_ref_object_add(struct ttm_object_file *tfile,
struct ttm_base_object *base,
enum ttm_ref_type ref_type, bool *existed,
bool *existed,
bool require_existed);
extern bool ttm_ref_object_exists(struct ttm_object_file *tfile,
struct ttm_base_object *base);
/**
* ttm_ref_object_base_unref
*
@ -264,8 +231,7 @@ extern bool ttm_ref_object_exists(struct ttm_object_file *tfile,
* will be unreferenced.
*/
extern int ttm_ref_object_base_unref(struct ttm_object_file *tfile,
unsigned long key,
enum ttm_ref_type ref_type);
unsigned long key);
/**
* ttm_object_file_init - initialize a struct ttm_object file
@ -296,7 +262,6 @@ extern void ttm_object_file_release(struct ttm_object_file **p_tfile);
/**
* ttm_object device init - initialize a struct ttm_object_device
*
* @mem_glob: struct ttm_mem_global for memory accounting.
* @hash_order: Order of hash table used to hash the base objects.
* @ops: DMA buf ops for prime objects of this device.
*
@ -305,8 +270,7 @@ extern void ttm_object_file_release(struct ttm_object_file **p_tfile);
*/
extern struct ttm_object_device *
ttm_object_device_init(struct ttm_mem_global *mem_glob,
unsigned int hash_order,
ttm_object_device_init(unsigned int hash_order,
const struct dma_buf_ops *ops);
/**
@ -331,10 +295,7 @@ extern int ttm_prime_object_init(struct ttm_object_file *tfile,
bool shareable,
enum ttm_object_type type,
void (*refcount_release)
(struct ttm_base_object **),
void (*ref_obj_release)
(struct ttm_base_object *,
enum ttm_ref_type ref_type));
(struct ttm_base_object **));
static inline enum ttm_object_type
ttm_base_object_type(struct ttm_base_object *base)
@ -352,13 +313,6 @@ extern int ttm_prime_handle_to_fd(struct ttm_object_file *tfile,
#define ttm_prime_object_kfree(__obj, __prime) \
kfree_rcu(__obj, __prime.base.rhead)
/*
* Extra memory required by the base object's idr storage, which is allocated
* separately from the base object itself. We estimate an on-average 128 bytes
* per idr.
*/
#define TTM_OBJ_EXTRA_SIZE 128
struct ttm_base_object *
ttm_base_object_noref_lookup(struct ttm_object_file *tfile, uint32_t key);


@ -353,6 +353,27 @@ void vmw_binding_add(struct vmw_ctx_binding_state *cbs,
INIT_LIST_HEAD(&loc->res_list);
}
/**
* vmw_binding_cb_offset_update: Update the offset of a cb binding
*
* @cbs: Pointer to the context binding state tracker.
* @shader_slot: The shader slot of the binding.
* @slot: The slot of the binding.
* @offsetInBytes: The new offset of the binding.
*
* Updates the offset of an existing cb binding in the context binding
* state structure @cbs.
*/
void vmw_binding_cb_offset_update(struct vmw_ctx_binding_state *cbs,
u32 shader_slot, u32 slot, u32 offsetInBytes)
{
struct vmw_ctx_bindinfo *loc =
vmw_binding_loc(cbs, vmw_ctx_binding_cb, shader_slot, slot);
struct vmw_ctx_bindinfo_cb *loc_cb =
(struct vmw_ctx_bindinfo_cb *)((u8 *) loc);
loc_cb->offset = offsetInBytes;
}
/**
* vmw_binding_add_uav_index - Add UAV index for tracking.
* @cbs: Pointer to the context binding state tracker.
@ -1070,7 +1091,7 @@ static int vmw_emit_set_uav(struct vmw_ctx_binding_state *cbs)
size_t cmd_size, view_id_size;
const struct vmw_resource *ctx = vmw_cbs_context(cbs);
vmw_collect_view_ids(cbs, loc, SVGA3D_MAX_UAVIEWS);
vmw_collect_view_ids(cbs, loc, vmw_max_num_uavs(cbs->dev_priv));
view_id_size = cbs->bind_cmd_count*sizeof(uint32);
cmd_size = sizeof(*cmd) + view_id_size;
cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id);
@ -1100,7 +1121,7 @@ static int vmw_emit_set_cs_uav(struct vmw_ctx_binding_state *cbs)
size_t cmd_size, view_id_size;
const struct vmw_resource *ctx = vmw_cbs_context(cbs);
vmw_collect_view_ids(cbs, loc, SVGA3D_MAX_UAVIEWS);
vmw_collect_view_ids(cbs, loc, vmw_max_num_uavs(cbs->dev_priv));
view_id_size = cbs->bind_cmd_count*sizeof(uint32);
cmd_size = sizeof(*cmd) + view_id_size;
cmd = VMW_CMD_CTX_RESERVE(ctx->dev_priv, cmd_size, ctx->id);
@ -1327,8 +1348,7 @@ static int vmw_binding_scrub_so(struct vmw_ctx_bindinfo *bi, bool rebind)
}
/**
* vmw_binding_state_alloc - Allocate a struct vmw_ctx_binding_state with
* memory accounting.
* vmw_binding_state_alloc - Allocate a struct vmw_ctx_binding_state.
*
* @dev_priv: Pointer to a device private structure.
*
@ -1338,20 +1358,9 @@ struct vmw_ctx_binding_state *
vmw_binding_state_alloc(struct vmw_private *dev_priv)
{
struct vmw_ctx_binding_state *cbs;
struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
int ret;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv), sizeof(*cbs),
&ctx);
if (ret)
return ERR_PTR(ret);
cbs = vzalloc(sizeof(*cbs));
if (!cbs) {
ttm_mem_global_free(vmw_mem_glob(dev_priv), sizeof(*cbs));
return ERR_PTR(-ENOMEM);
}
@ -1362,17 +1371,13 @@ vmw_binding_state_alloc(struct vmw_private *dev_priv)
}
/**
* vmw_binding_state_free - Free a struct vmw_ctx_binding_state and its
* memory accounting info.
* vmw_binding_state_free - Free a struct vmw_ctx_binding_state.
*
* @cbs: Pointer to the struct vmw_ctx_binding_state to be freed.
*/
void vmw_binding_state_free(struct vmw_ctx_binding_state *cbs)
{
struct vmw_private *dev_priv = cbs->dev_priv;
vfree(cbs);
ttm_mem_global_free(vmw_mem_glob(dev_priv), sizeof(*cbs));
}
/**

View File

@ -200,7 +200,7 @@ struct vmw_dx_shader_bindings {
* @splice_index: The device splice index set by user-space.
*/
struct vmw_ctx_bindinfo_uav {
struct vmw_ctx_bindinfo_view views[SVGA3D_MAX_UAVIEWS];
struct vmw_ctx_bindinfo_view views[SVGA3D_DX11_1_MAX_UAVIEWS];
uint32 index;
};
@ -217,6 +217,8 @@ struct vmw_ctx_bindinfo_so {
extern void vmw_binding_add(struct vmw_ctx_binding_state *cbs,
const struct vmw_ctx_bindinfo *ci,
u32 shader_slot, u32 slot);
extern void vmw_binding_cb_offset_update(struct vmw_ctx_binding_state *cbs,
u32 shader_slot, u32 slot, u32 offsetInBytes);
extern void vmw_binding_add_uav_index(struct vmw_ctx_binding_state *cbs,
uint32 slot, uint32 splice_index);
extern void

View File

@ -32,18 +32,6 @@
#include "ttm_object.h"
/**
* struct vmw_user_buffer_object - User-space-visible buffer object
*
* @prime: The prime object providing user visibility.
* @vbo: The struct vmw_buffer_object
*/
struct vmw_user_buffer_object {
struct ttm_prime_object prime;
struct vmw_buffer_object vbo;
};
/**
* vmw_buffer_object - Convert a struct ttm_buffer_object to a struct
* vmw_buffer_object.
@ -59,23 +47,6 @@ vmw_buffer_object(struct ttm_buffer_object *bo)
}
/**
* vmw_user_buffer_object - Convert a struct ttm_buffer_object to a struct
* vmw_user_buffer_object.
*
* @bo: Pointer to the TTM buffer object.
* Return: Pointer to the struct vmw_buffer_object embedding the TTM buffer
* object.
*/
static struct vmw_user_buffer_object *
vmw_user_buffer_object(struct ttm_buffer_object *bo)
{
struct vmw_buffer_object *vmw_bo = vmw_buffer_object(bo);
return container_of(vmw_bo, struct vmw_user_buffer_object, vbo);
}
/**
* vmw_bo_pin_in_placement - Validate a buffer to placement.
*
@ -391,39 +362,6 @@ void vmw_bo_unmap(struct vmw_buffer_object *vbo)
}
/**
* vmw_bo_acc_size - Calculate the pinned memory usage of buffers
*
* @dev_priv: Pointer to a struct vmw_private identifying the device.
* @size: The requested buffer size.
* @user: Whether this is an ordinary dma buffer or a user dma buffer.
*/
static size_t vmw_bo_acc_size(struct vmw_private *dev_priv, size_t size,
bool user)
{
static size_t struct_size, user_struct_size;
size_t num_pages = PFN_UP(size);
size_t page_array_size = ttm_round_pot(num_pages * sizeof(void *));
if (unlikely(struct_size == 0)) {
size_t backend_size = ttm_round_pot(vmw_tt_size);
struct_size = backend_size +
ttm_round_pot(sizeof(struct vmw_buffer_object));
user_struct_size = backend_size +
ttm_round_pot(sizeof(struct vmw_user_buffer_object)) +
TTM_OBJ_EXTRA_SIZE;
}
if (dev_priv->map_mode == vmw_dma_alloc_coherent)
page_array_size +=
ttm_round_pot(num_pages * sizeof(dma_addr_t));
return ((user) ? user_struct_size : struct_size) +
page_array_size;
}
/**
* vmw_bo_bo_free - vmw buffer object destructor
*
@ -436,27 +374,10 @@ void vmw_bo_bo_free(struct ttm_buffer_object *bo)
WARN_ON(vmw_bo->dirty);
WARN_ON(!RB_EMPTY_ROOT(&vmw_bo->res_tree));
vmw_bo_unmap(vmw_bo);
dma_resv_fini(&bo->base._resv);
drm_gem_object_release(&bo->base);
kfree(vmw_bo);
}
/**
* vmw_user_bo_destroy - vmw buffer object destructor
*
* @bo: Pointer to the embedded struct ttm_buffer_object
*/
static void vmw_user_bo_destroy(struct ttm_buffer_object *bo)
{
struct vmw_user_buffer_object *vmw_user_bo = vmw_user_buffer_object(bo);
struct vmw_buffer_object *vbo = &vmw_user_bo->vbo;
WARN_ON(vbo->dirty);
WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree));
vmw_bo_unmap(vbo);
ttm_prime_object_kfree(vmw_user_bo, prime);
}
/**
* vmw_bo_create_kernel - Create a pinned BO for internal kernel use.
*
@ -471,33 +392,27 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
struct ttm_placement *placement,
struct ttm_buffer_object **p_bo)
{
struct ttm_operation_ctx ctx = { false, false };
struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
struct ttm_buffer_object *bo;
size_t acc_size;
struct drm_device *vdev = &dev_priv->drm;
int ret;
bo = kzalloc(sizeof(*bo), GFP_KERNEL);
if (unlikely(!bo))
return -ENOMEM;
acc_size = ttm_round_pot(sizeof(*bo));
acc_size += ttm_round_pot(PFN_UP(size) * sizeof(void *));
acc_size += ttm_round_pot(sizeof(struct ttm_tt));
size = ALIGN(size, PAGE_SIZE);
ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx);
if (unlikely(ret))
goto error_free;
bo->base.size = size;
dma_resv_init(&bo->base._resv);
drm_vma_node_reset(&bo->base.vma_node);
drm_gem_private_object_init(vdev, &bo->base, size);
ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, size,
ttm_bo_type_kernel, placement, 0,
&ctx, NULL, NULL, NULL);
if (unlikely(ret))
goto error_account;
goto error_free;
ttm_bo_pin(bo);
ttm_bo_unreserve(bo);
@ -505,14 +420,38 @@ int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
return 0;
error_account:
ttm_mem_global_free(&ttm_mem_glob, acc_size);
error_free:
kfree(bo);
return ret;
}
int vmw_bo_create(struct vmw_private *vmw,
size_t size, struct ttm_placement *placement,
bool interruptible, bool pin,
void (*bo_free)(struct ttm_buffer_object *bo),
struct vmw_buffer_object **p_bo)
{
int ret;
*p_bo = kmalloc(sizeof(**p_bo), GFP_KERNEL);
if (unlikely(!*p_bo)) {
DRM_ERROR("Failed to allocate a buffer.\n");
return -ENOMEM;
}
ret = vmw_bo_init(vmw, *p_bo, size,
placement, interruptible, pin,
bo_free);
if (unlikely(ret != 0))
goto out_error;
return ret;
out_error:
kfree(*p_bo);
*p_bo = NULL;
return ret;
}
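
A minimal usage sketch of the new helper (the size and placement are illustrative, and dev_priv is assumed to be a valid struct vmw_private):

	struct vmw_buffer_object *vbo;
	int ret;

	/* Allocate and initialize an unpinned BO in system placement. */
	ret = vmw_bo_create(dev_priv, PAGE_SIZE, &vmw_sys_placement,
			    true /* interruptible */, false /* pin */,
			    &vmw_bo_bo_free, &vbo);
	if (ret)
		return ret;

	/* ... use vbo ... */
	vmw_bo_unreference(&vbo);

This bundles the allocation and vmw_bo_init() call that in-kernel callers previously open-coded, as the conversions further down in this diff show.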
/**
* vmw_bo_init - Initialize a vmw buffer object
*
@ -533,192 +472,44 @@ int vmw_bo_init(struct vmw_private *dev_priv,
bool interruptible, bool pin,
void (*bo_free)(struct ttm_buffer_object *bo))
{
struct ttm_operation_ctx ctx = { interruptible, false };
struct ttm_operation_ctx ctx = {
.interruptible = interruptible,
.no_wait_gpu = false
};
struct ttm_device *bdev = &dev_priv->bdev;
size_t acc_size;
struct drm_device *vdev = &dev_priv->drm;
int ret;
bool user = (bo_free == &vmw_user_bo_destroy);
WARN_ON_ONCE(!bo_free && (!user && (bo_free != vmw_bo_bo_free)));
acc_size = vmw_bo_acc_size(dev_priv, size, user);
WARN_ON_ONCE(!bo_free);
memset(vmw_bo, 0, sizeof(*vmw_bo));
BUILD_BUG_ON(TTM_MAX_BO_PRIORITY <= 3);
vmw_bo->base.priority = 3;
vmw_bo->res_tree = RB_ROOT;
ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx);
if (unlikely(ret))
return ret;
vmw_bo->base.base.size = size;
dma_resv_init(&vmw_bo->base.base._resv);
drm_vma_node_reset(&vmw_bo->base.base.vma_node);
size = ALIGN(size, PAGE_SIZE);
drm_gem_private_object_init(vdev, &vmw_bo->base.base, size);
ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, size,
ttm_bo_type_device, placement,
ttm_bo_type_device,
placement,
0, &ctx, NULL, NULL, bo_free);
if (unlikely(ret)) {
ttm_mem_global_free(&ttm_mem_glob, acc_size);
return ret;
}
if (pin)
ttm_bo_pin(&vmw_bo->base);
ttm_bo_unreserve(&vmw_bo->base);
return 0;
}
/**
* vmw_user_bo_release - TTM reference base object release callback for
* vmw user buffer objects
*
* @p_base: The TTM base object pointer about to be unreferenced.
*
* Clears the TTM base object pointer and drops the reference the
* base object has on the underlying struct vmw_buffer_object.
*/
static void vmw_user_bo_release(struct ttm_base_object **p_base)
{
struct vmw_user_buffer_object *vmw_user_bo;
struct ttm_base_object *base = *p_base;
*p_base = NULL;
if (unlikely(base == NULL))
return;
vmw_user_bo = container_of(base, struct vmw_user_buffer_object,
prime.base);
ttm_bo_put(&vmw_user_bo->vbo.base);
}
/**
* vmw_user_bo_ref_obj_release - TTM synccpu reference object release callback
* for vmw user buffer objects
*
* @base: Pointer to the TTM base object
* @ref_type: Reference type of the reference reaching zero.
*
* Called when user-space drops its last synccpu reference on the buffer
* object, Either explicitly or as part of a cleanup file close.
*/
static void vmw_user_bo_ref_obj_release(struct ttm_base_object *base,
enum ttm_ref_type ref_type)
{
struct vmw_user_buffer_object *user_bo;
user_bo = container_of(base, struct vmw_user_buffer_object, prime.base);
switch (ref_type) {
case TTM_REF_SYNCCPU_WRITE:
atomic_dec(&user_bo->vbo.cpu_writers);
break;
default:
WARN_ONCE(true, "Undefined buffer object reference release.\n");
}
}
/**
* vmw_user_bo_alloc - Allocate a user buffer object
*
* @dev_priv: Pointer to a struct device private.
* @tfile: Pointer to a struct ttm_object_file on which to register the user
* object.
* @size: Size of the buffer object.
* @shareable: Boolean whether the buffer is shareable with other open files.
* @handle: Pointer to where the handle value should be assigned.
* @p_vbo: Pointer to where the refcounted struct vmw_buffer_object pointer
* should be assigned.
* @p_base: The TTM base object pointer about to be allocated.
* Return: Zero on success, negative error code on error.
*/
int vmw_user_bo_alloc(struct vmw_private *dev_priv,
struct ttm_object_file *tfile,
uint32_t size,
bool shareable,
uint32_t *handle,
struct vmw_buffer_object **p_vbo,
struct ttm_base_object **p_base)
{
struct vmw_user_buffer_object *user_bo;
int ret;
user_bo = kzalloc(sizeof(*user_bo), GFP_KERNEL);
if (unlikely(!user_bo)) {
DRM_ERROR("Failed to allocate a buffer.\n");
return -ENOMEM;
}
ret = vmw_bo_init(dev_priv, &user_bo->vbo, size,
(dev_priv->has_mob) ?
&vmw_sys_placement :
&vmw_vram_sys_placement, true, false,
&vmw_user_bo_destroy);
if (unlikely(ret != 0))
return ret;
ttm_bo_get(&user_bo->vbo.base);
ret = ttm_prime_object_init(tfile,
size,
&user_bo->prime,
shareable,
ttm_buffer_type,
&vmw_user_bo_release,
&vmw_user_bo_ref_obj_release);
if (unlikely(ret != 0)) {
ttm_bo_put(&user_bo->vbo.base);
goto out_no_base_object;
}
*p_vbo = &user_bo->vbo;
if (p_base) {
*p_base = &user_bo->prime.base;
kref_get(&(*p_base)->refcount);
}
*handle = user_bo->prime.base.handle;
out_no_base_object:
return ret;
}
/**
* vmw_user_bo_verify_access - verify access permissions on this
* buffer object.
*
* @bo: Pointer to the buffer object being accessed
* @tfile: Identifying the caller.
*/
int vmw_user_bo_verify_access(struct ttm_buffer_object *bo,
struct ttm_object_file *tfile)
{
struct vmw_user_buffer_object *vmw_user_bo;
if (unlikely(bo->destroy != vmw_user_bo_destroy))
return -EPERM;
vmw_user_bo = vmw_user_buffer_object(bo);
/* Check that the caller has opened the object. */
if (likely(ttm_ref_object_exists(tfile, &vmw_user_bo->prime.base)))
return 0;
DRM_ERROR("Could not grant buffer access.\n");
return -EPERM;
}
/**
* vmw_user_bo_synccpu_grab - Grab a struct vmw_user_buffer_object for cpu
* vmw_user_bo_synccpu_grab - Grab a struct vmw_buffer_object for cpu
* access, idling previous GPU operations on the buffer and optionally
* blocking it for further command submissions.
*
* @user_bo: Pointer to the buffer object being grabbed for CPU access
* @tfile: Identifying the caller.
* @vmw_bo: Pointer to the buffer object being grabbed for CPU access
* @flags: Flags indicating how the grab should be performed.
* Return: Zero on success, Negative error code on error. In particular,
* -EBUSY will be returned if a dontblock operation is requested and the
@ -727,13 +518,11 @@ int vmw_user_bo_verify_access(struct ttm_buffer_object *bo,
*
* A blocking grab will be automatically released when @tfile is closed.
*/
static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
struct ttm_object_file *tfile,
static int vmw_user_bo_synccpu_grab(struct vmw_buffer_object *vmw_bo,
uint32_t flags)
{
bool nonblock = !!(flags & drm_vmw_synccpu_dontblock);
struct ttm_buffer_object *bo = &user_bo->vbo.base;
bool existed;
struct ttm_buffer_object *bo = &vmw_bo->base;
int ret;
if (flags & drm_vmw_synccpu_allow_cs) {
@ -755,17 +544,12 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
ret = ttm_bo_wait(bo, true, nonblock);
if (likely(ret == 0))
atomic_inc(&user_bo->vbo.cpu_writers);
atomic_inc(&vmw_bo->cpu_writers);
ttm_bo_unreserve(bo);
if (unlikely(ret != 0))
return ret;
ret = ttm_ref_object_add(tfile, &user_bo->prime.base,
TTM_REF_SYNCCPU_WRITE, &existed, false);
if (ret != 0 || existed)
atomic_dec(&user_bo->vbo.cpu_writers);
return ret;
}
@ -773,19 +557,23 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
* vmw_user_bo_synccpu_release - Release a previous grab for CPU access,
* and unblock command submission on the buffer if blocked.
*
* @filp: Identifying the caller.
* @handle: Handle identifying the buffer object.
* @tfile: Identifying the caller.
* @flags: Flags indicating the type of release.
*/
static int vmw_user_bo_synccpu_release(uint32_t handle,
struct ttm_object_file *tfile,
uint32_t flags)
static int vmw_user_bo_synccpu_release(struct drm_file *filp,
uint32_t handle,
uint32_t flags)
{
if (!(flags & drm_vmw_synccpu_allow_cs))
return ttm_ref_object_base_unref(tfile, handle,
TTM_REF_SYNCCPU_WRITE);
struct vmw_buffer_object *vmw_bo;
int ret = vmw_user_bo_lookup(filp, handle, &vmw_bo);
return 0;
if (!(flags & drm_vmw_synccpu_allow_cs)) {
atomic_dec(&vmw_bo->cpu_writers);
}
ttm_bo_put(&vmw_bo->base);
return ret;
}
@ -807,9 +595,6 @@ int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_synccpu_arg *arg =
(struct drm_vmw_synccpu_arg *) data;
struct vmw_buffer_object *vbo;
struct vmw_user_buffer_object *user_bo;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct ttm_base_object *buffer_base;
int ret;
if ((arg->flags & (drm_vmw_synccpu_read | drm_vmw_synccpu_write)) == 0
@ -822,16 +607,12 @@ int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
switch (arg->op) {
case drm_vmw_synccpu_grab:
ret = vmw_user_bo_lookup(tfile, arg->handle, &vbo,
&buffer_base);
ret = vmw_user_bo_lookup(file_priv, arg->handle, &vbo);
if (unlikely(ret != 0))
return ret;
user_bo = container_of(vbo, struct vmw_user_buffer_object,
vbo);
ret = vmw_user_bo_synccpu_grab(user_bo, tfile, arg->flags);
ret = vmw_user_bo_synccpu_grab(vbo, arg->flags);
vmw_bo_unreference(&vbo);
ttm_base_object_unref(&buffer_base);
if (unlikely(ret != 0 && ret != -ERESTARTSYS &&
ret != -EBUSY)) {
DRM_ERROR("Failed synccpu grab on handle 0x%08x.\n",
@ -840,7 +621,8 @@ int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
}
break;
case drm_vmw_synccpu_release:
ret = vmw_user_bo_synccpu_release(arg->handle, tfile,
ret = vmw_user_bo_synccpu_release(file_priv,
arg->handle,
arg->flags);
if (unlikely(ret != 0)) {
DRM_ERROR("Failed synccpu release on handle 0x%08x.\n",
@ -856,50 +638,6 @@ int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
return 0;
}
/**
* vmw_bo_alloc_ioctl - ioctl function implementing the buffer object
* allocation functionality.
*
* @dev: Identifies the drm device.
* @data: Pointer to the ioctl argument.
* @file_priv: Identifies the caller.
* Return: Zero on success, negative error code on error.
*
* This function checks the ioctl arguments for validity and allocates a
* struct vmw_user_buffer_object bo.
*/
int vmw_bo_alloc_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
struct vmw_private *dev_priv = vmw_priv(dev);
union drm_vmw_alloc_dmabuf_arg *arg =
(union drm_vmw_alloc_dmabuf_arg *)data;
struct drm_vmw_alloc_dmabuf_req *req = &arg->req;
struct drm_vmw_dmabuf_rep *rep = &arg->rep;
struct vmw_buffer_object *vbo;
uint32_t handle;
int ret;
ret = vmw_user_bo_alloc(dev_priv, vmw_fpriv(file_priv)->tfile,
req->size, false, &handle, &vbo,
NULL);
if (unlikely(ret != 0))
goto out_no_bo;
rep->handle = handle;
rep->map_handle = drm_vma_node_offset_addr(&vbo->base.base.vma_node);
rep->cur_gmr_id = handle;
rep->cur_gmr_offset = 0;
vmw_bo_unreference(&vbo);
out_no_bo:
return ret;
}
/**
* vmw_bo_unref_ioctl - Generic handle close ioctl.
*
@ -917,65 +655,48 @@ int vmw_bo_unref_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_unref_dmabuf_arg *arg =
(struct drm_vmw_unref_dmabuf_arg *)data;
return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
arg->handle,
TTM_REF_USAGE);
drm_gem_handle_delete(file_priv, arg->handle);
return 0;
}
/**
* vmw_user_bo_lookup - Look up a vmw user buffer object from a handle.
*
* @tfile: The TTM object file the handle is registered with.
* @filp: The file the handle is registered with.
* @handle: The user buffer object handle
* @out: Pointer to a where a pointer to the embedded
* struct vmw_buffer_object should be placed.
* @p_base: Pointer to where a pointer to the TTM base object should be
* placed, or NULL if no such pointer is required.
* Return: Zero on success, Negative error code on error.
*
* Both the output base object pointer and the vmw buffer object pointer
* will be refcounted.
* The vmw buffer object pointer will be refcounted.
*/
int vmw_user_bo_lookup(struct ttm_object_file *tfile,
uint32_t handle, struct vmw_buffer_object **out,
struct ttm_base_object **p_base)
int vmw_user_bo_lookup(struct drm_file *filp,
uint32_t handle,
struct vmw_buffer_object **out)
{
struct vmw_user_buffer_object *vmw_user_bo;
struct ttm_base_object *base;
struct drm_gem_object *gobj;
base = ttm_base_object_lookup(tfile, handle);
if (unlikely(base == NULL)) {
gobj = drm_gem_object_lookup(filp, handle);
if (!gobj) {
DRM_ERROR("Invalid buffer object handle 0x%08lx.\n",
(unsigned long)handle);
return -ESRCH;
}
if (unlikely(ttm_base_object_type(base) != ttm_buffer_type)) {
ttm_base_object_unref(&base);
DRM_ERROR("Invalid buffer object handle 0x%08lx.\n",
(unsigned long)handle);
return -EINVAL;
}
vmw_user_bo = container_of(base, struct vmw_user_buffer_object,
prime.base);
ttm_bo_get(&vmw_user_bo->vbo.base);
if (p_base)
*p_base = base;
else
ttm_base_object_unref(&base);
*out = &vmw_user_bo->vbo;
*out = gem_to_vmw_bo(gobj);
ttm_bo_get(&(*out)->base);
drm_gem_object_put(gobj);
return 0;
}
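
With the GEM-backed lookup a caller passes the drm_file instead of a TTM object file and must drop the reference the lookup takes; a rough sketch (file_priv and handle stand for whatever the ioctl handler received):

	struct vmw_buffer_object *vbo;
	int ret;

	ret = vmw_user_bo_lookup(file_priv, handle, &vbo);
	if (unlikely(ret != 0))
		return ret;

	/* ... operate on vbo ... */
	vmw_bo_unreference(&vbo);	/* drop the reference taken by the lookup */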
/**
* vmw_user_bo_noref_lookup - Look up a vmw user buffer object without reference
* @tfile: The TTM object file the handle is registered with.
* @filp: The drm file the handle is registered with.
* @handle: The user buffer object handle.
*
* This function looks up a struct vmw_user_bo and returns a pointer to the
* This function looks up a struct vmw_bo and returns a pointer to the
* struct vmw_buffer_object it derives from without refcounting the pointer.
* The returned pointer is only valid until vmw_user_bo_noref_release() is
* called, and the object pointed to by the returned pointer may be doomed.
@ -988,52 +709,23 @@ int vmw_user_bo_lookup(struct ttm_object_file *tfile,
* error pointer on failure.
*/
struct vmw_buffer_object *
vmw_user_bo_noref_lookup(struct ttm_object_file *tfile, u32 handle)
vmw_user_bo_noref_lookup(struct drm_file *filp, u32 handle)
{
struct vmw_user_buffer_object *vmw_user_bo;
struct ttm_base_object *base;
struct vmw_buffer_object *vmw_bo;
struct ttm_buffer_object *bo;
struct drm_gem_object *gobj = drm_gem_object_lookup(filp, handle);
base = ttm_base_object_noref_lookup(tfile, handle);
if (!base) {
if (!gobj) {
DRM_ERROR("Invalid buffer object handle 0x%08lx.\n",
(unsigned long)handle);
return ERR_PTR(-ESRCH);
}
vmw_bo = gem_to_vmw_bo(gobj);
bo = ttm_bo_get_unless_zero(&vmw_bo->base);
vmw_bo = vmw_buffer_object(bo);
drm_gem_object_put(gobj);
if (unlikely(ttm_base_object_type(base) != ttm_buffer_type)) {
ttm_base_object_noref_release();
DRM_ERROR("Invalid buffer object handle 0x%08lx.\n",
(unsigned long)handle);
return ERR_PTR(-EINVAL);
}
vmw_user_bo = container_of(base, struct vmw_user_buffer_object,
prime.base);
return &vmw_user_bo->vbo;
}
/**
* vmw_user_bo_reference - Open a handle to a vmw user buffer object.
*
* @tfile: The TTM object file to register the handle with.
* @vbo: The embedded vmw buffer object.
* @handle: Pointer to where the new handle should be placed.
* Return: Zero on success, Negative error code on error.
*/
int vmw_user_bo_reference(struct ttm_object_file *tfile,
struct vmw_buffer_object *vbo,
uint32_t *handle)
{
struct vmw_user_buffer_object *user_bo;
if (vbo->base.destroy != vmw_user_bo_destroy)
return -EINVAL;
user_bo = container_of(vbo, struct vmw_user_buffer_object, vbo);
*handle = user_bo->prime.base.handle;
return ttm_ref_object_add(tfile, &user_bo->prime.base,
TTM_REF_USAGE, NULL, false);
return vmw_bo;
}
@ -1087,68 +779,15 @@ int vmw_dumb_create(struct drm_file *file_priv,
int ret;
args->pitch = args->width * ((args->bpp + 7) / 8);
args->size = args->pitch * args->height;
args->size = ALIGN(args->pitch * args->height, PAGE_SIZE);
ret = vmw_user_bo_alloc(dev_priv, vmw_fpriv(file_priv)->tfile,
args->size, false, &args->handle,
&vbo, NULL);
if (unlikely(ret != 0))
goto out_no_bo;
ret = vmw_gem_object_create_with_handle(dev_priv, file_priv,
args->size, &args->handle,
&vbo);
vmw_bo_unreference(&vbo);
out_no_bo:
return ret;
}
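
For instance, a hypothetical 1000x600 dumb buffer at 24 bpp, assuming a 4 KiB PAGE_SIZE, now works out as:

	args->pitch = 1000 * ((24 + 7) / 8);		/* 3000 bytes per scanline */
	args->size  = ALIGN(3000 * 600, PAGE_SIZE);	/* ALIGN(1800000, 4096) = 1802240 */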
/**
* vmw_dumb_map_offset - Return the address space offset of a dumb buffer
*
* @file_priv: Pointer to a struct drm_file identifying the caller.
* @dev: Pointer to the drm device.
* @handle: Handle identifying the dumb buffer.
* @offset: The address space offset returned.
* Return: Zero on success, negative error code on failure.
*
* This is a driver callback for the core drm dumb_map_offset functionality.
*/
int vmw_dumb_map_offset(struct drm_file *file_priv,
struct drm_device *dev, uint32_t handle,
uint64_t *offset)
{
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct vmw_buffer_object *out_buf;
int ret;
ret = vmw_user_bo_lookup(tfile, handle, &out_buf, NULL);
if (ret != 0)
return -EINVAL;
*offset = drm_vma_node_offset_addr(&out_buf->base.base.vma_node);
vmw_bo_unreference(&out_buf);
return 0;
}
/**
* vmw_dumb_destroy - Destroy a dumb boffer
*
* @file_priv: Pointer to a struct drm_file identifying the caller.
* @dev: Pointer to the drm device.
* @handle: Handle identifying the dumb buffer.
* Return: Zero on success, negative error code on failure.
*
* This is a driver callback for the core drm dumb_destroy functionality.
*/
int vmw_dumb_destroy(struct drm_file *file_priv,
struct drm_device *dev,
uint32_t handle)
{
return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
handle, TTM_REF_USAGE);
}
/**
* vmw_bo_swap_notify - swapout notify callback.
*
@ -1157,8 +796,7 @@ int vmw_dumb_destroy(struct drm_file *file_priv,
void vmw_bo_swap_notify(struct ttm_buffer_object *bo)
{
/* Is @bo embedded in a struct vmw_buffer_object? */
if (bo->destroy != vmw_bo_bo_free &&
bo->destroy != vmw_user_bo_destroy)
if (vmw_bo_is_vmw_bo(bo))
return;
/* Kill any cached kernel maps before swapout */
@ -1182,8 +820,7 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
struct vmw_buffer_object *vbo;
/* Make sure @bo is embedded in a struct vmw_buffer_object? */
if (bo->destroy != vmw_bo_bo_free &&
bo->destroy != vmw_user_bo_destroy)
if (vmw_bo_is_vmw_bo(bo))
return;
vbo = container_of(bo, struct vmw_buffer_object, base);
@ -1204,3 +841,22 @@ void vmw_bo_move_notify(struct ttm_buffer_object *bo,
if (mem->mem_type != VMW_PL_MOB && bo->resource->mem_type == VMW_PL_MOB)
vmw_resource_unbind_list(vbo);
}
/**
* vmw_bo_is_vmw_bo - check if the buffer object is a &vmw_buffer_object
* @bo: buffer object to be checked
*
* Uses the destroy function associated with the object to determine whether
* this is a &vmw_buffer_object.
*
* Returns:
* true if the object is of &vmw_buffer_object type, false if not.
*/
bool vmw_bo_is_vmw_bo(struct ttm_buffer_object *bo)
{
if (bo->destroy == &vmw_bo_bo_free ||
bo->destroy == &vmw_gem_destroy)
return true;
return false;
}

View File

@ -324,22 +324,3 @@ void vmw_cmdbuf_res_man_destroy(struct vmw_cmdbuf_res_manager *man)
kfree(man);
}
/**
* vmw_cmdbuf_res_man_size - Return the size of a command buffer managed
* resource manager
*
* Returns the approximate allocation size of a command buffer managed
* resource manager.
*/
size_t vmw_cmdbuf_res_man_size(void)
{
static size_t res_man_size;
if (unlikely(res_man_size == 0))
res_man_size =
ttm_round_pot(sizeof(struct vmw_cmdbuf_res_manager)) +
ttm_round_pot(sizeof(struct hlist_head) <<
VMW_CMDBUF_RES_MAN_HT_ORDER);
return res_man_size;
}

View File

@ -60,8 +60,6 @@ static int vmw_dx_context_unbind(struct vmw_resource *res,
struct ttm_validate_buffer *val_buf);
static int vmw_dx_context_destroy(struct vmw_resource *res);
static uint64_t vmw_user_context_size;
static const struct vmw_user_resource_conv user_context_conv = {
.object_type = VMW_RES_CONTEXT,
.base_obj_to_res = vmw_user_context_base_to_res,
@ -686,7 +684,6 @@ static void vmw_user_context_free(struct vmw_resource *res)
{
struct vmw_user_context *ctx =
container_of(res, struct vmw_user_context, res);
struct vmw_private *dev_priv = res->dev_priv;
if (ctx->cbs)
vmw_binding_state_free(ctx->cbs);
@ -694,8 +691,6 @@ static void vmw_user_context_free(struct vmw_resource *res)
(void) vmw_context_bind_dx_query(res, NULL);
ttm_base_object_kfree(ctx, base);
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_user_context_size);
}
/*
@ -720,7 +715,7 @@ int vmw_context_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_context_arg *arg = (struct drm_vmw_context_arg *)data;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
return ttm_ref_object_base_unref(tfile, arg->cid, TTM_REF_USAGE);
return ttm_ref_object_base_unref(tfile, arg->cid);
}
static int vmw_context_define(struct drm_device *dev, void *data,
@ -732,10 +727,6 @@ static int vmw_context_define(struct drm_device *dev, void *data,
struct vmw_resource *tmp;
struct drm_vmw_context_arg *arg = (struct drm_vmw_context_arg *)data;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct ttm_operation_ctx ttm_opt_ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
if (!has_sm4_context(dev_priv) && dx) {
@ -743,25 +734,8 @@ static int vmw_context_define(struct drm_device *dev, void *data,
return -EINVAL;
}
if (unlikely(vmw_user_context_size == 0))
vmw_user_context_size = ttm_round_pot(sizeof(*ctx)) +
((dev_priv->has_mob) ? vmw_cmdbuf_res_man_size() : 0) +
+ VMW_IDA_ACC_SIZE + TTM_OBJ_EXTRA_SIZE;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
vmw_user_context_size,
&ttm_opt_ctx);
if (unlikely(ret != 0)) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for context"
" creation.\n");
goto out_ret;
}
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (unlikely(!ctx)) {
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_user_context_size);
ret = -ENOMEM;
goto out_ret;
}
@ -780,7 +754,7 @@ static int vmw_context_define(struct drm_device *dev, void *data,
tmp = vmw_resource_reference(&ctx->res);
ret = ttm_base_object_init(tfile, &ctx->base, false, VMW_RES_CONTEXT,
&vmw_user_context_base_release, NULL);
&vmw_user_context_base_release);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&tmp);

View File

@ -407,12 +407,8 @@ static int vmw_cotable_resize(struct vmw_resource *res, size_t new_size)
* for the new COTable. Initially pin the buffer object to make sure
* we can use tryreserve without failure.
*/
buf = kzalloc(sizeof(*buf), GFP_KERNEL);
if (!buf)
return -ENOMEM;
ret = vmw_bo_init(dev_priv, buf, new_size, &vmw_mob_placement,
true, true, vmw_bo_bo_free);
ret = vmw_bo_create(dev_priv, new_size, &vmw_mob_placement,
true, true, vmw_bo_bo_free, &buf);
if (ret) {
DRM_ERROR("Failed initializing new cotable MOB.\n");
return ret;
@ -546,8 +542,6 @@ static void vmw_hw_cotable_destroy(struct vmw_resource *res)
(void) vmw_cotable_destroy(res);
}
static size_t cotable_acc_size;
/**
* vmw_cotable_free - Cotable resource destructor
*
@ -555,10 +549,7 @@ static size_t cotable_acc_size;
*/
static void vmw_cotable_free(struct vmw_resource *res)
{
struct vmw_private *dev_priv = res->dev_priv;
kfree(res);
ttm_mem_global_free(vmw_mem_glob(dev_priv), cotable_acc_size);
}
/**
@ -574,21 +565,9 @@ struct vmw_resource *vmw_cotable_alloc(struct vmw_private *dev_priv,
u32 type)
{
struct vmw_cotable *vcotbl;
struct ttm_operation_ctx ttm_opt_ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
u32 num_entries;
if (unlikely(cotable_acc_size == 0))
cotable_acc_size = ttm_round_pot(sizeof(struct vmw_cotable));
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
cotable_acc_size, &ttm_opt_ctx);
if (unlikely(ret))
return ERR_PTR(ret);
vcotbl = kzalloc(sizeof(*vcotbl), GFP_KERNEL);
if (unlikely(!vcotbl)) {
ret = -ENOMEM;
@ -622,7 +601,6 @@ struct vmw_resource *vmw_cotable_alloc(struct vmw_private *dev_priv,
out_no_init:
kfree(vcotbl);
out_no_alloc:
ttm_mem_global_free(vmw_mem_glob(dev_priv), cotable_acc_size);
return ERR_PTR(ret);
}

View File

@ -34,6 +34,7 @@
#include <drm/drm_drv.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_sysfs.h>
#include <drm/drm_gem_ttm_helper.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_range_manager.h>
#include <drm/ttm/ttm_placement.h>
@ -50,9 +51,6 @@
#define VMW_MIN_INITIAL_WIDTH 800
#define VMW_MIN_INITIAL_HEIGHT 600
#define VMWGFX_VALIDATION_MEM_GRAN (16*PAGE_SIZE)
/*
* Fully encoded drm commands. Might move to vmw_drm.h
*/
@ -165,7 +163,7 @@
static const struct drm_ioctl_desc vmw_ioctls[] = {
DRM_IOCTL_DEF_DRV(VMW_GET_PARAM, vmw_getparam_ioctl,
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(VMW_ALLOC_DMABUF, vmw_bo_alloc_ioctl,
DRM_IOCTL_DEF_DRV(VMW_ALLOC_DMABUF, vmw_gem_object_create_ioctl,
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(VMW_UNREF_DMABUF, vmw_bo_unref_ioctl,
DRM_RENDER_ALLOW),
@ -257,8 +255,8 @@ static const struct drm_ioctl_desc vmw_ioctls[] = {
};
static const struct pci_device_id vmw_pci_id_list[] = {
{ PCI_DEVICE(0x15ad, VMWGFX_PCI_ID_SVGA2) },
{ PCI_DEVICE(0x15ad, VMWGFX_PCI_ID_SVGA3) },
{ PCI_DEVICE(PCI_VENDOR_ID_VMWARE, VMWGFX_PCI_ID_SVGA2) },
{ PCI_DEVICE(PCI_VENDOR_ID_VMWARE, VMWGFX_PCI_ID_SVGA3) },
{ }
};
MODULE_DEVICE_TABLE(pci, vmw_pci_id_list);
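
PCI_VENDOR_ID_VMWARE resolves to the same vendor id the table used before, so this is a readability change only:

	/* include/linux/pci_ids.h */
	#define PCI_VENDOR_ID_VMWARE		0x15ad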
@ -366,6 +364,7 @@ static void vmw_print_sm_type(struct vmw_private *dev_priv)
[VMW_SM_4] = "SM4",
[VMW_SM_4_1] = "SM4_1",
[VMW_SM_5] = "SM_5",
[VMW_SM_5_1X] = "SM_5_1X",
[VMW_SM_MAX] = "Invalid"
};
BUILD_BUG_ON(ARRAY_SIZE(names) != (VMW_SM_MAX + 1));
@ -399,13 +398,9 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv)
* immediately succeed. This is because we're the only
* user of the bo currently.
*/
vbo = kzalloc(sizeof(*vbo), GFP_KERNEL);
if (!vbo)
return -ENOMEM;
ret = vmw_bo_init(dev_priv, vbo, PAGE_SIZE,
&vmw_sys_placement, false, true,
&vmw_bo_bo_free);
ret = vmw_bo_create(dev_priv, PAGE_SIZE,
&vmw_sys_placement, false, true,
&vmw_bo_bo_free, &vbo);
if (unlikely(ret != 0))
return ret;
@ -986,8 +981,7 @@ static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id)
goto out_err0;
}
dev_priv->tdev = ttm_object_device_init(&ttm_mem_glob, 12,
&vmw_prime_dmabuf_ops);
dev_priv->tdev = ttm_object_device_init(12, &vmw_prime_dmabuf_ops);
if (unlikely(dev_priv->tdev == NULL)) {
drm_err(&dev_priv->drm,
@ -1083,8 +1077,6 @@ static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id)
dev_priv->sm_type = VMW_SM_4;
}
vmw_validation_mem_init_ttm(dev_priv, VMWGFX_VALIDATION_MEM_GRAN);
/* SVGA_CAP2_DX2 (DefineGBSurface_v3) is needed for SM4_1 support */
if (has_sm4_context(dev_priv) &&
(dev_priv->capabilities2 & SVGA_CAP2_DX2)) {
@ -1092,8 +1084,11 @@ static int vmw_driver_load(struct vmw_private *dev_priv, u32 pci_id)
dev_priv->sm_type = VMW_SM_4_1;
if (has_sm4_1_context(dev_priv) &&
(dev_priv->capabilities2 & SVGA_CAP2_DX3)) {
if (vmw_devcap_get(dev_priv, SVGA3D_DEVCAP_SM5))
if (vmw_devcap_get(dev_priv, SVGA3D_DEVCAP_SM5)) {
dev_priv->sm_type = VMW_SM_5;
if (vmw_devcap_get(dev_priv, SVGA3D_DEVCAP_GL43))
dev_priv->sm_type = VMW_SM_5_1X;
}
}
}
@ -1397,7 +1392,6 @@ static void vmw_remove(struct pci_dev *pdev)
{
struct drm_device *dev = pci_get_drvdata(pdev);
ttm_mem_global_release(&ttm_mem_glob);
drm_dev_unregister(dev);
vmw_driver_unload(dev);
}
@ -1585,7 +1579,7 @@ static const struct file_operations vmwgfx_driver_fops = {
static const struct drm_driver driver = {
.driver_features =
DRIVER_MODESET | DRIVER_RENDER | DRIVER_ATOMIC,
DRIVER_MODESET | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_GEM,
.ioctls = vmw_ioctls,
.num_ioctls = ARRAY_SIZE(vmw_ioctls),
.master_set = vmw_master_set,
@ -1594,8 +1588,7 @@ static const struct drm_driver driver = {
.postclose = vmw_postclose,
.dumb_create = vmw_dumb_create,
.dumb_map_offset = vmw_dumb_map_offset,
.dumb_destroy = vmw_dumb_destroy,
.dumb_map_offset = drm_gem_ttm_dumb_map_offset,
.prime_fd_to_handle = vmw_prime_fd_to_handle,
.prime_handle_to_fd = vmw_prime_handle_to_fd,
@ -1641,23 +1634,19 @@ static int vmw_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_set_drvdata(pdev, &vmw->drm);
ret = ttm_mem_global_init(&ttm_mem_glob, &pdev->dev);
if (ret)
goto out_error;
ret = vmw_driver_load(vmw, ent->device);
if (ret)
goto out_release;
goto out_error;
ret = drm_dev_register(&vmw->drm, 0);
if (ret)
goto out_unload;
vmw_debugfs_gem_init(vmw);
return 0;
out_unload:
vmw_driver_unload(&vmw->drm);
out_release:
ttm_mem_global_release(&ttm_mem_glob);
out_error:
return ret;
}

View File

@ -54,9 +54,9 @@
#define VMWGFX_DRIVER_NAME "vmwgfx"
#define VMWGFX_DRIVER_DATE "20210722"
#define VMWGFX_DRIVER_DATE "20211206"
#define VMWGFX_DRIVER_MAJOR 2
#define VMWGFX_DRIVER_MINOR 19
#define VMWGFX_DRIVER_MINOR 20
#define VMWGFX_DRIVER_PATCHLEVEL 0
#define VMWGFX_FIFO_STATIC_SIZE (1024*1024)
#define VMWGFX_MAX_RELOCATIONS 2048
@ -333,7 +333,6 @@ struct vmw_sg_table {
struct page **pages;
const dma_addr_t *addrs;
struct sg_table *sgt;
unsigned long num_regions;
unsigned long num_pages;
};
@ -361,6 +360,19 @@ struct vmw_piter {
dma_addr_t (*dma_address)(struct vmw_piter *);
};
struct vmw_ttm_tt {
struct ttm_tt dma_ttm;
struct vmw_private *dev_priv;
int gmr_id;
struct vmw_mob *mob;
int mem_type;
struct sg_table sgt;
struct vmw_sg_table vsgt;
bool mapped;
bool bound;
};
/*
* enum vmw_display_unit_type - Describes the display unit
*/
@ -411,6 +423,7 @@ struct vmw_sw_context{
bool res_ht_initialized;
bool kernel;
struct vmw_fpriv *fp;
struct drm_file *filp;
uint32_t *cmd_bounce;
uint32_t cmd_bounce_size;
struct vmw_buffer_object *cur_query_bo;
@ -474,6 +487,7 @@ enum {
* @VMW_SM_4: Context support upto SM4.
* @VMW_SM_4_1: Context support upto SM4_1.
* @VMW_SM_5: Context support up to SM5.
* @VMW_SM_5_1X: Adds support for sm5_1 and gl43 extensions.
* @VMW_SM_MAX: Should be the last.
*/
enum vmw_sm_type {
@ -481,6 +495,7 @@ enum vmw_sm_type {
VMW_SM_4,
VMW_SM_4_1,
VMW_SM_5,
VMW_SM_5_1X,
VMW_SM_MAX
};
@ -628,9 +643,6 @@ struct vmw_private {
struct vmw_cmdbuf_man *cman;
DECLARE_BITMAP(irqthread_pending, VMW_IRQTHREAD_MAX);
/* Validation memory reservation */
struct vmw_validation_mem vvm;
uint32 *devcaps;
/*
@ -646,6 +658,11 @@ struct vmw_private {
#endif
};
static inline struct vmw_buffer_object *gem_to_vmw_bo(struct drm_gem_object *gobj)
{
return container_of((gobj), struct vmw_buffer_object, base.base);
}
static inline struct vmw_surface *vmw_res_to_srf(struct vmw_resource *res)
{
return container_of(res, struct vmw_surface, res);
@ -739,6 +756,24 @@ static inline bool has_sm5_context(const struct vmw_private *dev_priv)
return (dev_priv->sm_type >= VMW_SM_5);
}
/**
* has_gl43_context - Does the device support GL43 context.
* @dev_priv: Device private.
*
* Return: True if the device supports a GL43 context, false otherwise.
*/
static inline bool has_gl43_context(const struct vmw_private *dev_priv)
{
return (dev_priv->sm_type >= VMW_SM_5_1X);
}
static inline u32 vmw_max_num_uavs(struct vmw_private *dev_priv)
{
return (has_gl43_context(dev_priv) ?
SVGA3D_DX11_1_MAX_UAVIEWS : SVGA3D_MAX_UAVIEWS);
}
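
A GL43-capable device thus gets the larger DX11.1 UAV limit during command validation; assuming the usual SVGA3D limits (SVGA3D_MAX_UAVIEWS of 8 and SVGA3D_DX11_1_MAX_UAVIEWS of 64), the helper reduces to:

	/* limit values assumed from the SVGA3D device headers */
	u32 max_uavs = has_gl43_context(dev_priv) ? 64 : 8;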
extern void vmw_svga_enable(struct vmw_private *dev_priv);
extern void vmw_svga_disable(struct vmw_private *dev_priv);
@ -768,7 +803,7 @@ extern int vmw_resource_reserve(struct vmw_resource *res, bool interruptible,
bool no_backup);
extern bool vmw_resource_needs_backup(const struct vmw_resource *res);
extern int vmw_user_lookup_handle(struct vmw_private *dev_priv,
struct ttm_object_file *tfile,
struct drm_file *filp,
uint32_t handle,
struct vmw_surface **out_surf,
struct vmw_buffer_object **out_buf);
@ -834,6 +869,7 @@ static inline void vmw_user_resource_noref_release(void)
/**
* Buffer object helper functions - vmwgfx_bo.c
*/
extern bool vmw_bo_is_vmw_bo(struct ttm_buffer_object *bo);
extern int vmw_bo_pin_in_placement(struct vmw_private *vmw_priv,
struct vmw_buffer_object *bo,
struct ttm_placement *placement,
@ -858,32 +894,23 @@ extern int vmw_bo_create_kernel(struct vmw_private *dev_priv,
unsigned long size,
struct ttm_placement *placement,
struct ttm_buffer_object **p_bo);
extern int vmw_bo_create(struct vmw_private *dev_priv,
size_t size, struct ttm_placement *placement,
bool interruptible, bool pin,
void (*bo_free)(struct ttm_buffer_object *bo),
struct vmw_buffer_object **p_bo);
extern int vmw_bo_init(struct vmw_private *dev_priv,
struct vmw_buffer_object *vmw_bo,
size_t size, struct ttm_placement *placement,
bool interruptible, bool pin,
void (*bo_free)(struct ttm_buffer_object *bo));
extern int vmw_user_bo_verify_access(struct ttm_buffer_object *bo,
struct ttm_object_file *tfile);
extern int vmw_user_bo_alloc(struct vmw_private *dev_priv,
struct ttm_object_file *tfile,
uint32_t size,
bool shareable,
uint32_t *handle,
struct vmw_buffer_object **p_dma_buf,
struct ttm_base_object **p_base);
extern int vmw_user_bo_reference(struct ttm_object_file *tfile,
struct vmw_buffer_object *dma_buf,
uint32_t *handle);
extern int vmw_bo_alloc_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
extern int vmw_bo_unref_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
extern int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
extern int vmw_user_bo_lookup(struct ttm_object_file *tfile,
uint32_t id, struct vmw_buffer_object **out,
struct ttm_base_object **base);
extern int vmw_user_bo_lookup(struct drm_file *filp,
uint32_t handle,
struct vmw_buffer_object **out);
extern void vmw_bo_fence_single(struct ttm_buffer_object *bo,
struct vmw_fence_obj *fence);
extern void *vmw_bo_map_and_cache(struct vmw_buffer_object *vbo);
@ -892,16 +919,7 @@ extern void vmw_bo_move_notify(struct ttm_buffer_object *bo,
struct ttm_resource *mem);
extern void vmw_bo_swap_notify(struct ttm_buffer_object *bo);
extern struct vmw_buffer_object *
vmw_user_bo_noref_lookup(struct ttm_object_file *tfile, u32 handle);
/**
* vmw_user_bo_noref_release - release a buffer object pointer looked up
* without reference
*/
static inline void vmw_user_bo_noref_release(void)
{
ttm_base_object_noref_release();
}
vmw_user_bo_noref_lookup(struct drm_file *filp, u32 handle);
/**
* vmw_bo_adjust_prio - Adjust the buffer object eviction priority
@ -952,6 +970,19 @@ static inline void vmw_bo_prio_del(struct vmw_buffer_object *vbo, int prio)
vmw_bo_prio_adjust(vbo);
}
/**
* GEM related functionality - vmwgfx_gem.c
*/
extern int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv,
struct drm_file *filp,
uint32_t size,
uint32_t *handle,
struct vmw_buffer_object **p_vbo);
extern int vmw_gem_object_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp);
extern void vmw_gem_destroy(struct ttm_buffer_object *bo);
extern void vmw_debugfs_gem_init(struct vmw_private *vdev);
/**
* Misc Ioctl functionality - vmwgfx_ioctl.c
*/
@ -1028,9 +1059,6 @@ vmw_is_cursor_bypass3_enabled(const struct vmw_private *dev_priv)
extern int vmw_mmap(struct file *filp, struct vm_area_struct *vma);
extern void vmw_validation_mem_init_ttm(struct vmw_private *dev_priv,
size_t gran);
/**
* TTM buffer object driver - vmwgfx_ttm_buffer.c
*/
@ -1218,13 +1246,6 @@ void vmw_kms_lost_device(struct drm_device *dev);
int vmw_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
int vmw_dumb_map_offset(struct drm_file *file_priv,
struct drm_device *dev, uint32_t handle,
uint64_t *offset);
int vmw_dumb_destroy(struct drm_file *file_priv,
struct drm_device *dev,
uint32_t handle);
extern int vmw_resource_pin(struct vmw_resource *res, bool interruptible);
extern void vmw_resource_unpin(struct vmw_resource *res);
extern enum vmw_res_type vmw_res_type(const struct vmw_resource *res);
@ -1328,18 +1349,6 @@ extern int vmw_gb_surface_define_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
extern int vmw_gb_surface_reference_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int vmw_surface_gb_priv_define(struct drm_device *dev,
uint32_t user_accounting_size,
SVGA3dSurfaceAllFlags svga3d_flags,
SVGA3dSurfaceFormat format,
bool for_scanout,
uint32_t num_mip_levels,
uint32_t multisample_count,
uint32_t array_size,
struct drm_vmw_size size,
SVGA3dMSPattern multisample_pattern,
SVGA3dMSQualityLevel quality_level,
struct vmw_surface **srf_out);
extern int vmw_gb_surface_define_ext_ioctl(struct drm_device *dev,
void *data,
struct drm_file *file_priv);
@ -1348,7 +1357,6 @@ extern int vmw_gb_surface_reference_ext_ioctl(struct drm_device *dev,
struct drm_file *file_priv);
int vmw_gb_surface_define(struct vmw_private *dev_priv,
uint32_t user_accounting_size,
const struct vmw_surface_metadata *req,
struct vmw_surface **srf_out);
@ -1409,7 +1417,6 @@ void vmw_dx_streamoutput_cotable_list_scrub(struct vmw_private *dev_priv,
extern struct vmw_cmdbuf_res_manager *
vmw_cmdbuf_res_man_create(struct vmw_private *dev_priv);
extern void vmw_cmdbuf_res_man_destroy(struct vmw_cmdbuf_res_manager *man);
extern size_t vmw_cmdbuf_res_man_size(void);
extern struct vmw_resource *
vmw_cmdbuf_res_lookup(struct vmw_cmdbuf_res_manager *man,
enum vmw_cmdbuf_res_type res_type,
@ -1606,11 +1613,6 @@ vmw_bo_reference(struct vmw_buffer_object *buf)
return buf;
}
static inline struct ttm_mem_global *vmw_mem_glob(struct vmw_private *dev_priv)
{
return &ttm_mem_glob;
}
static inline void vmw_fifo_resource_inc(struct vmw_private *dev_priv)
{
atomic_inc(&dev_priv->num_fifo_resources);

View File

@ -1171,14 +1171,13 @@ static int vmw_translate_mob_ptr(struct vmw_private *dev_priv,
int ret;
vmw_validation_preload_bo(sw_context->ctx);
vmw_bo = vmw_user_bo_noref_lookup(sw_context->fp->tfile, handle);
if (IS_ERR(vmw_bo)) {
vmw_bo = vmw_user_bo_noref_lookup(sw_context->filp, handle);
if (IS_ERR_OR_NULL(vmw_bo)) {
VMW_DEBUG_USER("Could not find or use MOB buffer.\n");
return PTR_ERR(vmw_bo);
}
ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, true, false);
vmw_user_bo_noref_release();
ttm_bo_put(&vmw_bo->base);
if (unlikely(ret != 0))
return ret;
@ -1226,14 +1225,13 @@ static int vmw_translate_guest_ptr(struct vmw_private *dev_priv,
int ret;
vmw_validation_preload_bo(sw_context->ctx);
vmw_bo = vmw_user_bo_noref_lookup(sw_context->fp->tfile, handle);
if (IS_ERR(vmw_bo)) {
vmw_bo = vmw_user_bo_noref_lookup(sw_context->filp, handle);
if (IS_ERR_OR_NULL(vmw_bo)) {
VMW_DEBUG_USER("Could not find or use GMR region.\n");
return PTR_ERR(vmw_bo);
}
ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, false, false);
vmw_user_bo_noref_release();
ttm_bo_put(&vmw_bo->base);
if (unlikely(ret != 0))
return ret;
@ -2165,6 +2163,44 @@ vmw_cmd_dx_set_single_constant_buffer(struct vmw_private *dev_priv,
return 0;
}
/**
* vmw_cmd_dx_set_constant_buffer_offset - Validate
* SVGA_3D_CMD_DX_SET_VS/PS/GS/HS/DS/CS_CONSTANT_BUFFER_OFFSET command.
*
* @dev_priv: Pointer to a device private struct.
* @sw_context: The software context being used for this batch.
* @header: Pointer to the command header in the command stream.
*/
static int
vmw_cmd_dx_set_constant_buffer_offset(struct vmw_private *dev_priv,
struct vmw_sw_context *sw_context,
SVGA3dCmdHeader *header)
{
VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXSetConstantBufferOffset);
struct vmw_ctx_validation_info *ctx_node = VMW_GET_CTX_NODE(sw_context);
u32 shader_slot;
if (!has_sm5_context(dev_priv))
return -EINVAL;
if (!ctx_node)
return -EINVAL;
cmd = container_of(header, typeof(*cmd), header);
if (cmd->body.slot >= SVGA3D_DX_MAX_CONSTBUFFERS) {
VMW_DEBUG_USER("Illegal const buffer slot %u.\n",
(unsigned int) cmd->body.slot);
return -EINVAL;
}
shader_slot = cmd->header.id - SVGA_3D_CMD_DX_SET_VS_CONSTANT_BUFFER_OFFSET;
vmw_binding_cb_offset_update(ctx_node->staged, shader_slot,
cmd->body.slot, cmd->body.offsetInBytes);
return 0;
}
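
The shader_slot computation relies on the per-stage command ids being consecutive starting at the VS variant (an assumption; the command table below registers them in that order), e.g. for the pixel-shader form:

	shader_slot = SVGA_3D_CMD_DX_SET_PS_CONSTANT_BUFFER_OFFSET -
		      SVGA_3D_CMD_DX_SET_VS_CONSTANT_BUFFER_OFFSET;	/* 1 */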
/**
* vmw_cmd_dx_set_shader_res - Validate SVGA_3D_CMD_DX_SET_SHADER_RESOURCES
* command
@ -2918,7 +2954,7 @@ static int vmw_cmd_set_uav(struct vmw_private *dev_priv,
if (!has_sm5_context(dev_priv))
return -EINVAL;
if (num_uav > SVGA3D_MAX_UAVIEWS) {
if (num_uav > vmw_max_num_uavs(dev_priv)) {
VMW_DEBUG_USER("Invalid UAV binding.\n");
return -EINVAL;
}
@ -2950,7 +2986,7 @@ static int vmw_cmd_set_cs_uav(struct vmw_private *dev_priv,
if (!has_sm5_context(dev_priv))
return -EINVAL;
if (num_uav > SVGA3D_MAX_UAVIEWS) {
if (num_uav > vmw_max_num_uavs(dev_priv)) {
VMW_DEBUG_USER("Invalid UAV binding.\n");
return -EINVAL;
}
@ -3528,6 +3564,24 @@ static const struct vmw_cmd_entry vmw_cmd_entries[SVGA_3D_CMD_MAX] = {
VMW_CMD_DEF(SVGA_3D_CMD_DX_TRANSFER_FROM_BUFFER,
&vmw_cmd_dx_transfer_from_buffer,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_VS_CONSTANT_BUFFER_OFFSET,
&vmw_cmd_dx_set_constant_buffer_offset,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_PS_CONSTANT_BUFFER_OFFSET,
&vmw_cmd_dx_set_constant_buffer_offset,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_GS_CONSTANT_BUFFER_OFFSET,
&vmw_cmd_dx_set_constant_buffer_offset,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_HS_CONSTANT_BUFFER_OFFSET,
&vmw_cmd_dx_set_constant_buffer_offset,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_DS_CONSTANT_BUFFER_OFFSET,
&vmw_cmd_dx_set_constant_buffer_offset,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_SET_CS_CONSTANT_BUFFER_OFFSET,
&vmw_cmd_dx_set_constant_buffer_offset,
true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_INTRA_SURFACE_COPY, &vmw_cmd_intra_surface_copy,
true, false, true),
@ -3561,6 +3615,8 @@ static const struct vmw_cmd_entry vmw_cmd_entries[SVGA_3D_CMD_MAX] = {
&vmw_cmd_dx_define_streamoutput, true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_BIND_STREAMOUTPUT,
&vmw_cmd_dx_bind_streamoutput, true, false, true),
VMW_CMD_DEF(SVGA_3D_CMD_DX_DEFINE_RASTERIZER_STATE_V2,
&vmw_cmd_dx_so_define, true, false, true),
};
bool vmw_cmd_describe(const void *buf, u32 *size, char const **cmd)
@ -3869,8 +3925,7 @@ vmw_execbuf_copy_fence_user(struct vmw_private *dev_priv,
fence_rep.fd = -1;
}
ttm_ref_object_base_unref(vmw_fp->tfile, fence_handle,
TTM_REF_USAGE);
ttm_ref_object_base_unref(vmw_fp->tfile, fence_handle);
VMW_DEBUG_USER("Fence copy error. Syncing.\n");
(void) vmw_fence_obj_wait(fence, false, false,
VMW_FENCE_WAIT_TIMEOUT);
@ -4054,8 +4109,6 @@ int vmw_execbuf_process(struct drm_file *file_priv,
struct sync_file *sync_file = NULL;
DECLARE_VAL_CONTEXT(val_ctx, &sw_context->res_ht, 1);
vmw_validation_set_val_mem(&val_ctx, &dev_priv->vvm);
if (flags & DRM_VMW_EXECBUF_FLAG_EXPORT_FENCE_FD) {
out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
if (out_fence_fd < 0) {
@ -4101,6 +4154,7 @@ int vmw_execbuf_process(struct drm_file *file_priv,
sw_context->kernel = true;
}
sw_context->filp = file_priv;
sw_context->fp = vmw_fpriv(file_priv);
INIT_LIST_HEAD(&sw_context->ctx_list);
sw_context->cur_query_bo = dev_priv->pinned_bo;

View File

@ -394,22 +394,15 @@ static int vmw_fb_create_bo(struct vmw_private *vmw_priv,
struct vmw_buffer_object *vmw_bo;
int ret;
vmw_bo = kmalloc(sizeof(*vmw_bo), GFP_KERNEL);
if (!vmw_bo) {
ret = -ENOMEM;
goto err_unlock;
}
ret = vmw_bo_init(vmw_priv, vmw_bo, size,
ret = vmw_bo_create(vmw_priv, size,
&vmw_sys_placement,
false, false,
&vmw_bo_bo_free);
&vmw_bo_bo_free, &vmw_bo);
if (unlikely(ret != 0))
goto err_unlock; /* init frees the buffer on failure */
return ret;
*out = vmw_bo;
err_unlock:
return ret;
}

View File

@ -37,9 +37,6 @@ struct vmw_fence_manager {
spinlock_t lock;
struct list_head fence_list;
struct work_struct work;
u32 user_fence_size;
u32 fence_size;
u32 event_fence_action_size;
bool fifo_down;
struct list_head cleanup_list;
uint32_t pending_actions[VMW_ACTION_MAX];
@ -304,11 +301,6 @@ struct vmw_fence_manager *vmw_fence_manager_init(struct vmw_private *dev_priv)
INIT_LIST_HEAD(&fman->cleanup_list);
INIT_WORK(&fman->work, &vmw_fence_work_func);
fman->fifo_down = true;
fman->user_fence_size = ttm_round_pot(sizeof(struct vmw_user_fence)) +
TTM_OBJ_EXTRA_SIZE;
fman->fence_size = ttm_round_pot(sizeof(struct vmw_fence_obj));
fman->event_fence_action_size =
ttm_round_pot(sizeof(struct vmw_event_fence_action));
mutex_init(&fman->goal_irq_mutex);
fman->ctx = dma_fence_context_alloc(1);
@ -560,14 +552,8 @@ static void vmw_user_fence_destroy(struct vmw_fence_obj *fence)
{
struct vmw_user_fence *ufence =
container_of(fence, struct vmw_user_fence, fence);
struct vmw_fence_manager *fman = fman_from_fence(fence);
ttm_base_object_kfree(ufence, base);
/*
* Free kernel space accounting.
*/
ttm_mem_global_free(vmw_mem_glob(fman->dev_priv),
fman->user_fence_size);
}
static void vmw_user_fence_base_release(struct ttm_base_object **p_base)
@ -590,23 +576,8 @@ int vmw_user_fence_create(struct drm_file *file_priv,
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct vmw_user_fence *ufence;
struct vmw_fence_obj *tmp;
struct ttm_mem_global *mem_glob = vmw_mem_glob(fman->dev_priv);
struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
int ret;
/*
* Kernel memory space accounting, since this object may
* be created by a user-space request.
*/
ret = ttm_mem_global_alloc(mem_glob, fman->user_fence_size,
&ctx);
if (unlikely(ret != 0))
return ret;
ufence = kzalloc(sizeof(*ufence), GFP_KERNEL);
if (unlikely(!ufence)) {
ret = -ENOMEM;
@ -625,9 +596,10 @@ int vmw_user_fence_create(struct drm_file *file_priv,
* vmw_user_fence_base_release.
*/
tmp = vmw_fence_obj_reference(&ufence->fence);
ret = ttm_base_object_init(tfile, &ufence->base, false,
VMW_RES_FENCE,
&vmw_user_fence_base_release, NULL);
&vmw_user_fence_base_release);
if (unlikely(ret != 0)) {
@ -646,7 +618,6 @@ out_err:
tmp = &ufence->fence;
vmw_fence_obj_unreference(&tmp);
out_no_object:
ttm_mem_global_free(mem_glob, fman->user_fence_size);
return ret;
}
@ -831,8 +802,7 @@ out:
*/
if (ret == 0 && (arg->wait_options & DRM_VMW_WAIT_OPTION_UNREF))
return ttm_ref_object_base_unref(tfile, arg->handle,
TTM_REF_USAGE);
return ttm_ref_object_base_unref(tfile, arg->handle);
return ret;
}
@ -874,8 +844,7 @@ int vmw_fence_obj_unref_ioctl(struct drm_device *dev, void *data,
(struct drm_vmw_fence_arg *) data;
return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
arg->handle,
TTM_REF_USAGE);
arg->handle);
}
/**
@ -1121,7 +1090,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
if (user_fence_rep != NULL) {
ret = ttm_ref_object_add(vmw_fp->tfile, base,
TTM_REF_USAGE, NULL, false);
NULL, false);
if (unlikely(ret != 0)) {
DRM_ERROR("Failed to reference a fence "
"object.\n");
@ -1164,7 +1133,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
return 0;
out_no_create:
if (user_fence_rep != NULL)
ttm_ref_object_base_unref(tfile, handle, TTM_REF_USAGE);
ttm_ref_object_base_unref(tfile, handle);
out_no_ref_obj:
vmw_fence_obj_unreference(&fence);
return ret;

View File

@ -0,0 +1,294 @@
/* SPDX-License-Identifier: GPL-2.0 OR MIT */
/*
* Copyright 2021 VMware, Inc.
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy,
* modify, merge, publish, distribute, sublicense, and/or sell copies
* of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#include "vmwgfx_drv.h"
#include "drm/drm_prime.h"
#include "drm/drm_gem_ttm_helper.h"
/**
* vmw_buffer_object - Convert a struct ttm_buffer_object to a struct
* vmw_buffer_object.
*
* @bo: Pointer to the TTM buffer object.
* Return: Pointer to the struct vmw_buffer_object embedding the
* TTM buffer object.
*/
static struct vmw_buffer_object *
vmw_buffer_object(struct ttm_buffer_object *bo)
{
return container_of(bo, struct vmw_buffer_object, base);
}
static void vmw_gem_object_free(struct drm_gem_object *gobj)
{
struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gobj);
if (bo) {
ttm_bo_put(bo);
}
}
static int vmw_gem_object_open(struct drm_gem_object *obj,
struct drm_file *file_priv)
{
return 0;
}
static void vmw_gem_object_close(struct drm_gem_object *obj,
struct drm_file *file_priv)
{
}
static int vmw_gem_pin_private(struct drm_gem_object *obj, bool do_pin)
{
struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(obj);
struct vmw_buffer_object *vbo = vmw_buffer_object(bo);
int ret;
ret = ttm_bo_reserve(bo, false, false, NULL);
if (unlikely(ret != 0))
goto err;
vmw_bo_pin_reserved(vbo, do_pin);
ttm_bo_unreserve(bo);
err:
return ret;
}
static int vmw_gem_object_pin(struct drm_gem_object *obj)
{
return vmw_gem_pin_private(obj, true);
}
static void vmw_gem_object_unpin(struct drm_gem_object *obj)
{
vmw_gem_pin_private(obj, false);
}
static struct sg_table *vmw_gem_object_get_sg_table(struct drm_gem_object *obj)
{
struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(obj);
struct vmw_ttm_tt *vmw_tt =
container_of(bo->ttm, struct vmw_ttm_tt, dma_ttm);
if (vmw_tt->vsgt.sgt)
return vmw_tt->vsgt.sgt;
return drm_prime_pages_to_sg(obj->dev, vmw_tt->dma_ttm.pages, vmw_tt->dma_ttm.num_pages);
}
static const struct drm_gem_object_funcs vmw_gem_object_funcs = {
.free = vmw_gem_object_free,
.open = vmw_gem_object_open,
.close = vmw_gem_object_close,
.print_info = drm_gem_ttm_print_info,
.pin = vmw_gem_object_pin,
.unpin = vmw_gem_object_unpin,
.get_sg_table = vmw_gem_object_get_sg_table,
.vmap = drm_gem_ttm_vmap,
.vunmap = drm_gem_ttm_vunmap,
.mmap = drm_gem_ttm_mmap,
};
/**
* vmw_gem_destroy - vmw buffer object destructor
*
* @bo: Pointer to the embedded struct ttm_buffer_object
*/
void vmw_gem_destroy(struct ttm_buffer_object *bo)
{
struct vmw_buffer_object *vbo = vmw_buffer_object(bo);
WARN_ON(vbo->dirty);
WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree));
vmw_bo_unmap(vbo);
drm_gem_object_release(&vbo->base.base);
kfree(vbo);
}
int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv,
struct drm_file *filp,
uint32_t size,
uint32_t *handle,
struct vmw_buffer_object **p_vbo)
{
int ret;
ret = vmw_bo_create(dev_priv, size,
(dev_priv->has_mob) ?
&vmw_sys_placement :
&vmw_vram_sys_placement,
true, false, &vmw_gem_destroy, p_vbo);
(*p_vbo)->base.base.funcs = &vmw_gem_object_funcs;
if (ret != 0)
goto out_no_bo;
ret = drm_gem_handle_create(filp, &(*p_vbo)->base.base, handle);
/* drop reference from allocate - handle holds it now */
drm_gem_object_put(&(*p_vbo)->base.base);
out_no_bo:
return ret;
}
int vmw_gem_object_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *filp)
{
struct vmw_private *dev_priv = vmw_priv(dev);
union drm_vmw_alloc_dmabuf_arg *arg =
(union drm_vmw_alloc_dmabuf_arg *)data;
struct drm_vmw_alloc_dmabuf_req *req = &arg->req;
struct drm_vmw_dmabuf_rep *rep = &arg->rep;
struct vmw_buffer_object *vbo;
uint32_t handle;
int ret;
ret = vmw_gem_object_create_with_handle(dev_priv, filp,
req->size, &handle, &vbo);
if (ret)
goto out_no_bo;
rep->handle = handle;
rep->map_handle = drm_vma_node_offset_addr(&vbo->base.base.vma_node);
rep->cur_gmr_id = handle;
rep->cur_gmr_offset = 0;
out_no_bo:
return ret;
}
#if defined(CONFIG_DEBUG_FS)
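/* Print placement, type, size and reference counts for one buffer object. */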
static void vmw_bo_print_info(int id, struct vmw_buffer_object *bo, struct seq_file *m)
{
const char *placement;
const char *type;
switch (bo->base.resource->mem_type) {
case TTM_PL_SYSTEM:
placement = " CPU";
break;
case VMW_PL_GMR:
placement = " GMR";
break;
case VMW_PL_MOB:
placement = " MOB";
break;
case VMW_PL_SYSTEM:
placement = "VCPU";
break;
case TTM_PL_VRAM:
placement = "VRAM";
break;
default:
placement = "None";
break;
}
switch (bo->base.type) {
case ttm_bo_type_device:
type = "device";
break;
case ttm_bo_type_kernel:
type = "kernel";
break;
case ttm_bo_type_sg:
type = "sg ";
break;
default:
type = "none ";
break;
}
seq_printf(m, "\t\t0x%08x: %12ld bytes %s, type = %s",
id, bo->base.base.size, placement, type);
seq_printf(m, ", priority = %u, pin_count = %u, GEM refs = %d, TTM refs = %d",
bo->base.priority,
bo->base.pin_count,
kref_read(&bo->base.base.refcount),
kref_read(&bo->base.kref));
seq_puts(m, "\n");
}
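/* debugfs show(): walk every open DRM file and list its GEM buffer objects. */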
static int vmw_debugfs_gem_info_show(struct seq_file *m, void *unused)
{
struct vmw_private *vdev = (struct vmw_private *)m->private;
struct drm_device *dev = &vdev->drm;
struct drm_file *file;
int r;
r = mutex_lock_interruptible(&dev->filelist_mutex);
if (r)
return r;
list_for_each_entry(file, &dev->filelist, lhead) {
struct task_struct *task;
struct drm_gem_object *gobj;
int id;
/*
* Although we have a valid reference on file->pid, that does
* not guarantee that the task_struct who called get_pid() is
* still alive (e.g. get_pid(current) => fork() => exit()).
* Therefore, we need to protect this ->comm access using RCU.
*/
rcu_read_lock();
task = pid_task(file->pid, PIDTYPE_PID);
seq_printf(m, "pid %8d command %s:\n", pid_nr(file->pid),
task ? task->comm : "<unknown>");
rcu_read_unlock();
spin_lock(&file->table_lock);
idr_for_each_entry(&file->object_idr, gobj, id) {
struct vmw_buffer_object *bo = gem_to_vmw_bo(gobj);
vmw_bo_print_info(id, bo, m);
}
spin_unlock(&file->table_lock);
}
mutex_unlock(&dev->filelist_mutex);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(vmw_debugfs_gem_info);
#endif
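/* Register the vmwgfx_gem_info file in debugfs; a no-op without CONFIG_DEBUG_FS. */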
void vmw_debugfs_gem_init(struct vmw_private *vdev)
{
#if defined(CONFIG_DEBUG_FS)
struct drm_minor *minor = vdev->drm.primary;
struct dentry *root = minor->debugfs_root;
debugfs_create_file("vmwgfx_gem_info", 0444, root, vdev,
&vmw_debugfs_gem_info_fops);
#endif
}

View File

@ -42,6 +42,7 @@ struct vmwgfx_gmrid_man {
uint32_t max_gmr_ids;
uint32_t max_gmr_pages;
uint32_t used_gmr_pages;
uint8_t type;
};
static struct vmwgfx_gmrid_man *to_gmrid_manager(struct ttm_resource_manager *man)
@ -132,6 +133,18 @@ static void vmw_gmrid_man_put_node(struct ttm_resource_manager *man,
kfree(res);
}
static void vmw_gmrid_man_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct vmwgfx_gmrid_man *gman = to_gmrid_manager(man);
BUG_ON(gman->type != VMW_PL_GMR && gman->type != VMW_PL_MOB);
drm_printf(printer, "%s's used: %u pages, max: %u pages, %u id's\n",
(gman->type == VMW_PL_MOB) ? "Mob" : "GMR",
gman->used_gmr_pages, gman->max_gmr_pages, gman->max_gmr_ids);
}
static const struct ttm_resource_manager_func vmw_gmrid_manager_func;
int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type)
@ -146,12 +159,12 @@ int vmw_gmrid_man_init(struct vmw_private *dev_priv, int type)
man = &gman->manager;
man->func = &vmw_gmrid_manager_func;
/* TODO: This is most likely not correct */
man->use_tt = true;
ttm_resource_manager_init(man, 0);
spin_lock_init(&gman->lock);
gman->used_gmr_pages = 0;
ida_init(&gman->gmr_ida);
gman->type = type;
switch (type) {
case VMW_PL_GMR:
@ -190,4 +203,5 @@ void vmw_gmrid_man_fini(struct vmw_private *dev_priv, int type)
static const struct ttm_resource_manager_func vmw_gmrid_manager_func = {
.alloc = vmw_gmrid_man_get_node,
.free = vmw_gmrid_man_put_node,
.debug = vmw_gmrid_man_debug
};

View File

@ -105,6 +105,9 @@ int vmw_getparam_ioctl(struct drm_device *dev, void *data,
case DRM_VMW_PARAM_SM5:
param->value = has_sm5_context(dev_priv);
break;
case DRM_VMW_PARAM_GL43:
param->value = has_gl43_context(dev_priv);
break;
default:
return -EINVAL;
}

View File

@ -843,8 +843,6 @@ static void vmw_framebuffer_surface_destroy(struct drm_framebuffer *framebuffer)
drm_framebuffer_cleanup(framebuffer);
vmw_surface_unreference(&vfbs->surface);
if (vfbs->base.user_obj)
ttm_base_object_unref(&vfbs->base.user_obj);
kfree(vfbs);
}
@ -989,6 +987,16 @@ out_err1:
* Buffer-object framebuffer code
*/
static int vmw_framebuffer_bo_create_handle(struct drm_framebuffer *fb,
struct drm_file *file_priv,
unsigned int *handle)
{
struct vmw_framebuffer_bo *vfbd =
vmw_framebuffer_to_vfbd(fb);
return drm_gem_handle_create(file_priv, &vfbd->buffer->base.base, handle);
}
static void vmw_framebuffer_bo_destroy(struct drm_framebuffer *framebuffer)
{
struct vmw_framebuffer_bo *vfbd =
@ -996,8 +1004,6 @@ static void vmw_framebuffer_bo_destroy(struct drm_framebuffer *framebuffer)
drm_framebuffer_cleanup(framebuffer);
vmw_bo_unreference(&vfbd->buffer);
if (vfbd->base.user_obj)
ttm_base_object_unref(&vfbd->base.user_obj);
kfree(vfbd);
}
@ -1063,6 +1069,7 @@ static int vmw_framebuffer_bo_dirty_ext(struct drm_framebuffer *framebuffer,
}
static const struct drm_framebuffer_funcs vmw_framebuffer_bo_funcs = {
.create_handle = vmw_framebuffer_bo_create_handle,
.destroy = vmw_framebuffer_bo_destroy,
.dirty = vmw_framebuffer_bo_dirty_ext,
};
@ -1188,7 +1195,7 @@ static int vmw_create_bo_proxy(struct drm_device *dev,
metadata.base_size.depth = 1;
metadata.scanout = true;
ret = vmw_gb_surface_define(vmw_priv(dev), 0, &metadata, srf_out);
ret = vmw_gb_surface_define(vmw_priv(dev), &metadata, srf_out);
if (ret) {
DRM_ERROR("Failed to allocate proxy content buffer\n");
return ret;
@ -1251,6 +1258,7 @@ static int vmw_kms_new_framebuffer_bo(struct vmw_private *dev_priv,
goto out_err1;
}
vfbd->base.base.obj[0] = &bo->base.base;
drm_helper_mode_fill_fb_struct(dev, &vfbd->base.base, mode_cmd);
vfbd->base.bo = true;
vfbd->buffer = vmw_bo_reference(bo);
@ -1368,34 +1376,13 @@ static struct drm_framebuffer *vmw_kms_fb_create(struct drm_device *dev,
const struct drm_mode_fb_cmd2 *mode_cmd)
{
struct vmw_private *dev_priv = vmw_priv(dev);
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct vmw_framebuffer *vfb = NULL;
struct vmw_surface *surface = NULL;
struct vmw_buffer_object *bo = NULL;
struct ttm_base_object *user_obj;
int ret;
/*
* Take a reference on the user object of the resource
* backing the kms fb. This ensures that user-space handle
* lookups on that resource will always work as long as
* it's registered with a kms framebuffer. This is important,
* since vmw_execbuf_process identifies resources in the
* command stream using user-space handles.
*/
user_obj = ttm_base_object_lookup(tfile, mode_cmd->handles[0]);
if (unlikely(user_obj == NULL)) {
DRM_ERROR("Could not locate requested kms frame buffer.\n");
return ERR_PTR(-ENOENT);
}
/**
* End conditioned code.
*/
/* returns either a bo or surface */
ret = vmw_user_lookup_handle(dev_priv, tfile,
ret = vmw_user_lookup_handle(dev_priv, file_priv,
mode_cmd->handles[0],
&surface, &bo);
if (ret)
@ -1428,10 +1415,8 @@ err_out:
if (ret) {
DRM_ERROR("failed to create vmw_framebuffer: %i\n", ret);
ttm_base_object_unref(&user_obj);
return ERR_PTR(ret);
} else
vfb->user_obj = user_obj;
}
return &vfb->base;
}

View File

@ -219,7 +219,6 @@ struct vmw_framebuffer {
int (*pin)(struct vmw_framebuffer *fb);
int (*unpin)(struct vmw_framebuffer *fb);
bool bo;
struct ttm_base_object *user_obj;
uint32_t user_handle;
};

View File

@ -146,9 +146,6 @@ static int vmw_setup_otable_base(struct vmw_private *dev_priv,
if (otable->size <= PAGE_SIZE) {
mob->pt_level = VMW_MOBFMT_PTDEPTH_0;
mob->pt_root_page = vmw_piter_dma_addr(&iter);
} else if (vsgt->num_regions == 1) {
mob->pt_level = SVGA3D_MOBFMT_RANGE;
mob->pt_root_page = vmw_piter_dma_addr(&iter);
} else {
ret = vmw_mob_pt_populate(dev_priv, mob);
if (unlikely(ret != 0))
@ -413,10 +410,9 @@ struct vmw_mob *vmw_mob_create(unsigned long data_pages)
* @mob: Pointer to the mob the pagetable of which we want to
* populate.
*
* This function allocates memory to be used for the pagetable, and
* adjusts TTM memory accounting accordingly. Returns ENOMEM if
* memory resources aren't sufficient and may cause TTM buffer objects
* to be swapped out by using the TTM memory accounting function.
* This function allocates memory to be used for the pagetable.
* Returns ENOMEM if memory resources aren't sufficient and may
* cause TTM buffer objects to be swapped out.
*/
static int vmw_mob_pt_populate(struct vmw_private *dev_priv,
struct vmw_mob *mob)
@ -624,9 +620,6 @@ int vmw_mob_bind(struct vmw_private *dev_priv,
if (likely(num_data_pages == 1)) {
mob->pt_level = VMW_MOBFMT_PTDEPTH_0;
mob->pt_root_page = vmw_piter_dma_addr(&data_iter);
} else if (vsgt->num_regions == 1) {
mob->pt_level = SVGA3D_MOBFMT_RANGE;
mob->pt_root_page = vmw_piter_dma_addr(&data_iter);
} else if (unlikely(mob->pt_bo == NULL)) {
ret = vmw_mob_pt_populate(dev_priv, mob);
if (unlikely(ret != 0))

View File

@ -451,7 +451,7 @@ int vmw_overlay_ioctl(struct drm_device *dev, void *data,
goto out_unlock;
}
ret = vmw_user_bo_lookup(tfile, arg->handle, &buf, NULL);
ret = vmw_user_bo_lookup(file_priv, arg->handle, &buf);
if (ret)
goto out_unlock;

View File

@ -57,7 +57,6 @@ enum vmw_bo_dirty_method {
* @ref_count: Reference count for this structure
* @bitmap_size: The size of the bitmap in bits. Typically equal to the
* number of pages in the bo.
* @size: The accounting size for this struct.
* @bitmap: A bitmap where each bit represents a page. A set bit means a
* dirty page.
*/
@ -68,7 +67,6 @@ struct vmw_bo_dirty {
unsigned int change_count;
unsigned int ref_count;
unsigned long bitmap_size;
size_t size;
unsigned long bitmap[];
};
@ -233,12 +231,8 @@ int vmw_bo_dirty_add(struct vmw_buffer_object *vbo)
{
struct vmw_bo_dirty *dirty = vbo->dirty;
pgoff_t num_pages = vbo->base.resource->num_pages;
size_t size, acc_size;
size_t size;
int ret;
static struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
if (dirty) {
dirty->ref_count++;
@ -246,20 +240,12 @@ int vmw_bo_dirty_add(struct vmw_buffer_object *vbo)
}
size = sizeof(*dirty) + BITS_TO_LONGS(num_pages) * sizeof(long);
acc_size = ttm_round_pot(size);
ret = ttm_mem_global_alloc(&ttm_mem_glob, acc_size, &ctx);
if (ret) {
VMW_DEBUG_USER("Out of graphics memory for buffer object "
"dirty tracker.\n");
return ret;
}
dirty = kvzalloc(size, GFP_KERNEL);
if (!dirty) {
ret = -ENOMEM;
goto out_no_dirty;
}
dirty->size = acc_size;
dirty->bitmap_size = num_pages;
dirty->start = dirty->bitmap_size;
dirty->end = 0;
@ -285,7 +271,6 @@ int vmw_bo_dirty_add(struct vmw_buffer_object *vbo)
return 0;
out_no_dirty:
ttm_mem_global_free(&ttm_mem_glob, acc_size);
return ret;
}
@ -304,10 +289,7 @@ void vmw_bo_dirty_release(struct vmw_buffer_object *vbo)
struct vmw_bo_dirty *dirty = vbo->dirty;
if (dirty && --dirty->ref_count == 0) {
size_t acc_size = dirty->size;
kvfree(dirty);
ttm_mem_global_free(&ttm_mem_glob, acc_size);
vbo->dirty = NULL;
}
}

View File

@ -85,6 +85,5 @@ int vmw_prime_handle_to_fd(struct drm_device *dev,
int *prime_fd)
{
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
return ttm_prime_handle_to_fd(tfile, handle, flags, prime_fd);
}

View File

@ -320,11 +320,12 @@ vmw_user_resource_noref_lookup_handle(struct vmw_private *dev_priv,
* The pointers pointed at by out_surf and out_buf need to be NULL.
*/
int vmw_user_lookup_handle(struct vmw_private *dev_priv,
struct ttm_object_file *tfile,
struct drm_file *filp,
uint32_t handle,
struct vmw_surface **out_surf,
struct vmw_buffer_object **out_buf)
{
struct ttm_object_file *tfile = vmw_fpriv(filp)->tfile;
struct vmw_resource *res;
int ret;
@ -339,7 +340,7 @@ int vmw_user_lookup_handle(struct vmw_private *dev_priv,
}
*out_surf = NULL;
ret = vmw_user_bo_lookup(tfile, handle, out_buf, NULL);
ret = vmw_user_bo_lookup(filp, handle, out_buf);
return ret;
}
@ -362,14 +363,10 @@ static int vmw_resource_buf_alloc(struct vmw_resource *res,
return 0;
}
backup = kzalloc(sizeof(*backup), GFP_KERNEL);
if (unlikely(!backup))
return -ENOMEM;
ret = vmw_bo_init(res->dev_priv, backup, res->backup_size,
res->func->backup_placement,
interruptible, false,
&vmw_bo_bo_free);
ret = vmw_bo_create(res->dev_priv, res->backup_size,
res->func->backup_placement,
interruptible, false,
&vmw_bo_bo_free, &backup);
if (unlikely(ret != 0))
goto out_no_bo;

View File

@ -442,19 +442,15 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane,
vps->bo_size = 0;
}
vps->bo = kzalloc(sizeof(*vps->bo), GFP_KERNEL);
if (!vps->bo)
return -ENOMEM;
vmw_svga_enable(dev_priv);
/* After we have allocated the backing store we might not be able to
* resume the overlays; this is preferred to failing to alloc.
*/
vmw_overlay_pause_all(dev_priv);
ret = vmw_bo_init(dev_priv, vps->bo, size,
&vmw_vram_placement,
false, true, &vmw_bo_bo_free);
ret = vmw_bo_create(dev_priv, size,
&vmw_vram_placement,
false, true, &vmw_bo_bo_free, &vps->bo);
vmw_overlay_resume_all(dev_priv);
if (ret) {
vps->bo = NULL; /* vmw_bo_init frees on error */

View File

@ -53,10 +53,6 @@ struct vmw_dx_shader {
struct list_head cotable_head;
};
static uint64_t vmw_user_shader_size;
static uint64_t vmw_shader_size;
static size_t vmw_shader_dx_size;
static void vmw_user_shader_free(struct vmw_resource *res);
static struct vmw_resource *
vmw_user_shader_base_to_res(struct ttm_base_object *base);
@ -79,7 +75,6 @@ static void vmw_dx_shader_commit_notify(struct vmw_resource *res,
enum vmw_cmdbuf_res_state state);
static bool vmw_shader_id_ok(u32 user_key, SVGA3dShaderType shader_type);
static u32 vmw_shader_key(u32 user_key, SVGA3dShaderType shader_type);
static uint64_t vmw_user_shader_size;
static const struct vmw_user_resource_conv user_shader_conv = {
.object_type = VMW_RES_SHADER,
@ -563,16 +558,14 @@ void vmw_dx_shader_cotable_list_scrub(struct vmw_private *dev_priv,
*
* @res: The shader resource
*
* Frees the DX shader resource and updates memory accounting.
* Frees the DX shader resource.
*/
static void vmw_dx_shader_res_free(struct vmw_resource *res)
{
struct vmw_private *dev_priv = res->dev_priv;
struct vmw_dx_shader *shader = vmw_res_to_dx_shader(res);
vmw_resource_unreference(&shader->cotable);
kfree(shader);
ttm_mem_global_free(vmw_mem_glob(dev_priv), vmw_shader_dx_size);
}
/**
@ -594,30 +587,13 @@ int vmw_dx_shader_add(struct vmw_cmdbuf_res_manager *man,
struct vmw_dx_shader *shader;
struct vmw_resource *res;
struct vmw_private *dev_priv = ctx->dev_priv;
struct ttm_operation_ctx ttm_opt_ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
if (!vmw_shader_dx_size)
vmw_shader_dx_size = ttm_round_pot(sizeof(*shader));
if (!vmw_shader_id_ok(user_key, shader_type))
return -EINVAL;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv), vmw_shader_dx_size,
&ttm_opt_ctx);
if (ret) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for shader "
"creation.\n");
return ret;
}
shader = kmalloc(sizeof(*shader), GFP_KERNEL);
if (!shader) {
ttm_mem_global_free(vmw_mem_glob(dev_priv), vmw_shader_dx_size);
return -ENOMEM;
}
@ -669,21 +645,15 @@ static void vmw_user_shader_free(struct vmw_resource *res)
{
struct vmw_user_shader *ushader =
container_of(res, struct vmw_user_shader, shader.res);
struct vmw_private *dev_priv = res->dev_priv;
ttm_base_object_kfree(ushader, base);
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_user_shader_size);
}
static void vmw_shader_free(struct vmw_resource *res)
{
struct vmw_shader *shader = vmw_res_to_shader(res);
struct vmw_private *dev_priv = res->dev_priv;
kfree(shader);
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_shader_size);
}
/*
@ -706,8 +676,7 @@ int vmw_shader_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_shader_arg *arg = (struct drm_vmw_shader_arg *)data;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
return ttm_ref_object_base_unref(tfile, arg->handle,
TTM_REF_USAGE);
return ttm_ref_object_base_unref(tfile, arg->handle);
}
static int vmw_user_shader_alloc(struct vmw_private *dev_priv,
@ -722,31 +691,10 @@ static int vmw_user_shader_alloc(struct vmw_private *dev_priv,
{
struct vmw_user_shader *ushader;
struct vmw_resource *res, *tmp;
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
if (unlikely(vmw_user_shader_size == 0))
vmw_user_shader_size =
ttm_round_pot(sizeof(struct vmw_user_shader)) +
VMW_IDA_ACC_SIZE + TTM_OBJ_EXTRA_SIZE;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
vmw_user_shader_size,
&ctx);
if (unlikely(ret != 0)) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for shader "
"creation.\n");
goto out;
}
ushader = kzalloc(sizeof(*ushader), GFP_KERNEL);
if (unlikely(!ushader)) {
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_user_shader_size);
ret = -ENOMEM;
goto out;
}
@ -769,7 +717,7 @@ static int vmw_user_shader_alloc(struct vmw_private *dev_priv,
tmp = vmw_resource_reference(res);
ret = ttm_base_object_init(tfile, &ushader->base, false,
VMW_RES_SHADER,
&vmw_user_shader_base_release, NULL);
&vmw_user_shader_base_release);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&tmp);
@ -793,31 +741,10 @@ static struct vmw_resource *vmw_shader_alloc(struct vmw_private *dev_priv,
{
struct vmw_shader *shader;
struct vmw_resource *res;
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
if (unlikely(vmw_shader_size == 0))
vmw_shader_size =
ttm_round_pot(sizeof(struct vmw_shader)) +
VMW_IDA_ACC_SIZE;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
vmw_shader_size,
&ctx);
if (unlikely(ret != 0)) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for shader "
"creation.\n");
goto out_err;
}
shader = kzalloc(sizeof(*shader), GFP_KERNEL);
if (unlikely(!shader)) {
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_shader_size);
ret = -ENOMEM;
goto out_err;
}
@ -849,8 +776,7 @@ static int vmw_shader_define(struct drm_device *dev, struct drm_file *file_priv,
int ret;
if (buffer_handle != SVGA3D_INVALID_ID) {
ret = vmw_user_bo_lookup(tfile, buffer_handle,
&buffer, NULL);
ret = vmw_user_bo_lookup(file_priv, buffer_handle, &buffer);
if (unlikely(ret != 0)) {
VMW_DEBUG_USER("Couldn't find buffer for shader creation.\n");
return ret;
@ -966,13 +892,8 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
if (!vmw_shader_id_ok(user_key, shader_type))
return -EINVAL;
/* Allocate and pin a DMA buffer */
buf = kzalloc(sizeof(*buf), GFP_KERNEL);
if (unlikely(!buf))
return -ENOMEM;
ret = vmw_bo_init(dev_priv, buf, size, &vmw_sys_placement,
true, true, vmw_bo_bo_free);
ret = vmw_bo_create(dev_priv, size, &vmw_sys_placement,
true, true, vmw_bo_bo_free, &buf);
if (unlikely(ret != 0))
goto out;

View File

@ -32,12 +32,10 @@
* struct vmw_user_simple_resource - User-space simple resource struct
*
* @base: The TTM base object implementing user-space visibility.
* @account_size: How much memory was accounted for this object.
* @simple: The embedded struct vmw_simple_resource.
*/
struct vmw_user_simple_resource {
struct ttm_base_object base;
size_t account_size;
struct vmw_simple_resource simple;
/*
* Nothing to be placed after @simple, since size of @simple is
@ -91,18 +89,15 @@ static int vmw_simple_resource_init(struct vmw_private *dev_priv,
*
* @res: The struct vmw_resource member of the simple resource object.
*
* Frees memory and memory accounting for the object.
* Frees memory for the object.
*/
static void vmw_simple_resource_free(struct vmw_resource *res)
{
struct vmw_user_simple_resource *usimple =
container_of(res, struct vmw_user_simple_resource,
simple.res);
struct vmw_private *dev_priv = res->dev_priv;
size_t size = usimple->account_size;
ttm_base_object_kfree(usimple, base);
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
}
/**
@ -149,39 +144,19 @@ vmw_simple_resource_create_ioctl(struct drm_device *dev, void *data,
struct vmw_resource *res;
struct vmw_resource *tmp;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
size_t alloc_size;
size_t account_size;
int ret;
alloc_size = offsetof(struct vmw_user_simple_resource, simple) +
func->size;
account_size = ttm_round_pot(alloc_size) + VMW_IDA_ACC_SIZE +
TTM_OBJ_EXTRA_SIZE;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv), account_size,
&ctx);
if (ret) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for %s"
" creation.\n", func->res_func.type_name);
goto out_ret;
}
usimple = kzalloc(alloc_size, GFP_KERNEL);
if (!usimple) {
ttm_mem_global_free(vmw_mem_glob(dev_priv),
account_size);
ret = -ENOMEM;
goto out_ret;
}
usimple->simple.func = func;
usimple->account_size = account_size;
res = &usimple->simple.res;
usimple->base.shareable = false;
usimple->base.tfile = NULL;
@ -197,7 +172,7 @@ vmw_simple_resource_create_ioctl(struct drm_device *dev, void *data,
tmp = vmw_resource_reference(res);
ret = ttm_base_object_init(tfile, &usimple->base, false,
func->ttm_res_type,
&vmw_simple_resource_base_release, NULL);
&vmw_simple_resource_base_release);
if (ret) {
vmw_resource_unreference(&tmp);

View File

@ -279,18 +279,15 @@ static bool vmw_view_id_ok(u32 user_key, enum vmw_view_type view_type)
*
* @res: Pointer to a struct vmw_resource
*
* Frees memory and memory accounting held by a struct vmw_view.
* Frees memory held by the struct vmw_view.
*/
static void vmw_view_res_free(struct vmw_resource *res)
{
struct vmw_view *view = vmw_view(res);
size_t size = offsetof(struct vmw_view, cmd) + view->cmd_size;
struct vmw_private *dev_priv = res->dev_priv;
vmw_resource_unreference(&view->cotable);
vmw_resource_unreference(&view->srf);
kfree_rcu(view, rcu);
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
}
/**
@ -327,10 +324,6 @@ int vmw_view_add(struct vmw_cmdbuf_res_manager *man,
struct vmw_private *dev_priv = ctx->dev_priv;
struct vmw_resource *res;
struct vmw_view *view;
struct ttm_operation_ctx ttm_opt_ctx = {
.interruptible = true,
.no_wait_gpu = false
};
size_t size;
int ret;
@ -347,16 +340,8 @@ int vmw_view_add(struct vmw_cmdbuf_res_manager *man,
size = offsetof(struct vmw_view, cmd) + cmd_size;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv), size, &ttm_opt_ctx);
if (ret) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for view creation\n");
return ret;
}
view = kmalloc(size, GFP_KERNEL);
if (!view) {
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
return -ENOMEM;
}
@ -582,4 +567,8 @@ static void vmw_so_build_asserts(void)
offsetof(SVGA3dCmdDXDefineRenderTargetView, sid));
BUILD_BUG_ON(offsetof(struct vmw_view_define, sid) !=
offsetof(SVGA3dCmdDXDefineDepthStencilView, sid));
BUILD_BUG_ON(offsetof(struct vmw_view_define, sid) !=
offsetof(SVGA3dCmdDXDefineUAView, sid));
BUILD_BUG_ON(offsetof(struct vmw_view_define, sid) !=
offsetof(SVGA3dCmdDXDefineDepthStencilView_v2, sid));
}

View File

@ -93,7 +93,10 @@ static inline enum vmw_view_type vmw_view_cmd_to_type(u32 id)
id == SVGA_3D_CMD_DX_DESTROY_UA_VIEW)
return vmw_view_ua;
if (tmp > (u32)vmw_view_max)
if (id == SVGA_3D_CMD_DX_DEFINE_DEPTHSTENCIL_VIEW_V2)
return vmw_view_ds;
if (tmp > (u32)vmw_view_ds)
return vmw_view_max;
return (enum vmw_view_type) tmp;
@ -123,6 +126,7 @@ static inline enum vmw_so_type vmw_so_cmd_to_type(u32 id)
case SVGA_3D_CMD_DX_DESTROY_DEPTHSTENCIL_STATE:
return vmw_so_ds;
case SVGA_3D_CMD_DX_DEFINE_RASTERIZER_STATE:
case SVGA_3D_CMD_DX_DEFINE_RASTERIZER_STATE_V2:
case SVGA_3D_CMD_DX_DESTROY_RASTERIZER_STATE:
return vmw_so_rs;
case SVGA_3D_CMD_DX_DEFINE_SAMPLER_STATE:

View File

@ -1123,7 +1123,7 @@ vmw_stdu_primary_plane_prepare_fb(struct drm_plane *plane,
}
if (!vps->surf) {
ret = vmw_gb_surface_define(dev_priv, 0, &metadata,
ret = vmw_gb_surface_define(dev_priv, &metadata,
&vps->surf);
if (ret != 0) {
DRM_ERROR("Couldn't allocate STDU surface.\n");

View File

@ -60,8 +60,6 @@ static int vmw_dx_streamoutput_unbind(struct vmw_resource *res, bool readback,
static void vmw_dx_streamoutput_commit_notify(struct vmw_resource *res,
enum vmw_cmdbuf_res_state state);
static size_t vmw_streamoutput_size;
static const struct vmw_res_func vmw_dx_streamoutput_func = {
.res_type = vmw_res_streamoutput,
.needs_backup = true,
@ -254,12 +252,10 @@ vmw_dx_streamoutput_lookup(struct vmw_cmdbuf_res_manager *man,
static void vmw_dx_streamoutput_res_free(struct vmw_resource *res)
{
struct vmw_private *dev_priv = res->dev_priv;
struct vmw_dx_streamoutput *so = vmw_res_to_dx_streamoutput(res);
vmw_resource_unreference(&so->cotable);
kfree(so);
ttm_mem_global_free(vmw_mem_glob(dev_priv), vmw_streamoutput_size);
}
static void vmw_dx_streamoutput_hw_destroy(struct vmw_resource *res)
@ -284,27 +280,10 @@ int vmw_dx_streamoutput_add(struct vmw_cmdbuf_res_manager *man,
struct vmw_dx_streamoutput *so;
struct vmw_resource *res;
struct vmw_private *dev_priv = ctx->dev_priv;
struct ttm_operation_ctx ttm_opt_ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
if (!vmw_streamoutput_size)
vmw_streamoutput_size = ttm_round_pot(sizeof(*so));
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
vmw_streamoutput_size, &ttm_opt_ctx);
if (ret) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for streamout.\n");
return ret;
}
so = kmalloc(sizeof(*so), GFP_KERNEL);
if (!so) {
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_streamoutput_size);
return -ENOMEM;
}

View File

@ -45,16 +45,12 @@
* @prime: The TTM prime object.
* @base: The TTM base object handling user-space visibility.
* @srf: The surface metadata.
* @size: TTM accounting size for the surface.
* @master: Master of the creating client. Used for security check.
* @backup_base: The TTM base object of the backup buffer.
*/
struct vmw_user_surface {
struct ttm_prime_object prime;
struct vmw_surface srf;
uint32_t size;
struct drm_master *master;
struct ttm_base_object *backup_base;
};
/**
@ -74,13 +70,11 @@ struct vmw_surface_offset {
/**
* struct vmw_surface_dirty - Surface dirty-tracker
* @cache: Cached layout information of the surface.
* @size: Accounting size for the struct vmw_surface_dirty.
* @num_subres: Number of subresources.
* @boxes: Array of SVGA3dBoxes indicating dirty regions. One per subresource.
*/
struct vmw_surface_dirty {
struct vmw_surface_cache cache;
size_t size;
u32 num_subres;
SVGA3dBox boxes[];
};
@ -129,9 +123,6 @@ static const struct vmw_user_resource_conv user_surface_conv = {
const struct vmw_user_resource_conv *user_surface_converter =
&user_surface_conv;
static uint64_t vmw_user_surface_size;
static const struct vmw_res_func vmw_legacy_surface_func = {
.res_type = vmw_res_surface,
.needs_backup = false,
@ -359,7 +350,7 @@ static void vmw_surface_dma_encode(struct vmw_surface *srf,
* vmw_surface.
*
* Destroys the device surface associated with a struct vmw_surface if
* any, and adjusts accounting and resource count accordingly.
* any, and adjusts resource count accordingly.
*/
static void vmw_hw_surface_destroy(struct vmw_resource *res)
{
@ -666,8 +657,6 @@ static void vmw_user_surface_free(struct vmw_resource *res)
struct vmw_surface *srf = vmw_res_to_srf(res);
struct vmw_user_surface *user_srf =
container_of(srf, struct vmw_user_surface, srf);
struct vmw_private *dev_priv = srf->res.dev_priv;
uint32_t size = user_srf->size;
WARN_ON_ONCE(res->dirty);
if (user_srf->master)
@ -676,7 +665,6 @@ static void vmw_user_surface_free(struct vmw_resource *res)
kfree(srf->metadata.sizes);
kfree(srf->snooper.image);
ttm_prime_object_kfree(user_srf, prime);
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
}
/**
@ -696,8 +684,6 @@ static void vmw_user_surface_base_release(struct ttm_base_object **p_base)
struct vmw_resource *res = &user_srf->srf.res;
*p_base = NULL;
if (user_srf->backup_base)
ttm_base_object_unref(&user_srf->backup_base);
vmw_resource_unreference(&res);
}
@ -715,7 +701,7 @@ int vmw_surface_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_surface_arg *arg = (struct drm_vmw_surface_arg *)data;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
return ttm_ref_object_base_unref(tfile, arg->sid, TTM_REF_USAGE);
return ttm_ref_object_base_unref(tfile, arg->sid);
}
/**
@ -740,23 +726,14 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_surface_create_req *req = &arg->req;
struct drm_vmw_surface_arg *rep = &arg->rep;
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
int ret;
int i, j;
uint32_t cur_bo_offset;
struct drm_vmw_size *cur_size;
struct vmw_surface_offset *cur_offset;
uint32_t num_sizes;
uint32_t size;
const SVGA3dSurfaceDesc *desc;
if (unlikely(vmw_user_surface_size == 0))
vmw_user_surface_size = ttm_round_pot(sizeof(*user_srf)) +
VMW_IDA_ACC_SIZE + TTM_OBJ_EXTRA_SIZE;
num_sizes = 0;
for (i = 0; i < DRM_VMW_MAX_SURFACE_FACES; ++i) {
if (req->mip_levels[i] > DRM_VMW_MAX_MIP_LEVELS)
@ -768,10 +745,6 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
num_sizes == 0)
return -EINVAL;
size = vmw_user_surface_size +
ttm_round_pot(num_sizes * sizeof(struct drm_vmw_size)) +
ttm_round_pot(num_sizes * sizeof(struct vmw_surface_offset));
desc = vmw_surface_get_desc(req->format);
if (unlikely(desc->blockDesc == SVGA3DBLOCKDESC_NONE)) {
VMW_DEBUG_USER("Invalid format %d for surface creation.\n",
@ -779,18 +752,10 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
size, &ctx);
if (unlikely(ret != 0)) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for surface.\n");
goto out_unlock;
}
user_srf = kzalloc(sizeof(*user_srf), GFP_KERNEL);
if (unlikely(!user_srf)) {
ret = -ENOMEM;
goto out_no_user_srf;
goto out_unlock;
}
srf = &user_srf->srf;
@ -805,7 +770,6 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
memcpy(metadata->mip_levels, req->mip_levels,
sizeof(metadata->mip_levels));
metadata->num_sizes = num_sizes;
user_srf->size = size;
metadata->sizes =
memdup_user((struct drm_vmw_size __user *)(unsigned long)
req->size_addr,
@ -883,22 +847,22 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
if (dev_priv->has_mob && req->shareable) {
uint32_t backup_handle;
ret = vmw_user_bo_alloc(dev_priv, tfile,
res->backup_size,
true,
&backup_handle,
&res->backup,
&user_srf->backup_base);
ret = vmw_gem_object_create_with_handle(dev_priv,
file_priv,
res->backup_size,
&backup_handle,
&res->backup);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&res);
goto out_unlock;
}
vmw_bo_reference(res->backup);
}
tmp = vmw_resource_reference(&srf->res);
ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime,
req->shareable, VMW_RES_SURFACE,
&vmw_user_surface_base_release, NULL);
&vmw_user_surface_base_release);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&tmp);
@ -916,8 +880,6 @@ out_no_offsets:
kfree(metadata->sizes);
out_no_sizes:
ttm_prime_object_kfree(user_srf, prime);
out_no_user_srf:
ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
out_unlock:
return ret;
}
@ -955,7 +917,6 @@ vmw_surface_handle_reference(struct vmw_private *dev_priv,
VMW_DEBUG_USER("Referenced object is not a surface.\n");
goto out_bad_resource;
}
if (handle_type != DRM_VMW_HANDLE_PRIME) {
bool require_exist = false;
@ -980,8 +941,7 @@ vmw_surface_handle_reference(struct vmw_private *dev_priv,
if (unlikely(drm_is_render_client(file_priv)))
require_exist = true;
ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL,
require_exist);
ret = ttm_ref_object_add(tfile, base, NULL, require_exist);
if (unlikely(ret != 0)) {
DRM_ERROR("Could not add a reference to a surface.\n");
goto out_bad_resource;
@ -995,7 +955,7 @@ out_bad_resource:
ttm_base_object_unref(&base);
out_no_lookup:
if (handle_type == DRM_VMW_HANDLE_PRIME)
(void) ttm_ref_object_base_unref(tfile, handle, TTM_REF_USAGE);
(void) ttm_ref_object_base_unref(tfile, handle);
return ret;
}
@ -1045,7 +1005,7 @@ int vmw_surface_reference_ioctl(struct drm_device *dev, void *data,
if (unlikely(ret != 0)) {
VMW_DEBUG_USER("copy_to_user failed %p %u\n", user_sizes,
srf->metadata.num_sizes);
ttm_ref_object_base_unref(tfile, base->handle, TTM_REF_USAGE);
ttm_ref_object_base_unref(tfile, base->handle);
ret = -EFAULT;
}
@ -1459,7 +1419,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
struct vmw_resource *res;
struct vmw_resource *tmp;
int ret = 0;
uint32_t size;
uint32_t backup_handle = 0;
SVGA3dSurfaceAllFlags svga3d_flags_64 =
SVGA3D_FLAGS_64(req->svga3d_flags_upper_32_bits,
@ -1506,12 +1465,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
return -EINVAL;
}
if (unlikely(vmw_user_surface_size == 0))
vmw_user_surface_size = ttm_round_pot(sizeof(*user_srf)) +
VMW_IDA_ACC_SIZE + TTM_OBJ_EXTRA_SIZE;
size = vmw_user_surface_size;
metadata.flags = svga3d_flags_64;
metadata.format = req->base.format;
metadata.mip_levels[0] = req->base.mip_levels;
@ -1526,7 +1479,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
drm_vmw_surface_flag_scanout;
/* Define a surface based on the parameters. */
ret = vmw_gb_surface_define(dev_priv, size, &metadata, &srf);
ret = vmw_gb_surface_define(dev_priv, &metadata, &srf);
if (ret != 0) {
VMW_DEBUG_USER("Failed to define surface.\n");
return ret;
@ -1539,9 +1492,8 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
res = &user_srf->srf.res;
if (req->base.buffer_handle != SVGA3D_INVALID_ID) {
ret = vmw_user_bo_lookup(tfile, req->base.buffer_handle,
&res->backup,
&user_srf->backup_base);
ret = vmw_user_bo_lookup(file_priv, req->base.buffer_handle,
&res->backup);
if (ret == 0) {
if (res->backup->base.base.size < res->backup_size) {
VMW_DEBUG_USER("Surface backup buffer too small.\n");
@ -1554,14 +1506,15 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
}
} else if (req->base.drm_surface_flags &
(drm_vmw_surface_flag_create_buffer |
drm_vmw_surface_flag_coherent))
ret = vmw_user_bo_alloc(dev_priv, tfile,
res->backup_size,
req->base.drm_surface_flags &
drm_vmw_surface_flag_shareable,
&backup_handle,
&res->backup,
&user_srf->backup_base);
drm_vmw_surface_flag_coherent)) {
ret = vmw_gem_object_create_with_handle(dev_priv, file_priv,
res->backup_size,
&backup_handle,
&res->backup);
if (ret == 0)
vmw_bo_reference(res->backup);
}
if (unlikely(ret != 0)) {
vmw_resource_unreference(&res);
@ -1593,7 +1546,7 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
req->base.drm_surface_flags &
drm_vmw_surface_flag_shareable,
VMW_RES_SURFACE,
&vmw_user_surface_base_release, NULL);
&vmw_user_surface_base_release);
if (unlikely(ret != 0)) {
vmw_resource_unreference(&tmp);
@ -1613,7 +1566,6 @@ vmw_gb_surface_define_internal(struct drm_device *dev,
rep->buffer_size = 0;
rep->buffer_handle = SVGA3D_INVALID_ID;
}
vmw_resource_unreference(&res);
out_unlock:
@ -1636,12 +1588,11 @@ vmw_gb_surface_reference_internal(struct drm_device *dev,
struct drm_file *file_priv)
{
struct vmw_private *dev_priv = vmw_priv(dev);
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct vmw_surface *srf;
struct vmw_user_surface *user_srf;
struct vmw_surface_metadata *metadata;
struct ttm_base_object *base;
uint32_t backup_handle;
u32 backup_handle;
int ret;
ret = vmw_surface_handle_reference(dev_priv, file_priv, req->sid,
@ -1658,14 +1609,12 @@ vmw_gb_surface_reference_internal(struct drm_device *dev,
metadata = &srf->metadata;
mutex_lock(&dev_priv->cmdbuf_mutex); /* Protect res->backup */
ret = vmw_user_bo_reference(tfile, srf->res.backup, &backup_handle);
ret = drm_gem_handle_create(file_priv, &srf->res.backup->base.base,
&backup_handle);
mutex_unlock(&dev_priv->cmdbuf_mutex);
if (unlikely(ret != 0)) {
DRM_ERROR("Could not add a reference to a GB surface "
"backup buffer.\n");
(void) ttm_ref_object_base_unref(tfile, base->handle,
TTM_REF_USAGE);
if (ret != 0) {
drm_err(dev, "Wasn't able to create a backing handle for surface sid = %u.\n",
req->sid);
goto out_bad_resource;
}
@ -1955,11 +1904,7 @@ static int vmw_surface_dirty_alloc(struct vmw_resource *res)
u32 num_mip;
u32 num_subres;
u32 num_samples;
size_t dirty_size, acc_size;
static struct ttm_operation_ctx ctx = {
.interruptible = false,
.no_wait_gpu = false
};
size_t dirty_size;
int ret;
if (metadata->array_size)
@ -1973,14 +1918,6 @@ static int vmw_surface_dirty_alloc(struct vmw_resource *res)
num_subres = num_layers * num_mip;
dirty_size = struct_size(dirty, boxes, num_subres);
acc_size = ttm_round_pot(dirty_size);
ret = ttm_mem_global_alloc(vmw_mem_glob(res->dev_priv),
acc_size, &ctx);
if (ret) {
VMW_DEBUG_USER("Out of graphics memory for surface "
"dirty tracker.\n");
return ret;
}
dirty = kvzalloc(dirty_size, GFP_KERNEL);
if (!dirty) {
@ -1990,13 +1927,12 @@ static int vmw_surface_dirty_alloc(struct vmw_resource *res)
num_samples = max_t(u32, 1, metadata->multisample_count);
ret = vmw_surface_setup_cache(&metadata->base_size, metadata->format,
num_mip, num_layers, num_samples,
&dirty->cache);
num_mip, num_layers, num_samples,
&dirty->cache);
if (ret)
goto out_no_cache;
dirty->num_subres = num_subres;
dirty->size = acc_size;
res->dirty = (struct vmw_resource_dirty *) dirty;
return 0;
@ -2004,7 +1940,6 @@ static int vmw_surface_dirty_alloc(struct vmw_resource *res)
out_no_cache:
kvfree(dirty);
out_no_dirty:
ttm_mem_global_free(vmw_mem_glob(res->dev_priv), acc_size);
return ret;
}
@ -2015,10 +1950,8 @@ static void vmw_surface_dirty_free(struct vmw_resource *res)
{
struct vmw_surface_dirty *dirty =
(struct vmw_surface_dirty *) res->dirty;
size_t acc_size = dirty->size;
kvfree(dirty);
ttm_mem_global_free(vmw_mem_glob(res->dev_priv), acc_size);
res->dirty = NULL;
}
@ -2051,8 +1984,6 @@ static int vmw_surface_clean(struct vmw_resource *res)
* vmw_gb_surface_define - Define a private GB surface
*
* @dev_priv: Pointer to a device private.
* @user_accounting_size: Used to track user-space memory usage, set
* to 0 for kernel mode only memory
* @metadata: Metadata representing the surface to create.
* @user_srf_out: allocated user_srf. Set to NULL on failure.
*
@ -2062,17 +1993,12 @@ static int vmw_surface_clean(struct vmw_resource *res)
* it available to user mode drivers.
*/
int vmw_gb_surface_define(struct vmw_private *dev_priv,
uint32_t user_accounting_size,
const struct vmw_surface_metadata *req,
struct vmw_surface **srf_out)
{
struct vmw_surface_metadata *metadata;
struct vmw_user_surface *user_srf;
struct vmw_surface *srf;
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
u32 sample_count = 1;
u32 num_layers = 1;
int ret;
@ -2113,22 +2039,13 @@ int vmw_gb_surface_define(struct vmw_private *dev_priv,
if (req->sizes != NULL)
return -EINVAL;
ret = ttm_mem_global_alloc(vmw_mem_glob(dev_priv),
user_accounting_size, &ctx);
if (ret != 0) {
if (ret != -ERESTARTSYS)
DRM_ERROR("Out of graphics memory for surface.\n");
goto out_unlock;
}
user_srf = kzalloc(sizeof(*user_srf), GFP_KERNEL);
if (unlikely(!user_srf)) {
ret = -ENOMEM;
goto out_no_user_srf;
goto out_unlock;
}
*srf_out = &user_srf->srf;
user_srf->size = user_accounting_size;
user_srf->prime.base.shareable = false;
user_srf->prime.base.tfile = NULL;
@ -2179,9 +2096,6 @@ int vmw_gb_surface_define(struct vmw_private *dev_priv,
return ret;
out_no_user_srf:
ttm_mem_global_free(vmw_mem_glob(dev_priv), user_accounting_size);
out_unlock:
return ret;
}

View File

@ -167,19 +167,6 @@ struct ttm_placement vmw_nonfixed_placement = {
.busy_placement = &sys_placement_flags
};
struct vmw_ttm_tt {
struct ttm_tt dma_ttm;
struct vmw_private *dev_priv;
int gmr_id;
struct vmw_mob *mob;
int mem_type;
struct sg_table sgt;
struct vmw_sg_table vsgt;
uint64_t sg_alloc_size;
bool mapped;
bool bound;
};
const size_t vmw_tt_size = sizeof(struct vmw_ttm_tt);
/**
@ -300,17 +287,8 @@ static int vmw_ttm_map_for_dma(struct vmw_ttm_tt *vmw_tt)
static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
{
struct vmw_private *dev_priv = vmw_tt->dev_priv;
struct ttm_mem_global *glob = vmw_mem_glob(dev_priv);
struct vmw_sg_table *vsgt = &vmw_tt->vsgt;
struct ttm_operation_ctx ctx = {
.interruptible = true,
.no_wait_gpu = false
};
struct vmw_piter iter;
dma_addr_t old;
int ret = 0;
static size_t sgl_size;
static size_t sgt_size;
if (vmw_tt->mapped)
return 0;
@ -319,20 +297,12 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
vsgt->pages = vmw_tt->dma_ttm.pages;
vsgt->num_pages = vmw_tt->dma_ttm.num_pages;
vsgt->addrs = vmw_tt->dma_ttm.dma_address;
vsgt->sgt = &vmw_tt->sgt;
vsgt->sgt = NULL;
switch (dev_priv->map_mode) {
case vmw_dma_map_bind:
case vmw_dma_map_populate:
if (unlikely(!sgl_size)) {
sgl_size = ttm_round_pot(sizeof(struct scatterlist));
sgt_size = ttm_round_pot(sizeof(struct sg_table));
}
vmw_tt->sg_alloc_size = sgt_size + sgl_size * vsgt->num_pages;
ret = ttm_mem_global_alloc(glob, vmw_tt->sg_alloc_size, &ctx);
if (unlikely(ret != 0))
return ret;
vsgt->sgt = &vmw_tt->sgt;
ret = sg_alloc_table_from_pages_segment(
&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
(unsigned long)vsgt->num_pages << PAGE_SHIFT,
@ -340,15 +310,6 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
if (ret)
goto out_sg_alloc_fail;
if (vsgt->num_pages > vmw_tt->sgt.orig_nents) {
uint64_t over_alloc =
sgl_size * (vsgt->num_pages -
vmw_tt->sgt.orig_nents);
ttm_mem_global_free(glob, over_alloc);
vmw_tt->sg_alloc_size -= over_alloc;
}
ret = vmw_ttm_map_for_dma(vmw_tt);
if (unlikely(ret != 0))
goto out_map_fail;
@ -358,16 +319,6 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
break;
}
old = ~((dma_addr_t) 0);
vmw_tt->vsgt.num_regions = 0;
for (vmw_piter_start(&iter, vsgt, 0); vmw_piter_next(&iter);) {
dma_addr_t cur = vmw_piter_dma_addr(&iter);
if (cur != old + PAGE_SIZE)
vmw_tt->vsgt.num_regions++;
old = cur;
}
vmw_tt->mapped = true;
return 0;
@ -375,7 +326,6 @@ out_map_fail:
sg_free_table(vmw_tt->vsgt.sgt);
vmw_tt->vsgt.sgt = NULL;
out_sg_alloc_fail:
ttm_mem_global_free(glob, vmw_tt->sg_alloc_size);
return ret;
}
@ -401,8 +351,6 @@ static void vmw_ttm_unmap_dma(struct vmw_ttm_tt *vmw_tt)
vmw_ttm_unmap_from_dma(vmw_tt);
sg_free_table(vmw_tt->vsgt.sgt);
vmw_tt->vsgt.sgt = NULL;
ttm_mem_global_free(vmw_mem_glob(dev_priv),
vmw_tt->sg_alloc_size);
break;
default:
break;
@ -522,7 +470,6 @@ static void vmw_ttm_destroy(struct ttm_device *bdev, struct ttm_tt *ttm)
static int vmw_ttm_populate(struct ttm_device *bdev,
struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
{
unsigned int i;
int ret;
/* TODO: maybe completely drop this ? */
@ -530,22 +477,7 @@ static int vmw_ttm_populate(struct ttm_device *bdev,
return 0;
ret = ttm_pool_alloc(&bdev->pool, ttm, ctx);
if (ret)
return ret;
for (i = 0; i < ttm->num_pages; ++i) {
ret = ttm_mem_global_alloc_page(&ttm_mem_glob, ttm->pages[i],
PAGE_SIZE, ctx);
if (ret)
goto error;
}
return 0;
error:
while (i--)
ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i],
PAGE_SIZE);
ttm_pool_free(&bdev->pool, ttm);
return ret;
}
@ -554,7 +486,6 @@ static void vmw_ttm_unpopulate(struct ttm_device *bdev,
{
struct vmw_ttm_tt *vmw_tt = container_of(ttm, struct vmw_ttm_tt,
dma_ttm);
unsigned int i;
vmw_ttm_unbind(bdev, ttm);
@ -565,10 +496,6 @@ static void vmw_ttm_unpopulate(struct ttm_device *bdev,
vmw_ttm_unmap_dma(vmw_tt);
for (i = 0; i < ttm->num_pages; ++i)
ttm_mem_global_free_page(&ttm_mem_glob, ttm->pages[i],
PAGE_SIZE);
ttm_pool_free(&bdev->pool, ttm);
}

View File

@ -27,30 +27,44 @@
#include "vmwgfx_drv.h"
static struct ttm_buffer_object *vmw_bo_vm_lookup(struct ttm_device *bdev,
unsigned long offset,
unsigned long pages)
static int vmw_bo_vm_lookup(struct ttm_device *bdev,
struct drm_file *filp,
unsigned long offset,
unsigned long pages,
struct ttm_buffer_object **p_bo)
{
struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev);
struct drm_device *drm = &dev_priv->drm;
struct drm_vma_offset_node *node;
struct ttm_buffer_object *bo = NULL;
int ret;
*p_bo = NULL;
drm_vma_offset_lock_lookup(bdev->vma_manager);
node = drm_vma_offset_lookup_locked(bdev->vma_manager, offset, pages);
if (likely(node)) {
bo = container_of(node, struct ttm_buffer_object,
*p_bo = container_of(node, struct ttm_buffer_object,
base.vma_node);
bo = ttm_bo_get_unless_zero(bo);
*p_bo = ttm_bo_get_unless_zero(*p_bo);
}
drm_vma_offset_unlock_lookup(bdev->vma_manager);
if (!bo)
if (!*p_bo) {
drm_err(drm, "Could not find buffer object to map\n");
return -EINVAL;
}
return bo;
if (!drm_vma_node_is_allowed(node, filp)) {
ret = -EACCES;
goto out_no_access;
}
return 0;
out_no_access:
ttm_bo_put(*p_bo);
return ret;
}
int vmw_mmap(struct file *filp, struct vm_area_struct *vma)
@ -64,7 +78,6 @@ int vmw_mmap(struct file *filp, struct vm_area_struct *vma)
};
struct drm_file *file_priv = filp->private_data;
struct vmw_private *dev_priv = vmw_priv(file_priv->minor->dev);
struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
struct ttm_device *bdev = &dev_priv->bdev;
struct ttm_buffer_object *bo;
int ret;
@ -72,13 +85,9 @@ int vmw_mmap(struct file *filp, struct vm_area_struct *vma)
if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET_START))
return -EINVAL;
bo = vmw_bo_vm_lookup(bdev, vma->vm_pgoff, vma_pages(vma));
if (unlikely(!bo))
return -EINVAL;
ret = vmw_user_bo_verify_access(bo, tfile);
ret = vmw_bo_vm_lookup(bdev, file_priv, vma->vm_pgoff, vma_pages(vma), &bo);
if (unlikely(ret != 0))
goto out_unref;
return ret;
ret = ttm_bo_mmap_obj(vma, bo);
if (unlikely(ret != 0))
@ -99,38 +108,3 @@ out_unref:
return ret;
}
/* struct vmw_validation_mem callback */
static int vmw_vmt_reserve(struct vmw_validation_mem *m, size_t size)
{
static struct ttm_operation_ctx ctx = {.interruptible = false,
.no_wait_gpu = false};
struct vmw_private *dev_priv = container_of(m, struct vmw_private, vvm);
return ttm_mem_global_alloc(vmw_mem_glob(dev_priv), size, &ctx);
}
/* struct vmw_validation_mem callback */
static void vmw_vmt_unreserve(struct vmw_validation_mem *m, size_t size)
{
struct vmw_private *dev_priv = container_of(m, struct vmw_private, vvm);
return ttm_mem_global_free(vmw_mem_glob(dev_priv), size);
}
/**
* vmw_validation_mem_init_ttm - Interface the validation memory tracker
* to ttm.
* @dev_priv: Pointer to struct vmw_private. The reason we choose a vmw private
* rather than a struct vmw_validation_mem is to make sure assumption in the
* callbacks that struct vmw_private derives from struct vmw_validation_mem
* holds true.
* @gran: The recommended allocation granularity
*/
void vmw_validation_mem_init_ttm(struct vmw_private *dev_priv, size_t gran)
{
struct vmw_validation_mem *vvm = &dev_priv->vvm;
vvm->reserve_mem = vmw_vmt_reserve;
vvm->unreserve_mem = vmw_vmt_unreserve;
vvm->gran = gran;
}

View File

@ -117,7 +117,7 @@ int vmw_stream_unref_ioctl(struct drm_device *dev, void *data,
struct drm_vmw_stream_arg *arg = (struct drm_vmw_stream_arg *)data;
return ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
arg->stream_id, TTM_REF_USAGE);
arg->stream_id);
}
/**

View File

@ -29,6 +29,9 @@
#include "vmwgfx_validation.h"
#include "vmwgfx_drv.h"
#define VMWGFX_VALIDATION_MEM_GRAN (16*PAGE_SIZE)
/**
* struct vmw_validation_bo_node - Buffer object validation metadata.
* @base: Metadata used for TTM reservation- and validation.
@ -113,13 +116,8 @@ void *vmw_validation_mem_alloc(struct vmw_validation_context *ctx,
struct page *page;
if (ctx->vm && ctx->vm_size_left < PAGE_SIZE) {
int ret = ctx->vm->reserve_mem(ctx->vm, ctx->vm->gran);
if (ret)
return NULL;
ctx->vm_size_left += ctx->vm->gran;
ctx->total_mem += ctx->vm->gran;
ctx->vm_size_left += VMWGFX_VALIDATION_MEM_GRAN;
ctx->total_mem += VMWGFX_VALIDATION_MEM_GRAN;
}
page = alloc_page(GFP_KERNEL | __GFP_ZERO);
@ -159,7 +157,6 @@ static void vmw_validation_mem_free(struct vmw_validation_context *ctx)
ctx->mem_size_left = 0;
if (ctx->vm && ctx->total_mem) {
ctx->vm->unreserve_mem(ctx->vm, ctx->total_mem);
ctx->total_mem = 0;
ctx->vm_size_left = 0;
}

View File

@ -39,21 +39,6 @@
#define VMW_RES_DIRTY_SET BIT(0)
#define VMW_RES_DIRTY_CLEAR BIT(1)
/**
* struct vmw_validation_mem - Custom interface to provide memory reservations
* for the validation code.
* @reserve_mem: Callback to reserve memory
* @unreserve_mem: Callback to unreserve memory
* @gran: Reservation granularity. Contains a hint how much memory should
* be reserved in each call to @reserve_mem(). A slow implementation may want
* reservation to be done in large batches.
*/
struct vmw_validation_mem {
int (*reserve_mem)(struct vmw_validation_mem *m, size_t size);
void (*unreserve_mem)(struct vmw_validation_mem *m, size_t size);
size_t gran;
};
/**
* struct vmw_validation_context - Per command submission validation context
* @ht: Hash table used to find resource- or buffer object duplicates
@ -129,21 +114,6 @@ vmw_validation_has_bos(struct vmw_validation_context *ctx)
return !list_empty(&ctx->bo_list);
}
/**
* vmw_validation_set_val_mem - Register a validation mem object for
* validation memory reservation
* @ctx: The validation context
* @vm: Pointer to a struct vmw_validation_mem
*
* Must be set before the first attempt to allocate validation memory.
*/
static inline void
vmw_validation_set_val_mem(struct vmw_validation_context *ctx,
struct vmw_validation_mem *vm)
{
ctx->vm = vm;
}
/**
* vmw_validation_set_ht - Register a hash table for duplicate finding
* @ctx: The validation context
@ -190,22 +160,6 @@ vmw_validation_bo_fence(struct vmw_validation_context *ctx,
(void *) fence);
}
/**
* vmw_validation_context_init - Initialize a validation context
* @ctx: Pointer to the validation context to initialize
*
* This function initializes a validation context with @merge_dups set
* to false
*/
static inline void
vmw_validation_context_init(struct vmw_validation_context *ctx)
{
memset(ctx, 0, sizeof(*ctx));
INIT_LIST_HEAD(&ctx->resource_list);
INIT_LIST_HEAD(&ctx->resource_ctx_list);
INIT_LIST_HEAD(&ctx->bo_list);
}
/**
* vmw_validation_align - Align a validation memory allocation
* @val: The size to be aligned

View File

@ -540,6 +540,10 @@ static int __init of_platform_default_populate_init(void)
of_node_put(node);
}
node = of_get_compatible_child(of_chosen, "simple-framebuffer");
of_platform_device_create(node, NULL, NULL);
of_node_put(node);
/* Populate everything else. */
of_platform_default_populate(NULL, NULL, NULL);

View File

@ -541,26 +541,7 @@ static struct platform_driver simplefb_driver = {
.remove = simplefb_remove,
};
static int __init simplefb_init(void)
{
int ret;
struct device_node *np;
ret = platform_driver_register(&simplefb_driver);
if (ret)
return ret;
if (IS_ENABLED(CONFIG_OF_ADDRESS) && of_chosen) {
for_each_child_of_node(of_chosen, np) {
if (of_device_is_compatible(np, "simple-framebuffer"))
of_platform_device_create(np, NULL, NULL);
}
}
return 0;
}
fs_initcall(simplefb_init);
module_platform_driver(simplefb_driver);
MODULE_AUTHOR("Stephen Warren <swarren@wwwdotorg.org>");
MODULE_DESCRIPTION("Simple framebuffer driver");

View File

@ -33,6 +33,9 @@ void drm_fb_xrgb8888_to_rgb888(void *dst, unsigned int dst_pitch, const void *sr
void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch,
const void *vaddr, const struct drm_framebuffer *fb,
const struct drm_rect *clip);
void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst, unsigned int dst_pitch,
const void *vaddr, const struct drm_framebuffer *fb,
const struct drm_rect *clip);
void drm_fb_xrgb8888_to_gray8(void *dst, unsigned int dst_pitch, const void *vaddr,
const struct drm_framebuffer *fb, const struct drm_rect *clip);

View File

@ -3,7 +3,7 @@
#ifndef DRM_GEM_TTM_HELPER_H
#define DRM_GEM_TTM_HELPER_H
#include <linux/kernel.h>
#include <linux/container_of.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>

View File

@ -11,8 +11,8 @@
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <linux/container_of.h>
#include <linux/dma-buf-map.h>
#include <linux/kernel.h> /* for container_of() */
struct drm_mode_create_dumb;
struct drm_plane;

View File

@ -39,13 +39,15 @@
*/
#include <linux/bug.h>
#include <linux/rbtree.h>
#include <linux/kernel.h>
#include <linux/limits.h>
#include <linux/mm_types.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#ifdef CONFIG_DRM_DEBUG_MM
#include <linux/stackdepot.h>
#endif
#include <linux/types.h>
#include <drm/drm_print.h>
#ifdef CONFIG_DRM_DEBUG_MM

View File

@ -1096,6 +1096,24 @@ extern "C" {
#define DRM_IOCTL_SYNCOBJ_TRANSFER DRM_IOWR(0xCC, struct drm_syncobj_transfer)
#define DRM_IOCTL_SYNCOBJ_TIMELINE_SIGNAL DRM_IOWR(0xCD, struct drm_syncobj_timeline_array)
/**
* DRM_IOCTL_MODE_GETFB2 - Get framebuffer metadata.
*
* This queries metadata about a framebuffer. User-space fills
* &drm_mode_fb_cmd2.fb_id as the input, and the kernel fills the rest of the
* struct as the output.
*
* If the client is DRM master or has &CAP_SYS_ADMIN, &drm_mode_fb_cmd2.handles
* will be filled with GEM buffer handles. Planes are valid until one has a
* zero handle -- this can be used to compute the number of planes.
*
* Otherwise, &drm_mode_fb_cmd2.handles will be zeroed and planes are valid
* until one has a zero &drm_mode_fb_cmd2.pitches.
*
* If the framebuffer has a format modifier, &DRM_MODE_FB_MODIFIERS will be set
* in &drm_mode_fb_cmd2.flags and &drm_mode_fb_cmd2.modifier will contain the
* modifier. Otherwise, user-space must ignore &drm_mode_fb_cmd2.modifier.
*/
#define DRM_IOCTL_MODE_GETFB2 DRM_IOWR(0xCE, struct drm_mode_fb_cmd2)
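For illustration, a minimal user-space sketch of the query pattern documented above, assuming an open DRM-master file descriptor and libdrm's drmIoctl(); dump_fb2() is a hypothetical helper, not part of this header:

#include <stdio.h>
#include <xf86drm.h>	/* drmIoctl(); pulls in drm.h and drm_mode.h */

static int dump_fb2(int fd, unsigned int fb_id)
{
	struct drm_mode_fb_cmd2 fb = { .fb_id = fb_id };
	int planes = 0;

	if (drmIoctl(fd, DRM_IOCTL_MODE_GETFB2, &fb))
		return -1;
	/* As DRM master the handles[] array is populated; planes are
	 * valid until the first zero handle. */
	while (planes < 4 && fb.handles[planes])
		planes++;
	printf("%ux%u, %d plane(s), format 0x%08x\n",
	       fb.width, fb.height, planes, fb.pixel_format);
	if (fb.flags & DRM_MODE_FB_MODIFIERS)
		printf("modifier 0x%llx\n", (unsigned long long)fb.modifier[0]);
	return 0;
}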
/*

View File

@ -314,6 +314,13 @@ extern "C" {
*/
#define DRM_FORMAT_P016 fourcc_code('P', '0', '1', '6') /* 2x2 subsampled Cr:Cb plane 16 bits per channel */
/* 2 plane YCbCr420.
* 3 10 bit components and 2 padding bits packed into 4 bytes.
* index 0 = Y plane, [31:0] x:Y2:Y1:Y0 2:10:10:10 little endian
* index 1 = Cr:Cb plane, [63:0] x:Cr2:Cb2:Cr1:x:Cb1:Cr0:Cb0 [2:10:10:10:2:10:10:10] little endian
*/
#define DRM_FORMAT_P030 fourcc_code('P', '0', '3', '0') /* 2x2 subsampled Cr:Cb plane 10 bits per channel packed */
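For illustration, a minimal sketch of the luma packing described above; p030_unpack_y() is a hypothetical helper, not part of this header:

#include <stdint.h>

/* One 32-bit little-endian word of the Y plane carries three 10-bit
 * samples, packed x:Y2:Y1:Y0 as 2:10:10:10 from the MSB down. */
static inline void p030_unpack_y(uint32_t word, uint16_t y[3])
{
	y[0] = word & 0x3ff;
	y[1] = (word >> 10) & 0x3ff;
	y[2] = (word >> 20) & 0x3ff;
}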
/* 3 plane non-subsampled (444) YCbCr
* 16 bits per component, but only 10 bits are used and 6 bits are padded
* index 0: Y plane, [15:0] Y:x [10:6] little endian
@ -854,6 +861,10 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier)
* and UV. Some SAND-using hardware stores UV in a separate tiled
* image from Y to reduce the column height, which is not supported
* with these modifiers.
*
* The DRM_FORMAT_MOD_BROADCOM_SAND128_COL_HEIGHT modifier is also
* supported for DRM_FORMAT_P030 where the columns remain as 128 bytes
* wide, but as this is a 10 bpp format that translates to 96 pixels.
*/
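(The 96-pixel figure follows from the P030 packing of three 10-bit pixels per 32-bit word: a 128-byte column row holds 32 such words, i.e. 32 * 3 = 96 pixels.)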
#define DRM_FORMAT_MOD_BROADCOM_SAND32_COL_HEIGHT(v) \

View File

@ -110,6 +110,7 @@ extern "C" {
#define DRM_VMW_PARAM_HW_CAPS2 13
#define DRM_VMW_PARAM_SM4_1 14
#define DRM_VMW_PARAM_SM5 15
#define DRM_VMW_PARAM_GL43 16
/**
* enum drm_vmw_handle_type - handle type for ref ioctls