Merge tag 'mtd/for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux

Pull MTD updates from Miquel Raynal:
 "Nothing stands out for this merge window, mostly minor fixes, such as
  module descriptions, the use of debug macros and Makefile
  improvements.

  Raw NAND changes:

   - The Freescale MXC driver has been converted to the newer
     '->exec_op()' interface

   - The meson driver now supports handling the boot ROM area with very
     specific ECC needs

   - Support for the iMX8QXP has been added to the GPMI driver

   - The lpc32xx driver can now get the DMA channels using DT entries

   - The Qcom binding has been made more future-proof by Rob

   - And then there is the usual load of misc and minor changes

  SPI-NAND changes:

   - The Macronix vendor driver has been improved to support an extended
     ID to avoid conflicting with older devices after an ID reuse issue

  SPI NOR changes:

   - Drop support for Xilinx S3AN flashes. These flashes are for the
     very old Xilinx Spartan 3 FPGAs, and supporting them requires some
     awkward code in the core.

     Drop them along with the special handling they needed in the core,
     like non-power-of-2 page size handling and the .setup() callback.

   - Fix regression for old w25q128 flashes without SFDP tables.

     Commit 83e824a4a5 ("mtd: spi-nor: Correct flags for Winbond
     w25q128") dropped support for such devices under the assumption
     that they aren't being used anymore. Users have now surfaced [0] so
     fix the regression by supporting both kinds of devices.

   - Core cleanups including removal of SPI_NOR_NO_FR flag and
     simplification of spi_nor_get_flash_info()"

Link: https://lore.kernel.org/r/CALxbwRo_-9CaJmt7r7ELgu+vOcgk=xZcGHobnKf=oT2=u4d4aA@mail.gmail.com/ [0]

* tag 'mtd/for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux: (28 commits)
  mtd: rawnand: lpx32xx: Fix dma_request_chan() error checks
  mtd: spinand: macronix: Add support for serial NAND flash
  mtd: spinand: macronix: Add support for reading Device ID 2
  mtd: rawnand: lpx32xx: Request DMA channels using DT entries
  dt-bindings: mtd: qcom,nandc: Define properties at top-level
  mtd: rawnand: intel: use 'time_left' variable with wait_for_completion_timeout()
  mtd: rawnand: mxc: use 'time_left' variable with wait_for_completion_timeout()
  mtd: rawnand: gpmi: add iMX8QXP support.
  mtd: rawnand: gpmi: add 'support_edo_timing' in gpmi_devdata
  mtd: cmdlinepart: Replace `dbg()` macro with `pr_debug()`
  mtd: add missing MODULE_DESCRIPTION() macros
  mtd: make mtd_test.c a separate module
  dt-bindings: mtd: gpmi-nand: Add 'fsl,imx8qxp-gpmi-nand' compatible string
  mtd: rawnand: cadence: remove unused struct 'ecc_info'
  mtd: rawnand: mxc: support software ECC
  mtd: rawnand: mxc: implement exec_op
  mtd: rawnand: mxc: separate page read from ecc calc
  mtd: spi-nor: winbond: fix w25q128 regression
  mtd: spi-nor: simplify spi_nor_get_flash_info()
  mtd: spi-nor: get rid of SPI_NOR_NO_FR
  ...
Linus Torvalds 2024-07-20 11:52:17 -07:00
commit c43a20e4a5
28 changed files with 715 additions and 810 deletions

View File

@ -64,11 +64,29 @@ patternProperties:
items:
maximum: 0
amlogic,boot-pages:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Number of pages starting from offset 0, where a special ECC
configuration must be used because it is accessed by the ROM
code. This ECC configuration uses 384 bytes data blocks.
Also scrambling mode is enabled for such pages.
amlogic,boot-page-step:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Interval between pages, accessed by the ROM code. For example
we have 8 pages [0, 7]. Pages 0,2,4,6 are accessed by the
ROM code, so this field will be 2 (e.g. every 2nd page). Rest
of pages - 1,3,5,7 are read/written without this mode.
unevaluatedProperties: false
dependencies:
nand-ecc-strength: [nand-ecc-step-size]
nand-ecc-step-size: [nand-ecc-strength]
amlogic,boot-pages: [nand-is-boot-medium, "amlogic,boot-page-step"]
amlogic,boot-page-step: [nand-is-boot-medium, "amlogic,boot-pages"]
required:
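
The two properties added above only make sense together and only on a chip marked as a boot medium, as the new dependencies spell out. A minimal, hypothetical device tree fragment showing how they could be combined (node name, reg and the numeric values are illustrative, not taken from this change):

    nand@0 {
        reg = <0>;
        nand-is-boot-medium;
        /* the ROM code reads the first 1024 pages... */
        amlogic,boot-pages = <1024>;
        /* ...but only every 2nd page within that range */
        amlogic,boot-page-step = <2>;
    };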

View File

@ -24,6 +24,7 @@ properties:
- fsl,imx6q-gpmi-nand
- fsl,imx6sx-gpmi-nand
- fsl,imx7d-gpmi-nand
- fsl,imx8qxp-gpmi-nand
- items:
- enum:
- fsl,imx8mm-gpmi-nand
@ -151,6 +152,27 @@ allOf:
- const: gpmi_io
- const: gpmi_bch_apb
- if:
properties:
compatible:
contains:
enum:
- fsl,imx8qxp-gpmi-nand
then:
properties:
clocks:
items:
- description: SoC gpmi io clock
- description: SoC gpmi apb clock
- description: SoC gpmi bch clock
- description: SoC gpmi bch apb clock
clock-names:
items:
- const: gpmi_io
- const: gpmi_apb
- const: gpmi_bch
- const: gpmi_bch_apb
examples:
- |
nand-controller@8000c000 {
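
With the new fsl,imx8qxp-gpmi-nand entry the schema requires four clocks instead of the usual two. A sketch of a matching node, abbreviated to the clock-related properties (the unit address and clock phandles/indices are placeholders; only the compatible string and clock-names come from this change):

    nand-controller@8b024000 {
        compatible = "fsl,imx8qxp-gpmi-nand";
        /* reg, reg-names, interrupts, dmas, etc. omitted for brevity */
        clocks = <&clk 0>, <&clk 1>, <&clk 2>, <&clk 3>;
        clock-names = "gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb";
    };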

View File

@ -31,6 +31,18 @@ properties:
- const: core
- const: aon
qcom,cmd-crci:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Must contain the ADM command type CRCI block instance number specified for
the NAND controller on the given platform
qcom,data-crci:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Must contain the ADM data type CRCI block instance number specified for
the NAND controller on the given platform
patternProperties:
"^nand@[a-f0-9]$":
type: object
@ -83,18 +95,6 @@ allOf:
items:
- const: rxtx
qcom,cmd-crci:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Must contain the ADM command type CRCI block instance number
specified for the NAND controller on the given platform
qcom,data-crci:
$ref: /schemas/types.yaml#/definitions/uint32
description:
Must contain the ADM data type CRCI block instance number
specified for the NAND controller on the given platform
- if:
properties:
compatible:
@ -119,19 +119,9 @@ allOf:
- const: rx
- const: cmd
- if:
properties:
compatible:
contains:
enum:
- qcom,ipq806x-nand
qcom,cmd-crci: false
qcom,data-crci: false
then:
patternProperties:
"^nand@[a-f0-9]$":
properties:
qcom,boot-partitions: true
else:
patternProperties:
"^nand@[a-f0-9]$":
properties:
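
The qcom,cmd-crci and qcom,data-crci properties are now declared once at the top level instead of inside the ADM-specific if/then block. A hypothetical fragment using them (the unit address, compatible variant and CRCI numbers are illustrative only):

    nand-controller@1ac00000 {
        compatible = "qcom,ipq806x-nand";
        /* reg, clocks, dmas, etc. omitted for brevity */
        qcom,cmd-crci = <15>;
        qcom,data-crci = <3>;
    };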

View File

@ -1399,4 +1399,5 @@ static void cfi_staa_destroy(struct mtd_info *mtd)
kfree(cfi);
}
MODULE_DESCRIPTION("MTD chip driver for ST Advanced Architecture Command Set (ID 0x0020)");
MODULE_LICENSE("GPL");

View File

@ -441,4 +441,5 @@ int cfi_varsize_frob(struct mtd_info *mtd, varsize_frob_t frob,
EXPORT_SYMBOL(cfi_varsize_frob);
MODULE_DESCRIPTION("Common Flash Interface Generic utility functions");
MODULE_LICENSE("GPL");

View File

@ -17,13 +17,12 @@ obj-$(CONFIG_MTD_ICHXROM) += ichxrom.o
obj-$(CONFIG_MTD_CK804XROM) += ck804xrom.o
obj-$(CONFIG_MTD_TSUNAMI) += tsunami_flash.o
obj-$(CONFIG_MTD_PXA2XX) += pxa2xx-flash.o
physmap-objs-y += physmap-core.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_BT1_ROM) += physmap-bt1-rom.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_VERSATILE) += physmap-versatile.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_GEMINI) += physmap-gemini.o
physmap-objs-$(CONFIG_MTD_PHYSMAP_IXP4XX) += physmap-ixp4xx.o
physmap-objs := $(physmap-objs-y)
obj-$(CONFIG_MTD_PHYSMAP) += physmap.o
physmap-y := physmap-core.o
physmap-$(CONFIG_MTD_PHYSMAP_BT1_ROM) += physmap-bt1-rom.o
physmap-$(CONFIG_MTD_PHYSMAP_VERSATILE) += physmap-versatile.o
physmap-$(CONFIG_MTD_PHYSMAP_GEMINI) += physmap-gemini.o
physmap-$(CONFIG_MTD_PHYSMAP_IXP4XX) += physmap-ixp4xx.o
obj-$(CONFIG_MTD_PISMO) += pismo.o
obj-$(CONFIG_MTD_PCMCIA) += pcmciamtd.o
obj-$(CONFIG_MTD_SA1100) += sa1100-flash.o

View File

@ -41,4 +41,5 @@ void simple_map_init(struct map_info *map)
}
EXPORT_SYMBOL(simple_map_init);
MODULE_DESCRIPTION("Out-of-line map I/O");
MODULE_LICENSE("GPL");

View File

@ -531,11 +531,6 @@ struct cdns_nand_chip {
u8 cs[] __counted_by(nsels);
};
struct ecc_info {
int (*calc_ecc_bytes)(int step_size, int strength);
int max_step_size;
};
static inline struct
cdns_nand_chip *to_cdns_nand_chip(struct nand_chip *chip)
{

View File

@ -983,7 +983,7 @@ static int gpmi_setup_interface(struct nand_chip *chip, int chipnr,
return PTR_ERR(sdr);
/* Only MX28/MX6 GPMI controller can reach EDO timings */
if (sdr->tRC_min <= 25000 && !GPMI_IS_MX28(this) && !GPMI_IS_MX6(this))
if (sdr->tRC_min <= 25000 && !this->devdata->support_edo_timing)
return -ENOTSUPP;
/* Stop here if this call was just a check */
@ -1142,6 +1142,7 @@ static const struct gpmi_devdata gpmi_devdata_imx28 = {
.type = IS_MX28,
.bch_max_ecc_strength = 20,
.max_chain_delay = 16000,
.support_edo_timing = true,
.clks = gpmi_clks_for_mx2x,
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx2x),
};
@ -1154,6 +1155,7 @@ static const struct gpmi_devdata gpmi_devdata_imx6q = {
.type = IS_MX6Q,
.bch_max_ecc_strength = 40,
.max_chain_delay = 12000,
.support_edo_timing = true,
.clks = gpmi_clks_for_mx6,
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
};
@ -1162,6 +1164,7 @@ static const struct gpmi_devdata gpmi_devdata_imx6sx = {
.type = IS_MX6SX,
.bch_max_ecc_strength = 62,
.max_chain_delay = 12000,
.support_edo_timing = true,
.clks = gpmi_clks_for_mx6,
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx6),
};
@ -1174,10 +1177,24 @@ static const struct gpmi_devdata gpmi_devdata_imx7d = {
.type = IS_MX7D,
.bch_max_ecc_strength = 62,
.max_chain_delay = 12000,
.support_edo_timing = true,
.clks = gpmi_clks_for_mx7d,
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx7d),
};
static const char *gpmi_clks_for_mx8qxp[GPMI_CLK_MAX] = {
"gpmi_io", "gpmi_apb", "gpmi_bch", "gpmi_bch_apb",
};
static const struct gpmi_devdata gpmi_devdata_imx8qxp = {
.type = IS_MX8QXP,
.bch_max_ecc_strength = 62,
.max_chain_delay = 12000,
.support_edo_timing = true,
.clks = gpmi_clks_for_mx8qxp,
.clks_count = ARRAY_SIZE(gpmi_clks_for_mx8qxp),
};
static int acquire_register_block(struct gpmi_nand_data *this,
const char *res_name)
{
@ -2721,6 +2738,7 @@ static const struct of_device_id gpmi_nand_id_table[] = {
{ .compatible = "fsl,imx6q-gpmi-nand", .data = &gpmi_devdata_imx6q, },
{ .compatible = "fsl,imx6sx-gpmi-nand", .data = &gpmi_devdata_imx6sx, },
{ .compatible = "fsl,imx7d-gpmi-nand", .data = &gpmi_devdata_imx7d,},
{ .compatible = "fsl,imx8qxp-gpmi-nand", .data = &gpmi_devdata_imx8qxp, },
{}
};
MODULE_DEVICE_TABLE(of, gpmi_nand_id_table);

View File

@ -78,6 +78,7 @@ enum gpmi_type {
IS_MX6Q,
IS_MX6SX,
IS_MX7D,
IS_MX8QXP,
};
struct gpmi_devdata {
@ -86,6 +87,7 @@ struct gpmi_devdata {
int max_chain_delay; /* See the SDR EDO mode */
const char * const *clks;
const int clks_count;
bool support_edo_timing;
};
/**
@ -172,8 +174,10 @@ struct gpmi_nand_data {
#define GPMI_IS_MX6Q(x) ((x)->devdata->type == IS_MX6Q)
#define GPMI_IS_MX6SX(x) ((x)->devdata->type == IS_MX6SX)
#define GPMI_IS_MX7D(x) ((x)->devdata->type == IS_MX7D)
#define GPMI_IS_MX8QXP(x) ((x)->devdata->type == IS_MX8QXP)
#define GPMI_IS_MX6(x) (GPMI_IS_MX6Q(x) || GPMI_IS_MX6SX(x) || \
GPMI_IS_MX7D(x))
GPMI_IS_MX7D(x) || GPMI_IS_MX8QXP(x))
#define GPMI_IS_MXS(x) (GPMI_IS_MX23(x) || GPMI_IS_MX28(x))
#endif

View File

@ -295,7 +295,7 @@ static int ebu_dma_start(struct ebu_nand_controller *ebu_host, u32 dir,
unsigned long flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
dma_addr_t buf_dma;
int ret;
u32 timeout;
unsigned long time_left;
if (dir == DMA_DEV_TO_MEM) {
chan = ebu_host->dma_rx;
@ -335,8 +335,8 @@ static int ebu_dma_start(struct ebu_nand_controller *ebu_host, u32 dir,
dma_async_issue_pending(chan);
/* Wait DMA to finish the data transfer.*/
timeout = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
if (!timeout) {
time_left = wait_for_completion_timeout(dma_completion, msecs_to_jiffies(1000));
if (!time_left) {
dev_err(ebu_host->dev, "I/O Error in DMA RX (status %d)\n",
dmaengine_tx_status(chan, cookie, NULL));
dmaengine_terminate_sync(chan);

View File

@ -574,18 +574,22 @@ static int lpc32xx_dma_setup(struct lpc32xx_nand_host *host)
struct mtd_info *mtd = nand_to_mtd(&host->nand_chip);
dma_cap_mask_t mask;
if (!host->pdata || !host->pdata->dma_filter) {
dev_err(mtd->dev.parent, "no DMA platform data\n");
return -ENOENT;
}
host->dma_chan = dma_request_chan(mtd->dev.parent, "rx-tx");
if (IS_ERR(host->dma_chan)) {
/* fallback to request using platform data */
if (!host->pdata || !host->pdata->dma_filter) {
dev_err(mtd->dev.parent, "no DMA platform data\n");
return -ENOENT;
}
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter,
"nand-mlc");
if (!host->dma_chan) {
dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
return -EBUSY;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter, "nand-mlc");
if (!host->dma_chan) {
dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
return -EBUSY;
}
}
/*

View File

@ -721,18 +721,22 @@ static int lpc32xx_nand_dma_setup(struct lpc32xx_nand_host *host)
struct mtd_info *mtd = nand_to_mtd(&host->nand_chip);
dma_cap_mask_t mask;
if (!host->pdata || !host->pdata->dma_filter) {
dev_err(mtd->dev.parent, "no DMA platform data\n");
return -ENOENT;
}
host->dma_chan = dma_request_chan(mtd->dev.parent, "rx-tx");
if (IS_ERR(host->dma_chan)) {
/* fallback to request using platform data */
if (!host->pdata || !host->pdata->dma_filter) {
dev_err(mtd->dev.parent, "no DMA platform data\n");
return -ENOENT;
}
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter,
"nand-slc");
if (!host->dma_chan) {
dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
return -EBUSY;
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
host->dma_chan = dma_request_channel(mask, host->pdata->dma_filter, "nand-slc");
if (!host->dma_chan) {
dev_err(mtd->dev.parent, "Failed to request DMA channel\n");
return -EBUSY;
}
}
return 0;
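
Both LPC32xx NAND drivers now try dma_request_chan() with the channel name "rx-tx" first and only fall back to the legacy platform-data filter. A hypothetical device tree fragment that would satisfy the new lookup path (the unit address, DMA phandle and specifier cells are placeholders; the compatible and dma-names follow the code above and the existing SLC binding):

    flash@20020000 {
        compatible = "nxp,lpc3220-slc";
        reg = <0x20020000 0x1000>;
        dmas = <&dma 1 0>;
        dma-names = "rx-tx";
    };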

View File

@ -35,6 +35,7 @@
#define NFC_CMD_RB BIT(20)
#define NFC_CMD_SCRAMBLER_ENABLE BIT(19)
#define NFC_CMD_SCRAMBLER_DISABLE 0
#define NFC_CMD_SHORTMODE_ENABLE 1
#define NFC_CMD_SHORTMODE_DISABLE 0
#define NFC_CMD_RB_INT BIT(14)
#define NFC_CMD_RB_INT_NO_PIN ((0xb << 10) | BIT(18) | BIT(16))
@ -78,6 +79,8 @@
#define DMA_DIR(dir) ((dir) ? NFC_CMD_N2M : NFC_CMD_M2N)
#define DMA_ADDR_ALIGN 8
#define NFC_SHORT_MODE_ECC_SZ 384
#define ECC_CHECK_RETURN_FF (-1)
#define NAND_CE0 (0xe << 10)
@ -125,6 +128,8 @@ struct meson_nfc_nand_chip {
u32 twb;
u32 tadl;
u32 tbers_max;
u32 boot_pages;
u32 boot_page_step;
u32 bch_mode;
u8 *data_buf;
@ -298,28 +303,49 @@ static void meson_nfc_cmd_seed(struct meson_nfc *nfc, u32 seed)
nfc->reg_base + NFC_REG_CMD);
}
static void meson_nfc_cmd_access(struct nand_chip *nand, int raw, bool dir,
int scrambler)
static int meson_nfc_is_boot_page(struct nand_chip *nand, int page)
{
const struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand);
return (nand->options & NAND_IS_BOOT_MEDIUM) &&
!(page % meson_chip->boot_page_step) &&
(page < meson_chip->boot_pages);
}
static void meson_nfc_cmd_access(struct nand_chip *nand, int raw, bool dir, int page)
{
const struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand);
struct mtd_info *mtd = nand_to_mtd(nand);
struct meson_nfc *nfc = nand_get_controller_data(mtd_to_nand(mtd));
struct meson_nfc_nand_chip *meson_chip = to_meson_nand(nand);
u32 bch = meson_chip->bch_mode, cmd;
int len = mtd->writesize, pagesize, pages;
int scrambler;
u32 cmd;
pagesize = nand->ecc.size;
if (nand->options & NAND_NEED_SCRAMBLING)
scrambler = NFC_CMD_SCRAMBLER_ENABLE;
else
scrambler = NFC_CMD_SCRAMBLER_DISABLE;
if (raw) {
len = mtd->writesize + mtd->oobsize;
cmd = len | scrambler | DMA_DIR(dir);
writel(cmd, nfc->reg_base + NFC_REG_CMD);
return;
} else if (meson_nfc_is_boot_page(nand, page)) {
pagesize = NFC_SHORT_MODE_ECC_SZ >> 3;
pages = mtd->writesize / 512;
scrambler = NFC_CMD_SCRAMBLER_ENABLE;
cmd = CMDRWGEN(DMA_DIR(dir), scrambler, NFC_ECC_BCH8_1K,
NFC_CMD_SHORTMODE_ENABLE, pagesize, pages);
} else {
pagesize = nand->ecc.size >> 3;
pages = len / nand->ecc.size;
cmd = CMDRWGEN(DMA_DIR(dir), scrambler, meson_chip->bch_mode,
NFC_CMD_SHORTMODE_DISABLE, pagesize, pages);
}
pages = len / nand->ecc.size;
cmd = CMDRWGEN(DMA_DIR(dir), scrambler, bch,
NFC_CMD_SHORTMODE_DISABLE, pagesize, pages);
if (scrambler == NFC_CMD_SCRAMBLER_ENABLE)
meson_nfc_cmd_seed(nfc, page);
writel(cmd, nfc->reg_base + NFC_REG_CMD);
}
@ -743,14 +769,7 @@ static int meson_nfc_write_page_sub(struct nand_chip *nand,
if (ret)
return ret;
if (nand->options & NAND_NEED_SCRAMBLING) {
meson_nfc_cmd_seed(nfc, page);
meson_nfc_cmd_access(nand, raw, DIRWRITE,
NFC_CMD_SCRAMBLER_ENABLE);
} else {
meson_nfc_cmd_access(nand, raw, DIRWRITE,
NFC_CMD_SCRAMBLER_DISABLE);
}
meson_nfc_cmd_access(nand, raw, DIRWRITE, page);
cmd = nfc->param.chip_select | NFC_CMD_CLE | NAND_CMD_PAGEPROG;
writel(cmd, nfc->reg_base + NFC_REG_CMD);
@ -829,14 +848,7 @@ static int meson_nfc_read_page_sub(struct nand_chip *nand,
if (ret)
return ret;
if (nand->options & NAND_NEED_SCRAMBLING) {
meson_nfc_cmd_seed(nfc, page);
meson_nfc_cmd_access(nand, raw, DIRREAD,
NFC_CMD_SCRAMBLER_ENABLE);
} else {
meson_nfc_cmd_access(nand, raw, DIRREAD,
NFC_CMD_SCRAMBLER_DISABLE);
}
meson_nfc_cmd_access(nand, raw, DIRREAD, page);
ret = meson_nfc_wait_dma_finish(nfc);
meson_nfc_check_ecc_pages_valid(nfc, nand, raw);
@ -1431,6 +1443,26 @@ meson_nfc_nand_chip_init(struct device *dev,
if (ret)
return ret;
if (nand->options & NAND_IS_BOOT_MEDIUM) {
ret = of_property_read_u32(np, "amlogic,boot-pages",
&meson_chip->boot_pages);
if (ret) {
dev_err(dev, "could not retrieve 'amlogic,boot-pages' property: %d",
ret);
nand_cleanup(nand);
return ret;
}
ret = of_property_read_u32(np, "amlogic,boot-page-step",
&meson_chip->boot_page_step);
if (ret) {
dev_err(dev, "could not retrieve 'amlogic,boot-page-step' property: %d",
ret);
nand_cleanup(nand);
return ret;
}
}
ret = mtd_device_register(mtd, NULL, 0);
if (ret) {
dev_err(dev, "failed to register MTD device: %d\n", ret);

View File

@ -20,6 +20,7 @@
#include <linux/irq.h>
#include <linux/completion.h>
#include <linux/of.h>
#include <linux/bitfield.h>
#define DRIVER_NAME "mxc_nand"
@ -47,6 +48,8 @@
#define NFC_V1_V2_CONFIG1 (host->regs + 0x1a)
#define NFC_V1_V2_CONFIG2 (host->regs + 0x1c)
#define NFC_V1_V2_ECC_STATUS_RESULT_ERM GENMASK(3, 2)
#define NFC_V2_CONFIG1_ECC_MODE_4 (1 << 0)
#define NFC_V1_V2_CONFIG1_SP_EN (1 << 2)
#define NFC_V1_V2_CONFIG1_ECC_EN (1 << 3)
@ -123,8 +126,7 @@ struct mxc_nand_host;
struct mxc_nand_devtype_data {
void (*preset)(struct mtd_info *);
int (*read_page)(struct nand_chip *chip, void *buf, void *oob, bool ecc,
int page);
int (*read_page)(struct nand_chip *chip);
void (*send_cmd)(struct mxc_nand_host *, uint16_t, int);
void (*send_addr)(struct mxc_nand_host *, uint16_t, int);
void (*send_page)(struct mtd_info *, unsigned int);
@ -132,7 +134,7 @@ struct mxc_nand_devtype_data {
uint16_t (*get_dev_status)(struct mxc_nand_host *);
int (*check_int)(struct mxc_nand_host *);
void (*irq_control)(struct mxc_nand_host *, int);
u32 (*get_ecc_status)(struct mxc_nand_host *);
u32 (*get_ecc_status)(struct nand_chip *);
const struct mtd_ooblayout_ops *ooblayout;
void (*select_chip)(struct nand_chip *chip, int cs);
int (*setup_interface)(struct nand_chip *chip, int csline,
@ -175,11 +177,11 @@ struct mxc_nand_host {
int eccsize;
int used_oobsize;
int active_cs;
unsigned int ecc_stats_v1;
struct completion op_completion;
uint8_t *data_buf;
unsigned int buf_start;
void *data_buf;
const struct mxc_nand_devtype_data *devtype_data;
};
@ -281,63 +283,6 @@ static void copy_spare(struct mtd_info *mtd, bool bfrom, void *buf)
}
}
/*
* MXC NANDFC can only perform full page+spare or spare-only read/write. When
* the upper layers perform a read/write buf operation, the saved column address
* is used to index into the full page. So usually this function is called with
* column == 0 (unless no column cycle is needed indicated by column == -1)
*/
static void mxc_do_addr_cycle(struct mtd_info *mtd, int column, int page_addr)
{
struct nand_chip *nand_chip = mtd_to_nand(mtd);
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
/* Write out column address, if necessary */
if (column != -1) {
host->devtype_data->send_addr(host, column & 0xff,
page_addr == -1);
if (mtd->writesize > 512)
/* another col addr cycle for 2k page */
host->devtype_data->send_addr(host,
(column >> 8) & 0xff,
false);
}
/* Write out page address, if necessary */
if (page_addr != -1) {
/* paddr_0 - p_addr_7 */
host->devtype_data->send_addr(host, (page_addr & 0xff), false);
if (mtd->writesize > 512) {
if (mtd->size >= 0x10000000) {
/* paddr_8 - paddr_15 */
host->devtype_data->send_addr(host,
(page_addr >> 8) & 0xff,
false);
host->devtype_data->send_addr(host,
(page_addr >> 16) & 0xff,
true);
} else
/* paddr_8 - paddr_15 */
host->devtype_data->send_addr(host,
(page_addr >> 8) & 0xff, true);
} else {
if (nand_chip->options & NAND_ROW_ADDR_3) {
/* paddr_8 - paddr_15 */
host->devtype_data->send_addr(host,
(page_addr >> 8) & 0xff,
false);
host->devtype_data->send_addr(host,
(page_addr >> 16) & 0xff,
true);
} else
/* paddr_8 - paddr_15 */
host->devtype_data->send_addr(host,
(page_addr >> 8) & 0xff, true);
}
}
}
static int check_int_v3(struct mxc_nand_host *host)
{
uint32_t tmp;
@ -406,19 +351,81 @@ static void irq_control(struct mxc_nand_host *host, int activate)
}
}
static u32 get_ecc_status_v1(struct mxc_nand_host *host)
static u32 get_ecc_status_v1(struct nand_chip *chip)
{
return readw(NFC_V1_V2_ECC_STATUS_RESULT);
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
unsigned int ecc_stats, max_bitflips = 0;
int no_subpages, i;
no_subpages = mtd->writesize >> 9;
ecc_stats = host->ecc_stats_v1;
for (i = 0; i < no_subpages; i++) {
switch (ecc_stats & 0x3) {
case 0:
default:
break;
case 1:
mtd->ecc_stats.corrected++;
max_bitflips = 1;
break;
case 2:
mtd->ecc_stats.failed++;
break;
}
ecc_stats >>= 2;
}
return max_bitflips;
}
static u32 get_ecc_status_v2(struct mxc_nand_host *host)
static u32 get_ecc_status_v2_v3(struct nand_chip *chip, unsigned int ecc_stat)
{
return readl(NFC_V1_V2_ECC_STATUS_RESULT);
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
u8 ecc_bit_mask, err_limit;
unsigned int max_bitflips = 0;
int no_subpages, err;
ecc_bit_mask = (host->eccsize == 4) ? 0x7 : 0xf;
err_limit = (host->eccsize == 4) ? 0x4 : 0x8;
no_subpages = mtd->writesize >> 9;
do {
err = ecc_stat & ecc_bit_mask;
if (err > err_limit) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += err;
max_bitflips = max_t(unsigned int, max_bitflips, err);
}
ecc_stat >>= 4;
} while (--no_subpages);
return max_bitflips;
}
static u32 get_ecc_status_v3(struct mxc_nand_host *host)
static u32 get_ecc_status_v2(struct nand_chip *chip)
{
return readl(NFC_V3_ECC_STATUS_RESULT);
struct mxc_nand_host *host = nand_get_controller_data(chip);
u32 ecc_stat = readl(NFC_V1_V2_ECC_STATUS_RESULT);
return get_ecc_status_v2_v3(chip, ecc_stat);
}
static u32 get_ecc_status_v3(struct nand_chip *chip)
{
struct mxc_nand_host *host = nand_get_controller_data(chip);
u32 ecc_stat = readl(NFC_V3_ECC_STATUS_RESULT);
return get_ecc_status_v2_v3(chip, ecc_stat);
}
static irqreturn_t mxc_nfc_irq(int irq, void *dev_id)
@ -450,14 +457,14 @@ static int wait_op_done(struct mxc_nand_host *host, int useirq)
return 0;
if (useirq) {
unsigned long timeout;
unsigned long time_left;
reinit_completion(&host->op_completion);
irq_control(host, 1);
timeout = wait_for_completion_timeout(&host->op_completion, HZ);
if (!timeout && !host->devtype_data->check_int(host)) {
time_left = wait_for_completion_timeout(&host->op_completion, HZ);
if (!time_left && !host->devtype_data->check_int(host)) {
dev_dbg(host->dev, "timeout waiting for irq\n");
ret = -ETIMEDOUT;
}
@ -697,38 +704,21 @@ static void mxc_nand_enable_hwecc_v3(struct nand_chip *chip, bool enable)
writel(config2, NFC_V3_CONFIG2);
}
/* This functions is used by upper layer to checks if device is ready */
static int mxc_nand_dev_ready(struct nand_chip *chip)
{
/*
* NFC handles R/B internally. Therefore, this function
* always returns status as ready.
*/
return 1;
}
static int mxc_nand_read_page_v1(struct nand_chip *chip, void *buf, void *oob,
bool ecc, int page)
static int mxc_nand_read_page_v1(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
unsigned int bitflips_corrected = 0;
int no_subpages;
int i;
unsigned int ecc_stats = 0;
host->devtype_data->enable_hwecc(chip, ecc);
host->devtype_data->send_cmd(host, NAND_CMD_READ0, false);
mxc_do_addr_cycle(mtd, 0, page);
if (mtd->writesize > 512)
host->devtype_data->send_cmd(host, NAND_CMD_READSTART, true);
no_subpages = mtd->writesize >> 9;
if (mtd->writesize)
no_subpages = mtd->writesize >> 9;
else
/* READ PARAMETER PAGE is called when mtd->writesize is not yet set */
no_subpages = 1;
for (i = 0; i < no_subpages; i++) {
uint16_t ecc_stats;
/* NANDFC buffer 0 is used for page read/write */
writew((host->active_cs << 4) | i, NFC_V1_V2_BUF_ADDR);
@ -737,135 +727,74 @@ static int mxc_nand_read_page_v1(struct nand_chip *chip, void *buf, void *oob,
/* Wait for operation to complete */
wait_op_done(host, true);
ecc_stats = get_ecc_status_v1(host);
ecc_stats >>= 2;
if (buf && ecc) {
switch (ecc_stats & 0x3) {
case 0:
default:
break;
case 1:
mtd->ecc_stats.corrected++;
bitflips_corrected = 1;
break;
case 2:
mtd->ecc_stats.failed++;
break;
}
}
ecc_stats |= FIELD_GET(NFC_V1_V2_ECC_STATUS_RESULT_ERM,
readw(NFC_V1_V2_ECC_STATUS_RESULT)) << i * 2;
}
if (buf)
memcpy32_fromio(buf, host->main_area0, mtd->writesize);
if (oob)
copy_spare(mtd, true, oob);
host->ecc_stats_v1 = ecc_stats;
return bitflips_corrected;
return 0;
}
static int mxc_nand_read_page_v2_v3(struct nand_chip *chip, void *buf,
void *oob, bool ecc, int page)
static int mxc_nand_read_page_v2_v3(struct nand_chip *chip)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
unsigned int max_bitflips = 0;
u32 ecc_stat, err;
int no_subpages;
u8 ecc_bit_mask, err_limit;
host->devtype_data->enable_hwecc(chip, ecc);
host->devtype_data->send_cmd(host, NAND_CMD_READ0, false);
mxc_do_addr_cycle(mtd, 0, page);
if (mtd->writesize > 512)
host->devtype_data->send_cmd(host,
NAND_CMD_READSTART, true);
host->devtype_data->send_page(mtd, NFC_OUTPUT);
if (buf)
memcpy32_fromio(buf, host->main_area0, mtd->writesize);
if (oob)
copy_spare(mtd, true, oob);
ecc_bit_mask = (host->eccsize == 4) ? 0x7 : 0xf;
err_limit = (host->eccsize == 4) ? 0x4 : 0x8;
no_subpages = mtd->writesize >> 9;
ecc_stat = host->devtype_data->get_ecc_status(host);
do {
err = ecc_stat & ecc_bit_mask;
if (err > err_limit) {
mtd->ecc_stats.failed++;
} else {
mtd->ecc_stats.corrected += err;
max_bitflips = max_t(unsigned int, max_bitflips, err);
}
ecc_stat >>= 4;
} while (--no_subpages);
return max_bitflips;
return 0;
}
static int mxc_nand_read_page(struct nand_chip *chip, uint8_t *buf,
int oob_required, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
void *oob_buf;
int ret;
host->devtype_data->enable_hwecc(chip, true);
ret = nand_read_page_op(chip, page, 0, buf, mtd->writesize);
host->devtype_data->enable_hwecc(chip, false);
if (ret)
return ret;
if (oob_required)
oob_buf = chip->oob_poi;
else
oob_buf = NULL;
copy_spare(mtd, true, chip->oob_poi);
return host->devtype_data->read_page(chip, buf, oob_buf, 1, page);
return host->devtype_data->get_ecc_status(chip);
}
static int mxc_nand_read_page_raw(struct nand_chip *chip, uint8_t *buf,
int oob_required, int page)
{
struct mxc_nand_host *host = nand_get_controller_data(chip);
void *oob_buf;
struct mtd_info *mtd = nand_to_mtd(chip);
int ret;
ret = nand_read_page_op(chip, page, 0, buf, mtd->writesize);
if (ret)
return ret;
if (oob_required)
oob_buf = chip->oob_poi;
else
oob_buf = NULL;
copy_spare(mtd, true, chip->oob_poi);
return host->devtype_data->read_page(chip, buf, oob_buf, 0, page);
return 0;
}
static int mxc_nand_read_oob(struct nand_chip *chip, int page)
{
struct mxc_nand_host *host = nand_get_controller_data(chip);
return host->devtype_data->read_page(chip, NULL, chip->oob_poi, 0,
page);
}
static int mxc_nand_write_page(struct nand_chip *chip, const uint8_t *buf,
bool ecc, int page)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
int ret;
host->devtype_data->enable_hwecc(chip, ecc);
ret = nand_read_page_op(chip, page, 0, host->data_buf, mtd->writesize);
if (ret)
return ret;
host->devtype_data->send_cmd(host, NAND_CMD_SEQIN, false);
mxc_do_addr_cycle(mtd, 0, page);
memcpy32_toio(host->main_area0, buf, mtd->writesize);
copy_spare(mtd, false, chip->oob_poi);
host->devtype_data->send_page(mtd, NFC_INPUT);
host->devtype_data->send_cmd(host, NAND_CMD_PAGEPROG, true);
mxc_do_addr_cycle(mtd, 0, page);
copy_spare(mtd, true, chip->oob_poi);
return 0;
}
@ -873,13 +802,29 @@ static int mxc_nand_write_page(struct nand_chip *chip, const uint8_t *buf,
static int mxc_nand_write_page_ecc(struct nand_chip *chip, const uint8_t *buf,
int oob_required, int page)
{
return mxc_nand_write_page(chip, buf, true, page);
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
int ret;
copy_spare(mtd, false, chip->oob_poi);
host->devtype_data->enable_hwecc(chip, true);
ret = nand_prog_page_op(chip, page, 0, buf, mtd->writesize);
host->devtype_data->enable_hwecc(chip, false);
return ret;
}
static int mxc_nand_write_page_raw(struct nand_chip *chip, const uint8_t *buf,
int oob_required, int page)
{
return mxc_nand_write_page(chip, buf, false, page);
struct mtd_info *mtd = nand_to_mtd(chip);
copy_spare(mtd, false, chip->oob_poi);
return nand_prog_page_op(chip, page, 0, buf, mtd->writesize);
}
static int mxc_nand_write_oob(struct nand_chip *chip, int page)
@ -888,68 +833,9 @@ static int mxc_nand_write_oob(struct nand_chip *chip, int page)
struct mxc_nand_host *host = nand_get_controller_data(chip);
memset(host->data_buf, 0xff, mtd->writesize);
copy_spare(mtd, false, chip->oob_poi);
return mxc_nand_write_page(chip, host->data_buf, false, page);
}
static u_char mxc_nand_read_byte(struct nand_chip *nand_chip)
{
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
uint8_t ret;
/* Check for status request */
if (host->status_request)
return host->devtype_data->get_dev_status(host) & 0xFF;
if (nand_chip->options & NAND_BUSWIDTH_16) {
/* only take the lower byte of each word */
ret = *(uint16_t *)(host->data_buf + host->buf_start);
host->buf_start += 2;
} else {
ret = *(uint8_t *)(host->data_buf + host->buf_start);
host->buf_start++;
}
dev_dbg(host->dev, "%s: ret=0x%hhx (start=%u)\n", __func__, ret, host->buf_start);
return ret;
}
/* Write data of length len to buffer buf. The data to be
* written on NAND Flash is first copied to RAMbuffer. After the Data Input
* Operation by the NFC, the data is written to NAND Flash */
static void mxc_nand_write_buf(struct nand_chip *nand_chip, const u_char *buf,
int len)
{
struct mtd_info *mtd = nand_to_mtd(nand_chip);
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
u16 col = host->buf_start;
int n = mtd->oobsize + mtd->writesize - col;
n = min(n, len);
memcpy(host->data_buf + col, buf, n);
host->buf_start += n;
}
/* Read the data buffer from the NAND Flash. To read the data from NAND
* Flash first the data output cycle is initiated by the NFC, which copies
* the data to RAMbuffer. This data of length len is then copied to buffer buf.
*/
static void mxc_nand_read_buf(struct nand_chip *nand_chip, u_char *buf,
int len)
{
struct mtd_info *mtd = nand_to_mtd(nand_chip);
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
u16 col = host->buf_start;
int n = mtd->oobsize + mtd->writesize - col;
n = min(n, len);
memcpy(buf, host->data_buf + col, n);
host->buf_start += n;
return nand_prog_page_op(chip, page, 0, host->data_buf, mtd->writesize);
}
/* This function is used by upper layer for select and
@ -1328,107 +1214,6 @@ static void preset_v3(struct mtd_info *mtd)
writel(0, NFC_V3_DELAY_LINE);
}
/* Used by the upper layer to write command to NAND Flash for
* different operations to be carried out on NAND Flash */
static void mxc_nand_command(struct nand_chip *nand_chip, unsigned command,
int column, int page_addr)
{
struct mtd_info *mtd = nand_to_mtd(nand_chip);
struct mxc_nand_host *host = nand_get_controller_data(nand_chip);
dev_dbg(host->dev, "mxc_nand_command (cmd = 0x%x, col = 0x%x, page = 0x%x)\n",
command, column, page_addr);
/* Reset command state information */
host->status_request = false;
/* Command pre-processing step */
switch (command) {
case NAND_CMD_RESET:
host->devtype_data->preset(mtd);
host->devtype_data->send_cmd(host, command, false);
break;
case NAND_CMD_STATUS:
host->buf_start = 0;
host->status_request = true;
host->devtype_data->send_cmd(host, command, true);
WARN_ONCE(column != -1 || page_addr != -1,
"Unexpected column/row value (cmd=%u, col=%d, row=%d)\n",
command, column, page_addr);
mxc_do_addr_cycle(mtd, column, page_addr);
break;
case NAND_CMD_READID:
host->devtype_data->send_cmd(host, command, true);
mxc_do_addr_cycle(mtd, column, page_addr);
host->devtype_data->send_read_id(host);
host->buf_start = 0;
break;
case NAND_CMD_ERASE1:
case NAND_CMD_ERASE2:
host->devtype_data->send_cmd(host, command, false);
WARN_ONCE(column != -1,
"Unexpected column value (cmd=%u, col=%d)\n",
command, column);
mxc_do_addr_cycle(mtd, column, page_addr);
break;
case NAND_CMD_PARAM:
host->devtype_data->send_cmd(host, command, false);
mxc_do_addr_cycle(mtd, column, page_addr);
host->devtype_data->send_page(mtd, NFC_OUTPUT);
memcpy32_fromio(host->data_buf, host->main_area0, 512);
host->buf_start = 0;
break;
default:
WARN_ONCE(1, "Unimplemented command (cmd=%u)\n",
command);
break;
}
}
static int mxc_nand_set_features(struct nand_chip *chip, int addr,
u8 *subfeature_param)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
int i;
host->buf_start = 0;
for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i)
chip->legacy.write_byte(chip, subfeature_param[i]);
memcpy32_toio(host->main_area0, host->data_buf, mtd->writesize);
host->devtype_data->send_cmd(host, NAND_CMD_SET_FEATURES, false);
mxc_do_addr_cycle(mtd, addr, -1);
host->devtype_data->send_page(mtd, NFC_INPUT);
return 0;
}
static int mxc_nand_get_features(struct nand_chip *chip, int addr,
u8 *subfeature_param)
{
struct mtd_info *mtd = nand_to_mtd(chip);
struct mxc_nand_host *host = nand_get_controller_data(chip);
int i;
host->devtype_data->send_cmd(host, NAND_CMD_GET_FEATURES, false);
mxc_do_addr_cycle(mtd, addr, -1);
host->devtype_data->send_page(mtd, NFC_OUTPUT);
memcpy32_fromio(host->data_buf, host->main_area0, 512);
host->buf_start = 0;
for (i = 0; i < ONFI_SUBFEATURE_PARAM_LEN; ++i)
*subfeature_param++ = chip->legacy.read_byte(chip);
return 0;
}
/*
* The generic flash bbt descriptors overlap with our ecc
* hardware, so define some i.MX specific ones.
@ -1617,10 +1402,10 @@ static int mxcnd_attach_chip(struct nand_chip *chip)
chip->ecc.bytes = host->devtype_data->eccbytes;
host->eccsize = host->devtype_data->eccsize;
chip->ecc.size = 512;
mtd_set_ooblayout(mtd, host->devtype_data->ooblayout);
switch (chip->ecc.engine_type) {
case NAND_ECC_ENGINE_TYPE_ON_HOST:
mtd_set_ooblayout(mtd, host->devtype_data->ooblayout);
chip->ecc.read_page = mxc_nand_read_page;
chip->ecc.read_page_raw = mxc_nand_read_page_raw;
chip->ecc.read_oob = mxc_nand_read_oob;
@ -1630,6 +1415,8 @@ static int mxcnd_attach_chip(struct nand_chip *chip)
break;
case NAND_ECC_ENGINE_TYPE_SOFT:
chip->ecc.write_page_raw = nand_monolithic_write_page_raw;
chip->ecc.read_page_raw = nand_monolithic_read_page_raw;
break;
default:
@ -1685,9 +1472,217 @@ static int mxcnd_setup_interface(struct nand_chip *chip, int chipnr,
return host->devtype_data->setup_interface(chip, chipnr, conf);
}
static void memff16_toio(void *buf, int n)
{
__iomem u16 *t = buf;
int i;
for (i = 0; i < (n >> 1); i++)
__raw_writew(0xffff, t++);
}
static void copy_page_to_sram(struct mtd_info *mtd, const void *buf, int buf_len)
{
struct nand_chip *this = mtd_to_nand(mtd);
struct mxc_nand_host *host = nand_get_controller_data(this);
unsigned int no_subpages = mtd->writesize / 512;
int oob_per_subpage, i;
oob_per_subpage = (mtd->oobsize / no_subpages) & ~1;
/*
* During a page write the i.MX NAND controller will read 512b from
* main_area0 SRAM, then oob_per_subpage bytes from spare0 SRAM, then
* 512b from main_area1 SRAM and so on until the full page is written.
* For software ECC we want to have a 1:1 mapping between the raw page
* data on the NAND chip and the view of the NAND core. This is
* necessary to make the NAND_CMD_RNDOUT read the data it expects.
* To accomplish this we have to write the data in the order the controller
* reads it. This is reversed in copy_page_from_sram() below.
*
* buf_len can either be the full page including the OOB or user data only.
* When it's user data only make sure that we fill up the rest of the
* SRAM with 0xff.
*/
for (i = 0; i < no_subpages; i++) {
int now = min(buf_len, 512);
if (now)
memcpy16_toio(host->main_area0 + i * 512, buf, now);
if (now < 512)
memff16_toio(host->main_area0 + i * 512 + now, 512 - now);
buf += 512;
buf_len -= now;
now = min(buf_len, oob_per_subpage);
if (now)
memcpy16_toio(host->spare0 + i * host->devtype_data->spare_len,
buf, now);
if (now < oob_per_subpage)
memff16_toio(host->spare0 + i * host->devtype_data->spare_len + now,
oob_per_subpage - now);
buf += oob_per_subpage;
buf_len -= now;
}
}
static void copy_page_from_sram(struct mtd_info *mtd)
{
struct nand_chip *this = mtd_to_nand(mtd);
struct mxc_nand_host *host = nand_get_controller_data(this);
void *buf = host->data_buf;
unsigned int no_subpages = mtd->writesize / 512;
int oob_per_subpage, i;
/* mtd->writesize is not set during ident scanning */
if (!no_subpages)
no_subpages = 1;
oob_per_subpage = (mtd->oobsize / no_subpages) & ~1;
for (i = 0; i < no_subpages; i++) {
memcpy16_fromio(buf, host->main_area0 + i * 512, 512);
buf += 512;
memcpy16_fromio(buf, host->spare0 + i * host->devtype_data->spare_len,
oob_per_subpage);
buf += oob_per_subpage;
}
}
static int mxcnd_do_exec_op(struct nand_chip *chip,
const struct nand_subop *op)
{
struct mxc_nand_host *host = nand_get_controller_data(chip);
struct mtd_info *mtd = nand_to_mtd(chip);
int i, j, buf_len;
void *buf_read = NULL;
const void *buf_write = NULL;
const struct nand_op_instr *instr;
bool readid = false;
bool statusreq = false;
for (i = 0; i < op->ninstrs; i++) {
instr = &op->instrs[i];
switch (instr->type) {
case NAND_OP_WAITRDY_INSTR:
/* NFC handles R/B internally, nothing to do here */
break;
case NAND_OP_CMD_INSTR:
host->devtype_data->send_cmd(host, instr->ctx.cmd.opcode, true);
if (instr->ctx.cmd.opcode == NAND_CMD_READID)
readid = true;
if (instr->ctx.cmd.opcode == NAND_CMD_STATUS)
statusreq = true;
break;
case NAND_OP_ADDR_INSTR:
for (j = 0; j < instr->ctx.addr.naddrs; j++) {
bool islast = j == instr->ctx.addr.naddrs - 1;
host->devtype_data->send_addr(host, instr->ctx.addr.addrs[j], islast);
}
break;
case NAND_OP_DATA_OUT_INSTR:
buf_write = instr->ctx.data.buf.out;
buf_len = instr->ctx.data.len;
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST)
memcpy32_toio(host->main_area0, buf_write, buf_len);
else
copy_page_to_sram(mtd, buf_write, buf_len);
host->devtype_data->send_page(mtd, NFC_INPUT);
break;
case NAND_OP_DATA_IN_INSTR:
buf_read = instr->ctx.data.buf.in;
buf_len = instr->ctx.data.len;
if (readid) {
host->devtype_data->send_read_id(host);
readid = false;
memcpy32_fromio(host->data_buf, host->main_area0, buf_len * 2);
if (chip->options & NAND_BUSWIDTH_16) {
u8 *bufr = buf_read;
u16 *bufw = host->data_buf;
for (j = 0; j < buf_len; j++)
bufr[j] = bufw[j];
} else {
memcpy(buf_read, host->data_buf, buf_len);
}
break;
}
if (statusreq) {
*(u8*)buf_read = host->devtype_data->get_dev_status(host);
statusreq = false;
break;
}
host->devtype_data->read_page(chip);
if (chip->ecc.engine_type == NAND_ECC_ENGINE_TYPE_ON_HOST) {
if (IS_ALIGNED(buf_len, 4)) {
memcpy32_fromio(buf_read, host->main_area0, buf_len);
} else {
memcpy32_fromio(host->data_buf, host->main_area0, mtd->writesize);
memcpy(buf_read, host->data_buf, buf_len);
}
} else {
copy_page_from_sram(mtd);
memcpy(buf_read, host->data_buf, buf_len);
}
break;
}
}
return 0;
}
#define MAX_DATA_SIZE (4096 + 512)
static const struct nand_op_parser mxcnd_op_parser = NAND_OP_PARSER(
NAND_OP_PARSER_PATTERN(mxcnd_do_exec_op,
NAND_OP_PARSER_PAT_CMD_ELEM(false),
NAND_OP_PARSER_PAT_ADDR_ELEM(true, 7),
NAND_OP_PARSER_PAT_CMD_ELEM(true),
NAND_OP_PARSER_PAT_WAITRDY_ELEM(true),
NAND_OP_PARSER_PAT_DATA_IN_ELEM(true, MAX_DATA_SIZE)),
NAND_OP_PARSER_PATTERN(mxcnd_do_exec_op,
NAND_OP_PARSER_PAT_CMD_ELEM(false),
NAND_OP_PARSER_PAT_ADDR_ELEM(false, 7),
NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, MAX_DATA_SIZE),
NAND_OP_PARSER_PAT_CMD_ELEM(false),
NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)),
NAND_OP_PARSER_PATTERN(mxcnd_do_exec_op,
NAND_OP_PARSER_PAT_CMD_ELEM(false),
NAND_OP_PARSER_PAT_ADDR_ELEM(false, 7),
NAND_OP_PARSER_PAT_DATA_OUT_ELEM(false, MAX_DATA_SIZE),
NAND_OP_PARSER_PAT_CMD_ELEM(true),
NAND_OP_PARSER_PAT_WAITRDY_ELEM(true)),
);
static int mxcnd_exec_op(struct nand_chip *chip,
const struct nand_operation *op, bool check_only)
{
return nand_op_parser_exec_op(chip, &mxcnd_op_parser,
op, check_only);
}
static const struct nand_controller_ops mxcnd_controller_ops = {
.attach_chip = mxcnd_attach_chip,
.setup_interface = mxcnd_setup_interface,
.exec_op = mxcnd_exec_op,
};
static int mxcnd_probe(struct platform_device *pdev)
@ -1720,13 +1715,6 @@ static int mxcnd_probe(struct platform_device *pdev)
nand_set_controller_data(this, host);
nand_set_flash_node(this, pdev->dev.of_node);
this->legacy.dev_ready = mxc_nand_dev_ready;
this->legacy.cmdfunc = mxc_nand_command;
this->legacy.read_byte = mxc_nand_read_byte;
this->legacy.write_buf = mxc_nand_write_buf;
this->legacy.read_buf = mxc_nand_read_buf;
this->legacy.set_features = mxc_nand_set_features;
this->legacy.get_features = mxc_nand_get_features;
host->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(host->clk))

View File

@ -121,7 +121,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF2GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x26, 0x03),
NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -131,7 +131,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35LF4GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x37, 0x03),
NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -141,7 +141,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35LF1G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x14, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -150,7 +150,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF2G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x24, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -158,8 +158,17 @@ static const struct spinand_info macronix_spinand_table[] = {
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF2G24AD-Z4I8",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x64, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF4G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x35, 0x03),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -167,6 +176,15 @@ static const struct spinand_info macronix_spinand_table[] = {
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX35LF4G24AD-Z4I8",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x75, 0x03),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
SPINAND_INFO("MX31LF1GE4BC",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x1e),
NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
@ -199,7 +217,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF4G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb5),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb5, 0x03),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -208,8 +226,18 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF4G24AD-Z4I8",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xf5, 0x03),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF4GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb7),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xb7, 0x03),
NAND_MEMORG(1, 4096, 256, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -229,7 +257,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF2G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa4),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa4, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 2, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -238,8 +266,18 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF2G24AD-Z4I8",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xe4, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
&write_cache_variants,
&update_cache_variants),
SPINAND_HAS_QE_BIT,
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF2GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa6),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa6, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -249,7 +287,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF2GE4AC",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa2),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xa2, 0x01),
NAND_MEMORG(1, 2048, 64, 64, 2048, 40, 1, 1, 1),
NAND_ECCREQ(4, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -269,7 +307,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF1G24AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x94),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x94, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -279,7 +317,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF1GE4AD",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x96),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x96, 0x03),
NAND_MEMORG(1, 2048, 128, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(8, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
@ -289,7 +327,7 @@ static const struct spinand_info macronix_spinand_table[] = {
SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout,
mx35lf1ge4ab_ecc_get_status)),
SPINAND_INFO("MX35UF1GE4AC",
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x92),
SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0x92, 0x01),
NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
NAND_ECCREQ(4, 512),
SPINAND_INFO_OP_VARIANTS(&read_cache_variants,

View File

@ -81,4 +81,5 @@ static struct mtd_part_parser brcm_u_boot_mtd_parser = {
};
module_mtd_part_parser(brcm_u_boot_mtd_parser);
MODULE_DESCRIPTION("Broadcom's U-Boot partition parser");
MODULE_LICENSE("GPL");

View File

@ -44,14 +44,6 @@
#include <linux/module.h>
#include <linux/err.h>
/* debug macro */
#if 0
#define dbg(x) do { printk("DEBUG-CMDLINE-PART: "); printk x; } while(0)
#else
#define dbg(x)
#endif
/* special size referring to all the remaining space in a partition */
#define SIZE_REMAINING ULLONG_MAX
#define OFFSET_CONTINUOUS ULLONG_MAX
@ -199,9 +191,9 @@ static struct mtd_partition * newpart(char *s,
parts[this_part].name = extra_mem;
extra_mem += name_len + 1;
dbg(("partition %d: name <%s>, offset %llx, size %llx, mask flags %x\n",
pr_debug("partition %d: name <%s>, offset %llx, size %llx, mask flags %x\n",
this_part, parts[this_part].name, parts[this_part].offset,
parts[this_part].size, parts[this_part].mask_flags));
parts[this_part].size, parts[this_part].mask_flags);
/* return (updated) pointer to extra_mem memory */
if (extra_mem_ptr)
@ -267,7 +259,7 @@ static int mtdpart_setup_real(char *s)
}
mtd_id_len = p - mtd_id;
dbg(("parsing <%s>\n", p+1));
pr_debug("parsing <%s>\n", p+1);
/*
* parse one mtd. have it reserve memory for the
@ -304,8 +296,8 @@ static int mtdpart_setup_real(char *s)
this_mtd->next = partitions;
partitions = this_mtd;
dbg(("mtdid=<%s> num_parts=<%d>\n",
this_mtd->mtd_id, this_mtd->num_parts));
pr_debug("mtdid=<%s> num_parts=<%d>\n",
this_mtd->mtd_id, this_mtd->num_parts);
/* EOS - we're done */

View File

@ -149,4 +149,5 @@ static struct mtd_part_parser mtd_parser_tplink_safeloader = {
};
module_mtd_part_parser(mtd_parser_tplink_safeloader);
MODULE_DESCRIPTION("TP-Link Safeloader partitions parser");
MODULE_LICENSE("GPL");

View File

@ -13,7 +13,6 @@ spi-nor-objs += micron-st.o
spi-nor-objs += spansion.o
spi-nor-objs += sst.o
spi-nor-objs += winbond.o
spi-nor-objs += xilinx.o
spi-nor-objs += xmc.o
spi-nor-$(CONFIG_DEBUG_FS) += debugfs.o
obj-$(CONFIG_MTD_SPI_NOR) += spi-nor.o

View File

@ -1463,14 +1463,6 @@ static void spi_nor_unlock_and_unprep_rd(struct spi_nor *nor, loff_t start, size
spi_nor_unprep(nor);
}
static u32 spi_nor_convert_addr(struct spi_nor *nor, loff_t addr)
{
if (!nor->params->convert_addr)
return addr;
return nor->params->convert_addr(nor, addr);
}
/*
* Initiate the erasure of a single sector
*/
@ -1478,8 +1470,6 @@ int spi_nor_erase_sector(struct spi_nor *nor, u32 addr)
{
int i;
addr = spi_nor_convert_addr(nor, addr);
if (nor->spimem) {
struct spi_mem_op op =
SPI_NOR_SECTOR_ERASE_OP(nor->erase_opcode,
@ -1986,7 +1976,6 @@ static const struct spi_nor_manufacturer *manufacturers[] = {
&spi_nor_spansion,
&spi_nor_sst,
&spi_nor_winbond,
&spi_nor_xilinx,
&spi_nor_xmc,
};
@ -2065,8 +2054,6 @@ static int spi_nor_read(struct mtd_info *mtd, loff_t from, size_t len,
while (len) {
loff_t addr = from;
addr = spi_nor_convert_addr(nor, addr);
ret = spi_nor_read_data(nor, addr, len, buf);
if (ret == 0) {
/* We shouldn't see 0-length reads */
@ -2099,7 +2086,7 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
size_t *retlen, const u_char *buf)
{
struct spi_nor *nor = mtd_to_spi_nor(mtd);
size_t page_offset, page_remain, i;
size_t i;
ssize_t ret;
u32 page_size = nor->params->page_size;
@ -2112,23 +2099,9 @@ static int spi_nor_write(struct mtd_info *mtd, loff_t to, size_t len,
for (i = 0; i < len; ) {
ssize_t written;
loff_t addr = to + i;
/*
* If page_size is a power of two, the offset can be quickly
* calculated with an AND operation. On the other cases we
* need to do a modulus operation (more expensive).
*/
if (is_power_of_2(page_size)) {
page_offset = addr & (page_size - 1);
} else {
u64 aux = addr;
page_offset = do_div(aux, page_size);
}
size_t page_offset = addr & (page_size - 1);
/* the size of data remaining on the first page */
page_remain = min_t(size_t, page_size - page_offset, len - i);
addr = spi_nor_convert_addr(nor, addr);
size_t page_remain = min_t(size_t, page_size - page_offset, len - i);
ret = spi_nor_lock_device(nor);
if (ret)
@ -2581,8 +2554,51 @@ static int spi_nor_select_erase(struct spi_nor *nor)
return 0;
}
static int spi_nor_default_setup(struct spi_nor *nor,
const struct spi_nor_hwcaps *hwcaps)
static int spi_nor_set_addr_nbytes(struct spi_nor *nor)
{
if (nor->params->addr_nbytes) {
nor->addr_nbytes = nor->params->addr_nbytes;
} else if (nor->read_proto == SNOR_PROTO_8_8_8_DTR) {
/*
* In 8D-8D-8D mode, one byte takes half a cycle to transfer. So
* in this protocol an odd addr_nbytes cannot be used because
* then the address phase would only span a cycle and a half.
* Half a cycle would be left over. We would then have to start
* the dummy phase in the middle of a cycle and so too the data
* phase, and we will end the transaction with half a cycle left
* over.
*
* Force all 8D-8D-8D flashes to use an addr_nbytes of 4 to
* avoid this situation.
*/
nor->addr_nbytes = 4;
} else if (nor->info->addr_nbytes) {
nor->addr_nbytes = nor->info->addr_nbytes;
} else {
nor->addr_nbytes = 3;
}
if (nor->addr_nbytes == 3 && nor->params->size > 0x1000000) {
/* enable 4-byte addressing if the device exceeds 16MiB */
nor->addr_nbytes = 4;
}
if (nor->addr_nbytes > SPI_NOR_MAX_ADDR_NBYTES) {
dev_dbg(nor->dev, "The number of address bytes is too large: %u\n",
nor->addr_nbytes);
return -EINVAL;
}
/* Set 4byte opcodes when possible. */
if (nor->addr_nbytes == 4 && nor->flags & SNOR_F_4B_OPCODES &&
!(nor->flags & SNOR_F_HAS_4BAIT))
spi_nor_set_4byte_opcodes(nor);
return 0;
}
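As a quick illustration of the half-cycle constraint spelled out in the comment above (a standalone sketch with invented helper names, not kernel code): in 8D-8D-8D mode each byte occupies half a clock cycle, so only an even number of address bytes ends the address phase on a cycle boundary.

	#include <assert.h>

	/* Half-cycles needed to clock out nbytes address bytes in 8D-8D-8D:
	 * one byte per half cycle. (Illustrative helper, not a kernel API.) */
	static unsigned int addr_half_cycles(unsigned int nbytes)
	{
		return nbytes;
	}

	int main(void)
	{
		/* 3 address bytes -> 3 half cycles = 1.5 cycles: not usable. */
		assert(addr_half_cycles(3) % 2 != 0);
		/* 4 address bytes -> 4 half cycles = 2 full cycles: OK. */
		assert(addr_half_cycles(4) % 2 == 0);
		return 0;
	}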
static int spi_nor_setup(struct spi_nor *nor,
const struct spi_nor_hwcaps *hwcaps)
{
struct spi_nor_flash_parameter *params = nor->params;
u32 ignored_mask, shared_mask;
@ -2639,64 +2655,6 @@ static int spi_nor_default_setup(struct spi_nor *nor,
return err;
}
return 0;
}
static int spi_nor_set_addr_nbytes(struct spi_nor *nor)
{
if (nor->params->addr_nbytes) {
nor->addr_nbytes = nor->params->addr_nbytes;
} else if (nor->read_proto == SNOR_PROTO_8_8_8_DTR) {
/*
* In 8D-8D-8D mode, one byte takes half a cycle to transfer. So
* in this protocol an odd addr_nbytes cannot be used because
* then the address phase would only span a cycle and a half.
* Half a cycle would be left over. We would then have to start
* the dummy phase in the middle of a cycle and so too the data
* phase, and we will end the transaction with half a cycle left
* over.
*
* Force all 8D-8D-8D flashes to use an addr_nbytes of 4 to
* avoid this situation.
*/
nor->addr_nbytes = 4;
} else if (nor->info->addr_nbytes) {
nor->addr_nbytes = nor->info->addr_nbytes;
} else {
nor->addr_nbytes = 3;
}
if (nor->addr_nbytes == 3 && nor->params->size > 0x1000000) {
/* enable 4-byte addressing if the device exceeds 16MiB */
nor->addr_nbytes = 4;
}
if (nor->addr_nbytes > SPI_NOR_MAX_ADDR_NBYTES) {
dev_dbg(nor->dev, "The number of address bytes is too large: %u\n",
nor->addr_nbytes);
return -EINVAL;
}
/* Set 4byte opcodes when possible. */
if (nor->addr_nbytes == 4 && nor->flags & SNOR_F_4B_OPCODES &&
!(nor->flags & SNOR_F_HAS_4BAIT))
spi_nor_set_4byte_opcodes(nor);
return 0;
}
static int spi_nor_setup(struct spi_nor *nor,
const struct spi_nor_hwcaps *hwcaps)
{
int ret;
if (nor->params->setup)
ret = nor->params->setup(nor, hwcaps);
else
ret = spi_nor_default_setup(nor, hwcaps);
if (ret)
return ret;
return spi_nor_set_addr_nbytes(nor);
}
@ -2965,15 +2923,10 @@ static void spi_nor_init_default_params(struct spi_nor *nor)
params->page_size = info->page_size ?: SPI_NOR_DEFAULT_PAGE_SIZE;
params->n_banks = info->n_banks ?: SPI_NOR_DEFAULT_N_BANKS;
if (!(info->flags & SPI_NOR_NO_FR)) {
/* Default to Fast Read for DT and non-DT platform devices. */
/* Default to Fast Read for non-DT and enable it if requested by DT. */
if (!np || of_property_read_bool(np, "m25p,fast-read"))
params->hwcaps.mask |= SNOR_HWCAPS_READ_FAST;
/* Mask out Fast Read if not requested at DT instantiation. */
if (np && !of_property_read_bool(np, "m25p,fast-read"))
params->hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
}
/* (Fast) Read settings. */
params->hwcaps.mask |= SNOR_HWCAPS_READ;
spi_nor_set_read_settings(&params->reads[SNOR_CMD_READ],
@ -3055,7 +3008,14 @@ static int spi_nor_init_params(struct spi_nor *nor)
spi_nor_init_params_deprecated(nor);
}
return spi_nor_late_init_params(nor);
ret = spi_nor_late_init_params(nor);
if (ret)
return ret;
if (WARN_ON(!is_power_of_2(nor->params->page_size)))
return -EINVAL;
return 0;
}
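With the S3AN parts gone, every supported page size is a power of two, which is what the new WARN_ON above enforces and what lets the write loop drop the do_div() fallback. A minimal standalone sketch (not kernel code) of the equivalence the simplified write path relies on:

	#include <assert.h>
	#include <stdint.h>

	static int is_pow2(uint32_t x)
	{
		return x && !(x & (x - 1));
	}

	int main(void)
	{
		uint32_t page_size = 256;	/* any power-of-two page size */
		uint64_t addr;

		assert(is_pow2(page_size));
		/* For power-of-two sizes, masking and modulo give the same offset. */
		for (addr = 0; addr < 4096; addr += 37)
			assert((addr & (page_size - 1)) == addr % page_size);
		return 0;
	}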
/**
 * spi_nor_set_octal_dtr() - enable or disable Octal DTR I/O.
@ -3338,32 +3298,28 @@ static const struct flash_info *spi_nor_get_flash_info(struct spi_nor *nor,
if (name)
info = spi_nor_match_name(nor, name);
/* Try to auto-detect if chip name wasn't specified or not found */
if (!info)
return spi_nor_detect(nor);
/*
* If caller has specified name of flash model that can normally be
* detected using JEDEC, let's verify it.
* Auto-detect if chip name wasn't specified or not found, or the chip
* has an ID. If the chip supposedly has an ID, we also do an
* auto-detection to compare it later.
*/
if (name && info->id) {
if (!info || info->id) {
const struct flash_info *jinfo;
jinfo = spi_nor_detect(nor);
if (IS_ERR(jinfo)) {
if (IS_ERR(jinfo))
return jinfo;
} else if (jinfo != info) {
/*
* JEDEC knows better, so overwrite platform ID. We
* can't trust partitions any longer, but we'll let
* mtd apply them anyway, since some partitions may be
* marked read-only, and we don't want to lose that
* information, even if it's not 100% accurate.
*/
/*
* If caller has specified name of flash model that can normally
* be detected using JEDEC, let's verify it.
*/
if (info && jinfo != info)
dev_warn(nor->dev, "found %s, expected %s\n",
jinfo->name, info->name);
info = jinfo;
}
/* If info was set before, JEDEC knows better. */
info = jinfo;
}
return info;
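The selection policy of the simplified spi_nor_get_flash_info() can be summarized as follows (a standalone sketch under invented names such as choose_info, not the kernel function itself): a named, ID-less entry is trusted as-is; otherwise JEDEC detection runs and its result wins, with a warning on mismatch.

	#include <assert.h>

	enum pick { PICK_NAMED, PICK_DETECTED, PICK_ERROR };

	/*
	 * named:     a flash_info entry matched the name passed by the caller
	 * named_id:  that entry carries a JEDEC ID
	 * detect_ok: JEDEC auto-detection found a matching entry
	 */
	static enum pick choose_info(int named, int named_id, int detect_ok)
	{
		if (named && !named_id)
			return PICK_NAMED;	/* no ID to verify, trust the name */
		if (!detect_ok)
			return PICK_ERROR;	/* detection failure is propagated */
		return PICK_DETECTED;		/* "JEDEC knows better" */
	}

	int main(void)
	{
		/* ID-less legacy entry named by the caller: keep the named entry. */
		assert(choose_info(1, 0, 1) == PICK_NAMED);
		/* Named entry with an ID: detection runs and its result is used. */
		assert(choose_info(1, 1, 1) == PICK_DETECTED);
		/* No name match at all: detection is the only option. */
		assert(choose_info(0, 0, 1) == PICK_DETECTED);
		assert(choose_info(0, 0, 0) == PICK_ERROR);
		return 0;
	}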

View File

@ -366,13 +366,6 @@ struct spi_nor_otp {
* @set_octal_dtr: enables or disables SPI NOR octal DTR mode.
* @quad_enable: enables SPI NOR quad mode.
* @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode.
* @convert_addr: converts an absolute address into something the flash
* will understand. Particularly useful when pagesize is
* not a power-of-2.
* @setup: (optional) configures the SPI NOR memory. Useful for
* SPI NOR flashes that have peculiarities to the SPI NOR
* standard e.g. different opcodes, specific address
* calculation, page size, etc.
* @ready: (optional) flashes might use a different mechanism
* than reading the status register to indicate they
* are ready for a new command
@ -403,8 +396,6 @@ struct spi_nor_flash_parameter {
int (*set_octal_dtr)(struct spi_nor *nor, bool enable);
int (*quad_enable)(struct spi_nor *nor);
int (*set_4byte_addr_mode)(struct spi_nor *nor, bool enable);
u32 (*convert_addr)(struct spi_nor *nor, u32 addr);
int (*setup)(struct spi_nor *nor, const struct spi_nor_hwcaps *hwcaps);
int (*ready)(struct spi_nor *nor);
const struct spi_nor_locking_ops *locking_ops;
@ -479,7 +470,6 @@ struct spi_nor_id {
* Usually these will power-up in a write-protected
* state.
* SPI_NOR_NO_ERASE: no erase command needed.
* SPI_NOR_NO_FR: can't do fastread.
* SPI_NOR_QUAD_PP: flash supports Quad Input Page Program.
* SPI_NOR_RWW: flash supports reads while write.
*
@ -528,7 +518,6 @@ struct flash_info {
#define SPI_NOR_BP3_SR_BIT6 BIT(4)
#define SPI_NOR_SWP_IS_VOLATILE BIT(5)
#define SPI_NOR_NO_ERASE BIT(6)
#define SPI_NOR_NO_FR BIT(7)
#define SPI_NOR_QUAD_PP BIT(8)
#define SPI_NOR_RWW BIT(9)
@ -603,7 +592,6 @@ extern const struct spi_nor_manufacturer spi_nor_st;
extern const struct spi_nor_manufacturer spi_nor_spansion;
extern const struct spi_nor_manufacturer spi_nor_sst;
extern const struct spi_nor_manufacturer spi_nor_winbond;
extern const struct spi_nor_manufacturer spi_nor_xilinx;
extern const struct spi_nor_manufacturer spi_nor_xmc;
extern const struct attribute_group *spi_nor_sysfs_groups[];

View File

@ -14,28 +14,39 @@ static const struct flash_info everspin_nor_parts[] = {
.size = SZ_16K,
.sector_size = SZ_16K,
.addr_nbytes = 2,
.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
.flags = SPI_NOR_NO_ERASE,
}, {
.name = "mr25h256",
.size = SZ_32K,
.sector_size = SZ_32K,
.addr_nbytes = 2,
.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
.flags = SPI_NOR_NO_ERASE,
}, {
.name = "mr25h10",
.size = SZ_128K,
.sector_size = SZ_128K,
.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
.flags = SPI_NOR_NO_ERASE,
}, {
.name = "mr25h40",
.size = SZ_512K,
.sector_size = SZ_512K,
.flags = SPI_NOR_NO_ERASE | SPI_NOR_NO_FR,
.flags = SPI_NOR_NO_ERASE,
}
};
static void everspin_nor_default_init(struct spi_nor *nor)
{
/* Everspin FRAMs don't support the fast read opcode. */
nor->params->hwcaps.mask &= ~SNOR_HWCAPS_READ_FAST;
}
static const struct spi_nor_fixups everspin_nor_fixups = {
.default_init = everspin_nor_default_init,
};
const struct spi_nor_manufacturer spi_nor_everspin = {
.name = "everspin",
.parts = everspin_nor_parts,
.nparts = ARRAY_SIZE(everspin_nor_parts),
.fixups = &everspin_nor_fixups,
};

View File

@ -105,7 +105,9 @@ static const struct flash_info winbond_nor_parts[] = {
}, {
.id = SNOR_ID(0xef, 0x40, 0x18),
.name = "w25q128",
.size = SZ_16M,
.flags = SPI_NOR_HAS_LOCK | SPI_NOR_HAS_TB,
.no_sfdp_flags = SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ,
}, {
.id = SNOR_ID(0xef, 0x40, 0x19),
.name = "w25q256",

View File

@ -1,169 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2005, Intec Automation Inc.
* Copyright (C) 2014, Freescale Semiconductor, Inc.
*/
#include <linux/mtd/spi-nor.h>
#include "core.h"
#define XILINX_OP_SE 0x50 /* Sector erase */
#define XILINX_OP_PP 0x82 /* Page program */
#define XILINX_OP_RDSR 0xd7 /* Read status register */
#define XSR_PAGESIZE BIT(0) /* Page size in Po2 or Linear */
#define XSR_RDY BIT(7) /* Ready */
#define XILINX_RDSR_OP(buf) \
SPI_MEM_OP(SPI_MEM_OP_CMD(XILINX_OP_RDSR, 0), \
SPI_MEM_OP_NO_ADDR, \
SPI_MEM_OP_NO_DUMMY, \
SPI_MEM_OP_DATA_IN(1, buf, 0))
#define S3AN_FLASH(_id, _name, _n_sectors, _page_size) \
.id = _id, \
.name = _name, \
.size = 8 * (_page_size) * (_n_sectors), \
.sector_size = (8 * (_page_size)), \
.page_size = (_page_size), \
.flags = SPI_NOR_NO_FR
/* Xilinx S3AN share MFR with Atmel SPI NOR */
static const struct flash_info xilinx_nor_parts[] = {
/* Xilinx S3AN Internal Flash */
{ S3AN_FLASH(SNOR_ID(0x1f, 0x22, 0x00), "3S50AN", 64, 264) },
{ S3AN_FLASH(SNOR_ID(0x1f, 0x24, 0x00), "3S200AN", 256, 264) },
{ S3AN_FLASH(SNOR_ID(0x1f, 0x24, 0x00), "3S400AN", 256, 264) },
{ S3AN_FLASH(SNOR_ID(0x1f, 0x25, 0x00), "3S700AN", 512, 264) },
{ S3AN_FLASH(SNOR_ID(0x1f, 0x26, 0x00), "3S1400AN", 512, 528) },
};
/*
* This code converts an address to the Default Address Mode, that has non
* power of two page sizes. We must support this mode because it is the default
* mode supported by Xilinx tools, it can access the whole flash area and
* changing over to the Power-of-two mode is irreversible and corrupts the
* original data.
* Addr can safely be unsigned int, the biggest S3AN device is smaller than
* 4 MiB.
*/
static u32 s3an_nor_convert_addr(struct spi_nor *nor, u32 addr)
{
u32 page_size = nor->params->page_size;
u32 offset, page;
offset = addr % page_size;
page = addr / page_size;
page <<= (page_size > 512) ? 10 : 9;
return page | offset;
}
/**
* xilinx_nor_read_sr() - Read the Status Register on S3AN flashes.
* @nor: pointer to 'struct spi_nor'.
* @sr: pointer to a DMA-able buffer where the value of the
* Status Register will be written.
*
* Return: 0 on success, -errno otherwise.
*/
static int xilinx_nor_read_sr(struct spi_nor *nor, u8 *sr)
{
int ret;
if (nor->spimem) {
struct spi_mem_op op = XILINX_RDSR_OP(sr);
spi_nor_spimem_setup_op(nor, &op, nor->reg_proto);
ret = spi_mem_exec_op(nor->spimem, &op);
} else {
ret = spi_nor_controller_ops_read_reg(nor, XILINX_OP_RDSR, sr,
1);
}
if (ret)
dev_dbg(nor->dev, "error %d reading SR\n", ret);
return ret;
}
/**
* xilinx_nor_sr_ready() - Query the Status Register of the S3AN flash to see
* if the flash is ready for new commands.
* @nor: pointer to 'struct spi_nor'.
*
* Return: 1 if ready, 0 if not ready, -errno on errors.
*/
static int xilinx_nor_sr_ready(struct spi_nor *nor)
{
int ret;
ret = xilinx_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
return !!(nor->bouncebuf[0] & XSR_RDY);
}
static int xilinx_nor_setup(struct spi_nor *nor,
const struct spi_nor_hwcaps *hwcaps)
{
u32 page_size;
int ret;
ret = xilinx_nor_read_sr(nor, nor->bouncebuf);
if (ret)
return ret;
nor->erase_opcode = XILINX_OP_SE;
nor->program_opcode = XILINX_OP_PP;
nor->read_opcode = SPINOR_OP_READ;
nor->flags |= SNOR_F_NO_OP_CHIP_ERASE;
/*
* These flashes have a page size of 264 or 528 bytes (known as
* Default addressing mode). It can be changed to a more standard
* Power of two mode where the page size is 256/512. This comes
* with a price: there is 3% less space, the data is corrupted
* and the page size cannot be changed back to default addressing
* mode.
*
* The current addressing mode can be read from the XRDSR register
* and should not be changed, because it is a destructive operation.
*/
if (nor->bouncebuf[0] & XSR_PAGESIZE) {
/* Flash in Power of 2 mode */
page_size = (nor->params->page_size == 264) ? 256 : 512;
nor->params->page_size = page_size;
nor->mtd.writebufsize = page_size;
nor->params->size = nor->info->size;
nor->mtd.erasesize = 8 * page_size;
} else {
/* Flash in Default addressing mode */
nor->params->convert_addr = s3an_nor_convert_addr;
nor->mtd.erasesize = nor->info->sector_size;
}
return 0;
}
static int xilinx_nor_late_init(struct spi_nor *nor)
{
nor->params->setup = xilinx_nor_setup;
nor->params->ready = xilinx_nor_sr_ready;
return 0;
}
static const struct spi_nor_fixups xilinx_nor_fixups = {
.late_init = xilinx_nor_late_init,
};
const struct spi_nor_manufacturer spi_nor_xilinx = {
.name = "xilinx",
.parts = xilinx_nor_parts,
.nparts = ARRAY_SIZE(xilinx_nor_parts),
.fixups = &xilinx_nor_fixups,
};
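For reference, the Default Address Mode mapping described in the removed comments above can be shown with a short worked example (a standalone sketch mirroring the deleted s3an_nor_convert_addr(), not a drop-in replacement): the page index moves into the upper bits (9 offset bits for 264-byte pages, 10 for 528-byte pages) while the in-page offset stays in the low bits.

	#include <assert.h>
	#include <stdint.h>

	static uint32_t s3an_convert_addr(uint32_t addr, uint32_t page_size)
	{
		uint32_t offset = addr % page_size;
		uint32_t page = addr / page_size;

		page <<= (page_size > 512) ? 10 : 9;
		return page | offset;
	}

	int main(void)
	{
		/* 264-byte pages: linear address 1000 is page 3, offset 208. */
		assert(s3an_convert_addr(1000, 264) == ((3u << 9) | 208));
		/* 528-byte pages: linear address 1000 is page 1, offset 472. */
		assert(s3an_convert_addr(1000, 528) == ((1u << 10) | 472));
		return 0;
	}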

View File

@ -1,19 +1,19 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o
obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o
obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o
obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o
obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o
obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o
obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o
obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o
obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o
obj-$(CONFIG_MTD_TESTS) += mtd_oobtest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_pagetest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_readtest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_speedtest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_stresstest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_subpagetest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_torturetest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_nandecctest.o mtd_test.o
obj-$(CONFIG_MTD_TESTS) += mtd_nandbiterrs.o mtd_test.o
mtd_oobtest-objs := oobtest.o mtd_test.o
mtd_pagetest-objs := pagetest.o mtd_test.o
mtd_readtest-objs := readtest.o mtd_test.o
mtd_speedtest-objs := speedtest.o mtd_test.o
mtd_stresstest-objs := stresstest.o mtd_test.o
mtd_subpagetest-objs := subpagetest.o mtd_test.o
mtd_torturetest-objs := torturetest.o mtd_test.o
mtd_nandbiterrs-objs := nandbiterrs.o mtd_test.o
mtd_oobtest-objs := oobtest.o
mtd_pagetest-objs := pagetest.o
mtd_readtest-objs := readtest.o
mtd_speedtest-objs := speedtest.o
mtd_stresstest-objs := stresstest.o
mtd_subpagetest-objs := subpagetest.o
mtd_torturetest-objs := torturetest.o
mtd_nandbiterrs-objs := nandbiterrs.o

View File

@ -25,6 +25,7 @@ int mtdtest_erase_eraseblock(struct mtd_info *mtd, unsigned int ebnum)
return 0;
}
EXPORT_SYMBOL_GPL(mtdtest_erase_eraseblock);
static int is_block_bad(struct mtd_info *mtd, unsigned int ebnum)
{
@ -57,6 +58,7 @@ int mtdtest_scan_for_bad_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
return 0;
}
EXPORT_SYMBOL_GPL(mtdtest_scan_for_bad_eraseblocks);
int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
unsigned int eb, int ebcnt)
@ -75,6 +77,7 @@ int mtdtest_erase_good_eraseblocks(struct mtd_info *mtd, unsigned char *bbt,
return 0;
}
EXPORT_SYMBOL_GPL(mtdtest_erase_good_eraseblocks);
int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
{
@ -92,6 +95,7 @@ int mtdtest_read(struct mtd_info *mtd, loff_t addr, size_t size, void *buf)
return err;
}
EXPORT_SYMBOL_GPL(mtdtest_read);
int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
const void *buf)
@ -107,3 +111,8 @@ int mtdtest_write(struct mtd_info *mtd, loff_t addr, size_t size,
return err;
}
EXPORT_SYMBOL_GPL(mtdtest_write);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("MTD function test helpers");
MODULE_AUTHOR("Akinobu Mita");

View File

@ -308,32 +308,32 @@ static inline uint8_t cfi_read_query(struct map_info *map, uint32_t addr)
{
map_word val = map_read(map, addr);
if (map_bankwidth_is_1(map)) {
if (map_bankwidth_is_1(map))
return val.x[0];
} else if (map_bankwidth_is_2(map)) {
if (map_bankwidth_is_2(map))
return cfi16_to_cpu(map, val.x[0]);
} else {
/* No point in a 64-bit byteswap since that would just be
swapping the responses from different chips, and we are
only interested in one chip (a representative sample) */
return cfi32_to_cpu(map, val.x[0]);
}
/*
* No point in a 64-bit byteswap since that would just be
* swapping the responses from different chips, and we are
* only interested in one chip (a representative sample)
*/
return cfi32_to_cpu(map, val.x[0]);
}
static inline uint16_t cfi_read_query16(struct map_info *map, uint32_t addr)
{
map_word val = map_read(map, addr);
if (map_bankwidth_is_1(map)) {
if (map_bankwidth_is_1(map))
return val.x[0] & 0xff;
} else if (map_bankwidth_is_2(map)) {
if (map_bankwidth_is_2(map))
return cfi16_to_cpu(map, val.x[0]);
} else {
/* No point in a 64-bit byteswap since that would just be
swapping the responses from different chips, and we are
only interested in one chip (a representative sample) */
return cfi32_to_cpu(map, val.x[0]);
}
/*
* No point in a 64-bit byteswap since that would just be
* swapping the responses from different chips, and we are
* only interested in one chip (a representative sample)
*/
return cfi32_to_cpu(map, val.x[0]);
}
void cfi_udelay(int us);