Add a comment to the ecc_digits_from_bytes kdoc noting that the first byte is expected
to hold the most significant bits of the large integer that is converted
into an array of digits.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The SM2 algorithm has a single user in the kernel. However, it's
never been integrated properly with that user: asymmetric_keys.
The crux of the issue is that the way it computes its digest with
sm3 does not fit into the architecture of asymmetric_keys. As no
solution has been proposed, remove this algorithm.
It can be resubmitted when it is integrated properly into the
asymmetric_keys subsystem.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.10-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"Fix a bug in the new ecc P521 code as well as a buggy fix in qat"
* tag 'v6.10-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ecc - Prevent ecc_digits_from_bytes from reading too many bytes
crypto: qat - Fix ADF_DEV_RESET_SYNC memory leak
Merge tag 'mm-stable-2024-05-17-19-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull mm updates from Andrew Morton:
"The usual shower of singleton fixes and minor series all over MM,
documented (hopefully adequately) in the respective changelogs.
Notable series include:
- Lucas Stach has provided some page-mapping cleanup/consolidation/
maintainability work in the series "mm/treewide: Remove pXd_huge()
API".
- In the series "Allow migrate on protnone reference with
MPOL_PREFERRED_MANY policy", Donet Tom has optimized mempolicy's
MPOL_PREFERRED_MANY mode, yielding almost doubled performance in
one test.
- In their series "Memory allocation profiling" Kent Overstreet and
Suren Baghdasaryan have contributed a means of determining (via
/proc/allocinfo) whereabouts in the kernel memory is being
allocated: number of calls and amount of memory.
- Matthew Wilcox has provided the series "Various significant MM
patches" which does a number of rather unrelated things, but in
largely similar code sites.
- In his series "mm: page_alloc: freelist migratetype hygiene"
Johannes Weiner has fixed the page allocator's handling of
migratetype requests, with resulting improvements in compaction
efficiency.
- In the series "make the hugetlb migration strategy consistent"
Baolin Wang has fixed a hugetlb migration issue, which should
improve hugetlb allocation reliability.
- Liu Shixin has hit an I/O meltdown caused by readahead in a
memory-tight memcg. Addressed in the series "Fix I/O high when
memory almost met memcg limit".
- In the series "mm/filemap: optimize folio adding and splitting"
Kairui Song has optimized pagecache insertion, yielding ~10%
performance improvement in one test.
- Baoquan He has cleaned up and consolidated the early zone
initialization code in the series "mm/mm_init.c: refactor
free_area_init_core()".
- Baoquan has also redone some MM initialization code in the series
"mm/init: minor clean up and improvement".
- MM helper cleanups from Christoph Hellwig in his series "remove
follow_pfn".
- More cleanups from Matthew Wilcox in the series "Various
page->flags cleanups".
- Vlastimil Babka has contributed maintainability improvements in the
series "memcg_kmem hooks refactoring".
- More folio conversions and cleanups in Matthew Wilcox's series:
"Convert huge_zero_page to huge_zero_folio"
"khugepaged folio conversions"
"Remove page_idle and page_young wrappers"
"Use folio APIs in procfs"
"Clean up __folio_put()"
"Some cleanups for memory-failure"
"Remove page_mapping()"
"More folio compat code removal"
- David Hildenbrand chipped in with "fs/proc/task_mmu: convert
hugetlb functions to work on folios".
- Code consolidation and cleanup work related to GUP's handling of
hugetlbs in Peter Xu's series "mm/gup: Unify hugetlb, part 2".
- Rick Edgecombe has developed some fixes to stack guard gaps in the
series "Cover a guard gap corner case".
- Jinjiang Tu has fixed KSM's behaviour after a fork+exec in the
series "mm/ksm: fix ksm exec support for prctl".
- Baolin Wang has implemented NUMA balancing for multi-size THPs.
This is a simple first-cut implementation for now. The series is
"support multi-size THP numa balancing".
- Cleanups to vma handling helper functions from Matthew Wilcox in
the series "Unify vma_address and vma_pgoff_address".
- Some selftests maintenance work from Dev Jain in the series
"selftests/mm: mremap_test: Optimizations and style fixes".
- Improvements to the swapping of multi-size THPs from Ryan Roberts
in the series "Swap-out mTHP without splitting".
- Kefeng Wang has significantly optimized the handling of arm64's
permission page faults in the series
"arch/mm/fault: accelerate pagefault when badaccess"
"mm: remove arch's private VM_FAULT_BADMAP/BADACCESS"
- GUP cleanups from David Hildenbrand in "mm/gup: consistently call
it GUP-fast".
- hugetlb fault code cleanups from Vishal Moola in "Hugetlb fault
path to use struct vm_fault".
- selftests build fixes from John Hubbard in the series "Fix
selftests/mm build without requiring "make headers"".
- Memory tiering fixes/improvements from Ho-Ren (Jack) Chuang in the
series "Improved Memory Tier Creation for CPUless NUMA Nodes".
Fixes the initialization code so that migration between different
memory types works as intended.
- David Hildenbrand has improved follow_pte() and fixed an errant
driver in the series "mm: follow_pte() improvements and acrn
follow_pte() fixes".
- David also did some cleanup work on large folio mapcounts in his
series "mm: mapcount for large folios + page_mapcount() cleanups".
- Folio conversions in KSM in Alex Shi's series "transfer page to
folio in KSM".
- Barry Song has added some sysfs stats for monitoring multi-size
THPs in the series "mm: add per-order mTHP alloc and swpout
counters".
- Some zswap cleanups from Yosry Ahmed in the series "zswap
same-filled and limit checking cleanups".
- Matthew Wilcox has been looking at buffer_head code and found the
documentation to be lacking. The series is "Improve buffer head
documentation".
- Multi-size THPs get more work, this time from Lance Yang. His
series "mm/madvise: enhance lazyfreeing with mTHP in madvise_free"
optimizes the freeing of these things.
- Kemeng Shi has added more userspace-visible writeback
instrumentation in the series "Improve visibility of writeback".
- Kemeng Shi then sent some maintenance work on top in the series
"Fix and cleanups to page-writeback".
- Matthew Wilcox reduces mmap_lock traffic in the anon vma code in
the series "Improve anon_vma scalability for anon VMAs". Intel's
test bot reported an improbable 3x improvement in one test.
- SeongJae Park adds some DAMON feature work in the series
"mm/damon: add a DAMOS filter type for page granularity access recheck"
"selftests/damon: add DAMOS quota goal test"
- Also some maintenance work in the series
"mm/damon/paddr: simplify page level access re-check for pageout"
"mm/damon: misc fixes and improvements"
- David Hildenbrand has disabled some known-to-fail selftests in the
series "selftests: mm: cow: flag vmsplice() hugetlb tests as
XFAIL".
- memcg metadata storage optimizations from Shakeel Butt in "memcg:
reduce memory consumption by memcg stats".
- DAX fixes and maintenance work from Vishal Verma in the series
"dax/bus.c: Fixups for dax-bus locking""
* tag 'mm-stable-2024-05-17-19-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (426 commits)
memcg, oom: cleanup unused memcg_oom_gfp_mask and memcg_oom_order
selftests/mm: hugetlb_madv_vs_map: avoid test skipping by querying hugepage size at runtime
mm/hugetlb: add missing VM_FAULT_SET_HINDEX in hugetlb_wp
mm/hugetlb: add missing VM_FAULT_SET_HINDEX in hugetlb_fault
selftests: cgroup: add tests to verify the zswap writeback path
mm: memcg: make alloc_mem_cgroup_per_node_info() return bool
mm/damon/core: fix return value from damos_wmark_metric_value
mm: do not update memcg stats for NR_{FILE/SHMEM}_PMDMAPPED
selftests: cgroup: remove redundant enabling of memory controller
Docs/mm/damon/maintainer-profile: allow posting patches based on damon/next tree
Docs/mm/damon/maintainer-profile: change the maintainer's timezone from PST to PT
Docs/mm/damon/design: use a list for supported filters
Docs/admin-guide/mm/damon/usage: fix wrong schemes effective quota update command
Docs/admin-guide/mm/damon/usage: fix wrong example of DAMOS filter matching sysfs file
selftests/damon: classify tests for functionalities and regressions
selftests/damon/_damon_sysfs: use 'is' instead of '==' for 'None'
selftests/damon/_damon_sysfs: find sysfs mount point from /proc/mounts
selftests/damon/_damon_sysfs: check errors from nr_schemes file reads
mm/damon/core: initialize ->esz_bp from damos_quota_init_priv()
selftests/damon: add a test for DAMOS quota goal
...
-----BEGIN PGP SIGNATURE-----
iQJEBAABCAAuFiEEwPw5LcreJtl1+l5K99NY+ylx4KYFAmZFcFwQHGF4Ym9lQGtl
cm5lbC5kawAKCRD301j7KXHgpuP1EADKJOJRtcvO/av2cUR+HZFDC+s/jBwHIJK+
4UY633zQlxjqc7dt4rX7zk/uk4mkhZnsGY+wS6xH08kB3VO9YksrwREVt6Ur9lP8
UXNVPpPcZ7fFcIp41rYkZX9pTDp2N8z2qsVg7V8wcXJ7EeTXd6L4ZLjhfCiHvs2s
i6yIEwLrW+voYuqSFV7vWBIM3mSXSRTIiO2DqRAOtT2lsj374DOthvP2lOSSb5wq
6TF4s4z3HMGs+HF3rjP5kJ6ic6RdC6i31lzEivUMhwCiKN1AZXdp96KXaC+NVPRV
t5//EdS+pSenQgkg6XH7d5kzFoCUFJfVZt05w0GCqMA081Q9ySjUymN1zedJbGd9
8CDlW01N8XLqG6+F9yakJLSFY+mUFGduPuueTNiUJWP8kTkQCtYIRzZDeyjxQrE5
c17NW5S1uWkf26Ucyi1r+gxw9N4kGkuB3+NitC6DOc7BW5CocEIoqLWi/UH7cEZe
0v6loTakqBAdgh03RCDMUj9Rt/37pQs2KFT9/CazVpbkvkKsue4xK4K2CUFsxqOj
qcoc/LD62at4S3AUWwhUIs3YaQ7v/6AY5hIktqAwsFHmDffUbPdRrXWY1keKIprJ
4qS/sY0M+kvKGnp+80fPVHab9l6/fMLfabIyFuh0M3W/M4eHGt2YfKWreoGEy/1x
xLq2iq+ehw==
=S6Xt
-----END PGP SIGNATURE-----
Merge tag 'net-accept-more-20240515' of git://git.kernel.dk/linux
Pull more io_uring updates from Jens Axboe:
"This adds support for IORING_CQE_F_SOCK_NONEMPTY for io_uring accept
requests.
This is very similar to previous work that enabled the same hint for
doing receives on sockets. By far the majority of the work here is
refactoring to enable the networking side to pass back whether or not
the socket had more pending requests after accepting the current one;
the last patch just wires it up for io_uring.
Not only does this enable applications to know whether there are more
connections to accept right now, it also enables smarter logic for
io_uring multishot accept on whether to retry immediately or wait for
a poll trigger"
* tag 'net-accept-more-20240515' of git://git.kernel.dk/linux:
io_uring/net: wire up IORING_CQE_F_SOCK_NONEMPTY for accept
net: pass back whether socket was empty post accept
net: have do_accept() take a struct proto_accept_arg argument
net: change proto and proto_ops accept type
Prevent ecc_digits_from_bytes from reading too many bytes from the input
byte array in case an insufficient number of bytes is provided to fill the
output digit array of 'ndigits' digits. To avoid reading too many bytes
later on, initialize the most significant digits with 0. Also, convert the
function into a regular function, since it is getting too big for an
inline function.
If too many bytes are provided in the input byte array, the extra bytes
are ignored, since the input variable 'ndigits' limits the number of
digits that will be filled.
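For illustration, a minimal sketch of a conversion loop with this
zero-fill behaviour (simplified, not the exact kernel implementation;
only the kernel u8/u64 types are assumed):

    /*
     * Convert a big-endian byte array into an array of u64 digits,
     * least significant digit first, without ever indexing beyond
     * the 'nbytes' input bytes.
     */
    static void digits_from_bytes_sketch(const u8 *in, unsigned int nbytes,
                                         u64 *out, unsigned int ndigits)
    {
        unsigned int i, b;

        for (i = 0; i < ndigits; i++) {
            u64 digit = 0;

            /* Digit i covers the i-th 8-byte group counted from the
             * end of the big-endian input; once the input is
             * exhausted, the remaining (most significant) digits
             * stay 0 rather than being read from stray memory.
             */
            for (b = 0; b < 8 && i * 8 + b < nbytes; b++)
                digit |= (u64)in[nbytes - 1 - (i * 8 + b)] << (8 * b);
            out[i] = digit;
        }
    }

Leading input bytes beyond ndigits * 8 are simply never indexed, which
matches the 'ndigits' limit described above.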
Fixes: d67c96fb97 ("crypto: ecdsa - Convert byte arrays with key coordinates to digits")
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than passing in flags, an error pointer, and whether this is a
kernel invocation or not, add a struct proto_accept_arg as the argument.
This struct then holds all of these arguments, and prepares accept for
being able to pass back more information.
No functional changes in this patch.
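For reference, a sketch of the argument bundle this introduces (field
names inferred from this description and the related "net: pass back
whether socket was empty post accept" patch; treat them as
illustrative):

    struct proto_accept_arg {
        int flags;     /* what used to be the flags argument */
        int err;       /* replaces the separate error pointer */
        int is_empty;  /* output: more pending connections? (later patch) */
        bool kern;     /* was this a kernel-internal invocation? */
    };

Bundling the inputs in one struct is what lets accept pass back more
information (such as is_empty) without touching every callee signature
again.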
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge tag 'v6.10-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Remove crypto stats interface
Algorithms:
- Add faster AES-XTS on modern x86_64 CPUs
- Forbid curves with order less than 224 bits in ecc (FIPS 186-5)
- Add ECDSA NIST P521
Drivers:
- Expose otp zone in atmel
- Add dh fallback for primes > 4K in qat
- Add interface for live migration in qat
- Use dma for aes requests in starfive
- Add full DMA support for stm32mpx in stm32
- Add Tegra Security Engine driver
Others:
- Introduce scope-based x509_certificate allocation"
* tag 'v6.10-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (123 commits)
crypto: atmel-sha204a - provide the otp content
crypto: atmel-sha204a - add reading from otp zone
crypto: atmel-i2c - rename read function
crypto: atmel-i2c - add missing arg description
crypto: iaa - Use kmemdup() instead of kzalloc() and memcpy()
crypto: sahara - use 'time_left' variable with wait_for_completion_timeout()
crypto: api - use 'time_left' variable with wait_for_completion_killable_timeout()
crypto: caam - i.MX8ULP donot have CAAM page0 access
crypto: caam - init-clk based on caam-page0-access
crypto: starfive - Use fallback for unaligned dma access
crypto: starfive - Do not free stack buffer
crypto: starfive - Skip unneeded fallback allocation
crypto: starfive - Skip dma setup for zeroed message
crypto: hisilicon/sec2 - fix for register offset
crypto: hisilicon/debugfs - mask the unnecessary info from the dump
crypto: qat - specify firmware files for 402xx
crypto: x86/aes-gcm - simplify GCM hash subkey derivation
crypto: x86/aes-gcm - delete unused GCM assembly code
crypto: x86/aes-xts - simplify loop in xts_crypt_slowpath()
hwrng: stm32 - repair clock handling
...
Implement AES in CFB mode using the existing, mostly constant-time
generic AES library implementation. This will be used by the TPM code
to encrypt communications with TPM hardware, which is often a discrete
component connected using sniffable wires or traces.
While a CFB template does exist, using a skcipher is a major pain for
non-performance critical synchronous crypto where the algorithm is known
at compile time and the data is in contiguous buffers with valid kernel
virtual addresses.
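As an illustration, CFB encryption on top of the AES library reduces to
a short loop. This is a hedged sketch using the existing aes_encrypt()
and crypto_xor_cpy() helpers, not necessarily the exact code added
here:

    static void aescfb_encrypt_sketch(const struct crypto_aes_ctx *ctx,
                                      u8 *dst, const u8 *src, int len,
                                      const u8 iv[AES_BLOCK_SIZE])
    {
        u8 ks[AES_BLOCK_SIZE];
        const u8 *fb = iv;                   /* feedback block */

        while (len > 0) {
            int n = min(len, AES_BLOCK_SIZE);

            aes_encrypt(ctx, ks, fb);        /* keystream = E_k(feedback) */
            crypto_xor_cpy(dst, src, ks, n); /* C_i = P_i ^ keystream */
            fb = dst;                        /* ciphertext feeds back */
            src += n;
            dst += n;
            len -= n;
        }
        memzero_explicit(ks, sizeof(ks));    /* don't leak keystream */
    }

Everything runs synchronously on flat kernel pointers, which is exactly
what the TPM code wants.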
Tested-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Link: https://lore.kernel.org/all/20230216201410.15010-1-James.Bottomley@HansenPartnership.com/
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Tested-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
The private key in ctx->private_key is currently initialized in reverse
byte order in ecdh_set_secret, and whenever the key is needed in proper
byte order a variable priv is introduced and the bytes from
ctx->private_key are copied into priv while being byte-swapped
(ecc_swap_digits). To get rid of the unnecessary byte swapping, initialize
ctx->private_key in proper byte order and clean up all functions that
previously used priv or were called with ctx->private_key:
- ecc_gen_privkey: Directly initialize the passed ctx->private_key with
random bytes filling all the digits of the private key. Get rid of the
priv variable. This function only has ecdh_set_secret as a caller to
create NIST P192/256/384 private keys.
- crypto_ecdh_shared_secret: Called only from ecdh_compute_value with
ctx->private_key. Get rid of the priv variable and work with the passed
private_key directly.
- ecc_make_pub_key: Called only from ecdh_compute_value with
ctx->private_key. Get rid of the priv variable and work with the passed
private_key directly.
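The shape of the cleanup, as a simplified before/after sketch (using
the existing ecc_swap_digits() and ecc_make_pub_key() helpers; the ctx
field names are assumed):

    /* Before: key stored reversed, byte-swapped on every use */
    u64 priv[ECC_MAX_DIGITS];

    ecc_swap_digits(ctx->private_key, priv, ctx->ndigits);
    ret = ecc_make_pub_key(ctx->curve_id, ctx->ndigits, priv, public_key);

    /* After: key stored in proper digit order, used directly */
    ret = ecc_make_pub_key(ctx->curve_id, ctx->ndigits,
                           ctx->private_key, public_key);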
Cc: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The main goal of the memory allocation profiling patchset is to provide
accounting that is cheap enough to run in production. To achieve that we
inject counters using codetags at the allocation call sites to account
every time an allocation is made. This injection allows us to perform accounting
efficiently because injected counters are immediately available as opposed
to the alternative methods, such as using _RET_IP_, which would require
counter lookup and appropriate locking that makes accounting much more
expensive. This method requires all allocation functions to inject
separate counters at their call sites so that their callers can be
individually accounted. Counter injection is implemented by allocation
hooks which should wrap all allocation functions.
Inlined functions which perform allocations but do not use allocation
hooks are directly charged for the allocations they perform. In most
cases these functions are just specialized allocation wrappers used from
multiple places to allocate objects of a specific type. It would be more
useful to do the accounting at their call sites instead. Instrument these
helpers to do accounting at the call site. Simple inlined allocation
wrappers are converted directly into macros. More complex allocators or
allocators with documentation are converted into _noprof versions and
allocation hooks are added. This allows memory allocation profiling
mechanism to charge allocations to the callers of these functions.
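As a hedged illustration of the two conversion styles (the wrapper name
alloc_foo is hypothetical; the kmemdup_noprof/alloc_hooks pair follows
the pattern described above):

    /* Hypothetical simple inlined wrapper: converted straight into a
     * macro, so the hook inside kmalloc() records the real call site.
     */
    #define alloc_foo(size)    kmalloc(size, GFP_KERNEL)

    /* More complex allocator: renamed to a _noprof variant, with an
     * accounting hook wrapped around it at the call site.
     */
    void *kmemdup_noprof(const void *src, size_t len, gfp_t gfp);
    #define kmemdup(...)    alloc_hooks(kmemdup_noprof(__VA_ARGS__))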
Link: https://lkml.kernel.org/r/20240415020731.1152108-1-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Jan Kara <jack@suse.cz> [jbd2]
Cc: Anna Schumaker <anna@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jakub Sitnicki <jakub@cloudflare.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Add the parameters for the NIST P521 curve and define a new curve ID
for it. Make the curve available in ecc_get_curve.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement vli_mmod_fast_521 following the description for how to calculate
the modulus for NIST P521 in the NIST publication "Recommendations for
Discrete Logarithm-Based Cryptography: Elliptic Curve Domain Parameters"
section G.1.4.
NIST P521 requires 9 64-bit digits, so increase ECC_MAX_DIGITS so that
the vli digit array provides enough elements to fit the larger integers
required by this curve.
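The reduction itself is cheap because the P521 prime is 2^521 - 1, so
x mod p == (x & (2^521 - 1)) + (x >> 521), reduced once more if the sum
exceeds the prime. A sketch in terms of the existing vli helpers (not
the exact kernel code):

    static void vli_mmod_fast_521_sketch(u64 *result, const u64 *product,
                                         const u64 *curve_prime, u64 *tmp)
    {
        unsigned int i;

        vli_set(result, product, 9);    /* low 9 digits of the product */
        result[8] &= 0x1ff;             /* keep only the low 521 bits */

        for (i = 0; i < 9; i++)         /* tmp = product >> 521 */
            tmp[i] = (product[8 + i] >> 9) | (product[9 + i] << 55);
        tmp[8] &= 0x1ff;

        /* result = (low + high) mod p, with the final conditional
         * subtraction handled inside vli_mod_add()
         */
        vli_mod_add(result, result, tmp, curve_prime, 9);
    }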
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the number of bits a curve has to the ecc_curve definition so that
the number of bytes a curve requires for its coordinates can be derived
from it. It also allows one to identify a curve by its particular size. Set the
number of bits on all curve definitions.
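For example, the number of bytes needed for a coordinate then follows
directly from the new field (sketch; 'nbits' as the field name is taken
from the description above):

    /* DIV_ROUND_UP(521, 8) == 66 bytes for P521, 32 for P256, ... */
    unsigned int nbytes = DIV_ROUND_UP(curve->nbits, 8);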
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For NIST P192/256/384 the public key's x and y parameters could be copied
directly from a given array since both parameters fill all 'ndigits'
digits (a 'digit' is a u64). To support NIST P521, the key parameters
need leading zeros prepended to the most significant digit, since only
2 bytes of that digit are provided.
Therefore, implement ecc_digits_from_bytes to convert a byte array into an
array of digits and use this function in ecdsa_set_pub_key where an input
byte array needs to be converted into digits.
Suggested-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.
Covering each of these points in detail:
1. Feature is not being used
Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.
It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.
Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it was able to be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.
There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.
2. CONFIG_CRYPTO_STATS significantly reduces performance
Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with a large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.
It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.
It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.
3. CONFIG_CRYPTO_STATS is a large maintenance burden
There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Avoid unnecessary copying in scomp for trivial SG lists
Algorithms:
- Optimise NEON CCM implementation on ARM64
Drivers:
- Add queue stop/query debugfs support in hisilicon/qm
- Intel qat updates and cleanups"
* tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (79 commits)
Revert "crypto: remove CONFIG_CRYPTO_STATS"
crypto: scomp - remove memcpy if sg_nents is 1 and pages are lowmem
crypto: tcrypt - add ffdhe2048(dh) test
crypto: iaa - fix the missing CRYPTO_ALG_ASYNC in cra_flags
crypto: hisilicon/zip - fix the missing CRYPTO_ALG_ASYNC in cra_flags
hwrng: hisi - use dev_err_probe
MAINTAINERS: Remove T Ambarus from few mchp entries
crypto: iaa - Fix comp/decomp delay statistics
crypto: iaa - Fix async_disable descriptor leak
dt-bindings: rng: atmel,at91-trng: add sam9x7 TRNG
dt-bindings: crypto: add sam9x7 in Atmel TDES
dt-bindings: crypto: add sam9x7 in Atmel SHA
dt-bindings: crypto: add sam9x7 in Atmel AES
crypto: remove CONFIG_CRYPTO_STATS
crypto: dh - Make public key test FIPS-only
crypto: rockchip - fix to check return value
crypto: jitter - fix CRYPTO_JITTERENTROPY help text
crypto: qat - make ring to service map common for QAT GEN4
crypto: qat - fix ring to service map for dcc in 420xx
crypto: qat - fix ring to service map for dcc in 4xxx
...
acomp's users might want to know if acomp is really asynchronous so they
can optimize themselves. One typical user that can benefit from an exposed
async stat is zswap.
In zswap, zsmalloc is the most commonly used allocator (and perhaps the
only one). For zsmalloc, we cannot sleep while we map the compressed
memory, so we copy it to a temporary buffer. Knowing that the algorithm
won't sleep can help zswap avoid the need for such a buffer. This shows a
noticeable improvement in the load/store latency of zswap.
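A sketch of how zswap can use the exposed stat (the helper name
acomp_is_async and the ctx field names are assumptions based on this
description):

    bool sleepable = acomp_is_async(acomp_ctx->acomp);

    if (sleepable) {
        /* Completion may sleep, so we must not stay inside the
         * zsmalloc mapping: bounce through a preallocated buffer.
         */
        memcpy(acomp_ctx->buffer, src, PAGE_SIZE);
        src = acomp_ctx->buffer;
    }
    /* else: compress straight from the mapped object */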
Link: https://lkml.kernel.org/r/20240222081135.173040-2-21cnbao@gmail.com
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Chengming Zhou <zhouchengming@bytedance.com>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: Vitaly Wool <vitaly.wool@konsulko.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This reverts commit 2beb81fbf0.
While removing CONFIG_CRYPTO_STATS is a worthy goal, this also
removed unrelated infrastructure such as crypto_comp_alg_common.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.
Covering each of these points in detail:
1. Feature is not being used
Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.
It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.
Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it was able to be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.
There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.
2. CONFIG_CRYPTO_STATS significantly reduces performance
Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with a large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.
It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.
It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.
3. CONFIG_CRYPTO_STATS is a large maintenance burden
There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.
Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since crypto_hash_alg_has_setkey() is only called from ahash.c itself,
make it a static function.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch includes linux/errno.h to address the issue of 'EINVAL' being
undeclared.
Signed-off-by: Clay Chang <clayc@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 31865c4c4d ("crypto: skcipher - Add lskcipher") moved some
fields from 'struct skcipher_alg' into SKCIPHER_ALG_COMMON but didn't
remove the corresponding kerneldoc members, which results in these
warnings when running 'make htmldocs':
./include/crypto/skcipher.h:182: warning: Excess struct member 'min_keysize' description in 'skcipher_alg'
./include/crypto/skcipher.h:182: warning: Excess struct member 'max_keysize' description in 'skcipher_alg'
./include/crypto/skcipher.h:182: warning: Excess struct member 'ivsize' description in 'skcipher_alg'
./include/crypto/skcipher.h:182: warning: Excess struct member 'chunksize' description in 'skcipher_alg'
./include/crypto/skcipher.h:182: warning: Excess struct member 'stat' description in 'skcipher_alg'
./include/crypto/skcipher.h:182: warning: Excess struct member 'base' description in 'skcipher_alg'
SKCIPHER_ALG_COMMON already has the documentation for all these fields.
Fixes: 31865c4c4d ("crypto: skcipher - Add lskcipher")
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 42808e5dc6 ("crypto: hash - Count error stats differently")
moved some fields from 'struct shash_alg' into HASH_ALG_COMMON but
didn't remove the corresponding kerneldoc members, which results in
these warnings when running 'make htmldocs':
./include/crypto/hash.h:248: warning: Excess struct member 'digestsize' description in 'shash_alg'
./include/crypto/hash.h:248: warning: Excess struct member 'statesize' description in 'shash_alg'
./include/crypto/hash.h:248: warning: Excess struct member 'stat' description in 'shash_alg'
./include/crypto/hash.h:248: warning: Excess struct member 'base' description in 'shash_alg'
HASH_ALG_COMMON already has the documentation for all these fields.
Fixes: 42808e5dc6 ("crypto: hash - Count error stats differently")
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Tested-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike algif_aead which is always issued in one go (thus limiting
the maximum size of the request), algif_skcipher has always allowed
unlimited input data by cutting it up as necessary and feeding
the fragments to the underlying algorithm one at a time.
However, because of deficiencies in the API, this has been broken
for most stream ciphers such as arc4 or chacha. This is because
they have an internal state in addition to the IV that must be
preserved in order to continue processing.
Fix this by using the new skcipher state API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds code to the skcipher/lskcipher API to make use
of the internal state if present. In particular, the skcipher
wrapper around an lskcipher will allocate a buffer for the IV/state
and feed that to the underlying lskcipher algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike chaining modes such as CBC, stream ciphers other than CTR
usually hold an internal state that must be preserved if the
operation is to be done piecemeal. This has not been represented
in the API, resulting in the inability to split up stream cipher
operations.
This patch adds the basic representation of an internal state to
skcipher and lskcipher. In the interest of backwards compatibility,
the default has been set such that existing users are assumed to
be operating in one go as opposed to piecemeal.
With the new API, each lskcipher/skcipher algorithm has a new
attribute called statesize. For skcipher, this is the size of
the buffer that can be exported or imported similar to ahash.
For lskcipher, instead of providing a buffer of ivsize, the user
now has to provide a buffer of ivsize + statesize.
Each skcipher operation is assumed to be final, as before, but this
may be overridden with a request flag. When the override occurs, the
user may then export the partial state and reimport it later.
For lskcipher operations this is reversed: no operation is final,
and the state will be exported unless the FINAL bit is set. However,
the CONT bit still has to be set for the state to be used.
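A sketch of piecemeal skcipher processing under the new rules (flag and
helper names as introduced here; hedged, details may differ):

    /* First chunk: mark the request as not final so the internal
     * state remains exportable.
     */
    req->base.flags |= CRYPTO_SKCIPHER_REQ_NOTFINAL;
    err = crypto_skcipher_encrypt(req);

    /* Save the partial state (statesize bytes) ... */
    err = crypto_skcipher_export(req, state);

    /* ... and restore it before processing the final chunk */
    err = crypto_skcipher_import(req, state);
    req->base.flags |= CRYPTO_SKCIPHER_REQ_CONT;
    err = crypto_skcipher_encrypt(req);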
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having multiple in-flight AIO requests results in unpredictable
output because they all share the same IV. Fix this by only allowing
one request at a time.
Fixes: 83094e5e9e ("crypto: af_alg - add async support to algif_aead")
Fixes: a596999b7d ("crypto: algif - change algif_skcipher to be asynchronous")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Register FIPS 202 SHA-3 hashes in hash info for IMA and other
users. Sizes 256 and up, as 224 is too weak for any practical
purpose.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The "ahash" API provides access to both CPU-based and hardware offload-
based implementations of hash algorithms. Typically the former are
implemented as "shash" algorithms under the hood, while the latter are
implemented as "ahash" algorithms. The "ahash" API provides access to
both. Various kernel subsystems use the ahash API because they want to
support hashing hardware offload without using a separate API for it.
Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.
This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.
It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash.
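The shape of the fast path, as a hedged sketch (the real patch threads
this through every ahash entry point; the field and helper names here
are illustrative):

    int crypto_ahash_digest(struct ahash_request *req)
    {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);

        if (likely(tfm->using_shash))   /* CPU shash wrapped as ahash */
            return shash_ahash_digest(req, ahash_request_ctx(req));

        /* genuine async implementation: one unavoidable indirect call */
        return crypto_ahash_alg(tfm)->digest(req);
    }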
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_ahash_alignmask() no longer has any callers, and it always
returns 0 now that neither ahash nor shash algorithms support nonzero
alignmasks anymore. Therefore, remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.
This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than it is for e.g. the
"skcipher" alignmask. Second, the key and result buffers are
given as virtual addresses and cannot (in general) be DMA'ed into, so
drivers end up having to copy to/from them in software anyway. As a
result it's easy to use memcpy() or the unaligned access helpers.
The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.
In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.
This follows the same change that was made to shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_shash_ctx_aligned() is no longer used, and it is useless now that
shash algorithms don't support nonzero alignmasks, so remove it.
Also remove crypto_tfm_ctx_aligned() which was only called by
crypto_shash_ctx_aligned(). It's unlikely to be useful again, since it
seems inappropriate to use cra_alignmask to represent alignment for the
tfm context when it already means alignment for inputs/outputs.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_shash_alignmask() no longer has any callers, and it always
returns 0 now that the shash algorithm type no longer supports nonzero
alignmasks. Therefore, remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Most shash algorithms don't have custom ->import and ->export functions,
resulting in the memcpy() based default being used. Yet,
crypto_shash_import() and crypto_shash_export() still make an indirect
call, which is expensive. Therefore, change how the default import and
export are called to make it so that crypto_shash_import() and
crypto_shash_export() don't do an indirect call in this case.
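A hedged sketch of the approach: let the generic entry point perform
the memcpy() default itself instead of bouncing through a default
->export callback:

    int crypto_shash_export(struct shash_desc *desc, void *out)
    {
        struct crypto_shash *tfm = desc->tfm;
        struct shash_alg *shash = crypto_shash_alg(tfm);

        if (shash->export)      /* custom hook: keep the indirect call */
            return shash->export(desc, out);

        /* default: copy the descriptor context directly */
        memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
        return 0;
    }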
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawn users can no longer assume the spawn is of type
struct skcipher_alg, these helpers are no longer used. Remove them.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As skcipher spawns can be of two different types (skcipher vs.
lskcipher), only the common fields can be accessed. Add a helper
to return the common algorithm object.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Correct typo of "destination".
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Correct typos of "destination".
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All callers ignore the return value, so simplify by not providing one.
Note that crypto_engine_exit() is typically called in a device driver's
remove path (or the error path in probe), where errors cannot be handled
anyhow.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new API type lskcipher designed for taking straight kernel
pointers instead of SG lists. Its relationship to skcipher will
be analogous to that between shash and ahash.
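Hedged sketch of the key difference in the operation signature (the
exact prototype may differ):

    /* skcipher: scatterlist based, potentially asynchronous */
    int crypto_skcipher_encrypt(struct skcipher_request *req);

    /* lskcipher: straight kernel pointers, synchronous -- analogous
     * to shash next to ahash
     */
    int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
                   u8 *dst, unsigned len, u8 *iv, bool final);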
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the helper crypto_has_aead. This is meant to replace the
existing use of crypto_has_alg to locate AEAD algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.6-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Move crypto engine callback from tfm ctx into algorithm object
- Fix atomic sleep bug in crypto_destroy_instance
- Move lib/mpi into lib/crypto
Algorithms:
- Add chacha20 and poly1305 implementation for powerpc p10
Drivers:
- Add AES skcipher and aead support to starfive
- Add Dynamic Boost Control support to ccp
- Add support for STM32P13 platform to stm32"
* tag 'v6.6-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (149 commits)
Revert "dt-bindings: crypto: qcom,prng: Add SM8450"
crypto: chelsio - Remove unused declarations
X.509: if signature is unsupported skip validation
crypto: qat - fix crypto capability detection for 4xxx
crypto: drivers - Explicitly include correct DT includes
crypto: engine - Remove crypto_engine_ctx
crypto: zynqmp - Use new crypto_engine_op interface
crypto: virtio - Use new crypto_engine_op interface
crypto: stm32 - Use new crypto_engine_op interface
crypto: jh7110 - Use new crypto_engine_op interface
crypto: rk3288 - Use new crypto_engine_op interface
crypto: omap - Use new crypto_engine_op interface
crypto: keembay - Use new crypto_engine_op interface
crypto: sl3516 - Use new crypto_engine_op interface
crypto: caam - Use new crypto_engine_op interface
crypto: aspeed - Remove non-standard sha512 algorithms
crypto: aspeed - Use new crypto_engine_op interface
crypto: amlogic - Use new crypto_engine_op interface
crypto: sun8i-ss - Use new crypto_engine_op interface
crypto: sun8i-ce - Use new crypto_engine_op interface
...
Rather than having the callback in the request, move it into the
crypto_alg object. This avoids having crypto_engine look into the
request context, which is private to the driver.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Most drivers should not access the internal details of struct
crypto_engine. Move it into the internal header file.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Create crypto/internal/engine.h to house details that should not
be used by drivers. It is empty for the time being.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The engine file does not need the actual crypto type definitions
so move those header inclusions to where they are actually used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The callbacks for prepare and unprepare request in crypto_engine
are superfluous. They can be done directly from do_one_request.
Move the code into do_one_request and remove the unused callbacks.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new link restriction. Restrict the addition of keys in a keyring
based on the key having digitalSignature usage set. Additionally, verify
the new certificate against the ones in the system keyrings. Add two
additional functions to use the new restriction within either the builtin
or secondary keyrings.
[jarkko@kernel.org: Fix checkpatch.pl --strict issues]
Signed-off-by: Eric Snowberg <eric.snowberg@oracle.com>
Reviewed-and-tested-by: Mimi Zohar <zohar@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>