__arch_xprod_64(): make __always_inline when optimizing for performance

Recent gcc versions no longer systematically inline __arch_xprod_64(),
and that has performance implications. Give the compiler the freedom to
decide only when optimizing for size.
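
As a rough illustration of why the attribute matters, here is a minimal
out-of-tree sketch of the same pattern. OPTIMIZE_FOR_PERFORMANCE and
xprod_64_ref() are made-up stand-ins for CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE
and __arch_xprod_64(), and the body only models the documented semantics
(retval = ((bias ? m : 0) + m * n) >> 64) with a 128-bit temporary rather
than the real open-coded cross products:

```
#include <stdint.h>
#include <stdbool.h>

/*
 * Stand-in for CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE: force inlining so a
 * constant divisor keeps propagating into the helper; otherwise leave
 * the decision to the compiler to favor smaller code.
 */
#ifdef OPTIMIZE_FOR_PERFORMANCE
#define XPROD_INLINE	static inline __attribute__((__always_inline__))
#else
#define XPROD_INLINE	static inline
#endif

/*
 * Reference behaviour only (needs a compiler with unsigned __int128):
 * high 64 bits of the 128-bit product m * n, plus m when bias is set.
 */
XPROD_INLINE uint64_t xprod_64_ref(uint64_t m, uint64_t n, bool bias)
{
	unsigned __int128 p = (unsigned __int128)m * n + (bias ? m : 0);

	return (uint64_t)(p >> 64);
}
```

With the inlining forced, a call site with a constant n lets the compiler
fold most of the arithmetic away; with a plain inline hint, gcc 14 may emit
an out-of-line copy and that folding is lost.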

Here are some timing numbers from lib/math/test_div64.c:

Using __always_inline:

```
test_div64: Starting 64bit/32bit division and modulo test
test_div64: Completed 64bit/32bit division and modulo test, 0.048285584s elapsed
```

Without __always_inline:

```
test_div64: Starting 64bit/32bit division and modulo test
test_div64: Completed 64bit/32bit division and modulo test, 0.053023584s elapsed
```

Forcing a constant base through the non-constant base code path (sketched
further below):

```
test_div64: Starting 64bit/32bit division and modulo test
test_div64: Completed 64bit/32bit division and modulo test, 0.103263776s elapsed
```

It is worth noting that test_div64 already does half of its tests with
non-constant divisors, so the impact is greater than what those numbers
show. For what it is worth, those numbers were obtained using QEMU, with
gcc version 14.1.0.
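
To reproduce the last measurement, the divisor has to look non-constant to
the compiler. This is not the actual change made to lib/math/test_div64.c,
but one common way to get that effect is an empty asm that makes the value
opaque so __builtin_constant_p() stops reporting a constant (hide_const()
is a hypothetical helper name):

```
/*
 * Illustrative helper: the empty asm with a "+r" constraint defeats
 * constant propagation, so the constant-base fast path guarded by
 * __builtin_constant_p() is no longer taken.
 */
static inline uint32_t hide_const(uint32_t v)
{
	asm volatile("" : "+r" (v));
	return v;
}

/* e.g. use do_div(n, hide_const(1000)) in place of do_div(n, 1000) */
```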

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Author: Nicolas Pitre <npitre@baylibre.com>, 2024-10-03 17:16:16 -04:00
Committed by: Arnd Bergmann <arnd@arndb.de>
Parent: 06508533d5
Commit: d533cb2d2a
2 changed files with 12 additions and 2 deletions

```
@@ -52,7 +52,12 @@ static inline uint32_t __div64_32(uint64_t *n, uint32_t base)
 #else
 
-static inline uint64_t __arch_xprod_64(uint64_t m, uint64_t n, bool bias)
+#ifdef CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE
+static __always_inline
+#else
+static inline
+#endif
+uint64_t __arch_xprod_64(uint64_t m, uint64_t n, bool bias)
 {
 	unsigned long long res;
 	register unsigned int tmp asm("ip") = 0;
```

```
@@ -134,7 +134,12 @@
  * Hoping for compile-time optimization of conditional code.
  * Architectures may provide their own optimized assembly implementation.
  */
-static inline uint64_t __arch_xprod_64(const uint64_t m, uint64_t n, bool bias)
+#ifdef CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE
+static __always_inline
+#else
+static inline
+#endif
+uint64_t __arch_xprod_64(const uint64_t m, uint64_t n, bool bias)
 {
 	uint32_t m_lo = m;
 	uint32_t m_hi = m >> 32;
```