mul_u64_u64_div_u64: make it precise always

Patch series "mul_u64_u64_div_u64: new implementation", v3.

This provides an implementation for mul_u64_u64_div_u64() that always
produces exact results.


This patch (of 2):

Library facilities must always return exact results.  If a caller is
content with an approximation, it should perform that approximation
itself.

In this particular case, the comment in the code says "the algorithm
... below might lose some precision". Well, if you try it with e.g.:

	a = 18446462598732840960
	b = 18446462598732840960
	c = 18446462598732840961

then the produced answer is 0 whereas the exact answer should be
18446462598732840959.  This is _some_ precision lost indeed!
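
For reference, the exact quotient is easy to confirm from userspace
with native 128-bit arithmetic.  A standalone check (not part of this
patch; it assumes a compiler that provides __int128) could be:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t a = 18446462598732840960ULL;
		uint64_t b = 18446462598732840960ULL;
		uint64_t c = 18446462598732840961ULL;

		/* full 128-bit product, then exact truncating division */
		unsigned __int128 prod = (unsigned __int128)a * b;
		uint64_t q = (uint64_t)(prod / c);

		/* prints 18446462598732840959 */
		printf("%llu\n", (unsigned long long)q);
		return 0;
	}

(Since c = a + 1 here, the exact quotient is a - 1, which the old code
collapsed all the way down to 0.)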

Let's reimplement this function so it always produces the exact result
regardless of its inputs while preserving existing fast paths when
possible.

Uwe said:

: My personal interest is to get the calculations in pwm drivers right.
: This function is used in several drivers below drivers/pwm/.  With the
: errors in mul_u64_u64_div_u64(), pwm consumers might not get the
: settings they request.  I have to admit that I'm not aware of it
: breaking real use cases (because typically the periods used are too
: short to make the involved multiplications overflow), but I'm pretty
: sure I'm not aware of all usages, and it does break testing.
: 
: Another justification is commits like
: https://git.kernel.org/tip/77baa5bafcbe1b2a15ef9c37232c21279c95481c,
: where people start to work around the precision shortcomings of
: mul_u64_u64_div_u64().

Link: https://lkml.kernel.org/r/20240707190648.1982714-1-nico@fluxnic.net
Link: https://lkml.kernel.org/r/20240707190648.1982714-2-nico@fluxnic.net
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Tested-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com>
Reviewed-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com>
Tested-by: Biju Das <biju.das.jz@bp.renesas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
diff --git a/lib/math/div64.c b/lib/math/div64.c
--- a/lib/math/div64.c
+++ b/lib/math/div64.c
@@ -186,55 +186,77 @@ EXPORT_SYMBOL(iter_div_u64_rem);
 #ifndef mul_u64_u64_div_u64
 u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
 {
-	u64 res = 0, div, rem;
-	int shift;
-
-	/* can a * b overflow ? */
-	if (ilog2(a) + ilog2(b) > 62) {
-		/*
-		 * Note that the algorithm after the if block below might lose
-		 * some precision and the result is more exact for b > a. So
-		 * exchange a and b if a is bigger than b.
-		 *
-		 * For example with a = 43980465100800, b = 100000000, c = 1000000000
-		 * the below calculation doesn't modify b at all because div == 0
-		 * and then shift becomes 45 + 26 - 62 = 9 and so the result
-		 * becomes 4398035251080. However with a and b swapped the exact
-		 * result is calculated (i.e. 4398046510080).
-		 */
-		if (a > b)
-			swap(a, b);
-
-		/*
-		 * (b * a) / c is equal to
-		 *
-		 *      (b / c) * a +
-		 *      (b % c) * a / c
-		 *
-		 * if nothing overflows. Can the 1st multiplication
-		 * overflow? Yes, but we do not care: this can only
-		 * happen if the end result can't fit in u64 anyway.
-		 *
-		 * So the code below does
-		 *
-		 *      res = (b / c) * a;
-		 *      b = b % c;
-		 */
-		div = div64_u64_rem(b, c, &rem);
-		res = div * a;
-		b = rem;
-
-		shift = ilog2(a) + ilog2(b) - 62;
-		if (shift > 0) {
-			/* drop precision */
-			b >>= shift;
-			c >>= shift;
-			if (!c)
-				return res;
-		}
-	}
-
-	return res + div64_u64(a * b, c);
+	if (ilog2(a) + ilog2(b) <= 62)
+		return div64_u64(a * b, c);
+
+#if defined(__SIZEOF_INT128__)
+
+	/* native 64x64=128 bits multiplication */
+	u128 prod = (u128)a * b;
+	u64 n_lo = prod, n_hi = prod >> 64;
+
+#else
+
+	/* perform a 64x64=128 bits multiplication manually */
+	u32 a_lo = a, a_hi = a >> 32, b_lo = b, b_hi = b >> 32;
+	u64 x, y, z;
+
+	x = (u64)a_lo * b_lo;
+	y = (u64)a_lo * b_hi + (u32)(x >> 32);
+	z = (u64)a_hi * b_hi + (u32)(y >> 32);
+	y = (u64)a_hi * b_lo + (u32)y;
+	z += (u32)(y >> 32);
+	x = (y << 32) + (u32)x;
+
+	u64 n_lo = x, n_hi = z;
+
+#endif
+
+	int shift = __builtin_ctzll(c);
+
+	/* try reducing the fraction in case the dividend becomes <= 64 bits */
+	if ((n_hi >> shift) == 0) {
+		u64 n = (n_lo >> shift) | (n_hi << (64 - shift));
+
+		return div64_u64(n, c >> shift);
+		/*
+		 * The remainder value if needed would be:
+		 * res = div64_u64_rem(n, c >> shift, &rem);
+		 * rem = (rem << shift) + (n_lo - (n << shift));
+		 */
+	}
+
+	if (n_hi >= c) {
+		/* overflow: result is unrepresentable in a u64 */
+		return -1;
+	}
+
+	/* Do the full 128 by 64 bits division */
+
+	shift = __builtin_clzll(c);
+	c <<= shift;
+
+	int p = 64 + shift;
+	u64 res = 0;
+	bool carry;
+
+	do {
+		carry = n_hi >> 63;
+		shift = carry ? 1 : __builtin_clzll(n_hi);
+		if (p < shift)
+			break;
+		p -= shift;
+		n_hi <<= shift;
+		n_hi |= n_lo >> (64 - shift);
+		n_lo <<= shift;
+		if (carry || (n_hi >= c)) {
+			n_hi -= c;
+			res |= 1ULL << p;
+		}
+	} while (n_hi);
+	/* The remainder value if needed would be n_hi << p */
+
+	return res;
 }
 EXPORT_SYMBOL(mul_u64_u64_div_u64);
 #endif
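
As a sanity check of the portable path, the manual 64x64=128
multiplication introduced for the !__SIZEOF_INT128__ case can be fuzzed
from userspace against native 128-bit arithmetic.  A minimal harness (a
sketch, not part of the patch; mul_64x64_128() is simply a userspace
copy of the code above) might look like:

	#include <stdio.h>
	#include <stdint.h>
	#include <stdlib.h>

	/* userspace copy of the patch's portable 64x64=128 multiplication */
	static void mul_64x64_128(uint64_t a, uint64_t b,
				  uint64_t *hi, uint64_t *lo)
	{
		uint32_t a_lo = a, a_hi = a >> 32, b_lo = b, b_hi = b >> 32;
		uint64_t x, y, z;

		x = (uint64_t)a_lo * b_lo;
		y = (uint64_t)a_lo * b_hi + (uint32_t)(x >> 32);
		z = (uint64_t)a_hi * b_hi + (uint32_t)(y >> 32);
		y = (uint64_t)a_hi * b_lo + (uint32_t)y;
		z += (uint32_t)(y >> 32);
		x = (y << 32) + (uint32_t)x;

		*hi = z;
		*lo = x;
	}

	static uint64_t rand64(void)
	{
		/* stitch 64 bits out of rand(); quality hardly matters here */
		return ((uint64_t)rand() << 62) ^ ((uint64_t)rand() << 31) ^
		       (uint64_t)rand();
	}

	int main(void)
	{
		for (long i = 0; i < 10000000; i++) {
			uint64_t a = rand64(), b = rand64(), hi, lo;
			unsigned __int128 ref = (unsigned __int128)a * b;

			mul_64x64_128(a, b, &hi, &lo);
			if (hi != (uint64_t)(ref >> 64) || lo != (uint64_t)ref) {
				printf("mismatch for a=%llu b=%llu\n",
				       (unsigned long long)a,
				       (unsigned long long)b);
				return 1;
			}
		}
		printf("no mismatches\n");
		return 0;
	}

Exercising the full mul_u64_u64_div_u64() the same way only needs
__builtin_ctzll()/__builtin_clzll() in addition, which GCC and Clang
provide in userspace as well.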