linux/kernel/time/timekeeping_internal.h
Arnd Bergmann 6909e29fde kdb: use __ktime_get_real_seconds instead of __current_kernel_time
kdb is the only user of the __current_kernel_time() interface, which is
not y2038 safe and should be removed at some point.

The kdb code also goes to great lengths to print the time in a
human-readable format from 'struct timespec', again using a non-y2038-safe
re-implementation of the generic time_to_tm() code.
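
For context, the generic helper that the hand-rolled code duplicates is
time64_to_tm(); a minimal sketch of it in isolation (the wrapper
function name here is made up purely for illustration):

#include <linux/time.h>

/* Break a 64-bit seconds value into calendar fields; works past 2038. */
static void example_secs_to_date(time64_t secs, struct tm *tm)
{
	/* second argument is a UTC offset in seconds; 0 means plain UTC */
	time64_to_tm(secs, 0, tm);
	/* tm->tm_year counts years since 1900, tm->tm_mon months from 0 */
}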

Using __current_kernel_time() here is necessary since the regular
accessors, which take a sequence lock, might hang if kdb is entered in
the middle of an xtime update. However, this is safe in this particular
case, since kdb is only interested in the tv_sec field, which is updated
atomically.
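
To make the hang scenario concrete, the seqlock-protected readers boil
down to a retry loop roughly like the one below (paraphrased and
simplified from the timekeeping core, so it only makes sense inside
kernel/time/timekeeping.c). If kdb stops the machine while the updater
holds the write side, the sequence count stays odd and the loop never
terminates:

/* Paraphrase of a seqlock-protected reader such as ktime_get_real_ts64(). */
static void example_read_walltime(struct timespec64 *ts)
{
	struct timekeeper *tk = &tk_core.timekeeper;
	unsigned int seq;

	do {
		/* waits here as long as a writer holds the sequence count odd */
		seq = read_seqcount_begin(&tk_core.seq);
		ts->tv_sec  = tk->xtime_sec;
		ts->tv_nsec = timekeeping_get_ns(&tk->tkr_mono);	/* simplified */
	} while (read_seqcount_retry(&tk_core.seq, seq));
}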

In order to make this y2038-safe, I'm converting the code to the generic
time64_to_tm helper, but that introduces the problem that we have no
interface like __current_kernel_time() that provides a 64-bit timestamp
in a lockless, safe and architecture-independent way. I have multiple
ideas for how to solve that:

- __ktime_get_real_seconds() is lockless, but can return
  incorrect results on 32-bit architectures in the special case that
  we are in the process of changing the time across the epoch, either
  during the timer tick that overflows the seconds in 2038, or while
  calling settimeofday.

- ktime_get_real_fast_ns() would work in this context, but does
  require a call into the clocksource driver to return a high-resolution
  timestamp. This may have undesired side-effects in the debugger,
  since we want to limit the interactions with the rest of the kernel.

- Adding a ktime_get_real_fast_seconds() based on tk_fast_mono
  plus tkr->base_real without the tk_clock_read() delta (a rough sketch
  follows this list). Not sure about the value of adding yet another
  interface here.

- Changing the existing ktime_get_real_seconds() to use
  tk_fast_mono on 32-bit architectures rather than xtime_sec.  I think
  this could work, but am not entirely sure if this is an improvement.
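
For the third option, a purely hypothetical sketch of what such a helper
could look like (it is not part of this patch; it follows the latch
pattern of the existing fast-timekeeper readers and returns base_real
without touching the clocksource, so it too only makes sense inside
kernel/time/timekeeping.c):

#include <linux/math64.h>

/* Hypothetical only: coarse CLOCK_REALTIME seconds from the fast timekeeper. */
static time64_t ktime_get_real_fast_seconds(void)
{
	struct tk_fast *tkf = &tk_fast_mono;
	struct tk_read_base *tkr;
	unsigned int seq;
	u64 ns;

	do {
		seq = raw_read_seqcount_latch(&tkf->seq);
		tkr = tkf->base + (seq & 0x01);
		ns  = ktime_to_ns(tkr->base_real);	/* no tk_clock_read() delta */
	} while (read_seqcount_retry(&tkf->seq, seq));

	return div_u64(ns, NSEC_PER_SEC);
}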

I picked the first of those for simplicity here. It's technically
not correct but probably good enough as the time is only used for the
debugging output and the race will likely never be hit in practice.
Another downside is having to move the declaration into a public header
file.
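
Concretely, picking the first option means the kdb side ends up looking
roughly like this (a sketch, not the literal diff; the function name and
format string are illustrative), with the __ktime_get_real_seconds()
declaration moving out of this internal header into a public one
(assumed to be include/linux/timekeeping.h in the sketch's include):

#include <linux/kdb.h>
#include <linux/time.h>
#include <linux/timekeeping.h>	/* assumed new home of __ktime_get_real_seconds() */

/* Sketch of the converted kdb summary output; format string is illustrative. */
static void example_kdb_print_date(void)
{
	time64_t now = __ktime_get_real_seconds();	/* lockless 64-bit seconds */
	struct tm tm;

	time64_to_tm(now, 0, &tm);
	kdb_printf("date       %04ld-%02d-%02d %02d:%02d:%02d\n",
		   1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
		   tm.tm_hour, tm.tm_min, tm.tm_sec);
}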

Let me know if anyone has a different preference.

Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Link: https://patchwork.kernel.org/patch/9775309/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
2018-01-25 08:40:18 -06:00

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _TIMEKEEPING_INTERNAL_H
#define _TIMEKEEPING_INTERNAL_H
/*
 * timekeeping debug functions
 */
#include <linux/clocksource.h>
#include <linux/time.h>

#ifdef CONFIG_DEBUG_FS
extern void tk_debug_account_sleep_time(struct timespec64 *t);
#else
#define tk_debug_account_sleep_time(x)
#endif

#ifdef CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE
static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
{
	u64 ret = (now - last) & mask;

	/*
	 * Prevent time going backwards by checking the MSB of mask in
	 * the result. If set, return 0.
	 */
	return ret & ~(mask >> 1) ? 0 : ret;
}
#else
static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
{
	return (now - last) & mask;
}
#endif

#endif /* _TIMEKEEPING_INTERNAL_H */
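
For reference, the consumer of clocksource_delta() in the timekeeping
core boils down to something like this (simplified from
timekeeping_get_ns(); again only meaningful inside
kernel/time/timekeeping.c):

/* Simplified sketch of how a masked cycle delta becomes nanoseconds. */
static u64 example_get_ns(const struct tk_read_base *tkr)
{
	u64 cycles = tk_clock_read(tkr);
	u64 delta  = clocksource_delta(cycles, tkr->cycle_last, tkr->mask);

	/* nsec = (delta * mult + pending fractional nsec) >> shift */
	return (delta * tkr->mult + tkr->xtime_nsec) >> tkr->shift;
}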