bpf: Remove unnecessary ring buffer size check

The theoretical maximum size of a ring buffer is about 64GB, but the
size of the ring buffer is specified through max_entries in bpf_attr,
whose maximum value is (4GB - 1), so an overflow is not possible.

So just remove the unnecessary size check in ringbuf_map_alloc(), but
keep the comment for possible future extension.
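
For reference, a minimal user-space sketch of the arithmetic (not part
of the patch; the page size and RINGBUF_PGOFF values below are
assumptions, 4KB pages and a one-page non-mmap()'able header):

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical stand-ins for the kernel constants. */
  #define ASSUMED_PAGE_SIZE 4096ULL  /* assuming 4KB pages */
  #define POS_PAGES         2ULL     /* consumer + producer page */
  #define ASSUMED_PGOFF     1ULL     /* depends on struct bpf_ringbuf layout */

  int main(void)
  {
  	/* same formula the removed RINGBUF_MAX_DATA_SZ macro used */
  	uint64_t limit = ((1ULL << 24) - POS_PAGES - ASSUMED_PGOFF) * ASSUMED_PAGE_SIZE;
  	uint64_t u32_cap = UINT32_MAX;  /* max_entries in bpf_attr is a __u32 */

  	printf("ring buffer size limit: %llu bytes\n", (unsigned long long)limit);
  	printf("largest max_entries   : %llu bytes\n", (unsigned long long)u32_cap);
  	printf("check can trigger     : %s\n", u32_cap > limit ? "yes" : "no");
  	return 0;
  }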

Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/bpf/9c636a63-1f3d-442d-9223-96c2dccb9469@moroto.mountain
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20230704074014.216616-1-houtao@huaweicloud.com

--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c

@@ -23,15 +23,6 @@
 
 #define RINGBUF_MAX_RECORD_SZ (UINT_MAX/4)
 
-/* Maximum size of ring buffer area is limited by 32-bit page offset within
- * record header, counted in pages. Reserve 8 bits for extensibility, and take
- * into account few extra pages for consumer/producer pages and
- * non-mmap()'able parts. This gives 64GB limit, which seems plenty for single
- * ring buffer.
- */
-#define RINGBUF_MAX_DATA_SZ \
-	(((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE)
-
 struct bpf_ringbuf {
 	wait_queue_head_t waitq;
 	struct irq_work work;
@@ -161,6 +152,17 @@ static void bpf_ringbuf_notify(struct irq_work *work)
 	wake_up_all(&rb->waitq);
 }
 
+/* Maximum size of ring buffer area is limited by 32-bit page offset within
+ * record header, counted in pages. Reserve 8 bits for extensibility, and
+ * take into account few extra pages for consumer/producer pages and
+ * non-mmap()'able parts, the current maximum size would be:
+ *
+ *     (((1ULL << 24) - RINGBUF_POS_PAGES - RINGBUF_PGOFF) * PAGE_SIZE)
+ *
+ * This gives 64GB limit, which seems plenty for single ring buffer. Now
+ * considering that the maximum value of data_sz is (4GB - 1), there
+ * will be no overflow, so just note the size limit in the comments.
+ */
 static struct bpf_ringbuf *bpf_ringbuf_alloc(size_t data_sz, int numa_node)
 {
 	struct bpf_ringbuf *rb;
@@ -193,12 +195,6 @@ static struct bpf_map *ringbuf_map_alloc(union bpf_attr *attr)
 	    !PAGE_ALIGNED(attr->max_entries))
 		return ERR_PTR(-EINVAL);
 
-#ifdef CONFIG_64BIT
-	/* on 32-bit arch, it's impossible to overflow record's hdr->pgoff */
-	if (attr->max_entries > RINGBUF_MAX_DATA_SZ)
-		return ERR_PTR(-E2BIG);
-#endif
-
 	rb_map = bpf_map_area_alloc(sizeof(*rb_map), NUMA_NO_NODE);
 	if (!rb_map)
 		return ERR_PTR(-ENOMEM);
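
As a usage-side illustration (not part of the patch): the ring buffer
size still arrives from user space through the __u32 max_entries field,
e.g. via libbpf's bpf_map_create(), so the largest requestable size
stays below 4GB. A minimal sketch, assuming libbpf >= 0.7 is installed
(build with -lbpf, run with CAP_BPF); the map name and size are arbitrary:

  #include <stdio.h>
  #include <linux/bpf.h>
  #include <bpf/bpf.h>

  int main(void)
  {
  	/* For BPF_MAP_TYPE_RINGBUF, max_entries is the data area size in
  	 * bytes; it must be a power of two and a multiple of the page size.
  	 */
  	__u32 size = 512 * 1024;
  	int fd = bpf_map_create(BPF_MAP_TYPE_RINGBUF, "example_rb",
  				0 /* key_size */, 0 /* value_size */,
  				size /* max_entries */, NULL);

  	if (fd < 0)
  		fprintf(stderr, "bpf_map_create failed: %d\n", fd);
  	else
  		printf("ring buffer map fd: %d\n", fd);
  	return 0;
  }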