binman: Increase default fitImage data section resize step from 1k to 64k

Currently the fitImage data area is resized in 1 kiB steps. This works
when bundling smaller images below some 1 MiB, but when bundling large
images into the fitImage, this makes binman spend an extreme amount of
time and CPU just spinning in pylibfdt FdtSw.check_space() until the
size grows enough for the large image to fit into the data area.
Increase the default step to 64 kiB, which is a reasonable compromise
-- the U-Boot blobs are somewhere in the 64 kiB...1 MiB range, DT blobs
are just short of 64 kiB, and so are the other blobs. This reduces
binman runtime with a 32 MiB blob from 2.3 minutes to 5 seconds.
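
For illustration, a minimal standalone sketch of the same resize
behaviour (not part of the patch; assumes pylibfdt is installed, and
the blob is shrunk to 8 MiB so the slow case finishes quickly):
"
import os
import time

import libfdt

def build(blob, inc_size):
    fsw = libfdt.FdtSw()
    # An instance attribute shadows the FdtSw class default of 1024:
    # every write that runs out of space is retried via check_space(),
    # which grows the buffer by only INC_SIZE bytes per iteration.
    fsw.INC_SIZE = inc_size
    fsw.finish_reservemap()
    with fsw.add_node(''):
        fsw.property('data', blob)
    return fsw.as_fdt()

blob = os.urandom(8 * 1024 * 1024)  # 8 MiB; the numbers above used 32 MiB
for inc in (1024, 65536):
    start = time.time()
    build(blob, inc)
    print('INC_SIZE=%6d: %.1f s' % (inc, time.time() - start))
"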

The following can be used to trigger the problem if rand.bin is some 32 MiB.
"
/ {
  itb {
    fit {
      images {
        test {
          compression = "none";
          description = "none";
          type = "flat_dt";

          blob {
            filename = "rand.bin";
            type = "blob-ext";
          };
        };
      };
    };
  };

  configurations {
    binman_configuration: config {
      loadables = "test";
    };
  };
};
"

Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Cc: Simon Glass <sjg@chromium.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
Author: Marek Vasut 2022-07-12 19:41:29 +02:00
Committer: Simon Glass
parent 54e89a8beb
commit 109dbdf042

tools/binman/etype/fit.py

@@ -658,6 +658,7 @@ class Entry_fit(Entry_section):
         # Build a new tree with all nodes and properties starting from the
         # entry node
         fsw = libfdt.FdtSw()
+        fsw.INC_SIZE = 65536
         fsw.finish_reservemap()
         to_remove = []
         loadables = []
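
Note: fsw.INC_SIZE is set on the instance, shadowing the class-level
default of 1024 in pylibfdt, so only this writer's resize step changes
while any other FdtSw users keep the 1 kiB step.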