Merge branch 'patman' of git://git.denx.de/u-boot-x86

Tom Rini 2013-04-08 12:03:22 -04:00
commit f140b5863b
21 changed files with 3831 additions and 123 deletions

tools/buildman/.gitignore Normal file

@@ -0,0 +1 @@
*.pyc

tools/buildman/README Normal file

@@ -0,0 +1,679 @@
# Copyright (c) 2013 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
What is this?
=============
This tool handles building U-Boot to check that you have not broken it
with your patch series. It can build each individual commit and report
which boards fail on which commits, and which errors come up. It aims
to make full use of multi-processor machines.
A key feature of buildman is its output summary, which allows warnings,
errors or image size increases in a particular commit or board to be
quickly identified and the offending commit pinpointed. This can be a big
help for anyone working with >10 patches at a time.
Caveats
=======
Buildman is still in its infancy. It is already a very useful tool, but
expect to find problems and send patches.
Buildman can be stopped and restarted, in which case it will continue
where it left off. This should happen cleanly and without side-effects.
If not, it is a bug, for which a patch would be welcome.
Buildman gets so tied up in its work that it can ignore the outside world.
You may need to press Ctrl-C several times to quit it. Also it will print
out various exceptions when stopped.
Theory of Operation
===================
(please read this section in full twice or you will be perpetually confused)
Buildman is a builder. It is not make, although it runs make. It does not
produce any useful output on the terminal while building, except for
progress information. All the output (errors, warnings and binaries if you
ask for them) is stored in output directories, which you can look at
while the build is progressing, or when it is finished.
Buildman produces a concise summary of which boards succeeded and failed.
It shows which commit introduced which board failure using a simple
red/green colour coding. Full error information can be requested, in which
case it is de-duped and displayed against the commit that introduced the
error. An example workflow is below.
Buildman stores image size information and can report changes in image size
from commit to commit. An example of this is below.
Buildman starts multiple threads, and each thread builds for one board at
a time. A thread starts at the first commit, configures the source for your
board and builds it. Then it checks out the next commit and does an
incremental build. Eventually the thread reaches the last commit and stops.
If errors or warnings are found along the way, the thread will reconfigure
after every commit, and your build will be very slow. This is because a
file that produces just a warning would not normally be rebuilt in an
incremental build.
Buildman works in an entirely separate place from your U-Boot repository.
It creates a separate working directory for each thread, and puts the
output files in the working directory, organised by commit name and board
name, in a two-level hierarchy.
Buildman is invoked in your U-Boot directory, the one with the .git
directory. It clones this repository into a copy for each thread, and the
threads do not affect the state of your git repository. Any checkouts done
by the thread affect only the working directory for that thread.
Buildman automatically selects the correct toolchain for each board. You
must supply suitable toolchains, but buildman takes care of selecting the
right one.
Buildman always builds a branch, and always builds the upstream commit as
well, for comparison. It cannot build individual commits at present, unless
(maybe) you point it at an empty branch. Put all your commits in a branch,
set the branch's upstream to a valid value, and all will be well. Otherwise
buildman will perform random actions. Use -n to check what the random
actions might be.
Buildman is optimised for building many commits at once, for many boards.
On multi-core machines, Buildman is fast because it uses most of the
available CPU power. When it gets to the end, or if you are building just
a few commits or boards, it will be pretty slow. As a tip, if you don't
plan to use your machine for anything else, you can use -T to increase the
number of threads beyond the default.
Buildman lets you build all boards, or a subset. Specify the subset using
the board name, architecture name, SOC name, or anything else in the
boards.cfg file. So 'at91' will build all AT91 boards (arm), powerpc will
build all PowerPC boards.
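The matching rule can be sketched as a simple filter over the fields in
boards.cfg. This is an illustrative toy (the board data and function below
are hypothetical; buildman's real implementation is in
tools/buildman/board.py, later in this commit):

```python
# Sketch of subset selection: a board is picked if any of its
# properties (target, arch, cpu, board name, vendor, SOC) matches
# one of the arguments. The board data below is illustrative only.
def select_boards(boards, args):
    """Return the boards whose properties match any argument."""
    if not args:
        return list(boards)          # no arguments: build everything
    return [b for b in boards
            if any(arg in b.values() for arg in args)]

boards = [
    {'target': 'at91sam9260ek', 'arch': 'arm', 'soc': 'at91'},
    {'target': 'MPC8610HPCD', 'arch': 'powerpc', 'soc': 'mpc86xx'},
]
print([b['target'] for b in select_boards(boards, ['at91'])])
```

So 'at91' selects the first board via its SOC field, while 'powerpc' would
select the second via its architecture field.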
Buildman does not store intermediate object files. It optionally copies
the binary output into a directory when a build is successful. Size
information is always recorded. It needs a fair bit of disk space to work,
typically 250MB per thread.
Setting up
==========
1. Get the U-Boot source. You probably already have it, but if not these
steps should get you started with a repo and some commits for testing.
$ cd /path/to/u-boot
$ git clone git://git.denx.de/u-boot.git .
$ git checkout -b my-branch origin/master
$ # Add some commits to the branch, ready for testing
2. Create ~/.buildman to tell buildman where to find tool chains. As an
example:
# Buildman settings file
[toolchain]
root: /
rest: /toolchains/*
eldk: /opt/eldk-4.2
[toolchain-alias]
x86: i386
blackfin: bfin
sh: sh4
nds32: nds32le
openrisc: or32
This selects the available toolchain paths. Add the base directory for
each of your toolchains here. Buildman will search inside these directories
and also in any '/usr' and '/usr/bin' subdirectories.
Make sure the tags (here 'root:', 'rest:' and 'eldk:') are unique.
The toolchain-alias section indicates that the i386 toolchain should be used
to build x86 commits.
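The settings file is in standard INI format, so it can be read with a stock
config parser. A minimal sketch of reading the paths and resolving an alias
(the file contents are inlined here for illustration, using Python 3's
configparser; buildman's own settings code appears later in this commit):

```python
import configparser

# Parse a buildman-style settings file and resolve a toolchain alias.
# The [toolchain] values are search paths; [toolchain-alias] maps an
# architecture to the toolchain name that should build it.
SETTINGS = """
[toolchain]
root: /
rest: /toolchains/*

[toolchain-alias]
x86: i386
blackfin: bfin
"""

parser = configparser.ConfigParser()
parser.read_string(SETTINGS)

paths = dict(parser.items('toolchain'))
aliases = dict(parser.items('toolchain-alias'))

def toolchain_for(arch):
    """Return the toolchain name to use for an architecture."""
    return aliases.get(arch, arch)

print(toolchain_for('x86'))   # resolved via the alias
print(toolchain_for('arm'))   # no alias: use the arch name itself
```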
3. Check the available toolchains
Run this check to make sure that you have a toolchain for every architecture.
$ ./tools/buildman/buildman --list-tool-chains
Scanning for tool chains
- scanning path '/'
- looking in '/.'
- looking in '/bin'
- looking in '/usr/bin'
- found '/usr/bin/gcc'
Tool chain test: OK
- found '/usr/bin/c89-gcc'
Tool chain test: OK
- found '/usr/bin/c99-gcc'
Tool chain test: OK
- found '/usr/bin/x86_64-linux-gnu-gcc'
Tool chain test: OK
- scanning path '/toolchains/powerpc-linux'
- looking in '/toolchains/powerpc-linux/.'
- looking in '/toolchains/powerpc-linux/bin'
- found '/toolchains/powerpc-linux/bin/powerpc-linux-gcc'
Tool chain test: OK
- looking in '/toolchains/powerpc-linux/usr/bin'
- scanning path '/toolchains/nds32le-linux-glibc-v1f'
- looking in '/toolchains/nds32le-linux-glibc-v1f/.'
- looking in '/toolchains/nds32le-linux-glibc-v1f/bin'
- found '/toolchains/nds32le-linux-glibc-v1f/bin/nds32le-linux-gcc'
Tool chain test: OK
- looking in '/toolchains/nds32le-linux-glibc-v1f/usr/bin'
- scanning path '/toolchains/nios2'
- looking in '/toolchains/nios2/.'
- looking in '/toolchains/nios2/bin'
- found '/toolchains/nios2/bin/nios2-linux-gcc'
Tool chain test: OK
- found '/toolchains/nios2/bin/nios2-linux-uclibc-gcc'
Tool chain test: OK
- looking in '/toolchains/nios2/usr/bin'
- found '/toolchains/nios2/usr/bin/nios2-linux-gcc'
Tool chain test: OK
- found '/toolchains/nios2/usr/bin/nios2-linux-uclibc-gcc'
Tool chain test: OK
- scanning path '/toolchains/microblaze-unknown-linux-gnu'
- looking in '/toolchains/microblaze-unknown-linux-gnu/.'
- looking in '/toolchains/microblaze-unknown-linux-gnu/bin'
- found '/toolchains/microblaze-unknown-linux-gnu/bin/microblaze-unknown-linux-gnu-gcc'
Tool chain test: OK
- found '/toolchains/microblaze-unknown-linux-gnu/bin/mb-linux-gcc'
Tool chain test: OK
- looking in '/toolchains/microblaze-unknown-linux-gnu/usr/bin'
- scanning path '/toolchains/mips-linux'
- looking in '/toolchains/mips-linux/.'
- looking in '/toolchains/mips-linux/bin'
- found '/toolchains/mips-linux/bin/mips-linux-gcc'
Tool chain test: OK
- looking in '/toolchains/mips-linux/usr/bin'
- scanning path '/toolchains/old'
- looking in '/toolchains/old/.'
- looking in '/toolchains/old/bin'
- looking in '/toolchains/old/usr/bin'
- scanning path '/toolchains/i386-linux'
- looking in '/toolchains/i386-linux/.'
- looking in '/toolchains/i386-linux/bin'
- found '/toolchains/i386-linux/bin/i386-linux-gcc'
Tool chain test: OK
- looking in '/toolchains/i386-linux/usr/bin'
- scanning path '/toolchains/bfin-uclinux'
- looking in '/toolchains/bfin-uclinux/.'
- looking in '/toolchains/bfin-uclinux/bin'
- found '/toolchains/bfin-uclinux/bin/bfin-uclinux-gcc'
Tool chain test: OK
- looking in '/toolchains/bfin-uclinux/usr/bin'
- scanning path '/toolchains/sparc-elf'
- looking in '/toolchains/sparc-elf/.'
- looking in '/toolchains/sparc-elf/bin'
- found '/toolchains/sparc-elf/bin/sparc-elf-gcc'
Tool chain test: OK
- looking in '/toolchains/sparc-elf/usr/bin'
- scanning path '/toolchains/arm-2010q1'
- looking in '/toolchains/arm-2010q1/.'
- looking in '/toolchains/arm-2010q1/bin'
- found '/toolchains/arm-2010q1/bin/arm-none-linux-gnueabi-gcc'
Tool chain test: OK
- looking in '/toolchains/arm-2010q1/usr/bin'
- scanning path '/toolchains/from'
- looking in '/toolchains/from/.'
- looking in '/toolchains/from/bin'
- looking in '/toolchains/from/usr/bin'
- scanning path '/toolchains/sh4-gentoo-linux-gnu'
- looking in '/toolchains/sh4-gentoo-linux-gnu/.'
- looking in '/toolchains/sh4-gentoo-linux-gnu/bin'
- found '/toolchains/sh4-gentoo-linux-gnu/bin/sh4-gentoo-linux-gnu-gcc'
Tool chain test: OK
- looking in '/toolchains/sh4-gentoo-linux-gnu/usr/bin'
- scanning path '/toolchains/avr32-linux'
- looking in '/toolchains/avr32-linux/.'
- looking in '/toolchains/avr32-linux/bin'
- found '/toolchains/avr32-linux/bin/avr32-gcc'
Tool chain test: OK
- looking in '/toolchains/avr32-linux/usr/bin'
- scanning path '/toolchains/m68k-linux'
- looking in '/toolchains/m68k-linux/.'
- looking in '/toolchains/m68k-linux/bin'
- found '/toolchains/m68k-linux/bin/m68k-linux-gcc'
Tool chain test: OK
- looking in '/toolchains/m68k-linux/usr/bin'
List of available toolchains (17):
arm : /toolchains/arm-2010q1/bin/arm-none-linux-gnueabi-gcc
avr32 : /toolchains/avr32-linux/bin/avr32-gcc
bfin : /toolchains/bfin-uclinux/bin/bfin-uclinux-gcc
c89 : /usr/bin/c89-gcc
c99 : /usr/bin/c99-gcc
i386 : /toolchains/i386-linux/bin/i386-linux-gcc
m68k : /toolchains/m68k-linux/bin/m68k-linux-gcc
mb : /toolchains/microblaze-unknown-linux-gnu/bin/mb-linux-gcc
microblaze: /toolchains/microblaze-unknown-linux-gnu/bin/microblaze-unknown-linux-gnu-gcc
mips : /toolchains/mips-linux/bin/mips-linux-gcc
nds32le : /toolchains/nds32le-linux-glibc-v1f/bin/nds32le-linux-gcc
nios2 : /toolchains/nios2/bin/nios2-linux-gcc
powerpc : /toolchains/powerpc-linux/bin/powerpc-linux-gcc
sandbox : /usr/bin/gcc
sh4 : /toolchains/sh4-gentoo-linux-gnu/bin/sh4-gentoo-linux-gnu-gcc
sparc : /toolchains/sparc-elf/bin/sparc-elf-gcc
x86_64 : /usr/bin/x86_64-linux-gnu-gcc
You can see that everything is covered, even some strange ones that won't
be used (c89 and c99). This is a feature.
How to run it
=============
First do a dry run using the -n flag: (replace <branch> with a real, local
branch with a valid upstream)
$ ./tools/buildman/buildman -b <branch> -n
If it can't detect the upstream branch, try checking out the branch and
running 'git branch --set-upstream <branch> upstream/master' or similar.
As an example:
Dry run, so not doing much. But I would do this:
Building 18 commits for 1059 boards (4 threads, 1 job per thread)
Build directory: ../lcd9b
5bb3505 Merge branch 'master' of git://git.denx.de/u-boot-arm
c18f1b4 tegra: Use const for pinmux_config_pingroup/table()
2f043ae tegra: Add display support to funcmux
e349900 tegra: fdt: Add pwm binding and node
424a5f0 tegra: fdt: Add LCD definitions for Tegra
0636ccf tegra: Add support for PWM
a994fe7 tegra: Add SOC support for display/lcd
fcd7350 tegra: Add LCD driver
4d46e9d tegra: Add LCD support to Nvidia boards
991bd48 arm: Add control over cachability of memory regions
54e8019 lcd: Add CONFIG_LCD_ALIGNMENT to select frame buffer alignment
d92aff7 lcd: Add support for flushing LCD fb from dcache after update
dbd0677 tegra: Align LCD frame buffer to section boundary
0cff9b8 tegra: Support control of cache settings for LCD
9c56900 tegra: fdt: Add LCD definitions for Seaboard
5cc29db lcd: Add CONFIG_CONSOLE_SCROLL_LINES option to speed console
cac5a23 tegra: Enable display/lcd support on Seaboard
49ff541 wip
Total boards to build for each commit: 1059
This shows that it will build all 1059 boards, using 4 threads (because
we have a 4-core CPU). Each thread will run with -j1, meaning that each
make job will use a single CPU. The list of commits to be built helps you
confirm that things look about right. Notice that buildman has chosen a
'base' directory for you, immediately above your source tree.
Buildman works entirely inside the base directory, here ../lcd9b,
creating a working directory for each thread, and creating output
directories for each commit and board.
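The two-level layout can be illustrated with a small sketch. The naming
scheme below is a hypothetical reconstruction from the example output path
shown later in this README (commit sequence number, total count, abbreviated
hash and a munged commit subject, with the board name as the second level);
the real code may differ:

```python
import re

def output_dir(seq, total, commit_hash, subject, board):
    """Build a <commit>/<board> output path in the style of the example.

    The commit subject has non-alphanumeric characters replaced with '-'
    and is truncated, so it stays safe to use as a directory name.
    """
    munged = re.sub(r'[^a-zA-Z0-9]', '-', subject)[:20]
    return '%s_of_%s_g%s_%s/%s' % (seq, total, commit_hash[:7], munged, board)

print(output_dir(12, 18, 'd92aff7',
                 'lcd: Add support for flushing LCD fb from dcache after update',
                 'lubbock'))
```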
Suggested Workflow
==================
To run the build for real, take off the -n:
$ ./tools/buildman/buildman -b <branch>
Buildman will set up some working directories, and get started. After a
minute or so it will settle down to a steady pace, with a display like this:
Building 18 commits for 1059 boards (4 threads, 1 job per thread)
528 36 124 /19062 1:13:30 : SIMPC8313_SP
This means that it is building 19062 board/commit combinations. So far it
has managed to successfully build 528. Another 36 have built with warnings,
and 124 more didn't build at all. Buildman expects to complete the process
in an hour and 15 minutes. Use this time to buy a faster computer.
To find out how the build went, ask for a summary with -s. You can do this
either before the build completes (presumably in another terminal) or
afterwards. Let's work through an example of how this is used:
$ ./tools/buildman/buildman -b lcd9b -s
...
01: Merge branch 'master' of git://git.denx.de/u-boot-arm
powerpc: + galaxy5200_LOWBOOT
02: tegra: Use const for pinmux_config_pingroup/table()
03: tegra: Add display support to funcmux
04: tegra: fdt: Add pwm binding and node
05: tegra: fdt: Add LCD definitions for Tegra
06: tegra: Add support for PWM
07: tegra: Add SOC support for display/lcd
08: tegra: Add LCD driver
09: tegra: Add LCD support to Nvidia boards
10: arm: Add control over cachability of memory regions
11: lcd: Add CONFIG_LCD_ALIGNMENT to select frame buffer alignment
12: lcd: Add support for flushing LCD fb from dcache after update
arm: + lubbock
13: tegra: Align LCD frame buffer to section boundary
14: tegra: Support control of cache settings for LCD
15: tegra: fdt: Add LCD definitions for Seaboard
16: lcd: Add CONFIG_CONSOLE_SCROLL_LINES option to speed console
17: tegra: Enable display/lcd support on Seaboard
18: wip
This shows which commits have succeeded and which have failed. In this case
the build is still in progress so many boards are not built yet (use -u to
see which ones). But still we can see a few failures. The galaxy5200_LOWBOOT
never builds correctly. This could be a problem with our toolchain, or it
could be a bug in the upstream. The good news is that we probably don't need
to blame our commits. The bad news is that our commits are not tested on
that board.
Commit 12 broke lubbock. That's what the '+ lubbock' means. The failure
is never fixed by a later commit, or you would see lubbock again, in green,
without the +.
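The '+'/'-' markers can be thought of as a set difference between the
failures seen at one commit and at the previous one. A toy sketch of that
idea (illustrative code, not buildman's own):

```python
def error_delta(prev_errors, cur_errors):
    """Split the current commit's errors into new (+) and fixed (-).

    Args:
        prev_errors: set of error lines seen at the previous commit
        cur_errors: set of error lines seen at this commit
    Returns:
        (introduced, fixed) sets of error lines
    """
    return cur_errors - prev_errors, prev_errors - cur_errors

prev = {"lcd.c:120: undefined reference to `flush_dcache_range'"}
cur = {"lcd.c:125: undefined reference to `flush_dcache_range'"}
introduced, fixed = error_delta(prev, cur)
for line in sorted(introduced):
    print('+' + line)          # error newly introduced at this commit
for line in sorted(fixed):
    print('-' + line)          # error that this commit fixed
```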
To see the actual error:
$ ./tools/buildman/buildman -b <branch> -se lubbock
...
12: lcd: Add support for flushing LCD fb from dcache after update
arm: + lubbock
+common/libcommon.o: In function `lcd_sync':
+/u-boot/lcd9b/.bm-work/00/common/lcd.c:120: undefined reference to `flush_dcache_range'
+arm-none-linux-gnueabi-ld: BFD (Sourcery G++ Lite 2010q1-202) 2.19.51.20090709 assertion fail /scratch/julian/2010q1-release-linux-lite/obj/binutils-src-2010q1-202-arm-none-linux-gnueabi-i686-pc-linux-gnu/bfd/elf32-arm.c:12572
+make: *** [/u-boot/lcd9b/.bm-work/00/build/u-boot] Error 139
13: tegra: Align LCD frame buffer to section boundary
14: tegra: Support control of cache settings for LCD
15: tegra: fdt: Add LCD definitions for Seaboard
16: lcd: Add CONFIG_CONSOLE_SCROLL_LINES option to speed console
-/u-boot/lcd9b/.bm-work/00/common/lcd.c:120: undefined reference to `flush_dcache_range'
+/u-boot/lcd9b/.bm-work/00/common/lcd.c:125: undefined reference to `flush_dcache_range'
17: tegra: Enable display/lcd support on Seaboard
18: wip
So the problem is in lcd.c, due to missing cache operations. This information
should be enough to work out what that commit is doing to break these
boards. (In this case pxa did not have cache operations defined).
If you see error lines marked with - that means that the errors were fixed
by that commit. Sometimes commits can be in the wrong order, so that a
breakage is introduced for a few commits and fixed by later commits. This
shows up clearly with buildman. You can then reorder the commits and try
again.
At commit 16, the error moves - you can see that the old error at line 120
is fixed, but there is a new one at line 125. This is probably only because
we added some code and moved the broken line further down the file.
If many boards have the same error, then -e will display the error only
once. This makes the output as concise as possible.
The full build output in this case is available in:
../lcd9b/12_of_18_gd92aff7_lcd--Add-support-for/lubbock/

This directory contains the following files:

done: Indicates the build was done, and holds the return code from make.
This is 0 for a good build, typically 2 for a failure.
err: Output from stderr, if any. Errors and warnings appear here.
log: Output from stdout. Normally there isn't any since buildman runs
in silent mode for now.
toolchain: Shows information about the toolchain used for the build.
sizes: Shows image size information.
It is possible to get the build output there also. Use the -k option for
this. In that case you will also see some output files, like:
System.map toolchain u-boot u-boot.bin u-boot.map autoconf.mk
(also SPL versions u-boot-spl and u-boot-spl.bin if available)
Checking Image Sizes
====================
A key requirement for U-Boot is that you keep code/data size to a minimum.
Where a new feature increases this noticeably it should normally be put
behind a CONFIG flag so that boards can leave it off and keep the image
size more or less the same with each new release.
To check the impact of your commits on image size, use -S. For example:
$ ./tools/buildman/buildman -b us-x86 -sS
Summary of 10 commits for 1066 boards (4 threads, 1 job per thread)
01: MAKEALL: add support for per architecture toolchains
02: x86: Add function to get top of usable ram
x86: (for 1/3 boards) text -272.0 rodata +41.0
03: x86: Add basic cache operations
04: x86: Permit bootstage and timer data to be used prior to relocation
x86: (for 1/3 boards) data +16.0
05: x86: Add an __end symbol to signal the end of the U-Boot binary
x86: (for 1/3 boards) text +76.0
06: x86: Rearrange the output input to remove BSS
x86: (for 1/3 boards) bss -2140.0
07: x86: Support relocation of FDT on start-up
x86: + coreboot-x86
08: x86: Add error checking to x86 relocation code
09: x86: Adjust link device tree include file
10: x86: Enable CONFIG_OF_CONTROL on coreboot
You can see that image size only changed on x86, which is good because this
series is not supposed to change any other board. From commit 7 onwards the
build fails so we don't get code size numbers. The numbers are fractional
because they are an average of all boards for that architecture. The
intention is to allow you to quickly find image size problems introduced by
your commits.
Note that the 'text' region and 'rodata' are split out. You should add the
two together to get the total read-only size (reported as the first column
in the output from the binutils 'size' utility).
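For example, using the figures reported for commit 02 above, the combined
read-only change (what 'size' would fold into its first column) is just the
sum of the two deltas:

```python
# Combined read-only delta: buildman reports 'text' and 'rodata'
# separately, while the binutils 'size' tool folds rodata into its
# text column. Figures are from commit 02 in the example above.
text_delta = -272.0
rodata_delta = +41.0
read_only_delta = text_delta + rodata_delta
print(read_only_delta)  # net read-only change for commit 02
```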
A useful option is --step which lets you skip some commits. For example
--step 2 will show the image sizes for only every 2nd commit (so it will
compare the image sizes of the 1st, 3rd, 5th... commits). You can also use
--step 0 which will compare only the first and last commits. This is useful
for an overview of how your entire series affects code size.
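The --step selection can be sketched as simple list slicing (a toy model of
the option's behaviour as described above, not buildman's code):

```python
def step_commits(commits, step):
    """Pick the commits to compare for a given --step value.

    step N keeps every Nth commit starting from the first; step 0
    compares only the first and last commits.
    """
    if step == 0:
        return [commits[0], commits[-1]]
    return commits[::step]

commits = ['c1', 'c2', 'c3', 'c4', 'c5']
print(step_commits(commits, 2))   # 1st, 3rd, 5th commits
print(step_commits(commits, 0))   # first and last only
```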
You can also use -d to see a detailed size breakdown for each board. This
list is sorted in order from largest growth to largest reduction.
It is possible to go a little further with the -B option (--bloat). This
shows where U-Boot has bloated, breaking the size change down to the function
level. Example output is below:
$ ./tools/buildman/buildman -b us-mem4 -sSdB
...
19: Roll crc32 into hash infrastructure
arm: (for 10/10 boards) all -143.4 bss +1.2 data -4.8 rodata -48.2 text -91.6
paz00 : all +23 bss -4 rodata -29 text +56
u-boot: add: 1/0, grow: 3/-2 bytes: 168/-104 (64)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
ext4fs_read_file 540 568 +28
insert_var_value_sub 688 692 +4
run_list_real 1996 1992 -4
do_mem_crc 168 68 -100
trimslice : all -9 bss +16 rodata -29 text +4
u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
ext4fs_iterate_dir 672 668 -4
ext4fs_read_file 568 548 -20
do_mem_crc 168 68 -100
whistler : all -9 bss +16 rodata -29 text +4
u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
ext4fs_iterate_dir 672 668 -4
ext4fs_read_file 568 548 -20
do_mem_crc 168 68 -100
seaboard : all -9 bss -28 rodata -29 text +48
u-boot: add: 1/0, grow: 3/-2 bytes: 160/-104 (56)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
ext4fs_read_file 548 568 +20
run_list_real 1996 2000 +4
do_nandboot 760 756 -4
do_mem_crc 168 68 -100
colibri_t20_iris: all -9 rodata -29 text +20
u-boot: add: 1/0, grow: 2/-3 bytes: 140/-112 (28)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
read_abs_bbt 204 208 +4
do_nandboot 760 756 -4
ext4fs_read_file 576 568 -8
do_mem_crc 168 68 -100
ventana : all -37 bss -12 rodata -29 text +4
u-boot: add: 1/0, grow: 1/-3 bytes: 136/-124 (12)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
ext4fs_iterate_dir 672 668 -4
ext4fs_read_file 568 548 -20
do_mem_crc 168 68 -100
harmony : all -37 bss -16 rodata -29 text +8
u-boot: add: 1/0, grow: 2/-3 bytes: 140/-124 (16)
function old new delta
hash_command 80 160 +80
crc32_wd_buf - 56 +56
nand_write_oob_syndrome 428 432 +4
ext4fs_iterate_dir 672 668 -4
ext4fs_read_file 568 548 -20
do_mem_crc 168 68 -100
medcom-wide : all -417 bss +28 data -16 rodata -93 text -336
u-boot: add: 1/-1, grow: 1/-2 bytes: 88/-376 (-288)
function old new delta
crc32_wd_buf - 56 +56
do_fat_read_at 2872 2904 +32
hash_algo 16 - -16
do_mem_crc 168 68 -100
hash_command 420 160 -260
tec : all -449 bss -4 data -16 rodata -93 text -336
u-boot: add: 1/-1, grow: 1/-2 bytes: 88/-376 (-288)
function old new delta
crc32_wd_buf - 56 +56
do_fat_read_at 2872 2904 +32
hash_algo 16 - -16
do_mem_crc 168 68 -100
hash_command 420 160 -260
plutux : all -481 bss +16 data -16 rodata -93 text -388
u-boot: add: 1/-1, grow: 1/-3 bytes: 68/-408 (-340)
function old new delta
crc32_wd_buf - 56 +56
do_load_serial_bin 1688 1700 +12
hash_algo 16 - -16
do_fat_read_at 2904 2872 -32
do_mem_crc 168 68 -100
hash_command 420 160 -260
powerpc: (for 5/5 boards) all +37.4 data -3.2 rodata -41.8 text +82.4
MPC8610HPCD : all +55 rodata -29 text +84
u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80)
function old new delta
hash_command - 176 +176
do_mem_crc 184 88 -96
MPC8641HPCN : all +55 rodata -29 text +84
u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80)
function old new delta
hash_command - 176 +176
do_mem_crc 184 88 -96
MPC8641HPCN_36BIT: all +55 rodata -29 text +84
u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80)
function old new delta
hash_command - 176 +176
do_mem_crc 184 88 -96
sbc8641d : all +55 rodata -29 text +84
u-boot: add: 1/0, grow: 0/-1 bytes: 176/-96 (80)
function old new delta
hash_command - 176 +176
do_mem_crc 184 88 -96
xpedite517x : all -33 data -16 rodata -93 text +76
u-boot: add: 1/-1, grow: 0/-1 bytes: 176/-112 (64)
function old new delta
hash_command - 176 +176
hash_algo 16 - -16
do_mem_crc 184 88 -96
...
This shows that commit 19 has decreased the text size for arm (by about 92
bytes on average across the 10 boards) but increased it by about 82 bytes
on average for powerpc. The powerpc text increase was partly offset by
reductions in rodata and data.
Shown below the summary lines are the sizes for each board. Below each board
are the sizes for each function. This information starts with:
add - number of functions added / removed
grow - number of functions which grew / shrunk
bytes - number of bytes of code added to / removed from all functions,
plus the total byte change in brackets
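Those three lines can be computed from per-function size maps for the old
and new binaries. A sketch, checked against the 'trimslice' figures above
(illustrative code, not buildman's own):

```python
def summarise(old, new):
    """Summarise function-size changes between two builds.

    Args:
        old, new: dicts mapping function name to size in bytes
    Returns:
        (added, removed, grew, shrunk, up, down) where up/down are the
        total bytes gained/lost (down is negative or zero)
    """
    added = len(set(new) - set(old))
    removed = len(set(old) - set(new))
    grew = shrunk = up = down = 0
    for func in set(old) | set(new):
        delta = new.get(func, 0) - old.get(func, 0)
        if delta > 0:
            up += delta
            if func in old and func in new:
                grew += 1          # existing function got bigger
        elif delta < 0:
            down += delta
            if func in old and func in new:
                shrunk += 1        # existing function got smaller
    return added, removed, grew, shrunk, up, down

# Function sizes from the trimslice example above
old = {'hash_command': 80, 'ext4fs_iterate_dir': 672,
       'ext4fs_read_file': 568, 'do_mem_crc': 168}
new = {'hash_command': 160, 'crc32_wd_buf': 56,
       'ext4fs_iterate_dir': 668, 'ext4fs_read_file': 548, 'do_mem_crc': 68}
added, removed, grew, shrunk, up, down = summarise(old, new)
print('add: %d/%d, grow: %d/%d bytes: %d/%d (%d)' %
      (added, -removed, grew, -shrunk, up, down, up + down))
```

This reproduces the 'add: 1/0, grow: 1/-3 bytes: 136/-124 (12)' line shown
for trimslice.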
The change seems to be that hash_command() has increased by more than the
do_mem_crc() function has decreased. The function sizes typically add up to
roughly the text area size, but note that every read-only section except
rodata is included in 'text', so the function total does not exactly
correspond.
It is common when refactoring code for the rodata to decrease as the text size
increases, and vice versa.
Other options
=============
Buildman has various other command line options. Try --help to see them.
TODO
====
This has mostly been written in my spare time as a response to my difficulties
in testing large series of patches. Apart from tidying up there is quite a
bit of scope for improvement. Things like better error diffs, easier access
to log files, error display while building. Also it would be nice if buildman
could 'hunt' for problems, perhaps by building a few boards for each arch,
or checking commits for changed files and building only boards which use
those files.
Credits
=======
Thanks to Grant Grundler <grundler@chromium.org> for his ideas for improving
the build speed by building all commits for a board instead of the other
way around.
Simon Glass
sjg@chromium.org
Halloween 2012
Updated 12-12-12
Updated 23-02-13

tools/buildman/board.py Normal file

@@ -0,0 +1,167 @@
# Copyright (c) 2012 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
class Board:
    """A particular board that we can build"""
    def __init__(self, target, arch, cpu, board_name, vendor, soc, options):
        """Create a new board type.

        Args:
            target: Target name (use make <target>_config to configure)
            arch: Architecture name (e.g. arm)
            cpu: Cpu name (e.g. arm1136)
            board_name: Name of board (e.g. integrator)
            vendor: Name of vendor (e.g. armltd)
            soc: Name of SOC, or '' if none (e.g. mx31)
            options: board-specific options (e.g. integratorcp:CM1136)
        """
        self.target = target
        self.arch = arch
        self.cpu = cpu
        self.board_name = board_name
        self.vendor = vendor
        self.soc = soc
        self.props = [self.target, self.arch, self.cpu, self.board_name,
                      self.vendor, self.soc]
        self.options = options
        self.build_it = False


class Boards:
    """Manage a list of boards."""
    def __init__(self):
        # Use a simple list here, since OrderedDict requires Python 2.7
        self._boards = []

    def AddBoard(self, board):
        """Add a new board to the list.

        The board's target member must not already exist in the board list.

        Args:
            board: board to add
        """
        self._boards.append(board)

    def ReadBoards(self, fname):
        """Read a list of boards from a board file.

        Create a board object for each and add it to our _boards list.

        Args:
            fname: Filename of boards.cfg file
        """
        with open(fname, 'r') as fd:
            for line in fd:
                if line[0] == '#':
                    continue
                fields = line.split()
                if not fields:
                    continue
                for upto in range(len(fields)):
                    if fields[upto] == '-':
                        fields[upto] = ''
                while len(fields) < 7:
                    fields.append('')
                board = Board(*fields)
                self.AddBoard(board)

    def GetList(self):
        """Return a list of available boards.

        Returns:
            List of Board objects
        """
        return self._boards

    def GetDict(self):
        """Build a dictionary containing all the boards.

        Returns:
            Dictionary:
                key is board.target
                value is board
        """
        board_dict = {}
        for board in self._boards:
            board_dict[board.target] = board
        return board_dict

    def GetSelectedDict(self):
        """Return a dictionary containing the selected boards

        Returns:
            Dictionary of Board objects that are marked selected, keyed
            by target
        """
        board_dict = {}
        for board in self._boards:
            if board.build_it:
                board_dict[board.target] = board
        return board_dict

    def GetSelected(self):
        """Return a list of selected boards

        Returns:
            List of Board objects that are marked selected
        """
        return [board for board in self._boards if board.build_it]

    def GetSelectedNames(self):
        """Return a list of selected boards

        Returns:
            List of board names that are marked selected
        """
        return [board.target for board in self._boards if board.build_it]

    def SelectBoards(self, args):
        """Mark boards selected based on args

        Args:
            args: List of strings specifying boards to include, either
                named, or by their target, architecture, cpu, vendor or
                soc. If empty, all boards are selected.

        Returns:
            Dictionary which holds the number of boards which were selected
            due to each argument, arranged by argument.
        """
        result = {}
        for arg in args:
            result[arg] = 0
        result['all'] = 0

        for board in self._boards:
            if args:
                for arg in args:
                    if arg in board.props:
                        if not board.build_it:
                            board.build_it = True
                            result[arg] += 1
                            result['all'] += 1
            else:
                board.build_it = True
                result['all'] += 1

        return result


@@ -0,0 +1,60 @@
# Copyright (c) 2012 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
import ConfigParser
import os
def Setup(fname=''):
"""Set up the buildman settings module by reading config files
Args:
config_fname: Config filename to read ('' for default)
"""
global settings
global config_fname
settings = ConfigParser.SafeConfigParser()
config_fname = fname
if config_fname == '':
config_fname = '%s/.buildman' % os.getenv('HOME')
if config_fname:
settings.read(config_fname)
def GetItems(section):
"""Get the items from a section of the config.
Args:
section: name of section to retrieve
Returns:
List of (name, value) tuples for the section
"""
try:
return settings.items(section)
except ConfigParser.NoSectionError as e:
print e
print ("Warning: No tool chains - please add a [toolchain] section "
"to your buildman config file %s. See README for details" %
config_fname)
return []
except:
raise
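The warning above refers to a [toolchain] section in the ~/.buildman config file. A minimal sketch of such a file, read here with Python 3's configparser for illustration (buildman itself uses the Python 2 ConfigParser module; the paths are made up):

```python
# Illustrative ~/.buildman contents; the toolchain paths are made up.
import configparser

SAMPLE = """
[toolchain]
root = /
rest = /toolchains/*
eldk = /opt/eldk-4.2
"""

settings = configparser.ConfigParser()
settings.read_string(SAMPLE)
items = settings.items('toolchain')   # list of (name, value) pairs
```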

1445 tools/buildman/builder.py Normal file

File diff suppressed because it is too large

1 tools/buildman/buildman Symbolic link

@@ -0,0 +1 @@
buildman.py

126 tools/buildman/buildman.py Executable file

@@ -0,0 +1,126 @@
#!/usr/bin/python
#
# Copyright (c) 2012 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
"""See README for more information"""
import multiprocessing
from optparse import OptionParser
import os
import re
import sys
import unittest
# Bring in the patman libraries
our_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(our_path, '../patman'))
# Our modules
import board
import builder
import checkpatch
import command
import control
import doctest
import gitutil
import patchstream
import terminal
import toolchain
def RunTests():
import test
sys.argv = [sys.argv[0]]
suite = unittest.TestLoader().loadTestsFromTestCase(test.TestBuild)
result = unittest.TestResult()
suite.run(result)
# TODO: Surely we can just 'print' result?
print result
for test, err in result.errors:
print err
for test, err in result.failures:
print err
parser = OptionParser()
parser.add_option('-b', '--branch', type='string',
help='Branch name to build')
parser.add_option('-B', '--bloat', dest='show_bloat',
action='store_true', default=False,
help='Show changes in function code size for each board')
parser.add_option('-c', '--count', dest='count', type='int',
default=-1, help='Run build on the top n commits')
parser.add_option('-e', '--show_errors', action='store_true',
default=False, help='Show errors and warnings')
parser.add_option('-f', '--force-build', dest='force_build',
action='store_true', default=False,
help='Force build of boards even if already built')
parser.add_option('-d', '--detail', dest='show_detail',
action='store_true', default=False,
help='Show detailed information for each board in summary')
parser.add_option('-g', '--git', type='string',
help='Git repo containing branch to build', default='.')
parser.add_option('-H', '--full-help', action='store_true', dest='full_help',
default=False, help='Display the README file')
parser.add_option('-j', '--jobs', dest='jobs', type='int',
default=None, help='Number of jobs to run at once (passed to make)')
parser.add_option('-k', '--keep-outputs', action='store_true',
default=False, help='Keep all build output files (e.g. binaries)')
parser.add_option('--list-tool-chains', action='store_true', default=False,
help='List available tool chains')
parser.add_option('-n', '--dry-run', action='store_true', dest='dry_run',
default=False, help="Do a dry run (describe actions, but do nothing)")
parser.add_option('-Q', '--quick', action='store_true',
default=False, help='Do a rough build, with limited warning resolution')
parser.add_option('-s', '--summary', action='store_true',
default=False, help='Show a build summary')
parser.add_option('-S', '--show-sizes', action='store_true',
default=False, help='Show image size variation in summary')
parser.add_option('--step', type='int',
default=1, help='Only build every n commits (0=just first and last)')
parser.add_option('-t', '--test', action='store_true', dest='test',
default=False, help='run tests')
parser.add_option('-T', '--threads', type='int',
default=None, help='Number of builder threads to use')
parser.add_option('-u', '--show_unknown', action='store_true',
default=False, help='Show boards with unknown build result')
parser.usage = """buildman -b <branch> [options]
Build U-Boot for all commits in a branch. Use -n to do a dry run"""
(options, args) = parser.parse_args()
# Run our meagre tests
if options.test:
RunTests()
elif options.full_help:
pager = os.getenv('PAGER')
if not pager:
pager = 'more'
fname = os.path.join(os.path.dirname(sys.argv[0]), 'README')
command.Run(pager, fname)
# Build selected commits for selected boards
else:
control.DoBuildman(options, args)

181 tools/buildman/control.py Normal file

@@ -0,0 +1,181 @@
# Copyright (c) 2013 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
import multiprocessing
import os
import sys
import board
import bsettings
from builder import Builder
import gitutil
import patchstream
import terminal
import toolchain
def GetPlural(count):
"""Returns a plural 's' if count is not 1"""
return 's' if count != 1 else ''
def GetActionSummary(is_summary, count, selected, options):
"""Return a string summarising the intended action.
Returns:
Summary string.
"""
count = (count + options.step - 1) / options.step
str = '%s %d commit%s for %d boards' % (
'Summary of' if is_summary else 'Building', count, GetPlural(count),
len(selected))
str += ' (%d thread%s, %d job%s per thread)' % (options.threads,
GetPlural(options.threads), options.jobs, GetPlural(options.jobs))
return str
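The commit count above is rounded up to a whole number of steps with the (count + step - 1) / step idiom, which is integer division under Python 2. A small sketch:

```python
# The (count + step - 1) // step idiom rounds up without using floats.
# (The code above writes '/', which is integer division under Python 2.)
def ceil_div(count, step):
    return (count + step - 1) // step

# Building 10 commits with --step 3 means 4 builds: commits 0, 3, 6, 9.
```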
def ShowActions(series, why_selected, boards_selected, builder, options):
"""Display a list of actions that we would take, if not a dry run.
Args:
series: Series object
why_selected: Dictionary where each key is a buildman argument
provided by the user, and the value is the boards brought
in by that argument. For example, 'arm' might bring in
400 boards, so in this case the key would be 'arm' and
the value would be a list of board names.
boards_selected: Dict of selected boards, key is target name,
value is Board object
builder: The builder that will be used to build the commits
options: Command line options object
"""
col = terminal.Color()
print 'Dry run, so not doing much. But I would do this:'
print
print GetActionSummary(False, len(series.commits), boards_selected,
options)
print 'Build directory: %s' % builder.base_dir
for upto in range(0, len(series.commits), options.step):
commit = series.commits[upto]
print ' ', col.Color(col.YELLOW, commit.hash, bright=False),
print commit.subject
print
for arg in why_selected:
if arg != 'all':
print arg, ': %d boards' % why_selected[arg]
print ('Total boards to build for each commit: %d\n' %
why_selected['all'])
def DoBuildman(options, args):
"""The main control code for buildman
Args:
options: Command line options object
args: Command line arguments (list of strings)
"""
gitutil.Setup()
bsettings.Setup()
options.git_dir = os.path.join(options.git, '.git')
toolchains = toolchain.Toolchains()
toolchains.Scan(options.list_tool_chains)
if options.list_tool_chains:
toolchains.List()
print
return
# Work out how many commits to build. We want to build everything on the
# branch. We also build the upstream commit as a control so we can see
# problems introduced by the first commit on the branch.
col = terminal.Color()
count = options.count
if count == -1:
if not options.branch:
str = 'Please use -b to specify a branch to build'
print col.Color(col.RED, str)
sys.exit(1)
count = gitutil.CountCommitsInBranch(options.git_dir, options.branch)
count += 1 # Build upstream commit also
if not count:
str = ("No commits found to process in branch '%s': "
"set branch's upstream or use -c flag" % options.branch)
print col.Color(col.RED, str)
sys.exit(1)
# Work out what subset of the boards we are building
boards = board.Boards()
boards.ReadBoards(os.path.join(options.git, 'boards.cfg'))
why_selected = boards.SelectBoards(args)
selected = boards.GetSelected()
if not len(selected):
print col.Color(col.RED, 'No matching boards found')
sys.exit(1)
# Read the metadata from the commits. First look at the upstream commit,
# then the ones in the branch. We would like to do something like
# upstream/master~..branch but that isn't possible if upstream/master is
# a merge commit (it will list all the commits that form part of the
# merge)
range_expr = gitutil.GetRangeInBranch(options.git_dir, options.branch)
upstream_commit = gitutil.GetUpstream(options.git_dir, options.branch)
series = patchstream.GetMetaDataForList(upstream_commit, options.git_dir,
1)
series = patchstream.GetMetaDataForList(range_expr, options.git_dir, None,
series)
# By default we have one thread per CPU. But if there are not enough jobs
# we can have fewer threads and use a high '-j' value for make.
if not options.threads:
options.threads = min(multiprocessing.cpu_count(), len(selected))
if not options.jobs:
options.jobs = max(1, (multiprocessing.cpu_count() +
len(selected) - 1) / len(selected))
if not options.step:
options.step = len(series.commits) - 1
# Create a new builder with the selected options
output_dir = os.path.join('..', options.branch)
builder = Builder(toolchains, output_dir, options.git_dir,
options.threads, options.jobs, checkout=True,
show_unknown=options.show_unknown, step=options.step)
builder.force_config_on_failure = not options.quick
# For a dry run, just show our actions as a sanity check
if options.dry_run:
ShowActions(series, why_selected, selected, builder, options)
else:
builder.force_build = options.force_build
# Work out which boards to build
board_selected = boards.GetSelectedDict()
print GetActionSummary(options.summary, count, board_selected, options)
if options.summary:
# We can't show function sizes without board details at present
if options.show_bloat:
options.show_detail = True
builder.ShowSummary(series.commits, board_selected,
options.show_errors, options.show_sizes,
options.show_detail, options.show_bloat)
else:
builder.BuildBoards(series.commits, board_selected,
options.show_errors, options.keep_outputs)

185 tools/buildman/test.py Normal file

@@ -0,0 +1,185 @@
#
# Copyright (c) 2012 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
import os
import shutil
import sys
import tempfile
import time
import unittest
# Bring in the patman libraries
our_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(os.path.join(our_path, '../patman'))
import board
import bsettings
import builder
import control
import command
import commit
import toolchain
errors = [
'''main.c: In function 'main_loop':
main.c:260:6: warning: unused variable 'joe' [-Wunused-variable]
''',
'''main.c: In function 'main_loop':
main.c:295:2: error: 'fred' undeclared (first use in this function)
main.c:295:2: note: each undeclared identifier is reported only once for each function it appears in
make[1]: *** [main.o] Error 1
make: *** [common/libcommon.o] Error 2
Make failed
''',
'''main.c: In function 'main_loop':
main.c:280:6: warning: unused variable 'mary' [-Wunused-variable]
''',
'''powerpc-linux-ld: warning: dot moved backwards before `.bss'
powerpc-linux-ld: warning: dot moved backwards before `.bss'
powerpc-linux-ld: u-boot: section .text lma 0xfffc0000 overlaps previous sections
powerpc-linux-ld: u-boot: section .rodata lma 0xfffef3ec overlaps previous sections
powerpc-linux-ld: u-boot: section .reloc lma 0xffffa400 overlaps previous sections
powerpc-linux-ld: u-boot: section .data lma 0xffffcd38 overlaps previous sections
powerpc-linux-ld: u-boot: section .u_boot_cmd lma 0xffffeb40 overlaps previous sections
powerpc-linux-ld: u-boot: section .bootpg lma 0xfffff198 overlaps previous sections
'''
]
# hash, subject, return code, list of errors/warnings
commits = [
['1234', 'upstream/master, ok', 0, []],
['5678', 'Second commit, a warning', 0, errors[0:1]],
['9012', 'Third commit, error', 1, errors[0:2]],
['3456', 'Fourth commit, warning', 0, [errors[0], errors[2]]],
['7890', 'Fifth commit, link errors', 1, [errors[0], errors[3]]],
['abcd', 'Sixth commit, fixes all errors', 0, []]
]
boards = [
['board0', 'arm', 'armv7', 'ARM Board 1', 'Tester', '', ''],
['board1', 'arm', 'armv7', 'ARM Board 2', 'Tester', '', ''],
['board2', 'powerpc', 'powerpc', 'PowerPC board 1', 'Tester', '', ''],
['board3', 'powerpc', 'mpc5xx', 'PowerPC board 2', 'Tester', '', ''],
['board4', 'sandbox', 'sandbox', 'Sandbox board', 'Tester', '', '']
]
class Options:
"""Class that holds build options"""
pass
class TestBuild(unittest.TestCase):
"""Test buildman
TODO: Write tests for the rest of the functionality
"""
def setUp(self):
# Set up commits to build
self.commits = []
sequence = 0
for commit_info in commits:
comm = commit.Commit(commit_info[0])
comm.subject = commit_info[1]
comm.return_code = commit_info[2]
comm.error_list = commit_info[3]
comm.sequence = sequence
sequence += 1
self.commits.append(comm)
# Set up boards to build
self.boards = board.Boards()
for brd in boards:
self.boards.AddBoard(board.Board(*brd))
self.boards.SelectBoards([])
# Set up the toolchains
bsettings.Setup()
self.toolchains = toolchain.Toolchains()
self.toolchains.Add('arm-linux-gcc', test=False)
self.toolchains.Add('sparc-linux-gcc', test=False)
self.toolchains.Add('powerpc-linux-gcc', test=False)
self.toolchains.Add('gcc', test=False)
def Make(self, commit, brd, stage, *args, **kwargs):
result = command.CommandResult()
boardnum = int(brd.target[-1])
result.return_code = 0
result.stderr = ''
result.stdout = ('This is the test output for board %s, commit %s' %
(brd.target, commit.hash))
if boardnum >= 1 and boardnum >= commit.sequence:
result.return_code = commit.return_code
result.stderr = ''.join(commit.error_list)
if stage == 'build':
target_dir = None
for arg in args:
if arg.startswith('O='):
target_dir = arg[2:]
if not os.path.isdir(target_dir):
os.mkdir(target_dir)
#time.sleep(.2 + boardnum * .2)
result.combined = result.stdout + result.stderr
return result
def testBasic(self):
"""Test basic builder operation"""
output_dir = tempfile.mkdtemp()
if not os.path.isdir(output_dir):
os.mkdir(output_dir)
build = builder.Builder(self.toolchains, output_dir, None, 1, 2,
checkout=False, show_unknown=False)
build.do_make = self.Make
board_selected = self.boards.GetSelectedDict()
#build.BuildCommits(self.commits, board_selected, False)
build.BuildBoards(self.commits, board_selected, False, False)
build.ShowSummary(self.commits, board_selected, True, False,
False, False)
def _testGit(self):
"""Test basic builder operation by building a branch"""
base_dir = tempfile.mkdtemp()
if not os.path.isdir(base_dir):
os.mkdir(base_dir)
options = Options()
options.git = os.getcwd()
options.summary = False
options.jobs = None
options.dry_run = False
#options.git = os.path.join(base_dir, 'repo')
options.branch = 'test-buildman'
options.force_build = False
options.list_tool_chains = False
options.count = -1
options.git_dir = None
options.threads = None
options.show_unknown = False
options.quick = False
options.show_errors = False
options.keep_outputs = False
args = ['tegra20']
control.DoBuildman(options, args)
if __name__ == "__main__":
unittest.main()

185 tools/buildman/toolchain.py Normal file

@@ -0,0 +1,185 @@
# Copyright (c) 2012 The Chromium OS Authors.
#
# See file CREDITS for list of people who contributed to this
# project.
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of
# the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307 USA
#
import glob
import os
import bsettings
import command
class Toolchain:
"""A single toolchain
Public members:
gcc: Full path to C compiler
path: Directory path containing C compiler
cross: Cross compile string, e.g. 'arm-linux-'
arch: Architecture of toolchain as determined from the first
component of the filename. E.g. arm-linux-gcc becomes arm
"""
def __init__(self, fname, test, verbose=False):
"""Create a new toolchain object.
Args:
fname: Filename of the gcc component
test: True to run the toolchain to test it
"""
self.gcc = fname
self.path = os.path.dirname(fname)
self.cross = os.path.basename(fname)[:-3]
pos = self.cross.find('-')
self.arch = self.cross[:pos] if pos != -1 else 'sandbox'
env = self.MakeEnvironment()
# As a basic sanity check, run the C compiler with --version
cmd = [fname, '--version']
if test:
result = command.RunPipe([cmd], capture=True, env=env)
self.ok = result.return_code == 0
if verbose:
print 'Tool chain test: ',
if self.ok:
print 'OK'
else:
print 'BAD'
print 'Command: ', cmd
print result.stdout
print result.stderr
else:
self.ok = True
self.priority = self.GetPriority(fname)
def GetPriority(self, fname):
"""Return the priority of the toolchain.
Toolchains are ranked according to their suitability by their
filename prefix.
Args:
fname: Filename of toolchain
Returns:
Priority of toolchain, 0=highest, 20=lowest.
"""
priority_list = ['-elf', '-unknown-linux-gnu', '-linux',
'-none-linux-gnueabi', '-uclinux', '-none-eabi',
'-gentoo-linux-gnu', '-linux-gnueabi', '-le-linux', '-uclinux']
for prio in range(len(priority_list)):
if priority_list[prio] in fname:
return prio
return prio
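A rough stand-alone sketch of this ranking (get_priority here is a hypothetical simplification: it drops the duplicated '-uclinux' entry and returns len(priority_list) when nothing matches, where the code above reuses the final loop index):

```python
# Sketch of the priority ranking above: earlier entries in the list are
# preferred, and an unmatched filename gets the lowest priority.
PRIORITY_LIST = ['-elf', '-unknown-linux-gnu', '-linux',
                 '-none-linux-gnueabi', '-uclinux', '-none-eabi',
                 '-gentoo-linux-gnu', '-linux-gnueabi', '-le-linux']

def get_priority(fname):
    for prio, pattern in enumerate(PRIORITY_LIST):
        if pattern in fname:
            return prio
    return len(PRIORITY_LIST)   # no match: lowest priority
```

Note the substring test means 'powerpc-linux-gcc' matches '-linux' (priority 2) before the more specific entries are reached.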
def MakeEnvironment(self):
"""Returns an environment for using the toolchain.
This takes the current environment, adds CROSS_COMPILE and
augments PATH so that the toolchain will operate correctly.
"""
env = dict(os.environ)
env['CROSS_COMPILE'] = self.cross
env['PATH'] += (':' + self.path)
return env
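A minimal sketch of the environment set-up, assuming illustrative toolchain values:

```python
# Sketch of MakeEnvironment(): copy the current environment, set
# CROSS_COMPILE and extend PATH. The toolchain values are illustrative.
import os

def make_environment(cross, path):
    env = dict(os.environ)
    env['CROSS_COMPILE'] = cross
    env['PATH'] = env.get('PATH', '') + ':' + path
    return env

env = make_environment('arm-linux-', '/opt/arm/bin')
```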
class Toolchains:
"""Manage a list of toolchains for building U-Boot
We select one toolchain for each architecture type
Public members:
toolchains: Dict of Toolchain objects, keyed by architecture name
paths: List of paths to check for toolchains (may contain wildcards)
"""
def __init__(self):
self.toolchains = {}
self.paths = []
for name, value in bsettings.GetItems('toolchain'):
if '*' in value:
self.paths += glob.glob(value)
else:
self.paths.append(value)
def Add(self, fname, test=True, verbose=False):
"""Add a toolchain to our list
We select the given toolchain as our preferred one for its
architecture if it is a higher priority than the others.
Args:
fname: Filename of toolchain's gcc driver
test: True to run the toolchain to test it
"""
toolchain = Toolchain(fname, test, verbose)
add_it = toolchain.ok
if toolchain.arch in self.toolchains:
add_it = (toolchain.priority <
self.toolchains[toolchain.arch].priority)
if add_it:
self.toolchains[toolchain.arch] = toolchain
def Scan(self, verbose):
"""Scan for available toolchains and select the best for each arch.
We look for all the toolchains we can find, figure out the
architecture for each, and whether it works. Then we select the
highest priority toolchain for each arch.
Args:
verbose: True to print out progress information
"""
if verbose: print 'Scanning for tool chains'
for path in self.paths:
if verbose: print " - scanning path '%s'" % path
for subdir in ['.', 'bin', 'usr/bin']:
dirname = os.path.join(path, subdir)
if verbose: print " - looking in '%s'" % dirname
for fname in glob.glob(dirname + '/*gcc'):
if verbose: print " - found '%s'" % fname
self.Add(fname, True, verbose)
def List(self):
"""List out the selected toolchains for each architecture"""
print 'List of available toolchains (%d):' % len(self.toolchains)
if len(self.toolchains):
for key, value in sorted(self.toolchains.iteritems()):
print '%-10s: %s' % (key, value.gcc)
else:
print 'None'
def Select(self, arch):
"""Returns the toolchain for a given architecture
Args:
arch: Name of architecture (e.g. 'arm', 'ppc_8xx')
Returns:
toolchain object, or None if none found
"""
for name, value in bsettings.GetItems('toolchain-alias'):
if arch == name:
arch = value
if not arch in self.toolchains:
raise ValueError("No tool chain found for arch '%s'" % arch)
return self.toolchains[arch]

@@ -68,7 +68,7 @@ will get a consistent result each time.
How to configure it
===================
For most cases of using patman for U-Boot developement patman will
For most cases of using patman for U-Boot development, patman will
locate and use the file 'doc/git-mailrc' in your U-Boot directory.
This contains most of the aliases you will need.
@@ -182,6 +182,10 @@ END
Sets the cover letter contents for the series. The first line
will become the subject of the cover letter
Cover-letter-cc: email / alias
Additional email addresses / aliases to send cover letter to (you
can add this multiple times)
Series-notes:
blah blah
blah blah
@@ -198,8 +202,9 @@ END
override the default signoff that patman automatically adds.
Tested-by: Their Name <email>
Reviewed-by: Their Name <email>
Acked-by: Their Name <email>
These indicate that someone has acked or tested your patch.
These indicate that someone has tested/reviewed/acked your patch.
When you get this reply on the mailing list, you can add this
tag to the relevant commit and the script will include it when
you send out the next version. If 'Tested-by:' is set to
@@ -231,7 +236,6 @@ TEST=...
Change-Id:
Review URL:
Reviewed-on:
Reviewed-by:
Exercise for the reader: Try adding some tags to one of your current
@@ -263,7 +267,13 @@ will create a patch which is copied to x86, arm, sandbox, mikef, ag and
afleming.
If you have a cover letter it will get sent to the union of the CC lists of
all of the other patches.
all of the other patches. If you want to send it to additional people you
can add a tag:
Cover-letter-cc: <list of addresses>
These people will get the cover letter even if they are not on the To/Cc
list for any of the patches.
Example Work Flow

@@ -19,6 +19,7 @@
# MA 02111-1307 USA
#
import collections
import command
import gitutil
import os
@@ -57,63 +58,86 @@ def CheckPatch(fname, verbose=False):
"""Run checkpatch.pl on a file.
Returns:
4-tuple containing:
result: False=failure, True=ok
namedtuple containing:
ok: False=failure, True=ok
problems: List of problems, each a dict:
'type': error, warning or check
'msg': text message
'file' : filename
'line': line number
errors: Number of errors
warnings: Number of warnings
checks: Number of checks
lines: Number of lines
stdout: Full output of checkpatch
"""
result = False
error_count, warning_count, lines = 0, 0, 0
problems = []
fields = ['ok', 'problems', 'errors', 'warnings', 'checks', 'lines',
'stdout']
result = collections.namedtuple('CheckPatchResult', fields)
result.ok = False
result.errors, result.warnings, result.checks = 0, 0, 0
result.lines = 0
result.problems = []
chk = FindCheckPatch()
item = {}
stdout = command.Output(chk, '--no-tree', fname)
result.stdout = command.Output(chk, '--no-tree', fname)
#pipe = subprocess.Popen(cmd, stdout=subprocess.PIPE)
#stdout, stderr = pipe.communicate()
# total: 0 errors, 0 warnings, 159 lines checked
# or:
# total: 0 errors, 2 warnings, 7 checks, 473 lines checked
re_stats = re.compile('total: (\\d+) errors, (\d+) warnings, (\d+)')
re_stats_full = re.compile('total: (\\d+) errors, (\d+) warnings, (\d+)'
' checks, (\d+)')
re_ok = re.compile('.*has no obvious style problems')
re_bad = re.compile('.*has style problems, please review')
re_error = re.compile('ERROR: (.*)')
re_warning = re.compile('WARNING: (.*)')
re_check = re.compile('CHECK: (.*)')
re_file = re.compile('#\d+: FILE: ([^:]*):(\d+):')
for line in stdout.splitlines():
for line in result.stdout.splitlines():
if verbose:
print line
# A blank line indicates the end of a message
if not line and item:
problems.append(item)
result.problems.append(item)
item = {}
match = re_stats.match(line)
match = re_stats_full.match(line)
if not match:
match = re_stats.match(line)
if match:
error_count = int(match.group(1))
warning_count = int(match.group(2))
lines = int(match.group(3))
result.errors = int(match.group(1))
result.warnings = int(match.group(2))
if len(match.groups()) == 4:
result.checks = int(match.group(3))
result.lines = int(match.group(4))
else:
result.lines = int(match.group(3))
elif re_ok.match(line):
result = True
result.ok = True
elif re_bad.match(line):
result = False
match = re_error.match(line)
if match:
item['msg'] = match.group(1)
result.ok = False
err_match = re_error.match(line)
warn_match = re_warning.match(line)
file_match = re_file.match(line)
check_match = re_check.match(line)
if err_match:
item['msg'] = err_match.group(1)
item['type'] = 'error'
match = re_warning.match(line)
if match:
item['msg'] = match.group(1)
elif warn_match:
item['msg'] = warn_match.group(1)
item['type'] = 'warning'
match = re_file.match(line)
if match:
item['file'] = match.group(1)
item['line'] = int(match.group(2))
elif check_match:
item['msg'] = check_match.group(1)
item['type'] = 'check'
elif file_match:
item['file'] = file_match.group(1)
item['line'] = int(file_match.group(2))
return result, problems, error_count, warning_count, lines, stdout
return result
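A sketch of how the two 'total:' regular expressions above cooperate: the four-group pattern is tried first so that summary lines carrying a checks count are not mis-parsed by the three-group pattern. The parse_totals helper is hypothetical, for illustration only:

```python
# Sketch of the checkpatch summary-line parsing above. The two patterns
# distinguish totals with and without a checks count; the longer pattern
# must be tried first.
import re

re_stats = re.compile(r'total: (\d+) errors, (\d+) warnings, (\d+)')
re_stats_full = re.compile(r'total: (\d+) errors, (\d+) warnings, (\d+)'
                           r' checks, (\d+)')

def parse_totals(line):
    match = re_stats_full.match(line) or re_stats.match(line)
    if not match:
        return None
    groups = [int(g) for g in match.groups()]
    if len(groups) == 4:
        errors, warnings, checks, lines = groups
    else:
        errors, warnings, lines = groups
        checks = 0
    return errors, warnings, checks, lines
```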
def GetWarningMsg(col, msg_type, fname, line, msg):
'''Create a message for a given file/line
@@ -128,37 +152,39 @@ def GetWarningMsg(col, msg_type, fname, line, msg):
msg_type = col.Color(col.YELLOW, msg_type)
elif msg_type == 'error':
msg_type = col.Color(col.RED, msg_type)
elif msg_type == 'check':
msg_type = col.Color(col.MAGENTA, msg_type)
return '%s: %s,%d: %s' % (msg_type, fname, line, msg)
def CheckPatches(verbose, args):
'''Run the checkpatch.pl script on each patch'''
error_count = 0
warning_count = 0
error_count, warning_count, check_count = 0, 0, 0
col = terminal.Color()
for fname in args:
ok, problems, errors, warnings, lines, stdout = CheckPatch(fname,
verbose)
if not ok:
error_count += errors
warning_count += warnings
print '%d errors, %d warnings for %s:' % (errors,
warnings, fname)
if len(problems) != error_count + warning_count:
result = CheckPatch(fname, verbose)
if not result.ok:
error_count += result.errors
warning_count += result.warnings
check_count += result.checks
print '%d errors, %d warnings, %d checks for %s:' % (result.errors,
result.warnings, result.checks, col.Color(col.BLUE, fname))
if (len(result.problems) != result.errors + result.warnings +
result.checks):
print "Internal error: some problems lost"
for item in problems:
print GetWarningMsg(col, item['type'],
for item in result.problems:
print GetWarningMsg(col, item.get('type', '<unknown>'),
item.get('file', '<unknown>'),
item.get('line', 0), item['msg'])
item.get('line', 0), item.get('msg', 'message'))
print
#print stdout
if error_count != 0 or warning_count != 0:
str = 'checkpatch.pl found %d error(s), %d warning(s)' % (
error_count, warning_count)
if error_count or warning_count or check_count:
str = 'checkpatch.pl found %d error(s), %d warning(s), %d check(s)'
color = col.GREEN
if warning_count:
color = col.YELLOW
if error_count:
color = col.RED
print col.Color(color, str)
print col.Color(color, str % (error_count, warning_count, check_count))
return False
return True

@@ -20,53 +20,98 @@
#
import os
import subprocess
import cros_subprocess
"""Shell command ease-ups for Python."""
def RunPipe(pipeline, infile=None, outfile=None,
capture=False, oneline=False, hide_stderr=False):
class CommandResult:
"""A class which captures the result of executing a command.
Members:
stdout: stdout obtained from command, as a string
stderr: stderr obtained from command, as a string
return_code: Return code from command
exception: Exception received, or None if all ok
"""
def __init__(self):
self.stdout = None
self.stderr = None
self.return_code = None
self.exception = None
def RunPipe(pipe_list, infile=None, outfile=None,
capture=False, capture_stderr=False, oneline=False,
raise_on_error=True, cwd=None, **kwargs):
"""
Perform a command pipeline, with optional input/output filenames.
hide_stderr Don't allow output of stderr (default False)
Args:
pipe_list: List of command lines to execute. Each command line is
piped into the next, and is itself a list of strings. For
example [ ['ls', '.git'], ['wc'] ] will pipe the output of
'ls .git' into 'wc'.
infile: File to provide stdin to the pipeline
outfile: File to store stdout
capture: True to capture output
capture_stderr: True to capture stderr
oneline: True to strip newline chars from output
kwargs: Additional keyword arguments to cros_subprocess.Popen()
Returns:
CommandResult object
"""
result = CommandResult()
last_pipe = None
pipeline = list(pipe_list)
user_pipestr = '|'.join([' '.join(pipe) for pipe in pipe_list])
while pipeline:
cmd = pipeline.pop(0)
kwargs = {}
if last_pipe is not None:
kwargs['stdin'] = last_pipe.stdout
elif infile:
kwargs['stdin'] = open(infile, 'rb')
if pipeline or capture:
kwargs['stdout'] = subprocess.PIPE
kwargs['stdout'] = cros_subprocess.PIPE
elif outfile:
kwargs['stdout'] = open(outfile, 'wb')
if hide_stderr:
kwargs['stderr'] = open('/dev/null', 'wb')
if capture_stderr:
kwargs['stderr'] = cros_subprocess.PIPE
last_pipe = subprocess.Popen(cmd, **kwargs)
try:
last_pipe = cros_subprocess.Popen(cmd, cwd=cwd, **kwargs)
except Exception, err:
result.exception = err
if raise_on_error:
raise Exception("Error running '%s': %s" % (user_pipestr, err))
result.return_code = 255
return result
if capture:
ret = last_pipe.communicate()[0]
if not ret:
return None
elif oneline:
return ret.rstrip('\r\n')
else:
return ret
result.stdout, result.stderr, result.combined = (
last_pipe.CommunicateFilter(None))
if result.stdout and oneline:
result.output = result.stdout.rstrip('\r\n')
result.return_code = last_pipe.wait()
else:
return os.waitpid(last_pipe.pid, 0)[1] == 0
result.return_code = os.waitpid(last_pipe.pid, 0)[1]
if raise_on_error and result.return_code:
raise Exception("Error running '%s'" % user_pipestr)
return result
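The pipeline chaining above (each command's stdout feeding the next command's stdin) can be sketched with plain subprocess; this minimal run_pipe is hypothetical and assumes the POSIX echo and tr tools are available:

```python
# Minimal sketch of the pipeline chaining in RunPipe(): each command's
# stdout becomes the next command's stdin. Assumes POSIX 'echo'/'tr'.
import subprocess

def run_pipe(pipe_list):
    last = None
    for cmd in pipe_list:
        last = subprocess.Popen(cmd,
                                stdin=last.stdout if last else None,
                                stdout=subprocess.PIPE)
    out = last.communicate()[0]
    return out.decode().rstrip('\r\n')

result = run_pipe([['echo', 'hello world'], ['tr', 'a-z', 'A-Z']])
# result == 'HELLO WORLD'
```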
def Output(*cmd):
return RunPipe([cmd], capture=True)
return RunPipe([cmd], capture=True, raise_on_error=False).stdout
def OutputOneLine(*cmd):
return RunPipe([cmd], capture=True, oneline=True)
def OutputOneLine(*cmd, **kwargs):
raise_on_error = kwargs.pop('raise_on_error', True)
return (RunPipe([cmd], capture=True, oneline=True,
raise_on_error=raise_on_error,
**kwargs).stdout.strip())
def Run(*cmd, **kwargs):
return RunPipe([cmd], **kwargs)
return RunPipe([cmd], **kwargs).stdout
def RunList(cmd):
return RunPipe([cmd], capture=True)
return RunPipe([cmd], capture=True).stdout
def StopAll():
cros_subprocess.stay_alive = False


@ -22,7 +22,7 @@
import re
# Separates a tag: at the beginning of the subject from the rest of it
re_subject_tag = re.compile('([^:]*):\s*(.*)')
re_subject_tag = re.compile('([^:\s]*):\s*(.*)')
class Commit:
"""Holds information about a single commit/patch in the series.
@ -61,9 +61,10 @@ class Commit:
Subject tags look like this:
propounder: Change the widget to propound correctly
propounder: fort: Change the widget to propound correctly
Multiple tags are supported. The list is updated in self.tag
Here the tags are propounder and fort. Multiple tags are supported.
The list is updated in self.tag.
Returns:
None if ok, else the name of a tag with no email alias
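The tightened pattern only accepts a single whitespace-free word as a tag. A quick standalone comparison of the two patterns (the subject string is invented for illustration):

```python
import re

old_re = re.compile(r'([^:]*):\s*(.*)')    # allows spaces inside the "tag"
new_re = re.compile(r'([^:\s]*):\s*(.*)')  # a tag must be a single word

subject = 'Revert "foo: tidy widgets"'
# The old pattern happily treats 'Revert "foo' as a tag...
print(old_re.match(subject).group(1))
# ...while the new one refuses, since the candidate tag contains a space.
print(new_re.match(subject))
# A real tag still matches as before.
print(new_re.match('fort: Change the widget').group(1))  # fort
```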


@ -0,0 +1,397 @@
# Copyright (c) 2012 The Chromium OS Authors.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
#
# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.
"""Subprocess execution
This module holds a subclass of subprocess.Popen with our own required
features, mainly that we get access to the subprocess output while it
is running rather than just at the end. This makes it easier to show
progress information and filter output in real time.
"""
import errno
import os
import pty
import select
import subprocess
import sys
import unittest
# Import these here so the caller does not need to import subprocess also.
PIPE = subprocess.PIPE
STDOUT = subprocess.STDOUT
PIPE_PTY = -3 # Pipe output through a pty
stay_alive = True
class Popen(subprocess.Popen):
"""Like subprocess.Popen with ptys and incremental output
This class deals with running a child process and filtering its output on
both stdout and stderr while it is running. We do this so we can monitor
progress, and possibly relay the output to the user if requested.
The class is similar to subprocess.Popen; the equivalent is something like:
Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
But this class has many fewer features, and two enhancements:
1. Rather than getting the output data only at the end, this class sends it
to a provided operation as it arrives.
2. We use pseudo terminals so that the child will hopefully flush its output
to us as soon as it is produced, rather than waiting for the end of a
line.
Use CommunicateFilter() to handle output from the subprocess.
"""
def __init__(self, args, stdin=None, stdout=PIPE_PTY, stderr=PIPE_PTY,
shell=False, cwd=None, env=None, **kwargs):
"""Cut-down constructor
Args:
args: Program and arguments for subprocess to execute.
stdin: See subprocess.Popen()
stdout: See subprocess.Popen(), except that we support the sentinel
value of cros_subprocess.PIPE_PTY.
stderr: See subprocess.Popen(), except that we support the sentinel
value of cros_subprocess.PIPE_PTY.
shell: See subprocess.Popen()
cwd: Working directory to change to for subprocess, or None if none.
env: Environment to use for this subprocess, or None to inherit parent.
kwargs: No other arguments are supported at the moment. Passing other
arguments will cause a ValueError to be raised.
"""
stdout_pty = None
stderr_pty = None
if stdout == PIPE_PTY:
stdout_pty = pty.openpty()
stdout = os.fdopen(stdout_pty[1])
if stderr == PIPE_PTY:
stderr_pty = pty.openpty()
stderr = os.fdopen(stderr_pty[1])
super(Popen, self).__init__(args, stdin=stdin,
stdout=stdout, stderr=stderr, shell=shell, cwd=cwd, env=env,
**kwargs)
# If we're on a PTY, we passed the slave half of the PTY to the subprocess.
# We want to use the master half on our end from now on. Setting this here
# does make some assumptions about the implementation of subprocess, but
# those assumptions are pretty minor.
# Note that if stderr is STDOUT, then self.stderr will be set to None by
# this constructor.
if stdout_pty is not None:
self.stdout = os.fdopen(stdout_pty[0])
if stderr_pty is not None:
self.stderr = os.fdopen(stderr_pty[0])
# Insist that unit tests exist for other arguments we don't support.
if kwargs:
raise ValueError("Unit tests do not test extra args - please add tests")
def CommunicateFilter(self, output):
"""Interact with process: Read data from stdout and stderr.
This method runs until end-of-file is reached, then waits for the
subprocess to terminate.
The output function is sent all output from the subprocess and must be
defined like this:
def Output([self,] stream, data)
Args:
stream: the stream the output was received on, which will be
sys.stdout or sys.stderr.
data: a string containing the data
Note: The data read is buffered in memory, so do not use this
method if the data size is large or unlimited.
Args:
output: Function to call with each fragment of output.
Returns:
A tuple (stdout, stderr, combined) which is the data received on
stdout, stderr and the combined data (interleaved stdout and stderr).
Note that the interleaved output will only be sensible if you have
set both stdout and stderr to PIPE or PIPE_PTY. Even then it depends on
the timing of the output in the subprocess. If a subprocess flips
between stdout and stderr quickly in succession, by the time we come to
read the output from each we may see several lines in each, and will read
all the stdout lines, then all the stderr lines. So the interleaving
may not be correct. In this case you might want to pass
stderr=cros_subprocess.STDOUT to the constructor.
This feature is still useful for subprocesses where stderr is
rarely used and indicates an error.
Note also that if you set stderr to STDOUT, then stderr will be empty
and the combined output will just be the same as stdout.
"""
read_set = []
write_set = []
stdout = None # Return
stderr = None # Return
if self.stdin:
# Flush stdio buffer. This might block, if the user has
# been writing to .stdin in an uncontrolled fashion.
self.stdin.flush()
if input:
write_set.append(self.stdin)
else:
self.stdin.close()
if self.stdout:
read_set.append(self.stdout)
stdout = []
if self.stderr and self.stderr != self.stdout:
read_set.append(self.stderr)
stderr = []
combined = []
input_offset = 0
while read_set or write_set:
try:
rlist, wlist, _ = select.select(read_set, write_set, [], 0.2)
except select.error, e:
if e.args[0] == errno.EINTR:
continue
raise
if not stay_alive:
self.terminate()
if self.stdin in wlist:
# When select has indicated that the file is writable,
# we can write up to PIPE_BUF bytes without risk of
# blocking. POSIX defines PIPE_BUF >= 512
chunk = input[input_offset : input_offset + 512]
bytes_written = os.write(self.stdin.fileno(), chunk)
input_offset += bytes_written
if input_offset >= len(input):
self.stdin.close()
write_set.remove(self.stdin)
if self.stdout in rlist:
data = ""
# We will get an error on read if the pty is closed
try:
data = os.read(self.stdout.fileno(), 1024)
except OSError:
pass
if data == "":
self.stdout.close()
read_set.remove(self.stdout)
else:
stdout.append(data)
combined.append(data)
if output:
output(sys.stdout, data)
if self.stderr in rlist:
data = ""
# We will get an error on read if the pty is closed
try:
data = os.read(self.stderr.fileno(), 1024)
except OSError:
pass
if data == "":
self.stderr.close()
read_set.remove(self.stderr)
else:
stderr.append(data)
combined.append(data)
if output:
output(sys.stderr, data)
# All data exchanged. Translate lists into strings.
if stdout is not None:
stdout = ''.join(stdout)
else:
stdout = ''
if stderr is not None:
stderr = ''.join(stderr)
else:
stderr = ''
combined = ''.join(combined)
# Translate newlines, if requested. We cannot let the file
# object do the translation: It is based on stdio, which is
# impossible to combine with select (unless forcing no
# buffering).
if self.universal_newlines and hasattr(file, 'newlines'):
if stdout:
stdout = self._translate_newlines(stdout)
if stderr:
stderr = self._translate_newlines(stderr)
self.wait()
return (stdout, stderr, combined)
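A cut-down, standalone sketch of the incremental-read pattern used above (stdout only, an ordinary pipe rather than a pty, and no output callback):

```python
import os
import select
import subprocess

# Spawn a child and read its output incrementally with select(), in the
# spirit of CommunicateFilter (simplified: stdout only, no pty).
proc = subprocess.Popen(['sh', '-c', 'echo first; sleep 0.1; echo second'],
                        stdout=subprocess.PIPE)
chunks = []
fd = proc.stdout.fileno()
while True:
    rlist, _, _ = select.select([fd], [], [], 0.2)
    if fd in rlist:
        data = os.read(fd, 1024)
        if not data:        # EOF: the child has closed its end of the pipe
            break
        chunks.append(data)
proc.wait()
print(b''.join(chunks).decode())
```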
# Just being a unittest.TestCase gives us 14 public methods. Unless we
# disable this, we can only have 6 tests in a TestCase. That's not enough.
#
# pylint: disable=R0904
class TestSubprocess(unittest.TestCase):
"""Our simple unit test for this module"""
class MyOperation:
"""Provides an operation that we can pass to Popen"""
def __init__(self, input_to_send=None):
"""Constructor to set up the operation and possible input.
Args:
input_to_send: a text string to send when we first get input. We will
add \r\n to the string.
"""
self.stdout_data = ''
self.stderr_data = ''
self.combined_data = ''
self.stdin_pipe = None
self._input_to_send = input_to_send
if input_to_send:
pipe = os.pipe()
self.stdin_read_pipe = pipe[0]
self._stdin_write_pipe = os.fdopen(pipe[1], 'w')
def Output(self, stream, data):
"""Output handler for Popen. Stores the data for later comparison"""
if stream == sys.stdout:
self.stdout_data += data
if stream == sys.stderr:
self.stderr_data += data
self.combined_data += data
# Output the input string if we have one.
if self._input_to_send:
self._stdin_write_pipe.write(self._input_to_send + '\r\n')
self._stdin_write_pipe.flush()
def _BasicCheck(self, plist, oper):
"""Basic checks that the output looks sane."""
self.assertEqual(plist[0], oper.stdout_data)
self.assertEqual(plist[1], oper.stderr_data)
self.assertEqual(plist[2], oper.combined_data)
# The total length of stdout and stderr should equal the combined length
self.assertEqual(len(plist[0]) + len(plist[1]), len(plist[2]))
def test_simple(self):
"""Simple redirection: Get process list"""
oper = TestSubprocess.MyOperation()
plist = Popen(['ps']).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
def test_stderr(self):
"""Check stdout and stderr"""
oper = TestSubprocess.MyOperation()
cmd = 'echo fred >/dev/stderr && false || echo bad'
plist = Popen([cmd], shell=True).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], 'bad\r\n')
self.assertEqual(plist [1], 'fred\r\n')
def test_shell(self):
"""Check with and without shell works"""
oper = TestSubprocess.MyOperation()
cmd = 'echo test >/dev/stderr'
self.assertRaises(OSError, Popen, [cmd], shell=False)
plist = Popen([cmd], shell=True).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(len(plist [0]), 0)
self.assertEqual(plist [1], 'test\r\n')
def test_list_args(self):
"""Check with and without shell works using list arguments"""
oper = TestSubprocess.MyOperation()
cmd = ['echo', 'test', '>/dev/stderr']
plist = Popen(cmd, shell=False).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], ' '.join(cmd[1:]) + '\r\n')
self.assertEqual(len(plist [1]), 0)
oper = TestSubprocess.MyOperation()
# this should be interpreted as 'echo' with the other args dropped
cmd = ['echo', 'test', '>/dev/stderr']
plist = Popen(cmd, shell=True).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], '\r\n')
def test_cwd(self):
"""Check we can change directory"""
for shell in (False, True):
oper = TestSubprocess.MyOperation()
plist = Popen('pwd', shell=shell, cwd='/tmp').CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], '/tmp\r\n')
def test_env(self):
"""Check we can change environment"""
for add in (False, True):
oper = TestSubprocess.MyOperation()
env = os.environ
if add:
env ['FRED'] = 'fred'
cmd = 'echo $FRED'
plist = Popen(cmd, shell=True, env=env).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], add and 'fred\r\n' or '\r\n')
def test_extra_args(self):
"""Check we can't add extra arguments"""
self.assertRaises(ValueError, Popen, 'true', close_fds=False)
def test_basic_input(self):
"""Check that incremental input works
We set up a subprocess which will prompt for a name. When we see this prompt
we send the name as input to the process. It should then print the name
properly to stdout.
"""
oper = TestSubprocess.MyOperation('Flash')
prompt = 'What is your name?: '
cmd = 'echo -n "%s"; read name; echo Hello $name' % prompt
plist = Popen([cmd], stdin=oper.stdin_read_pipe,
shell=True).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(len(plist [1]), 0)
self.assertEqual(plist [0], prompt + 'Hello Flash\r\r\n')
def test_isatty(self):
"""Check that ptys appear as terminals to the subprocess"""
oper = TestSubprocess.MyOperation()
cmd = ('if [ -t %d ]; then echo "terminal %d" >&%d; '
'else echo "not %d" >&%d; fi;')
both_cmds = ''
for fd in (1, 2):
both_cmds += cmd % (fd, fd, fd, fd, fd)
plist = Popen(both_cmds, shell=True).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], 'terminal 1\r\n')
self.assertEqual(plist [1], 'terminal 2\r\n')
# Now try with PIPE and make sure it is not a terminal
oper = TestSubprocess.MyOperation()
plist = Popen(both_cmds, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
shell=True).CommunicateFilter(oper.Output)
self._BasicCheck(plist, oper)
self.assertEqual(plist [0], 'not 1\n')
self.assertEqual(plist [1], 'not 2\n')
if __name__ == '__main__':
unittest.main()


@ -23,11 +23,12 @@ import command
import re
import os
import series
import settings
import subprocess
import sys
import terminal
import settings
def CountCommitsToBranch():
"""Returns number of commits between HEAD and the tracking branch.
@ -40,10 +41,123 @@ def CountCommitsToBranch():
"""
pipe = [['git', 'log', '--no-color', '--oneline', '@{upstream}..'],
['wc', '-l']]
stdout = command.RunPipe(pipe, capture=True, oneline=True)
stdout = command.RunPipe(pipe, capture=True, oneline=True).stdout
patch_count = int(stdout)
return patch_count
def GetUpstream(git_dir, branch):
"""Returns the name of the upstream for a branch
Args:
git_dir: Git directory containing repo
branch: Name of branch
Returns:
Name of upstream branch (e.g. 'upstream/master') or None if none
"""
remote = command.OutputOneLine('git', '--git-dir', git_dir, 'config',
'branch.%s.remote' % branch)
merge = command.OutputOneLine('git', '--git-dir', git_dir, 'config',
'branch.%s.merge' % branch)
if remote == '.':
return merge
elif remote and merge:
leaf = merge.split('/')[-1]
return '%s/%s' % (remote, leaf)
else:
raise ValueError, ("Cannot determine upstream branch for branch "
"'%s' remote='%s', merge='%s'" % (branch, remote, merge))
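The remote/merge combination rule can be tried in isolation; upstream_name below is a hypothetical pure helper mirroring the branch of logic above:

```python
def upstream_name(remote, merge):
    """Combine branch.<name>.remote and branch.<name>.merge into an
    upstream name, mirroring the logic in GetUpstream()."""
    if remote == '.':
        # Local tracking branch: the merge ref is the upstream itself
        return merge
    elif remote and merge:
        leaf = merge.split('/')[-1]
        return '%s/%s' % (remote, leaf)
    raise ValueError('cannot determine upstream')

print(upstream_name('origin', 'refs/heads/master'))  # origin/master
print(upstream_name('.', 'refs/heads/us-next'))      # refs/heads/us-next
```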
def GetRangeInBranch(git_dir, branch, include_upstream=False):
"""Returns an expression for the commits in the given branch.
Args:
git_dir: Directory containing git repo
branch: Name of branch
include_upstream: True to include the upstream commit in the range
Return:
Expression in the form 'upstream..branch' which can be used to
access the commits.
"""
upstream = GetUpstream(git_dir, branch)
return '%s%s..%s' % (upstream, '~' if include_upstream else '', branch)
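The '~' suffix widens the range by one commit so that the upstream commit itself is included. A standalone sketch (range_in_branch is a made-up name):

```python
def range_in_branch(upstream, branch, include_upstream=False):
    # 'upstream~..branch' takes in one extra commit (the upstream commit),
    # mirroring the expression built by GetRangeInBranch()
    return '%s%s..%s' % (upstream, '~' if include_upstream else '', branch)

print(range_in_branch('origin/master', 'mybranch'))
print(range_in_branch('origin/master', 'mybranch', True))
```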
def CountCommitsInBranch(git_dir, branch, include_upstream=False):
"""Returns the number of commits in the given branch.
Args:
git_dir: Directory containing git repo
branch: Name of branch
include_upstream: True to include the upstream commit in the count
Return:
Number of patches that exist on top of the branch
"""
range_expr = GetRangeInBranch(git_dir, branch, include_upstream)
pipe = [['git', '--git-dir', git_dir, 'log', '--oneline', range_expr],
['wc', '-l']]
result = command.RunPipe(pipe, capture=True, oneline=True)
patch_count = int(result.stdout)
return patch_count
def CountCommits(commit_range):
"""Returns the number of commits in the given range.
Args:
commit_range: Range of commits to count (e.g. 'HEAD..base')
Return:
Number of patches that exist on top of the branch
"""
pipe = [['git', 'log', '--oneline', commit_range],
['wc', '-l']]
stdout = command.RunPipe(pipe, capture=True, oneline=True).stdout
patch_count = int(stdout)
return patch_count
def Checkout(commit_hash, git_dir=None, work_tree=None, force=False):
"""Checkout the selected commit for this build
Args:
commit_hash: Commit hash to check out
git_dir: Git directory to use, or None for the default
work_tree: Work tree to use, or None for the default
force: True to force the checkout (git checkout -f)
"""
pipe = ['git']
if git_dir:
pipe.extend(['--git-dir', git_dir])
if work_tree:
pipe.extend(['--work-tree', work_tree])
pipe.append('checkout')
if force:
pipe.append('-f')
pipe.append(commit_hash)
result = command.RunPipe([pipe], capture=True, raise_on_error=False)
if result.return_code != 0:
raise OSError, 'git checkout (%s): %s' % (pipe, result.stderr)
def Clone(git_dir, output_dir):
"""Clone a git repository into a new directory
Args:
git_dir: Repository to clone from
output_dir: Destination directory for the clone
"""
pipe = ['git', 'clone', git_dir, '.']
result = command.RunPipe([pipe], capture=True, cwd=output_dir)
if result.return_code != 0:
raise OSError, 'git clone: %s' % result.stderr
def Fetch(git_dir=None, work_tree=None):
"""Fetch from the origin repo
Args:
git_dir: Git directory to use, or None for the default
work_tree: Work tree to use, or None for the default
"""
pipe = ['git']
if git_dir:
pipe.extend(['--git-dir', git_dir])
if work_tree:
pipe.extend(['--work-tree', work_tree])
pipe.append('fetch')
result = command.RunPipe([pipe], capture=True)
if result.return_code != 0:
raise OSError, 'git fetch: %s' % result.stderr
def CreatePatches(start, count, series):
"""Create a series of patches from the top of the current branch.
@ -203,7 +317,7 @@ def BuildEmailList(in_list, tag=None, alias=None):
return result
def EmailPatches(series, cover_fname, args, dry_run, cc_fname,
self_only=False, alias=None):
self_only=False, alias=None, in_reply_to=None):
"""Email a patch series.
Args:
@ -213,6 +327,8 @@ def EmailPatches(series, cover_fname, args, dry_run, cc_fname,
dry_run: Just return the command that would be run
cc_fname: Filename of Cc file for per-commit Cc
self_only: True to just email to yourself as a test
in_reply_to: If set we'll pass this to git as --in-reply-to.
Should be a message ID that this is in reply to.
Returns:
Git command that was/would be run
@ -262,6 +378,9 @@ def EmailPatches(series, cover_fname, args, dry_run, cc_fname,
to = BuildEmailList([os.getenv('USER')], '--to', alias)
cc = []
cmd = ['git', 'send-email', '--annotate']
if in_reply_to:
cmd.append('--in-reply-to="%s"' % in_reply_to)
cmd += to
cmd += cc
cmd += ['--cc-cmd', '"%s --cc-cmd %s"' % (sys.argv[0], cc_fname)]
@ -359,7 +478,8 @@ def GetAliasFile():
Returns:
Filename of git alias file, or None if none
"""
fname = command.OutputOneLine('git', 'config', 'sendemail.aliasesfile')
fname = command.OutputOneLine('git', 'config', 'sendemail.aliasesfile',
raise_on_error=False)
if fname:
fname = os.path.join(GetTopLevel(), fname.strip())
return fname
@ -389,6 +509,14 @@ def Setup():
if alias_fname:
settings.ReadGitAliases(alias_fname)
def GetHead():
"""Get the hash of the current HEAD
Returns:
Hash of HEAD
"""
return command.OutputOneLine('git', 'show', '-s', '--pretty=format:%H')
if __name__ == "__main__":
import doctest


@ -31,7 +31,7 @@ from series import Series
# Tags that we detect and remove
re_remove = re.compile('^BUG=|^TEST=|^BRANCH=|^Change-Id:|^Review URL:'
'|Reviewed-on:|Reviewed-by:|Commit-Ready:')
'|Reviewed-on:|Commit-\w*:')
# Lines which are allowed after a TEST= line
re_allowed_after_test = re.compile('^Signed-off-by:')
@ -42,11 +42,14 @@ re_signoff = re.compile('^Signed-off-by:')
# The start of the cover letter
re_cover = re.compile('^Cover-letter:')
# A cover letter Cc
re_cover_cc = re.compile('^Cover-letter-cc: *(.*)')
# Patch series tag
re_series = re.compile('^Series-(\w*): *(.*)')
# Commit tags that we want to collect and keep
re_tag = re.compile('^(Tested-by|Acked-by|Cc): (.*)')
re_tag = re.compile('^(Tested-by|Acked-by|Reviewed-by|Cc): (.*)')
# The start of a new commit in the git log
re_commit = re.compile('^commit (.*)')
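How the widened re_tag and the new re_cover_cc patterns behave on sample lines (the names and addresses are invented):

```python
import re

re_cover_cc = re.compile(r'^Cover-letter-cc: *(.*)')
re_tag = re.compile(r'^(Tested-by|Acked-by|Reviewed-by|Cc): (.*)')

# Reviewed-by now collects like Tested-by and Acked-by
m1 = re_tag.match('Reviewed-by: Fred Bloggs <fred@example.com>')
print(m1.group(1), '->', m1.group(2))

# Cover-letter-cc captures everything after the colon and spaces
m2 = re_cover_cc.match('Cover-letter-cc: Mary <mary@example.com>')
print(m2.group(1))
```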
@ -153,6 +156,7 @@ class PatchStream:
# Handle state transition and skipping blank lines
series_match = re_series.match(line)
commit_match = re_commit.match(line) if self.is_log else None
cover_cc_match = re_cover_cc.match(line)
tag_match = None
if self.state == STATE_PATCH_HEADER:
tag_match = re_tag.match(line)
@ -205,6 +209,10 @@ class PatchStream:
self.in_section = 'cover'
self.skip_blank = False
elif cover_cc_match:
value = cover_cc_match.group(1)
self.AddToSeries(line, 'cover-cc', value)
# If we are in a change list, key collected lines until a blank one
elif self.in_change:
if is_blank:
@ -237,7 +245,8 @@ class PatchStream:
# Detect the start of a new commit
elif commit_match:
self.CloseCommit()
self.commit = commit.Commit(commit_match.group(1)[:7])
# TODO: We should store the whole hash, and just display a subset
self.commit = commit.Commit(commit_match.group(1)[:8])
# Detect tags in the commit message
elif tag_match:
@ -334,6 +343,35 @@ class PatchStream:
self.Finalize()
def GetMetaDataForList(commit_range, git_dir=None, count=None,
series = Series()):
"""Reads out patch series metadata from the commits
This does a 'git log' on the relevant commits and pulls out the tags we
are interested in.
Args:
commit_range: Range of commits to count (e.g. 'HEAD..base')
git_dir: Path to git repository (None to use default)
count: Number of commits to list, or None for no limit
series: Series object to add information into. By default a new series
is started.
Returns:
A Series object containing information about the commits.
"""
params = ['git', 'log', '--no-color', '--reverse', commit_range]
if count is not None:
params[2:2] = ['-n%d' % count]
if git_dir:
params[1:1] = ['--git-dir', git_dir]
pipe = [params]
stdout = command.RunPipe(pipe, capture=True).stdout
ps = PatchStream(series, is_log=True)
for line in stdout.splitlines():
ps.ProcessLine(line)
ps.Finalize()
return series
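The slice assignments above splice optional arguments into the command after it has been built: params[2:2] inserts just after 'log' and params[1:1] just after 'git'. A standalone illustration with invented values:

```python
params = ['git', 'log', '--no-color', '--reverse', 'HEAD~3..']
params[2:2] = ['-n2']                      # now: git log -n2 --no-color ...
params[1:1] = ['--git-dir', '/tmp/repo']   # now: git --git-dir /tmp/repo log ...
print(' '.join(params))
```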
def GetMetaData(start, count):
"""Reads out patch series metadata from the commits
@ -344,15 +382,7 @@ def GetMetaData(start, count):
start: Commit to start from: 0=HEAD, 1=next one, etc.
count: Number of commits to list
"""
pipe = [['git', 'log', '--no-color', '--reverse', 'HEAD~%d' % start,
'-n%d' % count]]
stdout = command.RunPipe(pipe, capture=True)
series = Series()
ps = PatchStream(series, is_log=True)
for line in stdout.splitlines():
ps.ProcessLine(line)
ps.Finalize()
return series
return GetMetaDataForList('HEAD~%d' % start, None, count)
def FixPatch(backup_dir, fname, series, commit):
"""Fix up a patch file, by adding/removing as required.


@ -49,10 +49,12 @@ parser.add_option('-i', '--ignore-errors', action='store_true',
dest='ignore_errors', default=False,
help='Send patches email even if patch errors are found')
parser.add_option('-n', '--dry-run', action='store_true', dest='dry_run',
default=False, help="Do a try run (create but don't email patches)")
default=False, help="Do a dry run (create but don't email patches)")
parser.add_option('-p', '--project', default=project.DetectProject(),
help="Project name; affects default option values and "
"aliases [default: %default]")
parser.add_option('-r', '--in-reply-to', type='string', action='store',
help="Message ID that this series is in reply to")
parser.add_option('-s', '--start', dest='start', type='int',
default=0, help='Commit to start creating patches from (0 = HEAD)')
parser.add_option('-t', '--test', action='store_true', dest='test',
@ -70,7 +72,7 @@ parser.add_option('--no-tags', action='store_false', dest='process_tags',
parser.usage = """patman [options]
Create patches from commits in a branch, check them and email them as
specified by tags you place in the commits. Use -n to """
specified by tags you place in the commits. Use -n to do a dry run first."""
# Parse options twice: first to get the project and second to handle
@ -163,7 +165,7 @@ else:
cmd = ''
if ok or options.ignore_errors:
cmd = gitutil.EmailPatches(series, cover_fname, args,
options.dry_run, cc_file)
options.dry_run, cc_file, in_reply_to=options.in_reply_to)
# For a dry run, just show our actions as a sanity check
if options.dry_run:


@ -27,7 +27,8 @@ import gitutil
import terminal
# Series-xxx tags that we understand
valid_series = ['to', 'cc', 'version', 'changes', 'prefix', 'notes', 'name'];
valid_series = ['to', 'cc', 'version', 'changes', 'prefix', 'notes', 'name',
'cover-cc']
class Series(dict):
"""Holds information about a patch series, including all tags.
@ -43,6 +44,7 @@ class Series(dict):
def __init__(self):
self.cc = []
self.to = []
self.cover_cc = []
self.commits = []
self.cover = None
self.notes = []
@ -69,6 +71,7 @@ class Series(dict):
value: Tag value (part after 'Series-xxx: ')
"""
# If we already have it, then add to our list
name = name.replace('-', '_')
if name in self:
values = value.split(',')
values = [str.strip() for str in values]
@ -140,7 +143,8 @@ class Series(dict):
print 'Prefix:\t ', self.get('prefix')
if self.cover:
print 'Cover: %d lines' % len(self.cover)
all_ccs = itertools.chain(*self._generated_cc.values())
cover_cc = gitutil.BuildEmailList(self.get('cover_cc', ''))
all_ccs = itertools.chain(cover_cc, *self._generated_cc.values())
for email in set(all_ccs):
print ' Cc: ',email
if cmd:
@ -232,7 +236,8 @@ class Series(dict):
self._generated_cc[commit.patch] = list
if cover_fname:
print >>fd, cover_fname, ', '.join(set(all_ccs))
cover_cc = gitutil.BuildEmailList(self.get('cover_cc', ''))
print >>fd, cover_fname, ', '.join(set(cover_cc + all_ccs))
fd.close()
return fname


@ -24,24 +24,32 @@
This module handles terminal interaction including ANSI color codes.
"""
import os
import sys
# Selection of when we want our output to be colored
COLOR_IF_TERMINAL, COLOR_ALWAYS, COLOR_NEVER = range(3)
class Color(object):
"""Conditionally wraps text in ANSI color escape sequences."""
BLACK, RED, GREEN, YELLOW, BLUE, MAGENTA, CYAN, WHITE = range(8)
BOLD = -1
COLOR_START = '\033[1;%dm'
BRIGHT_START = '\033[1;%dm'
NORMAL_START = '\033[22;%dm'
BOLD_START = '\033[1m'
RESET = '\033[0m'
def __init__(self, enabled=True):
def __init__(self, colored=COLOR_IF_TERMINAL):
"""Create a new Color object, optionally disabling color output.
Args:
colored: COLOR_ALWAYS to always add color codes, COLOR_NEVER to never
add them, or COLOR_IF_TERMINAL to add them only when stdout is a
terminal.
"""
self._enabled = enabled
self._enabled = (colored == COLOR_ALWAYS or
(colored == COLOR_IF_TERMINAL and os.isatty(sys.stdout.fileno())))
def Start(self, color):
def Start(self, color, bright=True):
"""Returns a start color code.
Args:
@ -52,7 +60,8 @@ class Color(object):
otherwise returns empty string
"""
if self._enabled:
return self.COLOR_START % (color + 30)
base = self.BRIGHT_START if bright else self.NORMAL_START
return base % (color + 30)
return ''
def Stop(self):
@ -63,10 +72,10 @@ class Color(object):
returns empty string
"""
if self._enabled:
return self.RESET
return self.RESET
return ''
def Color(self, color, text):
def Color(self, color, text, bright=True):
"""Returns text with conditionally added color escape sequences.
Keyword arguments:
@ -78,9 +87,10 @@ class Color(object):
returns text with color escape sequences based on the value of color.
"""
if not self._enabled:
return text
return text
if color == self.BOLD:
start = self.BOLD_START
start = self.BOLD_START
else:
start = self.COLOR_START % (color + 30)
base = self.BRIGHT_START if bright else self.NORMAL_START
start = base % (color + 30)
return start + text + self.RESET
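The escape sequences differ only in the leading SGR parameter: 1 selects bright/bold intensity, 22 selects normal intensity, and 30 + n picks the foreground color. A standalone sketch (the colour helper is made up, mirroring Start/Color above):

```python
# ANSI SGR sequences as used by the Color class
RED = 1
BRIGHT_START = '\033[1;%dm'
NORMAL_START = '\033[22;%dm'
RESET = '\033[0m'

def colour(text, color, bright=True):
    base = BRIGHT_START if bright else NORMAL_START
    return (base % (color + 30)) + text + RESET

print(repr(colour('error', RED)))        # '\x1b[1;31merror\x1b[0m'
print(repr(colour('note', RED, False)))  # '\x1b[22;31mnote\x1b[0m'
```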


@ -190,6 +190,11 @@ index 0000000..2234c87
+ rec->time_us = (uint32_t)timer_get_us();
+ rec->name = name;
+ }
+ if (!rec->name &&
+ %ssomething_else) {
+ rec->time_us = (uint32_t)timer_get_us();
+ rec->name = name;
+ }
+%sreturn rec->time_us;
+}
--
@ -197,15 +202,18 @@ index 0000000..2234c87
'''
signoff = 'Signed-off-by: Simon Glass <sjg@chromium.org>\n'
tab = ' '
indent = ' '
if data_type == 'good':
pass
elif data_type == 'no-signoff':
signoff = ''
elif data_type == 'spaces':
tab = ' '
elif data_type == 'indent':
indent = tab
else:
print 'not implemented'
return data % (signoff, tab, tab)
return data % (signoff, tab, indent, tab)
def SetupData(self, data_type):
inhandle, inname = tempfile.mkstemp()
@ -215,33 +223,49 @@ index 0000000..2234c87
infd.close()
return inname
def testCheckpatch(self):
def testGood(self):
"""Test checkpatch operation"""
inf = self.SetupData('good')
result, problems, err, warn, lines, stdout = checkpatch.CheckPatch(inf)
self.assertEqual(result, True)
self.assertEqual(problems, [])
self.assertEqual(err, 0)
self.assertEqual(warn, 0)
self.assertEqual(lines, 67)
result = checkpatch.CheckPatch(inf)
self.assertEqual(result.ok, True)
self.assertEqual(result.problems, [])
self.assertEqual(result.errors, 0)
self.assertEqual(result.warnings, 0)
self.assertEqual(result.checks, 0)
self.assertEqual(result.lines, 67)
os.remove(inf)
def testNoSignoff(self):
inf = self.SetupData('no-signoff')
result, problems, err, warn, lines, stdout = checkpatch.CheckPatch(inf)
self.assertEqual(result, False)
self.assertEqual(len(problems), 1)
self.assertEqual(err, 1)
self.assertEqual(warn, 0)
self.assertEqual(lines, 67)
result = checkpatch.CheckPatch(inf)
self.assertEqual(result.ok, False)
self.assertEqual(len(result.problems), 1)
self.assertEqual(result.errors, 1)
self.assertEqual(result.warnings, 0)
self.assertEqual(result.checks, 0)
self.assertEqual(result.lines, 67)
os.remove(inf)
def testSpaces(self):
inf = self.SetupData('spaces')
result, problems, err, warn, lines, stdout = checkpatch.CheckPatch(inf)
self.assertEqual(result, False)
self.assertEqual(len(problems), 2)
self.assertEqual(err, 0)
self.assertEqual(warn, 2)
self.assertEqual(lines, 67)
result = checkpatch.CheckPatch(inf)
self.assertEqual(result.ok, False)
self.assertEqual(len(result.problems), 1)
self.assertEqual(result.errors, 0)
self.assertEqual(result.warnings, 1)
self.assertEqual(result.checks, 0)
self.assertEqual(result.lines, 67)
os.remove(inf)
def testIndent(self):
inf = self.SetupData('indent')
result = checkpatch.CheckPatch(inf)
self.assertEqual(result.ok, False)
self.assertEqual(len(result.problems), 1)
self.assertEqual(result.errors, 0)
self.assertEqual(result.warnings, 0)
self.assertEqual(result.checks, 1)
self.assertEqual(result.lines, 67)
os.remove(inf)