scripts: Use core dump collector

This is done in order to track core dumps in a more efficient
manner. Until now, some cores could be missed if the binary was
executed outside of the cwd of the autotest (i.e. outside of
the spdk repo) but was part of the critical path of the actual
test (e.g. fio in vhost-initiator tests). Also, since
core_pattern was set to plain "core", the impact on the
underlying storage wasn't controlled either - if a core was
20G in size, that is what we would get. This could easily
exhaust storage when error-prone patchsets were submitted on
the CI side.

The collector will try to mitigate all the above by doing the
following:

  - collecting all the cores, regardless of their cwd
  - limiting size of the core to 2G
  - compressing the cores (gzip)

Also, a limit of 2 collectors executing at once is set - if
more processes crash at approximately the same time, they will
only be noted in the kernel log instead.
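
For reference, a rough sketch of how a pipe-based core_pattern
behaves (illustrative only; the collector path and output dir
here are hypothetical, not the ones used by the autotest):

  # When core_pattern starts with '|', the kernel execs that program
  # and streams the core image to its stdin; %P, %s and %t expand to
  # the crashing PID, the signal number and the crash time (epoch).
  sysctl -w kernel.core_pattern='|/usr/local/bin/my-collector %P %s %t /var/crash'
  # At most 2 collector pipes at once; crashes past that are only
  # noted in the kernel log.
  sysctl -w kernel.core_pipe_limit=2
  # Exercise it:
  bash -c 'kill -SEGV $$'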

Signed-off-by: Michal Berger <michalx.berger@intel.com>
Change-Id: I5956a9030c463ae85a21bfe95f28af5568c5c285
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/5369
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Author:    Michal Berger (2020-12-01 12:57:47 +01:00)
Committed: Jim Harris
Parent:    d82118485d
Commit:    45c42ac2f2
3 changed files with 122 additions and 19 deletions


@@ -30,8 +30,10 @@ fi
 if [ $(uname -s) = Linux ]; then
 	old_core_pattern=$(< /proc/sys/kernel/core_pattern)
+	mkdir -p "$output_dir/coredumps"
 	# set core_pattern to a known value to avoid ABRT, systemd-coredump, etc.
-	echo "core" > /proc/sys/kernel/core_pattern
+	echo "|$rootdir/scripts/core-collector.sh %P %s %t $output_dir/coredumps" > /proc/sys/kernel/core_pattern
+	echo 2 > /proc/sys/kernel/core_pipe_limit
 	# Make sure that the hugepage state for our VM is fresh so we don't fail
 	# hugepage allocation. Allow time for this action to complete.
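
The saved old_core_pattern is presumably restored during cleanup
elsewhere in the harness; a minimal sketch of that restore step
(hypothetical, not part of this diff):

  restore_core_pattern() {
    # Put back whatever core handling was configured before the run.
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern
  }
  trap restore_core_pattern EXIT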

scripts/core-collector.sh (new executable file, 89 lines)

@@ -0,0 +1,89 @@
#!/usr/bin/env bash
# We don't want to tell kernel to include %e or %E since these
# can include whitespaces or other funny characters, and working
# with those on the cmdline would be a nightmare. Use procfs for
# the remaining pieces we want to gather:
# |$rootdir/scripts/core-collector.sh %P %s %t $output_dir
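# A note on get_rlimit below: it scans /proc/<pid>/limits, where the
# relevant line looks like (illustrative):
#   Max core file size        0                    unlimited            bytes
# so after word-splitting, field 1 is "core" and field 4 is the soft limit.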
get_rlimit() {
	local limit

	while read -ra limit; do
		[[ ${limit[1]} == core ]] && echo "${limit[4]}" # soft
	done < "/proc/$core_pid/limits"
}
core_meta() {
	jq . <<- CORE
		{
		  "$exe_comm": {
		    "ts": "$core_time",
		    "size": "$core_size bytes",
		    "PID": $core_pid,
		    "signal": "$core_sig ($core_sig_name)",
		    "path": "$exe_path",
		    "statm": "$statm"
		  }
		}
	CORE
}
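# Illustrative core_meta output (all values made up):
# {
#   "fio": {
#     "ts": "Tue Dec  1 12:57:47 CET 2020",
#     "size": "2147483648 bytes",
#     "PID": 12345,
#     "signal": "11 (SEGV)",
#     "path": "/usr/bin/fio",
#     "statm": "62124 4100 2524 322 0 6948 0"
#   }
# }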
bt() { hash gdb && gdb -batch -ex "thread apply all bt full" "$1" "$2" 2>&1; }
stderr() {
	exec 2> "$core.stderr.txt"
	set -x
}
args+=(core_pid)
args+=(core_sig)
args+=(core_ts)
args+=(output_dir)

read -r "${args[@]}" <<< "$*"

exe_path=$(readlink -f "/proc/$core_pid/exe")
exe_comm=$(< "/proc/$core_pid/comm")
statm=$(< "/proc/$core_pid/statm")
core_time=$(date -d@"$core_ts")
core_sig_name=$(kill -l "$core_sig")

core=$output_dir/${exe_path##*/}_$core_pid.core
stderr
# RLIMIT_CORE is not enforced when core is piped to us. To make
# sure we won't attempt to overload underlying storage, copy
# only the reasonable amount of bytes (systemd defaults to 2G
# so let's follow that). But first, check limits of terminating
# process to see if we need to make any adjustments.
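# (When the dump is piped, the kernel invokes the collector even if the
# soft limit is 0 -- only a limit of exactly 1 aborts piping -- hence
# the explicit exit below.)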
max_core=$((1024 * 1024 * 1024 * 2))
rlimit=$(get_rlimit)
if [[ $rlimit == unlimited ]] || ((rlimit > max_core)); then
	rlimit=$max_core
fi
# Nothing to do
((rlimit == 0)) && exit 0
# Clear any leftovers from a previous run for this binary/PID
rm -f "$core"{,.{bin,bt.txt,gz,json}}
# Slurp the core
head -c "$rlimit" <&0 > "$core"
core_size=$(wc -c < "$core")

# Compress it
gzip -c "$core" > "$core.gz"

# Save the binary
cp "$exe_path" "$core.bin"

# Save the backtrace
bt "$exe_path" "$core" > "$core.bt.txt"

# Save the metadata of the core
core_meta > "$core.json"

# Nuke the original core
rm "$core"


@@ -600,24 +600,36 @@ function gdb_attach() {
 }
 
 function process_core() {
-	ret=0
-	while IFS= read -r -d '' core; do
-		exe=$(eu-readelf -n "$core" | grep psargs | sed "s/.*psargs: \([^ \'\" ]*\).*/\1/")
-		if [[ ! -f "$exe" ]]; then
-			exe=$(eu-readelf -n "$core" | grep -oP -m1 "$exe.+")
-		fi
-		echo "exe for $core is $exe"
-		if [[ -n "$exe" ]]; then
-			if hash gdb &> /dev/null; then
-				gdb -batch -ex "thread apply all bt full" $exe $core
-			fi
-			cp $exe $output_dir
-		fi
-		mv $core $output_dir
-		chmod a+r $output_dir/$core
-		ret=1
-	done < <(find . -type f \( -name 'core.[0-9]*' -o -name 'core' -o -name '*.core' \) -print0)
-	return $ret
+	# Note that this always was racy as we can't really sync with the kernel
+	# to see if there's any core queued up for writing. We could check if
+	# collector is running and wait for it explicitly, but it doesn't seem
+	# to be worth the effort. So assume that if we are being called via
+	# trap, as in, when some error has occurred, wait up to 5s for any
+	# potential cores. If we are called just for cleanup at the very end,
+	# don't wait since all the tests ended successfully, hence having any
+	# critical cores lying around is unlikely.
+	local es=$?
+
+	((es != 0)) && sleep 5s
+
+	local coredumps core
+
+	shopt -s nullglob
+	coredumps=("$output_dir/coredumps/"*.bt.txt)
+	shopt -u nullglob
+
+	((${#coredumps[@]} > 0)) || return 0
+	chmod -R a+r "$output_dir/coredumps"
+
+	for core in "${coredumps[@]}"; do
+		cat <<- BT
+			##### CORE BT ${core##*/} #####
+
+			$(<"$core")
+			--
+		BT
+	done
+
+	return 1
 }
 
 function process_shm() {
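
Since the new process_core() keys off $? to decide whether to wait
for in-flight collectors, callers are expected to invoke it first
thing on an exit path; a sketch of such wiring (hypothetical, the
actual trap setup lives elsewhere in the harness):

  on_exit() {
    # $? at trap entry becomes "es" inside process_core, since
    # process_core is the first command executed here.
    process_core
  }
  trap on_exit EXIT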