We still need to be able to explicitly set specific bits in the cluster
array during initialization and loading (especially recovery), so we use
a bit_array during load and then convert it to a bit_pool just before
calling the user's completion callback.

This gives a roughly 300% improvement over baseline on a benchmark that
performs continuous resize operations. The benefit comes primarily from
saving the lowest free bit, rather than always having to start the
search at bit 0.

We may be able to improve this further by saving extents in the bit pool
as well, although after this patch the benchmark's hot spots are no
longer in the bit search.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Idb1d75d8348bc50560b1f42d49dbe4d79d024619
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3975
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
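Below is a minimal sketch of the load-then-convert flow described above,
not code from this patch. It assumes SPDK's public bit_array/bit_pool
APIs (spdk_bit_array_create(), spdk_bit_array_set(),
spdk_bit_array_free(), spdk_bit_pool_create_from_array()); the helper
load_used_clusters() and its parameters are hypothetical.

#include "spdk/bit_array.h"
#include "spdk/bit_pool.h"

/* Hypothetical helper: 'used' lists cluster indices read from on-disk
 * metadata during load/recovery.
 */
struct spdk_bit_pool *
load_used_clusters(const uint32_t *used, uint32_t num_used,
		   uint32_t total_clusters)
{
	struct spdk_bit_array *ba;
	struct spdk_bit_pool *pool;
	uint32_t i;

	/* A bit_array lets us set arbitrary bits, which recovery needs. */
	ba = spdk_bit_array_create(total_clusters);
	if (ba == NULL) {
		return NULL;
	}
	for (i = 0; i < num_used; i++) {
		spdk_bit_array_set(ba, used[i]);
	}

	/* Convert just before completing the load; the pool takes over the
	 * array and tracks the lowest free bit, so later allocations do not
	 * have to rescan from bit 0.
	 */
	pool = spdk_bit_pool_create_from_array(ba);
	if (pool == NULL) {
		spdk_bit_array_free(&ba);
		return NULL;
	}
	return pool;
}

Subsequent cluster allocations would then call
spdk_bit_pool_allocate_bit(pool), which hands out the cached lowest free
bit; that avoided scan from bit 0 is where the resize benchmark's gain
comes from.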
---
 blob_bs_dev.c
 blobstore.c
 blobstore.h
 Makefile
 request.c
 request.h
 spdk_blob.map
 zeroes.c