From: Martin Svec
Date: Tue, 15 Jan 2013 20:43:35 +0000 (-0800)
Subject: target/rd: improve sg_table lookup scalability
X-Git-Tag: upstream/snapshot3+hdmi~5589^2~27
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=8f67835f1e389978bb0809d5e528961986aa2a69;p=platform%2Fadaptation%2Frenesas_rcar%2Frenesas_kernel.git

target/rd: improve sg_table lookup scalability

The sequential scan of rd_dev->sg_table_array in rd_get_sg_table() is a
serious I/O performance bottleneck for large rd LUNs. Fix this by
computing the sg_table index directly from the page offset, because all
sg_tables (except the last one) hold the same number of pages.

Tested with a 90 GiB rd_mcp LUN, where the patch improved maximum random
R/W IOPS by 100-150%, depending on the actual hardware and SAN setup.

Signed-off-by: Martin Svec
Signed-off-by: Nicholas Bellinger
---

diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index 0457de3..b0fff52 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -256,10 +256,12 @@ static void rd_free_device(struct se_device *dev)
 static struct rd_dev_sg_table *rd_get_sg_table(struct rd_dev *rd_dev, u32 page)
 {
-	u32 i;
 	struct rd_dev_sg_table *sg_table;
+	u32 i, sg_per_table = (RD_MAX_ALLOCATION_SIZE /
+				sizeof(struct scatterlist));
 
-	for (i = 0; i < rd_dev->sg_table_count; i++) {
+	i = page / sg_per_table;
+	if (i < rd_dev->sg_table_count) {
 		sg_table = &rd_dev->sg_table_array[i];
 		if ((sg_table->page_start_offset <= page) &&
 		    (sg_table->page_end_offset >= page))