Optimize memory_xfer_partial for remote
Analysis showed that increasing the cap on the transfer size in
target.c:memory_xfer_partial could give an improvement of 20% or more
in remote load over JTAG. Transfer sizes were capped at 4K bytes
because of performance problems encountered with the restore command,
documented here:
https://sourceware.org/ml/gdb-patches/2013-07/msg00611.html
and in commit
67c059c29e1f ("Improve performance of large restore
commands").
The 4K cap was introduced because in a case where the restore command
requested a 100MB transfer, memory_xfer_partial would repeatedly
allocate and copy an entire 100MB buffer in order to properly handle
breakpoint shadow instructions, even though memory_xfer_partial would
actually only write a small portion of the buffer contents.
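For context, here is a minimal sketch of the write path as it looked
with the 4K cap in place. It is illustrative only, not the literal
target.c code: the helper names (memory_xfer_partial_1,
breakpoint_xfer_memory, the cleanup routines) are the GDB internals
involved, but the enclosing function name and the exact argument
handling are simplified for this sketch.

  /* Write-path sketch only; the read path and error handling are
     omitted.  */
  static enum target_xfer_status
  memory_xfer_partial_write_sketch (struct target_ops *ops,
				    enum target_object object,
				    const gdb_byte *writebuf,
				    ULONGEST memaddr, LONGEST len,
				    ULONGEST *xfered_len)
  {
    enum target_xfer_status res;
    gdb_byte *buf;
    struct cleanup *old_chain;

    /* Cap each chunk at 4K so the scratch copy below stays small even
       when the caller requests e.g. 100MB; without this cap the whole
       100MB would be copied on every call.  */
    len = min (len, 4 * 1024);

    /* Private copy of the (capped) chunk, so breakpoint shadow bytes
       can be patched in without touching the caller's buffer.  */
    buf = xmalloc (len);
    old_chain = make_cleanup (xfree, buf);
    memcpy (buf, writebuf, len);
    breakpoint_xfer_memory (NULL, buf, writebuf, memaddr, len);

    /* The underlying target may accept only part of the chunk; any
       bytes copied beyond *XFERED_LEN were wasted work.  */
    res = memory_xfer_partial_1 (ops, object, NULL, buf, memaddr, len,
				 xfered_len);
    do_cleanups (old_chain);
    return res;
  }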
A couple of alternative solutions were suggested:
* change the algorithm for handling the breakpoint shadow instructions
* throttle the transfer size up or down based on the previous actual
transfer size
I tried implementing the throttling approach, but my implementation
reduced performance in some cases.
This patch instead implements a new target function that returns the
target's limit on memory transfer size. It defaults to ULONGEST_MAX bytes,
because for native targets there is no marshaling and thus no limit is
needed. For remote targets it uses get_memory_write_packet_size.
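Roughly, the shape of the change is the following. This is a sketch
rather than the literal diff; the ULONGEST_MAX default for other
targets is supplied through the regenerated target-delegates.c
machinery and is not shown here.

  /* remote.c: bound each transfer by the memory-write packet size,
     which is the most a single packet can carry anyway.  */
  static ULONGEST
  remote_get_memory_xfer_limit (struct target_ops *ops)
  {
    return get_memory_write_packet_size ();
  }

  /* target.c:memory_xfer_partial, write path: the fixed cap

       len = min (len, 4 * 1024);

     becomes a per-target cap

       len = min (ops->to_get_memory_xfer_limit (ops), len);

     so a remote target can fill a whole packet per chunk while native
     targets (ULONGEST_MAX) remain effectively uncapped.  */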
gdb/ChangeLog:
* remote.c (remote_get_memory_xfer_limit): New function.
* target-delegates.c: Regenerate.
* target.c (memory_xfer_partial): Call
target_ops.to_get_memory_xfer_limit.
* target.h (struct target_ops)
<to_get_memory_xfer_limit>: New member.