[ Upstream commit 4140aafcff167b5b9e8dae6a1709a6de7cac6f74 ]
CRYPTO_TFM_REQ_MAY_BACKLOG tells the crypto driver that it should
internally backlog requests when the crypto hw's queue becomes full.
When that happens, crypto_engine backlogs the request and returns
-EBUSY. A calling driver such as dm-crypt then waits until complete()
is called with a status of -EINPROGRESS before sending a new request.
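
To make that contract concrete, here is a minimal sketch of a caller in the
style of dm-crypt. The names my_ctx, my_async_done and my_submit_one are
illustrative only, and the sketch assumes the (void *data, int error)
completion-callback signature that goes together with crypto_request_complete():

#include <crypto/skcipher.h>
#include <linux/completion.h>

struct my_ctx {
	/* Initialized elsewhere with init_completion(); signalled when a
	 * backlogged request has actually reached the hw queue. */
	struct completion restart;
};

static void my_async_done(void *data, int error)
{
	struct my_ctx *ctx = data;

	if (error == -EINPROGRESS) {
		/* The backlogged request was accepted by the hw queue:
		 * it is now safe to submit the next request. */
		complete(&ctx->restart);
		return;
	}
	/* 0 or a real error: final completion of the request. */
}

static int my_submit_one(struct skcipher_request *req, struct my_ctx *ctx)
{
	int ret;

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      my_async_done, ctx);

	ret = crypto_skcipher_encrypt(req);
	if (ret == -EBUSY) {
		/* Queue full: the request was backlogged. Wait for the
		 * -EINPROGRESS notification before sending more work. */
		wait_for_completion(&ctx->restart);
		reinit_completion(&ctx->restart);
		ret = -EINPROGRESS;
	}

	return ret;
}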
The problem lies in the call to complete() with a value of -EINPROGRESS
that is made whenever a backlog item is present on the queue. This call
happens before the crypto request has been successfully executed. If
do_one_request() returns < 0 and retry support is available, the request
is put back on the queue. This leads upper drivers to send a new request
even though the queue is still full.
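
In crypto_pump_requests() terms, the pre-patch ordering looked roughly like
this (heavily condensed, locking and error paths omitted; do_one_request
stands in for the driver callback invocation):

	backlog = crypto_get_backlog(&engine->queue);
	async_req = crypto_dequeue_request(&engine->queue);

	if (backlog)
		/* The waiting submitter is told it may send more work... */
		crypto_request_complete(backlog, -EINPROGRESS);

	ret = do_one_request(engine, async_req);
	if (ret == -ENOSPC && engine->retry_support)
		/* ...yet the hw was full and the request goes straight
		 * back to the head of the still-full software queue. */
		crypto_enqueue_request_head(&engine->queue, async_req);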
The problem can be reproduced by doing a large dd into a dm-crypt
device. It is pretty easy to see when using the Freescale CAAM crypto
driver together with SWIOTLB DMA. Since the actual number of requests
that can be held in the queue is unlimited, we get I/O errors and DMA
allocation failures.
The fix is to call complete() with a value of -EINPROGRESS only if the
request is not enqueued back into the crypto_queue. This is done by
making the call later, once the outcome of do_one_request() is known.
In order to delay that decision, crypto_queue is modified to correctly
set the backlog pointer when a request is enqueued back.
Fixes: 6a89f492f8e5 ("crypto: engine - support for parallel requests based on retry mechanism")
Co-developed-by: Sylvain Ouellet <souellet@genetec.com>
Signed-off-by: Sylvain Ouellet <souellet@genetec.com>
Signed-off-by: Olivier Bacon <obacon@genetec.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
@@ ... @@
 void crypto_enqueue_request_head(struct crypto_queue *queue,
                                  struct crypto_async_request *request)
 {
+        if (unlikely(queue->qlen >= queue->max_qlen))
+                queue->backlog = queue->backlog->prev;
+
         queue->qlen++;
         list_add(&request->list, &queue->list);
 }
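
The pointer being stepped back here is the one read by crypto_get_backlog()
on the next pump iteration; a sketch of that helper (see
crypto/crypto_engine.c for the authoritative version) for context:

/*
 * Sketch of the crypto_get_backlog() helper used by crypto_pump_requests().
 * queue->backlog points at the list entry of the first request that is
 * still backlogged, or at &queue->list when there is none.
 */
static inline struct crypto_async_request *crypto_get_backlog(
		struct crypto_queue *queue)
{
	return queue->backlog == &queue->list ? NULL :
	       container_of(queue->backlog, struct crypto_async_request, list);
}

Because crypto_dequeue_request() advances this pointer by one entry whenever
a backlog exists, putting a request back at the head of a full queue also has
to move the pointer back by one entry, so that the backlog accounting matches
the queue contents again.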
@@ ... @@
         if (!engine->retry_support)
                 engine->cur_req = async_req;
 
-        if (backlog)
-                crypto_request_complete(backlog, -EINPROGRESS);
-
         if (engine->busy)
                 was_busy = true;
         else
@@ ... @@
         crypto_request_complete(async_req, ret);
 
 retry:
+        if (backlog)
+                crypto_request_complete(backlog, -EINPROGRESS);
+
         /* If retry mechanism is supported, send new requests to engine */
         if (engine->retry_support) {
                 spin_lock_irqsave(&engine->queue_lock, flags);
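
For completeness, a condensed sketch of the requeue path in
crypto_pump_requests() with this patch applied (simplified from
crypto/crypto_engine.c: label names shortened, locking and error reporting
trimmed), showing why the notification at the retry: label is skipped when
the request is put back:

	ret = do_one_request(engine, async_req);	/* driver callback */
	if (ret < 0) {
		/* Only a full hw queue (-ENOSPC) is worth retrying. */
		if (!engine->retry_support || ret != -ENOSPC)
			goto req_err;			/* real error */

		/* Hw still full: put the request back at the head to keep
		 * the order of requests, and kick the pump again later. */
		crypto_enqueue_request_head(&engine->queue, async_req);
		kthread_queue_work(engine->kworker, &engine->pump_requests);
		goto out;	/* retry: is never reached, so the backlogged
				 * submitter is not told -EINPROGRESS while
				 * the queue is still full */
	}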