[bugfix] Fix LoRA indices array size in the FC layer
author    Donghyeon Jeong <dhyeon.jeong@samsung.com>
          Mon, 20 May 2024 02:12:43 +0000 (11:12 +0900)
committer Jijoong Moon <jijoong.moon@samsung.com>
          Wed, 22 May 2024 23:10:49 +0000 (08:10 +0900)
This PR fixes an incorrect array size for lora_idx in the fully connected layer.
Specifically, the array is now four elements long, holding one index each for loraA, loraB, loraTmp, and loraOut.

**Self-evaluation:**
1. Build test: [X]Passed [ ]Failed [ ]Skipped
2. Run test:   [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Donghyeon Jeong <dhyeon.jeong@samsung.com>
nntrainer/layers/fc_layer.h

index cb3726b020ed9aa44ba42b60d8ef0ad9115166f6..44ef99d912dbc1d4f55e23032dd36b5417ff33d9 100644 (file)
@@ -114,7 +114,7 @@ private:
                                                 lora_scaling - scaling factor of LoRA apply, i.e.,
                                              lora_scaling = alpha / lora_rank */
   std::array<unsigned int, 2> weight_idx; /**< indices of the weights */
-  std::array<unsigned int, 2> lora_idx;   /**< indices of the lora weights */
+  std::array<unsigned int, 4> lora_idx;   /**< indices of the lora weights */
 };
 } // namespace nntrainer