[Wait for #2536][application] add generate_multiple_tokens for llm
author Seungbaek Hong <sb92.hong@samsung.com>
Fri, 5 Apr 2024 05:08:50 +0000 (14:08 +0900)
committer Jijoong Moon <jijoong.moon@samsung.com>
Thu, 2 May 2024 23:59:06 +0000 (08:59 +0900)
commit 3d9556b02bc028e330b57e897d5d6907084af684
tree 1aacadc8bf763869a136e0da78d448c46fbc60e4
parent 2060051191035e9ab252296bc41a7d53e0f6aa25
Added a generate_multiple_tokens function for the first generation
step on the LLM.

This function takes a single logits vector and generates multiple
output tokens from it. To suit the target application, even if
multiple logits are given as input, only the first logits vector is
used to generate the output tokens.

**Self evaluation:**
1. Build test:  [X]Passed [ ]Failed [ ]Skipped
2. Run test:  [X]Passed [ ]Failed [ ]Skipped

Signed-off-by: Seungbaek Hong <sb92.hong@samsung.com>
Applications/LLaMA/jni/main.cpp