$T^2$ of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models
Publication: Working paper › Preprint › Research
Standard
$T^2$ of Thoughts: Temperature Tree Elicits Reasoning in Large Language Models. / Cai, Chengkun; Zhao, Xu; Du, Yucheng; Liu, Haoliang; Li, Lei.
2024. Publication: Working paper › Preprint › Research
RIS
TY - UNPB
T1 - $T^2$ of Thoughts
T2 - Temperature Tree Elicits Reasoning in Large Language Models
AU - Cai, Chengkun
AU - Zhao, Xu
AU - Du, Yucheng
AU - Liu, Haoliang
AU - Li, Lei
N1 - 10 pages, 5 figures
PY - 2024/5/23
Y1 - 2024/5/23
N2 - Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, especially in complex decision-making scenarios, but their static problem-solving strategies often limit their adaptability to dynamic environments. We explore the enhancement of reasoning capabilities in LLMs through Temperature Tree ($T^2$) prompting via Particle Swarm Optimization, termed $T^2$ of Thoughts ($T^2oT$). The primary focus is on enhancing decision-making processes by dynamically adjusting search parameters, especially temperature, to improve accuracy without increasing computational demands. We empirically validate that our hybrid $T^2oT$ approach yields enhancements in single-solution accuracy, multi-solution generation, and text generation quality. Our findings suggest that while dynamic search depth adjustments based on temperature can yield mixed results, a fixed search depth, when coupled with the adaptive capabilities of $T^2oT$, provides a more reliable and versatile problem-solving strategy. This work highlights the potential for future explorations in optimizing algorithmic interactions with foundational language models, as illustrated by our implementations for the Game of 24 and Creative Writing tasks.
AB - Large Language Models (LLMs) have emerged as powerful tools in artificial intelligence, especially in complex decision-making scenarios, but their static problem-solving strategies often limit their adaptability to dynamic environments. We explore the enhancement of reasoning capabilities in LLMs through Temperature Tree ($T^2$) prompting via Particle Swarm Optimization, termed $T^2$ of Thoughts ($T^2oT$). The primary focus is on enhancing decision-making processes by dynamically adjusting search parameters, especially temperature, to improve accuracy without increasing computational demands. We empirically validate that our hybrid $T^2oT$ approach yields enhancements in single-solution accuracy, multi-solution generation, and text generation quality. Our findings suggest that while dynamic search depth adjustments based on temperature can yield mixed results, a fixed search depth, when coupled with the adaptive capabilities of $T^2oT$, provides a more reliable and versatile problem-solving strategy. This work highlights the potential for future explorations in optimizing algorithmic interactions with foundational language models, as illustrated by our implementations for the Game of 24 and Creative Writing tasks.
KW - cs.CL
KW - cs.AI
KW - cs.LG
M3 - Preprint
BT - $T^2$ of Thoughts
ER -
ID: 395084579
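The abstract describes the core mechanism of $T^2oT$: a Particle Swarm Optimization (PSO) loop that adapts the sampling temperature used when expanding a tree of thoughts. As a rough illustration only, the following is a minimal sketch of PSO-driven temperature search, assuming a standard PSO velocity update and a hypothetical `evaluate_thoughts` fitness function (stubbed here with a toy objective); the paper's actual formulation, fitness signal, and hyperparameters may differ.

```python
import random

# Minimal sketch of PSO-driven temperature adaptation for tree-of-thoughts
# search. The velocity update, bounds, and evaluate_thoughts fitness are
# illustrative assumptions, not the paper's exact formulation.

T_MIN, T_MAX = 0.1, 1.5  # assumed bounds on the sampling temperature


def evaluate_thoughts(temperature: float) -> float:
    """Hypothetical fitness: score the thoughts an LLM generates at this
    temperature (e.g., fraction of Game of 24 states judged promising).
    Stubbed with a toy objective peaking near 0.7 for demonstration."""
    return -(temperature - 0.7) ** 2 + random.gauss(0, 0.01)


def pso_temperature(n_particles=5, n_iters=10, w=0.5, c1=1.5, c2=1.5):
    # Initialize particle positions (temperatures) and velocities.
    pos = [random.uniform(T_MIN, T_MAX) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]  # each particle's best temperature so far
    pbest_fit = [evaluate_thoughts(t) for t in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g], pbest_fit[g]

    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # Standard PSO velocity update toward personal and global bests.
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            # Move the particle and clamp the temperature to valid bounds.
            pos[i] = min(T_MAX, max(T_MIN, pos[i] + vel[i]))
            fit = evaluate_thoughts(pos[i])
            if fit > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i], fit
                if fit > gbest_fit:
                    gbest, gbest_fit = pos[i], fit
    return gbest


if __name__ == "__main__":
    print(f"best temperature found: {pso_temperature():.3f}")
```

In a full system, `evaluate_thoughts` would call the LLM at each candidate temperature and score the resulting tree expansions, and the best temperature would be re-estimated as the search progresses; this sketch only shows the optimization skeleton.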