Hi LocalAI team,

I'd like to request a feature that would greatly enhance the interpretability and debuggability of LocalAI models: the ability to expose the internal reasoning process during text generation.
Problem:
Currently, LocalAI only returns the final generated output (tokens or text), which limits insight into how the model arrived at its response.
There is no way to access intermediate model states such as:
- logprobs or top-k token scores at each decoding step (a request sketch follows this list)
- attention weights per layer/head
- hidden states / intermediate token embeddings
- any kind of token-level reasoning trace
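As a concrete illustration of the first point, here is a minimal sketch of how a client might ask for per-token logprobs through LocalAI's OpenAI-compatible endpoint, using the OpenAI Python SDK. The base URL, the placeholder model name `my-model`, and the dummy API key are assumptions, and whether LocalAI currently honours `logprobs`/`top_logprobs` is exactly what this issue is asking about, so treat this as the desired request/response shape rather than confirmed behaviour:

```python
# Sketch of the requested behaviour, using the OpenAI Python SDK against
# LocalAI's OpenAI-compatible endpoint. The base URL, model name, and
# whether LocalAI honours logprobs/top_logprobs are assumptions here.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="my-model",  # placeholder model name
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    logprobs=True,     # ask for per-token log probabilities
    top_logprobs=5,    # and the 5 highest-scoring alternatives per step
)

# If the backend fills in choice.logprobs, each generated token comes back
# with its own log probability plus the top-k alternatives considered.
for tok in resp.choices[0].logprobs.content:
    alts = ", ".join(f"{t.token!r}:{t.logprob:.2f}" for t in tok.top_logprobs)
    print(f"{tok.token!r}  logprob={tok.logprob:.2f}  top-5: {alts}")
```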
This makes it hard to:
- debug model behavior
- understand model uncertainty (see the sketch after this list)
- build explainable AI systems (e.g. chain-of-thought visualization, step-by-step validation)
- evaluate how model biases or hallucinations might arise
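To make the "model uncertainty" point concrete: if per-step top-k logprobs were exposed as sketched above, a caller could derive a rough per-token uncertainty signal. This reuses `resp` from the previous sketch; the entropy is computed only over the returned top-k alternatives, so it underestimates the entropy of the full vocabulary distribution, and the 1.0 nat threshold is arbitrary:

```python
# Illustration only: turn the returned top-k logprobs into a rough
# per-token uncertainty signal. Entropy over just the top-k alternatives
# is a lower bound on the true entropy of the full distribution.
import math

def topk_entropy(top_logprobs):
    """Shannon entropy (nats) of the renormalised top-k distribution."""
    probs = [math.exp(lp) for lp in top_logprobs]
    total = sum(probs)
    probs = [p / total for p in probs]
    return -sum(p * math.log(p) for p in probs if p > 0)

# Flag generation steps where the model was least certain.
for tok in resp.choices[0].logprobs.content:
    h = topk_entropy([t.logprob for t in tok.top_logprobs])
    marker = "  <-- uncertain" if h > 1.0 else ""  # arbitrary threshold
    print(f"{tok.token!r}  entropy={h:.2f}{marker}")
```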