Interface StreamingResponseHandler<T>
- Type Parameters:
T - The type of the response.
public interface StreamingResponseHandler<T>
Represents a handler for streaming responses from a language model.
The handler is invoked each time the model generates a new token in a textual response.
If the model executes a tool instead, onNext(java.lang.String) will not be invoked;
onComplete(dev.langchain4j.model.output.Response<T>) will be invoked instead.
Method Summary
- default void onComplete(Response<T> response): Invoked when the language model has finished streaming a response.
- void onError(Throwable error): This method is invoked when an error occurs during streaming.
- void onNext(String token): Invoked each time the language model generates a new token in a textual response.
-
Method Details
-
onNext
void onNext(String token)
Invoked each time the language model generates a new token in a textual response. If the model executes a tool instead, this method will not be invoked; onComplete(dev.langchain4j.model.output.Response<T>) will be invoked instead.
- Parameters:
token - The newly generated token, which is a part of the complete response.
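A minimal sketch of how tokens delivered to onNext accumulate into the complete response. The TokenHandler interface and collect driver below are simplified, locally defined stand-ins for illustration, not the real langchain4j types:

```java
import java.util.List;

public class OnNextSketch {
    // Simplified stand-in for the streaming handler interface, for illustration only.
    interface TokenHandler {
        void onNext(String token);
    }

    // Feeds each simulated token to the handler and returns the concatenation,
    // mirroring how the complete response is all onNext tokens joined together.
    static String collect(List<String> tokens) {
        StringBuilder fullResponse = new StringBuilder();
        TokenHandler handler = fullResponse::append; // each token is a part of the complete response
        for (String token : tokens) {
            handler.onNext(token);
        }
        return fullResponse.toString();
    }

    public static void main(String[] args) {
        // Simulated token stream from a model.
        System.out.println(collect(List.of("Hello", ", ", "world", "!"))); // prints "Hello, world!"
    }
}
```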
-
onComplete
default void onComplete(Response<T> response)
Invoked when the language model has finished streaming a response. If the model has executed one or multiple tools, the tool execution requests are accessible via AiMessage.toolExecutionRequests().
- Parameters:
response - The complete response generated by the language model. For textual responses, it contains all tokens from onNext(java.lang.String) concatenated.
-
onError
This method is invoked when an error occurs during streaming.- 参数:
error- The error that occurred
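The three callbacks together form the handler's lifecycle: onNext for each token, then exactly one of onComplete or onError. The sketch below uses simplified, locally defined stand-ins (Handler, streamTokens) rather than the real langchain4j classes, to illustrate that sequence:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class StreamingHandlerSketch {
    // Simplified local stand-in for the real handler interface, for illustration only.
    interface Handler<T> {
        void onNext(String token);
        default void onComplete(T response) {}
        void onError(Throwable error);
    }

    // Hypothetical driver simulating a streaming model: it emits each token via
    // onNext, then signals onComplete, or onError if something goes wrong.
    static <T> void streamTokens(List<String> tokens, T finalResponse, Handler<T> handler) {
        try {
            for (String token : tokens) {
                handler.onNext(token);
            }
            handler.onComplete(finalResponse);
        } catch (RuntimeException e) {
            handler.onError(e);
        }
    }

    public static void main(String[] args) {
        CompletableFuture<String> done = new CompletableFuture<>();
        streamTokens(List.of("It", " works"), "It works", new Handler<String>() {
            @Override public void onNext(String token) {
                System.out.print(token); // stream partial output as it arrives
            }
            @Override public void onComplete(String response) {
                done.complete(response); // the complete response: all tokens concatenated
            }
            @Override public void onError(Throwable error) {
                done.completeExceptionally(error);
            }
        });
        System.out.println();
        System.out.println("complete: " + done.join()); // prints "complete: It works"
    }
}
```

A CompletableFuture is a common way to bridge the callback-style handler back into synchronous code that needs the complete response.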