An LLMChain is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An LLMChain consists of a PromptTemplate and a language model (either an LLM or chat model).
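The mechanics can be sketched in plain TypeScript. This is an illustrative model of what an LLMChain does, not LangChain's actual implementation; `makeChain`, `Model`, and `echoModel` are hypothetical names:

```typescript
// A model is anything that maps a prompt string to a completion.
type Model = (prompt: string) => Promise<string>;

// A chain pairs a template with a model: it formats the user's
// inputs into a prompt, then passes that prompt to the model.
function makeChain(template: string, model: Model) {
  return async (inputs: Record<string, string>) => {
    // Substitute each {variable} in the template with its input value.
    const prompt = template.replace(/\{(\w+)\}/g, (_, key) => inputs[key] ?? "");
    return { text: await model(prompt) };
  };
}

// A stub model that echoes its prompt, standing in for a real LLM.
const echoModel: Model = async (prompt) => `Received: ${prompt}`;

const chain = makeChain(
  "What is a good name for a company that makes {product}?",
  echoModel
);
chain({ product: "colorful socks" }).then((res) => console.log(res.text));
// Prints: Received: What is a good name for a company that makes colorful socks?
```

The real LLMChain adds more (callbacks, memory, output keys), but the format-then-invoke flow above is the core idea.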
Usage with LLMs
We can construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM:
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
// We can construct an LLMChain from a PromptTemplate and an LLM.
const model = new OpenAI({ temperature: 0 });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chainA = new LLMChain({ llm: model, prompt });
// The result is an object with a `text` property.
const resA = await chainA.call({ product: "colorful socks" });
console.log({ resA });
// { resA: { text: '\n\nSocktastic!' } }
// Since the LLMChain is a single-input, single-output chain, we can also `run` it.
// This takes in a string and returns the `text` property.
const resA2 = await chainA.run("colorful socks");
console.log({ resA2 });
// { resA2: '\n\nSocktastic!' }
API Reference:
- OpenAI from langchain/llms/openai
- PromptTemplate from langchain/prompts
- LLMChain from langchain/chains
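The `run` shortcut above can be understood as a thin wrapper over `call`. Here is a sketch of that relationship, with hypothetical names (`makeRun`, `CallFn`), not LangChain's internals:

```typescript
// The shape of a chain's `call`: an input object in, a { text } object out.
type CallFn = (inputs: Record<string, string>) => Promise<{ text: string }>;

// For a single-input, single-output chain, `run` can wrap the bare
// string into the input object and unwrap the `text` field on the way out.
function makeRun(call: CallFn, inputKey: string): (value: string) => Promise<string> {
  return async (value) => {
    const result = await call({ [inputKey]: value });
    return result.text;
  };
}

// Stub `call` standing in for a real chain invocation.
const call: CallFn = async ({ product }) => ({ text: `Socktastic ${product}!` });
const run = makeRun(call, "product");
run("socks").then((text) => console.log(text));
// Prints: Socktastic socks!
```

This is why `run` only works on chains with exactly one input key and one output key: there is no ambiguity about where the bare string goes or which field to return.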
Usage with Chat Models
We can also construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to a ChatModel:
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
} from "langchain/prompts";
import { LLMChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
// We can also construct an LLMChain from a ChatPromptTemplate and a chat model.
const chat = new ChatOpenAI({ temperature: 0 });
const chatPrompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "You are a helpful assistant that translates {input_language} to {output_language}."
  ),
  HumanMessagePromptTemplate.fromTemplate("{text}"),
]);
const chainB = new LLMChain({
  prompt: chatPrompt,
  llm: chat,
});
const resB = await chainB.call({
  input_language: "English",
  output_language: "French",
  text: "I love programming.",
});
console.log({ resB });
// { resB: { text: "J'adore la programmation." } }
API Reference:
- ChatPromptTemplate from langchain/prompts
- HumanMessagePromptTemplate from langchain/prompts
- SystemMessagePromptTemplate from langchain/prompts
- LLMChain from langchain/chains
- ChatOpenAI from langchain/chat_models/openai
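The difference from the LLM case is that the chat prompt expands into a list of role-tagged messages rather than a single string. A sketch of that expansion, with hypothetical names (`formatChatPrompt`, `Message`), not LangChain's internals:

```typescript
// A chat prompt is a list of role-tagged message templates.
type Message = { role: "system" | "human"; content: string };

// Each message template is formatted with the same shared input values,
// producing the message list that is sent to the chat model.
function formatChatPrompt(
  templates: Message[],
  inputs: Record<string, string>
): Message[] {
  return templates.map((m) => ({
    role: m.role,
    content: m.content.replace(/\{(\w+)\}/g, (_, k) => inputs[k] ?? ""),
  }));
}

const messages = formatChatPrompt(
  [
    {
      role: "system",
      content:
        "You are a helpful assistant that translates {input_language} to {output_language}.",
    },
    { role: "human", content: "{text}" },
  ],
  { input_language: "English", output_language: "French", text: "I love programming." }
);
console.log(messages[0].content);
// Prints: You are a helpful assistant that translates English to French.
```

Note that input variables can be spread across messages: `input_language` and `output_language` live in the system message while `text` lives in the human message, yet all three are supplied in one `call`.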
Usage in Streaming Mode
We can also construct an LLMChain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM in streaming mode, which streams back tokens as they are generated:
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
// Call the chain with the inputs and a callback for the streamed tokens
const res = await chain.call({ product: "colorful socks" }, [
  {
    handleLLMNewToken(token: string) {
      process.stdout.write(token);
    },
  },
]);
console.log({ res });
// { res: { text: '\n\nKaleidoscope Socks' } }
API Reference:
- OpenAI from langchain/llms/openai
- PromptTemplate from langchain/prompts
- LLMChain from langchain/chains
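The callback pattern above can be sketched in isolation: the model layer fires `handleLLMNewToken` once per token as it arrives, while still accumulating the full completion that the chain ultimately returns. The names below (`fakeTokenStream`, `streamWithCallback`) are illustrative stand-ins, not LangChain APIs:

```typescript
// Stand-in for a streaming LLM response: yields tokens one at a time.
async function* fakeTokenStream(tokens: string[]) {
  for (const t of tokens) yield t;
}

// Invokes the callback per token while accumulating the final text.
async function streamWithCallback(
  tokens: AsyncIterable<string>,
  handleLLMNewToken: (token: string) => void
): Promise<string> {
  let full = "";
  for await (const token of tokens) {
    handleLLMNewToken(token); // fires as each token arrives
    full += token;
  }
  return full; // the complete `text` the chain resolves with
}

streamWithCallback(fakeTokenStream(["Kaleido", "scope", " Socks"]), (t) =>
  process.stdout.write(t)
).then((text) => console.log("\nfinal:", text));
```

This is why the example both writes tokens to stdout as they stream and still receives the full text in `res` at the end: the callback and the resolved value are fed from the same token stream.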
Cancelling a running LLMChain
We can also cancel a running LLMChain by passing an AbortSignal to the call method:
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
import { LLMChain } from "langchain/chains";
// Create a new LLMChain from a PromptTemplate and an LLM in streaming mode.
const model = new OpenAI({ temperature: 0.9, streaming: true });
const prompt = PromptTemplate.fromTemplate(
  "Give me a long paragraph about {product}?"
);
const chain = new LLMChain({ llm: model, prompt });
const controller = new AbortController();
// Call `controller.abort()` somewhere to cancel the request.
setTimeout(() => {
  controller.abort();
}, 3000);
try {
  // Call the chain with the inputs and a callback for the streamed tokens
  const res = await chain.call(
    { product: "colorful socks", signal: controller.signal },
    [
      {
        handleLLMNewToken(token: string) {
          process.stdout.write(token);
        },
      },
    ]
  );
} catch (e) {
  console.log(e);
  // Error: Cancel: canceled
}
API Reference:
- OpenAI from langchain/llms/openai
- PromptTemplate from langchain/prompts
- LLMChain from langchain/chains
In this example we show cancellation in streaming mode, but it works the same way in non-streaming mode.
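The cancellation mechanics can be sketched with plain `AbortController`: long-running work checks `signal.aborted` between steps and throws once the controller fires. The names here (`generateUntilAborted`) are illustrative, standing in for the network request that LangChain actually cancels:

```typescript
// Simulated generation loop that cooperatively honors an AbortSignal.
async function generateUntilAborted(
  signal: AbortSignal,
  maxTokens: number
): Promise<string[]> {
  const tokens: string[] = [];
  for (let i = 0; i < maxTokens; i++) {
    if (signal.aborted) throw new Error("Cancel: canceled");
    tokens.push(`tok${i}`);
    await new Promise((r) => setTimeout(r, 10)); // simulate per-token latency
  }
  return tokens;
}

const controller = new AbortController();
setTimeout(() => controller.abort(), 35); // cancel after a few tokens
generateUntilAborted(controller.signal, 100).catch((e) => console.log(e.message));
// Logs: Cancel: canceled
```

Because the check runs between tokens, tokens already streamed are still delivered; only the remainder of the generation is cut short, which matches the streaming-mode behavior shown above.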