Quickstart, using LLMs
This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.
Installation and Setup
To get started, follow the installation instructions to install LangChain.
Picking an LLM
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.
For this example, we will be using OpenAI's APIs, so no additional setup is required.
Building a Language Model Application
Now that we have installed LangChain, we can start building our language model application.
LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or be used individually for simple applications.
LLMs: Get Predictions from a Language Model
The most basic building block of LangChain is calling an LLM on some input. Let's walk through a simple example of how to do this. For this purpose, let's pretend we are building a service that generates a company name based on what the company makes.
In order to do this, we first need to import the LLM wrapper.
import { OpenAI } from "langchain/llms/openai";
We will then need to set the environment variable for the OpenAI key. There are three options here:
1. We can set the value in a .env file and use the dotenv package to read it.

   1.1. For the OpenAI API:

   OPENAI_API_KEY="..."

   1.2. For Azure OpenAI:

   AZURE_OPENAI_API_KEY="..."
   AZURE_OPENAI_API_INSTANCE_NAME="..."
   AZURE_OPENAI_API_DEPLOYMENT_NAME="..."
   AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME="..."
   AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME="..."
   AZURE_OPENAI_API_VERSION="..."

2. Or we can export the environment variable with the following command in your shell:

   2.1. For the OpenAI API:

   export OPENAI_API_KEY=sk-....

   2.2. For Azure OpenAI:

   export AZURE_OPENAI_API_KEY="..."
   export AZURE_OPENAI_API_INSTANCE_NAME="..."
   export AZURE_OPENAI_API_DEPLOYMENT_NAME="..."
   export AZURE_OPENAI_API_COMPLETIONS_DEPLOYMENT_NAME="..."
   export AZURE_OPENAI_API_EMBEDDINGS_DEPLOYMENT_NAME="..."
   export AZURE_OPENAI_API_VERSION="..."

3. Or we can pass the key directly when initializing the wrapper, along with other arguments. In this example, we want the outputs to be more random, so we'll initialize the model with a high temperature.

   3.1. For the OpenAI API:

   const model = new OpenAI({ openAIApiKey: "sk-...", temperature: 0.9 });

   3.2. For Azure OpenAI:

   const model = new OpenAI({
     azureOpenAIApiKey: "...",
     azureOpenAIApiInstanceName: "....",
     azureOpenAIApiDeploymentName: "....",
     azureOpenAIApiVersion: "....",
     temperature: 0.9,
   });
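Whichever option you choose, a small guard at startup surfaces a missing key early. A minimal sketch (requireEnv is our own helper, not a LangChain API):

```typescript
// Fail fast at startup if a required key is missing, rather than hitting
// a confusing authentication error inside the first model call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; see the options above.`);
  }
  return value;
}

// Simulate a configured environment so the sketch runs anywhere.
process.env.EXAMPLE_API_KEY = "sk-demo";
console.log(requireEnv("EXAMPLE_API_KEY")); // "sk-demo"
```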
Once we have initialized the wrapper, we can now call it on some input!
const res = await model.call(
"What would be a good company name for a company that makes colorful socks?"
);
console.log(res);
{ res: '\n\nFantasy Sockery' }
Prompt Templates: Manage Prompts for LLMs
Calling an LLM is a great first step, but it's just the beginning. Normally when you use an LLM in an application, you are not sending user input directly to the LLM. Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.
For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks. In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.
This is easy to do with LangChain!
First, let's define the prompt template:
import { PromptTemplate } from "langchain/prompts";
const template = "What is a good name for a company that makes {product}?";
const prompt = new PromptTemplate({
template: template,
inputVariables: ["product"],
});
Let's now see how this works! We can call the .format method to format it.
const res = await prompt.format({ product: "colorful socks" });
console.log(res);
{ res: 'What is a good name for a company that makes colorful socks?' }
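Under the hood, .format is essentially placeholder substitution. Here is a minimal sketch of the idea (the formatTemplate helper is our own illustration, not LangChain's implementation):

```typescript
// Replace each {variable} placeholder in the template with the matching
// value -- the core idea behind a prompt template.
function formatTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_match: string, name: string) => {
    if (!(name in values)) {
      throw new Error(`No value provided for input variable "${name}"`);
    }
    return values[name];
  });
}

const toyTemplate = "What is a good name for a company that makes {product}?";
console.log(formatTemplate(toyTemplate, { product: "colorful socks" }));
// "What is a good name for a company that makes colorful socks?"
```

Throwing on a missing variable mirrors the validation you get from declaring inputVariables up front.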
Chains: Combine LLMs and Prompts in Multi-Step Workflows
Up until now, we've worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them.
A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains.
The core chain type is an LLMChain, which consists of a PromptTemplate and an LLM.
Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM.
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";
const model = new OpenAI({ temperature: 0.9 });
const template = "What is a good name for a company that makes {product}?";
const prompt = new PromptTemplate({
template: template,
inputVariables: ["product"],
});
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM:
import { LLMChain } from "langchain/chains";
const chain = new LLMChain({ llm: model, prompt: prompt });
Now we can run that chain, specifying only the product!
const res = await chain.call({ product: "colorful socks" });
console.log(res);
{ res: { text: '\n\nColorfulCo Sockery.' } }
There we go! There's the first chain - an LLM Chain. This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.
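As a recap, the whole flow can be sketched in a few lines with a stubbed model (every name below is our own toy stand-in, not part of LangChain):

```typescript
// A recap of the LLMChain flow: format the prompt from user input,
// then pass it to the model.
type LLM = (prompt: string) => Promise<string>;

class ToyLLMChain {
  constructor(private llm: LLM, private template: string) {}

  async call(values: Record<string, string>): Promise<string> {
    // Step 1: fill the template with the user's input variables.
    const prompt = this.template.replace(
      /\{(\w+)\}/g,
      (_m: string, key: string) => values[key]
    );
    // Step 2: send the formatted prompt to the LLM.
    return this.llm(prompt);
  }
}

// A stub LLM that echoes its prompt, so the sketch runs offline.
const echoLLM: LLM = async (prompt) => `LLM saw: ${prompt}`;

const toyChain = new ToyLLMChain(echoLLM, "Name a company that makes {product}.");
toyChain.call({ product: "colorful socks" }).then(console.log);
// "LLM saw: Name a company that makes colorful socks."
```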
Agents: Dynamically Run Chains Based on User Input
So far the chains we've looked at run in a predetermined order.
Agents do not: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.
When used correctly, agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest-level API.
In order to load agents, you should understand the following concepts:
- Tool: A function that performs a specific duty. This can be things like: Google Search, database lookup, code REPL, or other chains. The interface for a tool is currently a function that takes a string as input and returns a string as output.
- LLM: The language model powering the agent.
- Agent: The agent to use. This should be a string that references a supported agent class. Because this tutorial focuses on the simplest, highest-level API, this only covers using the standard supported agents.
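The loop these concepts fit into can be sketched with stubs (everything below is hypothetical; real LangChain agents are far richer):

```typescript
// A toy sketch of the agent loop: the "LLM" decides which tool to call,
// observes the output, and eventually returns a final answer.
type Tool = (input: string) => string;

const tools: Record<string, Tool> = {
  // A toy calculator that only handles "a + b".
  calculator: (input) => {
    const [a, b] = input.split("+").map((s) => parseFloat(s.trim()));
    return String(a + b);
  },
};

type Decision = { action: string; input: string } | { finish: string };

// A stub in place of a real LLM: call the calculator once, then finish.
function stubLLM(observations: string[]): Decision {
  if (observations.length === 0) return { action: "calculator", input: "2 + 2" };
  return { finish: `The answer is ${observations[observations.length - 1]}` };
}

function runAgent(maxSteps = 5): string {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const decision = stubLLM(observations);
    if ("finish" in decision) return decision.finish; // return to the user
    const observation = tools[decision.action](decision.input); // use a tool
    observations.push(observation); // ...and observe its output
  }
  return "Agent stopped after too many steps.";
}

console.log(runAgent()); // "The answer is 4"
```

A real agent makes each decision by prompting the LLM with the tools' descriptions, but the control flow is the same: decide, act, observe, repeat.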
For this example, you'll need to set the SerpAPI environment variables in the .env file.
SERPAPI_API_KEY="..."
Install the serpapi package (Google Search API):
- npm
npm install -S serpapi
- Yarn
yarn add serpapi
- pnpm
pnpm add serpapi
Now we can get started!
import { OpenAI } from "langchain/llms/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { SerpAPI } from "langchain/tools";
import { Calculator } from "langchain/tools/calculator";
const model = new OpenAI({ temperature: 0 });
const tools = [
new SerpAPI(process.env.SERPAPI_API_KEY, {
location: "Austin,Texas,United States",
hl: "en",
gl: "us",
}),
new Calculator(),
];
const executor = await initializeAgentExecutorWithOptions(tools, model, {
agentType: "zero-shot-react-description",
});
console.log("Loaded agent.");
const input =
"Who is Olivia Wilde's boyfriend?" +
" What is his current age raised to the 0.23 power?";
console.log(`Executing with input "${input}"...`);
const result = await executor.call({ input });
console.log(`Got output ${result.output}`);
langchain-examples:start: Executing with input "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?"...
langchain-examples:start: Got output Olivia Wilde's boyfriend is Jason Sudeikis, and his current age raised to the 0.23 power is 2.4242784855673896.
Memory: Add State to Chains and Agents
So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of "long-term memory".
LangChain provides several specially created chains just for this purpose. This section walks through using one of those chains (the ConversationChain).
By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain.
import { OpenAI } from "langchain/llms/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";
const model = new OpenAI({});
const memory = new BufferMemory();
const chain = new ConversationChain({ llm: model, memory: memory });
const res1 = await chain.call({ input: "Hi! I'm Jim." });
console.log(res1);
{response: " Hi Jim! It's nice to meet you. My name is AI. What would you like to talk about?"}
const res2 = await chain.call({ input: "What's my name?" });
console.log(res2);
{response: ' You said your name is Jim. Is there anything else you would like to talk about?'}
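What the memory just did can be sketched in a few lines (ToyBufferMemory is our own toy, not LangChain's implementation):

```typescript
// A toy version of buffer memory: save every exchange verbatim and
// prepend the full history to the next prompt as context.
class ToyBufferMemory {
  private history: string[] = [];

  save(humanInput: string, aiOutput: string): void {
    this.history.push(`Human: ${humanInput}`, `AI: ${aiOutput}`);
  }

  // The context that gets prepended to the next prompt.
  load(): string {
    return this.history.join("\n");
  }
}

const toyMemory = new ToyBufferMemory();
toyMemory.save("Hi! I'm Jim.", "Hi Jim! It's nice to meet you.");

// The next prompt now carries the earlier exchange, which is how the
// model can answer "What's my name?".
const nextPrompt = `${toyMemory.load()}\nHuman: What's my name?\nAI:`;
console.log(nextPrompt);
```

Because the entire history is replayed on every call, this kind of memory eventually outgrows the model's context window; more sophisticated memory types exist for that reason.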
Streaming
You can also use the streaming API to get words streamed back to you as they are generated. This is useful for, e.g., chatbots, where you want to show the user what is being generated as it is being generated. Note: as of this writing, OpenAI does not support tokenUsage reporting while streaming is enabled.
import { OpenAI } from "langchain/llms/openai";
// To enable streaming, we pass in `streaming: true` to the LLM constructor.
// Additionally, we pass in a handler for the `handleLLMNewToken` event.
const chat = new OpenAI({
streaming: true,
callbacks: [
{
handleLLMNewToken(token: string) {
process.stdout.write(token);
},
},
],
});
await chat.call("Write me a song about sparkling water.");
/*
Verse 1
Crystal clear and made with care
Sparkling water on my lips, so refreshing in the air
Fizzy bubbles, light and sweet
My favorite beverage I can’t help but repeat
Chorus
A toast to sparkling water, I’m feeling so alive
Let’s take a sip, and let’s take a drive
A toast to sparkling water, it’s the best I’ve had in my life
It’s the best way to start off the night
Verse 2
It’s the perfect drink to quench my thirst
It’s the best way to stay hydrated, it’s the first
A few ice cubes, a splash of lime
It will make any day feel sublime
...
*/
API Reference:
- OpenAI from langchain/llms/openai