# Output Parsers
Output parsers are responsible for taking the output of an LLM and transforming it into a more suitable format. This is very useful when you are using LLMs to generate any form of structured data.
Besides having a large collection of different types of output parsers, one distinguishing benefit of LangChain OutputParsers is that many of them support streaming.
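For example, a streaming-capable parser can yield progressively more complete results as tokens arrive. Here is a minimal sketch, assuming the `langchain-openai` package is installed and an OpenAI API key is configured; the model name is only an illustration and any chat model can be swapped in:

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Return a JSON object with keys 'name' and 'capital' for {country}."
)
# Any chat model works here; gpt-4o-mini is only an example.
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
chain = prompt | model | JsonOutputParser()

# Because JsonOutputParser supports streaming, each chunk is a progressively
# more complete JSON object rather than a raw text fragment.
for chunk in chain.stream({"country": "France"}):
    print(chunk)
```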
## Quick Start
See this quick-start guide for an introduction to output parsers and how to work with them.
## Output Parser Types
LangChain has many different types of output parsers. The table below lists the output parsers LangChain supports, along with the following information about each:
- Name: The name of the output parser.
- Supports Streaming: Whether the output parser supports streaming.
- Has Format Instructions: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser. (A sketch of format instructions follows this list.)
- Calls LLM: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output (see the sketch after the table below).
- Input Type: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs.
- Output Type: The output type of the object returned by the parser.
- Description: Our commentary on this output parser and when to use it.
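As a sketch of what format instructions look like, assuming a recent `langchain-core` release that accepts Pydantic v2 models (the `Joke` schema is invented for illustration):

```python
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline of the joke")

parser = PydanticOutputParser(pydantic_object=Joke)

# get_format_instructions() returns plain text you splice into your prompt so
# the model knows which schema to produce; parsers that rely on function
# calling, or that wrap another parser, don't need this step.
print(parser.get_format_instructions())
```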
| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description |
|---|---|---|---|---|---|---|
| OpenAITools | | (Passes `tools` to model) | | `Message` (with `tool_choice`) | JSON object | Uses latest OpenAI function calling args `tools` and `tool_choice` to structure the return output. If you are using a model that supports function calling, this is generally the most reliable method. |
| OpenAIFunctions | ✅ | (Passes `functions` to model) | | `Message` (with `function_call`) | JSON object | Uses legacy OpenAI function calling args `functions` and `function_call` to structure the return output. |
| JSON | ✅ | ✅ | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. |
| XML | ✅ | ✅ | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| CSV | ✅ | ✅ | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. |
| OutputFixing | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. |
| RetryWithError | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the original inputs, the bad output, and the error message to an LLM and ask it to fix it. Compared to OutputFixingParser, this one also sends the original instructions. |
| Pydantic | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. |
| YAML | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. Uses YAML to encode it. |
| PandasDataFrame | | ✅ | | `str` \| `Message` | `dict` | Useful for doing operations with pandas DataFrames. |
| Enum | | ✅ | | `str` \| `Message` | `Enum` | Parses response into one of the provided enum values. |
| Datetime | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime. |
| Structured | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
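The two LLM-calling wrappers in the table (OutputFixing and RetryWithError) are built around a base parser plus a model. A hedged sketch using OutputFixingParser, assuming the `langchain` and `langchain-openai` packages are installed; the `Actor` schema and the malformed string are invented for illustration:

```python
from langchain.output_parsers import OutputFixingParser
from langchain_core.output_parsers import PydanticOutputParser
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Actor(BaseModel):
    name: str = Field(description="name of the actor")
    film_names: list[str] = Field(description="films they starred in")

base_parser = PydanticOutputParser(pydantic_object=Actor)
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=ChatOpenAI())

# Single quotes make this invalid JSON, so the base parser raises; the fixing
# parser forwards the bad output and the error to the LLM and re-parses the
# corrected text.
misformatted = "{'name': 'Tom Hanks', 'film_names': ['Forrest Gump']}"
print(fixing_parser.parse(misformatted))
```

RetryWithErrorOutputParser follows the same wrapping pattern but also resends the original prompt, which helps when the fix requires knowing what was asked, not just what came back.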