Description
Overview
This workflow showcases how to build a fully customizable conversational agent using LangChain's Code Node and Google Gemini. Unlike n8n's built-in Conversation Agent, this setup gives you total control over prompts, memory, and model behavior, all while minimizing unnecessary token usage from tool-calling overhead.
Perfect for advanced users, developers, and prompt engineers looking to design deeply personalized AI agents.
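The core of the workflow is the Construct & Execute LLM Prompt node, a LangChain Code node that assembles the prompt by hand and calls the model directly. The sketch below shows the same pattern as standalone LangChain.js code; the model name, API key variable, and prompt wording are illustrative assumptions, and inside the workflow you would use the model connected to the Code node rather than instantiating one yourself.

```javascript
// Illustrative sketch (standalone LangChain.js) of what the Construct & Execute
// LLM Prompt node does: build the prompt explicitly and invoke Gemini directly.
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const model = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-flash",          // assumed model; any Gemini chat model works
  apiKey: process.env.GOOGLE_API_KEY, // your Gemini API key
  temperature: 0.7,
});

// Full control over the prompt: no hidden tool-calling scaffolding is appended.
const prompt = ChatPromptTemplate.fromTemplate(
  `You are a helpful, concise assistant.

Conversation so far:
{chat_history}

User: {input}
Assistant:`
);

const chain = prompt.pipe(model);
const reply = await chain.invoke({
  chat_history: "User: Hi\nAssistant: Hello! How can I help?",
  input: "Summarise what you can do.",
});
console.log(reply.content);
```

Because the prompt is built by hand, nothing related to tool calling is silently added to each request, which is where the token savings over the built-in Conversation Agent come from.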
Setup Instructions
Configure Gemini API Key – Add your Google Gemini key (or switch to OpenAI, Mistral, etc.)
Enable LangChain Code Node – This workflow runs only on self-hosted n8n
Choose Chat Input Mode:
– Test inside the editor using the Chat button
– Or trigger from a live chat UI via the When Chat Message Received node (a minimal read-out sketch follows this list)
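If you route messages in via the When Chat Message Received node, the trigger's output carries the user's text and a session identifier. Below is a minimal sketch of reading it in a downstream Code node, assuming the commonly used `chatInput` and `sessionId` field names; verify against your trigger's actual output, as field names can differ between n8n versions.

```javascript
// Sketch: read the chat trigger's output in a downstream Code node (JavaScript).
// Field names chatInput / sessionId are assumptions based on typical trigger output.
const items = $input.all();

return items.map((item) => {
  const userMessage = item.json.chatInput ?? "";
  const sessionId = item.json.sessionId ?? "default";
  // Pass a normalized shape on to the prompt-construction step.
  return { json: { input: userMessage, sessionId } };
});
```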
Customization Options
Interface Settings – Edit the title, instructions, and placeholder text in the chat UI node
Prompt Engineering – Define the agent's tone, rules, and reply logic in the Construct & Execute LLM Prompt node (see the prompt sketch after this list)
Be sure to keep the {chat_history} and {input} placeholders intact
Model Selection – Swap Gemini for OpenAI, Mistral, or any other LLM via a LangChain-compatible model node
Memory Control – Adjust how much prior conversation history is remembered in the Store Conversation History node (also illustrated in the sketch below)
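A rough sketch of the two customization points above: a prompt that keeps the required placeholders, and a simple cap on how much history is carried forward. The 10-message cap, the formatHistory helper, and the sample messages are illustrative assumptions, not the template's exact code.

```javascript
// Sketch: placeholder-preserving prompt plus a simple history cap.
import { PromptTemplate } from "@langchain/core/prompts";

const MAX_MESSAGES = 10; // assumed cap on remembered turns

// Keep only the most recent messages, mirroring what a limit in the
// Store Conversation History node would do.
function trimHistory(messages) {
  return messages.slice(-MAX_MESSAGES);
}

// Render { role, text } pairs into the plain-text block the prompt expects.
function formatHistory(messages) {
  return messages.map((m) => `${m.role}: ${m.text}`).join("\n");
}

// Edit the tone and rules freely, but keep {chat_history} and {input} intact,
// otherwise the workflow has nowhere to inject memory and the new user message.
const prompt = PromptTemplate.fromTemplate(
  `You are a friendly support agent. Answer in at most three sentences.

{chat_history}

User: {input}
Assistant:`
);

const history = trimHistory([
  { role: "User", text: "What does this workflow do?" },
  { role: "Assistant", text: "It runs a custom conversational agent." },
]);

const renderedPrompt = await prompt.format({
  chat_history: formatHistory(history),
  input: "Can I swap Gemini for another model?",
});
console.log(renderedPrompt);
```

Raising the cap keeps more context at the cost of more input tokens per turn; lowering it does the opposite.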
Requirements
Note: This template requires the LangChain Code Node, which is available only on self-hosted n8n instances.
Make sure LangChain is enabled in your custom nodes configuration.












