Description
A/B test your AI agent prompts in real time using Supabase session logic and OpenAI. Perfect for optimizing chatbot performance, engagement, or conversion across different prompt styles.
How It Works:
- Chat Trigger → Messages are received through n8n's chat interface
- Session Check → Supabase is queried to see whether the session ID already exists
- Random Assignment → If it's a new session, the user is randomly assigned to either the baseline or the alternative prompt (see the sketch after this list)
- Session Memory → The session ID and chosen prompt group are stored in Supabase
- LLM Response → OpenAI generates a response using the assigned prompt variant
- Persistent Prompting → The same prompt is used throughout the session to maintain context
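For orientation, here is a minimal sketch of the session-check, random-assignment, and session-memory steps as they might look in an n8n Code node using supabase-js. The table name `ab_sessions`, its columns `session_id` and `prompt_group`, and the prompt texts are illustrative assumptions, not the workflow's actual schema:

```typescript
import { createClient } from '@supabase/supabase-js';

// Hypothetical table: ab_sessions(session_id text primary key, prompt_group text)
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Example prompt variants; the real workflow would hold its own prompt texts.
const PROMPTS: Record<string, string> = {
  baseline: 'You are a helpful support assistant.',
  alternative: 'You are a friendly, upbeat support assistant. Keep answers short.',
};

// Return the prompt group for a session, assigning one at random on first contact.
async function getSessionPrompt(
  sessionId: string,
): Promise<{ group: string; systemPrompt: string }> {
  // Session Check: does this session already have an assigned group?
  const { data, error } = await supabase
    .from('ab_sessions')
    .select('prompt_group')
    .eq('session_id', sessionId)
    .maybeSingle();
  if (error) throw error;

  // Persistent Prompting: reuse the stored group for the rest of the session.
  if (data) return { group: data.prompt_group, systemPrompt: PROMPTS[data.prompt_group] };

  // Random Assignment: 50/50 split between baseline and alternative.
  const group = Math.random() < 0.5 ? 'baseline' : 'alternative';

  // Session Memory: persist the assignment so later messages stay consistent.
  const { error: insertError } = await supabase
    .from('ab_sessions')
    .insert({ session_id: sessionId, prompt_group: group });
  if (insertError) throw insertError;

  return { group, systemPrompt: PROMPTS[group] };
}
```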
It Automates:
- A/B prompt assignment using session-based logic
- Response generation based on the stored prompt group
- Session tracking in a live Supabase table
- Real-time experimentation without interrupting the chat flow
Why Choose This Workflow:
- Optimize prompt effectiveness using real user interactions
- Run scientific A/B tests without backend changes
- Maintain a consistent experience per session
- Easily extend to test other LLM variables, e.g. temperature or max tokens (see the sketch after this list)
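One way to extend the test beyond prompt wording, sketched under the assumption that each A/B group is a small config object rather than a bare prompt string (all names and values here are hypothetical):

```typescript
// Hypothetical per-group configuration: each variant can carry model
// parameters in addition to its system prompt.
interface PromptVariant {
  systemPrompt: string;
  temperature: number;
  maxTokens: number;
}

const VARIANTS: Record<string, PromptVariant> = {
  baseline: {
    systemPrompt: 'You are a helpful support assistant.',
    temperature: 0.7,
    maxTokens: 512,
  },
  alternative: {
    systemPrompt: 'You are a friendly, upbeat support assistant. Keep answers short.',
    temperature: 0.3, // test a more deterministic style
    maxTokens: 256,   // and tighter responses
  },
};
```

Storing the variant key in Supabase, as in the earlier sketch, keeps every parameter of the experiment consistent for the whole session.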
Who Is This For:
- Prompt engineers and AI researchers
- Product teams running experiments on AI user interfaces
- Startups fine-tuning onboarding and chat-support bots
- Developers building dynamic LLM-powered experiences
Integrations:
- Supabase (stores the session ID and prompt variant)
- OpenAI (generates AI responses)
- n8n Chat or Webhook (message input and output)
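For completeness, a minimal sketch of the response-generation step using the official `openai` Node package. The `gpt-4o-mini` model name is an illustrative choice, and `PromptVariant` matches the config shape assumed in the earlier sketch:

```typescript
import OpenAI from 'openai';

// Minimal variant shape, matching the config sketch above.
type PromptVariant = { systemPrompt: string; temperature: number; maxTokens: number };

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Generate a reply using whichever variant the session was assigned.
async function reply(variant: PromptVariant, userMessage: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative model choice
    temperature: variant.temperature,
    max_tokens: variant.maxTokens,
    messages: [
      { role: 'system', content: variant.systemPrompt },
      { role: 'user', content: userMessage },
    ],
  });
  return completion.choices[0].message.content ?? '';
}
```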
Smarter Prompts Through Real-Time Split Testing
Whether you're fine-tuning a support chatbot or experimenting with sales copy, this workflow helps you validate which prompt performs best, scientifically and at scale.
Link: https://lovable.dev/projects/3ce5d6ee-fc85-4465-840d-ea6cbb163b13