WellAlly康心伴
Development

Building a Smart AI Meal Planner: Reliable JSON with Next.js & LangChain

Learn to build a generative AI meal planner using Next.js and LangChain. This tutorial covers prompt engineering with Zod schemas to guarantee structured, reliable JSON output from your LLM.

2025-12-13
9 min read

Ever tried asking an LLM for structured data, like a JSON object, only to get a slightly different format each time? One day it's a string, the next a malformed object. This unpredictability makes it a nightmare to build reliable applications. The problem isn't the LLM's creativity; it's the lack of constraints.

In this tutorial, we'll tackle this problem head-on by building a smart, generative AI meal planner. This tool will take a user's goals (e.g., "high protein, low carb") and dietary restrictions, and generate a complete, structured 7-day meal plan.

We'll use Next.js for the frontend and API layer, and LangChain's structured output tools to ensure our AI not only produces a great meal plan but also delivers it as valid, parsable JSON every time. This is the key to moving from AI novelties to production-ready AI features.

Prerequisites

  • Node.js (v18 or later)
  • An OpenAI API key
  • Basic understanding of React, TypeScript, and Next.js

Understanding the Problem: The Chaos of Unstructured AI Output

When you prompt a large language model (LLM) like GPT-4, you're essentially having a conversation. The model's free-text responses are great for chatbots but terrible for applications that need to programmatically use the output.

Imagine trying to parse this:

"Sure, here is a meal plan. For Monday, you could have scrambled eggs... then for lunch, maybe a chicken salad. Dinner could be salmon..."

This is brittle and prone to breaking. What we need is a reliable data structure. This is where LangChain's structured output tools come in, allowing us to define a schema and force the LLM's response to conform to it.
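To make the contrast concrete, here is a sketch in TypeScript of the predictable shape we're aiming for. The field names (`dish_name`, `calories`) match the Zod schema we'll define later in this tutorial:

```typescript
// The structured shape we want instead of free text.
type Meal = { dish_name: string; calories: number };
type DailyPlan = { breakfast: Meal; lunch: Meal; dinner: Meal };

const monday: DailyPlan = {
  breakfast: { dish_name: 'Scrambled eggs', calories: 350 },
  lunch: { dish_name: 'Chicken salad', calories: 450 },
  dinner: { dish_name: 'Grilled salmon', calories: 550 },
};

// Now the data is trivially machine-readable:
console.log(monday.lunch.dish_name); // prints "Chicken salad"
```

With a shape like this, the frontend never has to guess where the data lives.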

Project Setup

First, let's get our Next.js project up and running.

npx create-next-app@latest ai-meal-planner --typescript --tailwind --eslint
cd ai-meal-planner

Next, we need to install the necessary libraries for LangChain and its OpenAI integration, plus Zod for schema validation.

npm install langchain @langchain/openai zod

Finally, create a .env.local file in your project root to store your OpenAI API key securely.

# .env.local
OPENAI_API_KEY="your-openai-api-key-here"

Now, start the development server to make sure everything is working.

npm run dev

You should see the default Next.js starter page at http://localhost:3000.

Step 1: Building the User Interface

Let's create a simple form to capture the user's dietary preferences. We'll use basic React state management for this.

What we're doing

We will replace the content of app/page.tsx with a form that collects the user's dietary goals and restrictions. When submitted, this form will trigger our API call to the backend.

Implementation

// app/page.tsx
'use client';

import { useState } from 'react';

type Meal = { dish_name: string; calories: number };
type DailyPlan = { breakfast: Meal; lunch: Meal; dinner: Meal };
type WeeklyPlan = Record<string, DailyPlan>;

export default function Home() {
  const [goals, setGoals] = useState('');
  const [restrictions, setRestrictions] = useState('');
  const [mealPlan, setMealPlan] = useState<WeeklyPlan | null>(null);
  const [isLoading, setIsLoading] = useState(false);

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    setIsLoading(true);
    setMealPlan(null);

    try {
      const response = await fetch('/api/generate-meal-plan', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ goals, restrictions }),
      });

      if (!response.ok) {
        throw new Error('Failed to generate meal plan');
      }

      const data = await response.json();
      setMealPlan(data.mealPlan);
    } catch (error) {
      console.error(error);
      alert('An error occurred. Please try again.');
    } finally {
      setIsLoading(false);
    }
  };

  return (
    <main className="flex min-h-screen flex-col items-center justify-center p-24 bg-gray-50">
      <div className="w-full max-w-2xl bg-white p-8 rounded-lg shadow-md">
        <h1 className="text-3xl font-bold mb-6 text-center text-gray-800">
          AI Meal Planner 🥗
        </h1>
        <form onSubmit={handleSubmit}>
          <div className="mb-4">
            <label htmlFor="goals" className="block text-gray-700 font-medium mb-2">
              Dietary Goals
            </label>
            <input
              type="text"
              id="goals"
              value={goals}
              onChange={(e) => setGoals(e.target.value)}
              className="w-full p-3 border rounded-lg focus:ring-2 focus:ring-blue-500"
              placeholder="e.g., High protein, low carb"
              required
            />
          </div>
          <div className="mb-6">
            <label htmlFor="restrictions" className="block text-gray-700 font-medium mb-2">
              Allergies/Restrictions
            </label>
            <input
              type="text"
              id="restrictions"
              value={restrictions}
              onChange={(e) => setRestrictions(e.target.value)}
              className="w-full p-3 border rounded-lg focus:ring-2 focus:ring-blue-500"
              placeholder="e.g., Gluten-free, no nuts"
            />
          </div>
          <button
            type="submit"
            disabled={isLoading}
            className="w-full bg-blue-600 text-white p-3 rounded-lg font-semibold hover:bg-blue-700 disabled:bg-gray-400"
          >
            {isLoading ? 'Generating...' : 'Generate Meal Plan'}
          </button>
        </form>

        {mealPlan && (
          <div className="mt-8">
            <h2 className="text-2xl font-bold mb-4 text-center text-gray-800">Your 7-Day Meal Plan</h2>
            {/* We will render the meal plan here */}
          </div>
        )}
      </div>
    </main>
  );
}

How it works

This is a standard client-side React component. We use the useState hook to manage the form inputs, loading state, and the final meal plan data. The handleSubmit function sends the user's input to our yet-to-be-created API route at /api/generate-meal-plan.

Step 2: Defining the Structured Output with Zod and LangChain

This is where the magic happens. We'll define the exact JSON structure we want the LLM to return. LangChain uses this schema to provide instructions to the model.

What we're doing

We're creating a Next.js API route. Inside this route, we'll define a Zod schema that represents a perfect meal plan. This schema will include days of the week, meals (breakfast, lunch, dinner), dish names, and calorie counts.

Implementation

First, create the API route file:

mkdir -p app/api/generate-meal-plan
touch app/api/generate-meal-plan/route.ts

Now, let's write the code for our API route.

// app/api/generate-meal-plan/route.ts
import { NextResponse } from 'next/server';
import { z } from 'zod';
import { ChatOpenAI } from '@langchain/openai';
import { StructuredOutputParser } from 'langchain/output_parsers';
import { PromptTemplate } from '@langchain/core/prompts';

// Define the schema for a single meal
const mealSchema = z.object({
  dish_name: z.string().describe('The name of the dish.'),
  calories: z.number().describe('Estimated calories for the meal.'),
});

// Define the schema for a full day's plan
const dailyPlanSchema = z.object({
  breakfast: mealSchema,
  lunch: mealSchema,
  dinner: mealSchema,
});

// Define the schema for the entire weekly meal plan
const weeklyPlanSchema = z.object({
  monday: dailyPlanSchema,
  tuesday: dailyPlanSchema,
  wednesday: dailyPlanSchema,
  thursday: dailyPlanSchema,
  friday: dailyPlanSchema,
  saturday: dailyPlanSchema,
  sunday: dailyPlanSchema,
});

export async function POST(req: Request) {
  try {
    const body = await req.json();
    const { goals, restrictions } = body;

    // 1. Initialize the Output Parser
    const parser = StructuredOutputParser.fromZodSchema(weeklyPlanSchema);

    // 2. Create the Prompt Template
    const prompt = new PromptTemplate({
      template: `You are an expert nutritionist. Generate a 7-day meal plan based on the user's goals and restrictions.
      {format_instructions}
      User's Goals: {goals}
      Dietary Restrictions: {restrictions}
      `,
      inputVariables: ['goals', 'restrictions'],
      partialVariables: { format_instructions: parser.getFormatInstructions() },
    });
    
    // 3. Initialize the Chat Model
    const model = new ChatOpenAI({
      modelName: 'gpt-3.5-turbo',
      temperature: 0.7,
    });
    
    // 4. Create the chain and invoke it
    const chain = prompt.pipe(model).pipe(parser);
    const mealPlan = await chain.invoke({
      goals: goals,
      restrictions: restrictions,
    });

    return NextResponse.json({ mealPlan }, { status: 200 });
  } catch (error) {
    console.error('Error generating meal plan:', error);
    return NextResponse.json({ error: 'Failed to generate meal plan.' }, { status: 500 });
  }
}

How it works

  1. Schema Definition (Zod): We use Zod to create a detailed schema for our desired output. The .describe() method is crucial—LangChain uses these descriptions to instruct the LLM on what data to put in each field.
  2. StructuredOutputParser: We create a parser instance from our Zod schema. This object has a special method, getFormatInstructions(), which generates a detailed text description of the required JSON format for the LLM.
  3. PromptTemplate: We craft a prompt that tells the LLM its role ("expert nutritionist") and includes placeholders for our user input ({goals}, {restrictions}) and, most importantly, the {format_instructions}. LangChain automatically injects the JSON format instructions here.
  4. Chain Execution: We use the .pipe() method to create a sequence: the formatted prompt goes to the model, and the model's raw output is then sent to the parser. The parser validates the output against our schema and throws if it doesn't conform, so a successful response is always a valid JSON object matching our weeklyPlanSchema. (If you want automatic repair of malformed output, you can wrap this parser in LangChain's OutputFixingParser, which asks the model to correct its own mistakes.)
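To demystify step 4, here is a simplified, dependency-free sketch of what a structured output parser does with the model's raw reply. The real LangChain implementation is considerably more thorough; this only illustrates the core idea:

```typescript
// Simplified sketch: extract the JSON payload from the model's reply
// (which is often wrapped in a markdown fence), parse it, and check it
// against the expected shape.
type Meal = { dish_name: string; calories: number };

function parseMeal(raw: string): Meal {
  // Strip an optional ```json ... ``` fence around the payload.
  const match = raw.match(/`{3}(?:json)?\s*([\s\S]*?)\s*`{3}/);
  const payload = match ? match[1] : raw;
  const data = JSON.parse(payload);
  // Validate the shape; a real parser surfaces a detailed error here.
  if (typeof data.dish_name !== 'string' || typeof data.calories !== 'number') {
    throw new Error('Output does not match the expected schema');
  }
  return data as Meal;
}

// Simulate a typical fenced LLM reply.
const fence = '`'.repeat(3);
const raw = `${fence}json\n{"dish_name": "Omelette", "calories": 300}\n${fence}`;
const meal = parseMeal(raw);
console.log(meal.calories); // prints 300
```

If the payload is missing a field or has the wrong type, the function throws instead of handing the frontend malformed data.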

Step 3: Displaying the Structured Data

Now that our backend reliably provides structured JSON, displaying it on the frontend is straightforward and robust.

What we're doing

We'll update our app/page.tsx file to render the meal plan data in a clean, readable format.

Implementation

// app/page.tsx (add this inside the main component)

// ... (inside the Home component, after the form)

{mealPlan && (
  <div className="mt-8 w-full">
    <h2 className="text-2xl font-bold mb-4 text-center text-gray-800">Your 7-Day Meal Plan</h2>
    <div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4">
      {Object.entries(mealPlan).map(([day, meals]) => (
        <div key={day} className="bg-gray-100 p-4 rounded-lg">
          <h3 className="text-xl font-semibold capitalize mb-2 text-gray-700">{day}</h3>
          <ul>
            {(Object.entries(meals) as [string, { dish_name: string; calories: number }][]).map(([mealType, details]) => (
              <li key={mealType} className="mb-1">
                <span className="font-semibold capitalize">{mealType}:</span> {details.dish_name} ({details.calories} kcal)
              </li>
            ))}
          </ul>
        </div>
      ))}
    </div>
  </div>
)}


How it works

Since we are guaranteed to receive a JSON object with a known structure (mealPlan), we can confidently use Object.entries() to map over the days and meals without worrying about undefined errors or inconsistent data formats. This makes our frontend code cleaner, more predictable, and easier to maintain.
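A known shape also makes derived UI values trivial to compute. A small illustrative sketch (the types mirror the schema from Step 2):

```typescript
// Because the response shape is guaranteed, derived values like a
// daily calorie total are a one-liner.
type Meal = { dish_name: string; calories: number };
type DailyPlan = { breakfast: Meal; lunch: Meal; dinner: Meal };

function dailyCalories(day: DailyPlan): number {
  return Object.values(day).reduce((sum, meal) => sum + meal.calories, 0);
}

const monday: DailyPlan = {
  breakfast: { dish_name: 'Oatmeal', calories: 300 },
  lunch: { dish_name: 'Chicken wrap', calories: 450 },
  dinner: { dish_name: 'Baked salmon', calories: 550 },
};

console.log(dailyCalories(monday)); // prints 1300
```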

Putting It All Together

You now have a fully functional AI meal planner!

  1. Frontend (app/page.tsx): Captures user goals and restrictions.
  2. API Route (app/api/generate-meal-plan/route.ts):
    • Defines the desired JSON structure using Zod.
    • Uses LangChain to create a prompt with formatting instructions.
    • Calls the OpenAI API and parses the output to guarantee valid JSON.
  3. Result: The structured JSON is sent back to the frontend and displayed in a clean, organized layout.

Security Best Practices

  • Environment Variables: Always keep your OPENAI_API_KEY in .env.local and never expose it to the client side. The API call is made securely from our server-side API route.
  • Input Validation: While we didn't implement it here, in a production app, you should validate and sanitize user input on the server side to prevent prompt injection attacks.
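As a starting point, here is a minimal, dependency-free sketch of what that server-side validation could look like. The length limits and error messages are illustrative choices, not requirements:

```typescript
// Minimal sketch of server-side input validation for the API route.
// Caps input lengths so users can't stuff arbitrarily long text into
// the prompt, and rejects malformed request bodies outright.
function validateInput(body: unknown): { goals: string; restrictions: string } {
  if (typeof body !== 'object' || body === null) {
    throw new Error('Request body must be a JSON object');
  }
  const { goals, restrictions } = body as Record<string, unknown>;
  if (typeof goals !== 'string' || goals.trim().length === 0) {
    throw new Error('goals is required');
  }
  return {
    goals: goals.slice(0, 200),
    restrictions: typeof restrictions === 'string' ? restrictions.slice(0, 200) : '',
  };
}

const clean = validateInput({ goals: 'high protein' });
console.log(clean.goals); // prints "high protein"
```

You would call this at the top of the POST handler, before the user text ever reaches the prompt template.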

Alternative Approaches

  • Direct OpenAI API Call: You could call the OpenAI API directly and use its JSON mode (response_format: { type: 'json_object' }). However, LangChain's structured output provides a more robust, model-agnostic layer with built-in parsing and schema validation.
  • Different Schema Definitions: Zod is the most ergonomic option in TypeScript, but LangChain also supports simple name/description pairs via StructuredOutputParser.fromNamesAndDescriptions(), and newer chat models can accept a raw JSON Schema object through withStructuredOutput().

Conclusion

We've successfully built a practical AI application that solves a common developer pain point: unreliable LLM output. By combining the frontend power of Next.js with the structured data guarantees of LangChain, we created a tool that is both smart and robust.

The key takeaway is that by defining a clear data contract with the LLM through schemas, we can build predictable, production-ready AI features.

Next Steps

  • Add a Database: Save generated meal plans for users.
  • Incorporate a Recipe API: Link each meal to a real recipe.
  • Improve the UI: Add more detailed views, loading skeletons, and better error handling.


Article Tags

nextjs · ai · langchain · healthtech

WellAlly's core development team, comprised of healthcare professionals, software engineers, and UX designers committed to revolutionizing digital health management.

Expertise

Healthcare Technology · Software Development · User Experience · AI & Machine Learning

Found this article helpful?

Try KangXinBan and start your health management journey

© 2024 康心伴 WellAlly · Professional Health Management