
Building a Blazing-Fast Nutrition Search API with Go and Redis

Leverage Go's performance and Redis's caching power to build a search API that provides sub-10ms results for a database of millions of food items.

2025-12-15
8 min read

In the world of health and wellness apps, data is king. Users expect instant access to nutritional information for millions of food items. A slow, clunky search is a deal-breaker. If your API takes seconds to respond, you've already lost. The challenge is clear: how do you query a massive dataset and return results in the blink of an eye?

Today, we're going to tackle this problem head-on. We'll build a blazing-fast nutrition search API using the raw power of Go and the incredible speed of Redis. We're not just using Redis as a simple key-value cache; we'll be leveraging the RediSearch module to create a powerful, indexed search engine that can deliver results from millions of JSON documents with sub-10ms latency.

We will build a REST API that allows users to perform full-text searches for food items. We'll see how to structure our data, index it efficiently, and serve it through a clean Go API.

Prerequisites

  • Go (v1.18+): A solid understanding of Go basics is required.
  • Docker: The easiest way to run Redis with the necessary modules.
  • A REST Client: Tools like curl, Postman, or Insomnia to test our API.

Why This Matters to Developers

This isn't just a theoretical exercise. The architecture we'll build is a powerful pattern for any application requiring high-speed search over large, structured datasets—product catalogs, user directories, document repositories, and more. You'll gain practical skills in microservice optimization, advanced Redis usage, and building high-throughput backend systems.

Understanding the Problem

A traditional approach might involve a relational database (like PostgreSQL) with a LIKE query. For a few thousand records, this works. For millions? It grinds to a halt. You could add full-text search capabilities to Postgres, but that adds complexity.

Another common solution is to use a dedicated search engine like Elasticsearch. While incredibly powerful, it's also a complex piece of infrastructure to manage.

Our approach finds a sweet spot. We use Redis, a tool many developers already know and love for caching, but we unlock its search capabilities. By using the RediSearch module, we get the performance of a dedicated search engine with the simplicity and low overhead of Redis. We store our data in-memory, indexed for lightning-fast lookups, making it perfect for this kind of heavy-read workload.

Prerequisites & Setup

Let's get our environment ready.

1. Run Redis Stack with Docker

Redis Stack includes the RediSearch module we need. It's the simplest way to get started.

```bash
docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
```

This command starts a Redis container with RediSearch and exposes the Redis port (6379) and the RedisInsight GUI port (8001). You can now connect to localhost:8001 in your browser to get a visual look at your data.

2. Set Up Your Go Project

Let's create our project directory and initialize a Go module.

```bash
mkdir go-redis-search
cd go-redis-search
go mod init github.com/your-username/go-redis-search
```

3. Install Go Dependencies

We'll use the official go-redis client, which has excellent support for Redis modules, and Gin for a lightweight HTTP router.

```bash
go get github.com/redis/go-redis/v9
go get github.com/gin-gonic/gin
```

Our setup is complete! Let's start building. ✨

Step 1: Data Modeling and Seeding

First, we need data. We'll define a FoodItem struct in Go and then write a script to generate a large dataset and load it into Redis.

What we're doing

We'll store our food data as JSON documents in Redis. JSON is flexible and well-supported by RediSearch. Each food item will have a unique key like food:1, food:2, etc.
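To see the document shape concretely, here is a standalone sketch that marshals a sample item the same way the seeding code will (the `FoodItem` struct is restated so the snippet compiles on its own; the values are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// FoodItem mirrors the struct we define in main.go.
type FoodItem struct {
	Name     string  `json:"name"`
	Brand    string  `json:"brand"`
	Calories float64 `json:"calories"`
	Protein  float64 `json:"protein"`
	Fat      float64 `json:"fat"`
	Carbs    float64 `json:"carbs"`
}

// toJSON renders a FoodItem as the JSON document JSON.SET will store.
func toJSON(item FoodItem) string {
	jsonBytes, _ := json.Marshal(item) // a struct of strings and floats cannot fail to marshal
	return string(jsonBytes)
}

func main() {
	// The document that would live at a key like food:1.
	fmt.Println(toJSON(FoodItem{Name: "HealthyCo Yogurt", Brand: "HealthyCo", Calories: 120, Protein: 10, Fat: 2, Carbs: 15}))
}
```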

Implementation

Create a file named main.go:

```go
// main.go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"math/rand"

	"github.com/redis/go-redis/v9"
)

var (
	ctx = context.Background()
	rdb *redis.Client
)

// FoodItem represents the structure of our nutrition data
type FoodItem struct {
	Name     string  `json:"name"`
	Brand    string  `json:"brand"`
	Calories float64 `json:"calories"`
	Protein  float64 `json:"protein"`
	Fat      float64 `json:"fat"`
	Carbs    float64 `json:"carbs"`
}

func main() {
	// Connect to Redis
	rdb = redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})

	_, err := rdb.Ping(ctx).Result()
	if err != nil {
		log.Fatalf("Could not connect to Redis: %v", err)
	}
	log.Println("Connected to Redis!")

	// Seed data and create search index
	seedDataAndCreateIndex()

	// Setup Gin router and API endpoints (we'll add this in Step 3)
}

func seedDataAndCreateIndex() {
	// Check if data is already seeded
	count, err := rdb.Exists(ctx, "food:1").Result()
	if err != nil {
		log.Fatalf("Error checking for existing data: %v", err)
	}
	if count > 0 {
		log.Println("Data already seeded. Skipping seeding.")
		return
	}

	log.Println("Seeding data... (this might take a moment)")

	// Sample data for generation
	brands := []string{"HealthyCo", "NutriFoods", "FitBites", "Organics"}
	names := []string{"Yogurt", "Chicken Breast", "Almonds", "Oats", "Apple"}
	totalItems := 1_000_000 // Let's create a million items!

	// Use a pipeline for mass insertion for better performance
	pipe := rdb.Pipeline()
	for i := 1; i <= totalItems; i++ {
		item := FoodItem{
			Name:     fmt.Sprintf("%s %s", brands[rand.Intn(len(brands))], names[rand.Intn(len(names))]),
			Brand:    brands[rand.Intn(len(brands))],
			Calories: float64(rand.Intn(500) + 50),
			Protein:  float64(rand.Intn(50)),
			Fat:      float64(rand.Intn(30)),
			Carbs:    float64(rand.Intn(100)),
		}

		// Marshal the struct to JSON (a struct of strings and floats cannot fail here)
		jsonBytes, _ := json.Marshal(item)
		key := fmt.Sprintf("food:%d", i)

		// Add the JSON.SET command to the pipeline
		pipe.JSONSet(ctx, key, "$", string(jsonBytes))
	}

	// Execute the pipeline
	_, err = pipe.Exec(ctx)
	if err != nil {
		log.Fatalf("Failed to seed data: %v", err)
	}
	log.Printf("Successfully seeded %d items.\n", totalItems)

	// Create RediSearch Index (more on this in the next step)
}
```

How it works

  1. We define our FoodItem struct with JSON tags for serialization.
  2. In seedDataAndCreateIndex, we first check if the data exists to avoid re-seeding every time we start the app.
  3. We use a pipeline to batch thousands of JSON.SET commands into a single round-trip to the server. This is dramatically faster than sending one command at a time.
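One practical caveat: a single pipeline buffers all one million commands in client memory before `Exec` sends them. For very large seeds you would typically flush in fixed-size batches instead. The pure helper below (a hypothetical `batchRanges`, not part of the article's code) sketches just the batch bookkeeping, with no Redis calls:

```go
package main

import "fmt"

// batchRanges splits the ID range [1, total] into inclusive [start, end]
// pairs of at most size IDs each. Each pair would be seeded with one
// pipeline followed by a single Exec, capping client-side memory.
func batchRanges(total, size int) [][2]int {
	var batches [][2]int
	for start := 1; start <= total; start += size {
		end := start + size - 1
		if end > total {
			end = total
		}
		batches = append(batches, [2]int{start, end})
	}
	return batches
}

func main() {
	// 1,000,000 items in batches of 10,000 -> 100 pipeline round-trips.
	fmt.Println(len(batchRanges(1_000_000, 10_000)))
}
```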

Step 2: Indexing Data with RediSearch

Now that our JSON data is in Redis, we need to make it searchable. We'll create a search index that tells RediSearch which fields to pay attention to.

What we're doing

We'll use the FT.CREATE command to define a schema for our index. We'll index the name and brand fields as TEXT for full-text search and the numeric fields for potential range queries.

Implementation

Add the following code at the end of the seedDataAndCreateIndex function in main.go:

```go
// main.go (inside seedDataAndCreateIndex function)

// ... after seeding data

log.Println("Creating search index...")

// Create the index over our JSON documents. The $.name style paths are
// JSONPath expressions; As gives each field a short alias for queries.
err = rdb.FTCreate(ctx, "idx:foods",
	&redis.FTCreateOptions{
		OnJSON: true,
		Prefix: []interface{}{"food:"},
	},
	&redis.FieldSchema{FieldName: "$.name", As: "name", FieldType: redis.SearchFieldTypeText, Weight: 5},
	&redis.FieldSchema{FieldName: "$.brand", As: "brand", FieldType: redis.SearchFieldTypeText},
	&redis.FieldSchema{FieldName: "$.calories", As: "calories", FieldType: redis.SearchFieldTypeNumeric},
	&redis.FieldSchema{FieldName: "$.protein", As: "protein", FieldType: redis.SearchFieldTypeNumeric},
	&redis.FieldSchema{FieldName: "$.fat", As: "fat", FieldType: redis.SearchFieldTypeNumeric},
	&redis.FieldSchema{FieldName: "$.carbs", As: "carbs", FieldType: redis.SearchFieldTypeNumeric},
).Err()

// We ignore the "Index already exists" error on restarts
if err != nil && err.Error() != "Index already exists" {
	log.Fatalf("Failed to create index: %v", err)
}

if err == nil {
	log.Println("Search index created successfully.")
} else {
	log.Println("Search index already exists.")
}
```

How it works

  • FTCreate: This command creates a new search index named idx:foods. The OnJSON option tells RediSearch we are indexing JSON documents rather than hashes.
  • Prefix: We tell the index to only consider keys that start with food:. This isolates our food data.
  • Field schemas: Here we define the fields to index.
    • $.name and $.brand are JSONPath expressions pointing to the fields in our JSON documents. We index them as TEXT. We also give the name field a higher Weight to make matches in the name more relevant in search results.
    • We also index the numeric fields, which would allow us to run queries like "find all foods with less than 100 calories."
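RediSearch writes such numeric conditions as `@field:[min max]`. A tiny helper (illustrative only, not part of the article's API) shows how you might build those filter clauses:

```go
package main

import "fmt"

// numericRange builds a RediSearch numeric filter clause such as
// "@calories:[0 100]". min and max are inclusive bounds.
func numericRange(field string, min, max float64) string {
	return fmt.Sprintf("@%s:[%g %g]", field, min, max)
}

func main() {
	// Combine with a text query, e.g. "@name|brand:(yogurt*) @calories:[0 100]".
	fmt.Println(numericRange("calories", 0, 100))
}
```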

Now, run your application once to seed the data and create the index.

```bash
go run main.go
# Output should be:
# Connected to Redis!
# Seeding data... (this might take a moment)
# Successfully seeded 1000000 items.
# Creating search index...
# Search index created successfully.
```

Step 3: Building the Search API Endpoint

With our data indexed, we can now build the API to query it.

What we're doing

We'll use the Gin web framework to create a simple /search endpoint that accepts a query parameter q. This endpoint will use RediSearch to find matching food items.

Implementation

Update your main function and add a new handler function.

```go
// main.go

// ... (keep the existing code, and make sure "github.com/gin-gonic/gin"
// is in your imports)

func main() {
	// Connect to Redis
	rdb = redis.NewClient(&redis.Options{
		Addr: "localhost:6379",
	})

	_, err := rdb.Ping(ctx).Result()
	if err != nil {
		log.Fatalf("Could not connect to Redis: %v", err)
	}
	log.Println("Connected to Redis!")

	// Seed data and create search index
	seedDataAndCreateIndex()

	// Setup Gin router
	router := gin.Default()
	router.GET("/search", searchHandler)

	log.Println("Starting server on :8080")
	router.Run(":8080")
}

func searchHandler(c *gin.Context) {
	query := c.Query("q")
	if query == "" {
		c.JSON(400, gin.H{"error": "Query parameter 'q' is required"})
		return
	}

	// Format the query for RediSearch: prefix-match the user's text
	// against the name OR brand field
	searchQuery := fmt.Sprintf("@name|brand:(%s*)", query)

	// Perform the search, limiting to 10 results
	res, err := rdb.FTSearchWithArgs(ctx, "idx:foods", searchQuery, &redis.FTSearchOptions{
		LimitOffset: 0,
		Limit:       10,
	}).Result()

	if err != nil {
		c.JSON(500, gin.H{"error": "Failed to perform search"})
		return
	}

	// Each returned document carries the full JSON payload under its "$" field
	var results []FoodItem
	for _, doc := range res.Docs {
		var item FoodItem
		if err := json.Unmarshal([]byte(doc.Fields["$"]), &item); err == nil {
			results = append(results, item)
		}
	}

	c.JSON(200, results)
}
```

How it works

  1. We set up a Gin router with a GET endpoint at /search.
  2. searchHandler grabs the q query parameter.
  3. @name|brand:(%s*): This is the RediSearch query syntax. It means "find documents where the name OR brand field matches the query text". The trailing * enables prefix searching, so searching for "chick" will find "Chicken". The parentheses keep the field restriction applied to every word of a multi-word query.
  4. rdb.FTSearchWithArgs: This is the key function call. We pass our index name, the query, and the search options.
  5. We limit the results to 10 for performance.
  6. The result's Docs slice contains one Document per hit, each carrying the full JSON payload under its "$" field. We iterate through the hits, unmarshal the JSON back into our FoodItem struct, and build our response.
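One caveat worth noting: the handler interpolates raw user input into the query string, and characters like `-`, `@`, or `(` have special meaning in RediSearch query syntax. A defensive option (a sketch of one approach, not code from the article) is to backslash-escape them before formatting:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeQuery backslash-escapes punctuation that is special in the
// RediSearch query syntax, so user input is treated as literal tokens.
// Spaces are left alone so multi-word queries still split into terms.
func escapeQuery(s string) string {
	var b strings.Builder
	for _, r := range s {
		if strings.ContainsRune(`,.<>{}[]"':;!@#$%^&*()-+=~|/\`, r) {
			b.WriteRune('\\')
		}
		b.WriteRune(r)
	}
	return b.String()
}

func main() {
	// Without escaping, "half-fat" would be parsed as "half" minus "fat".
	fmt.Println(fmt.Sprintf("@name|brand:(%s*)", escapeQuery("half-fat")))
}
```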

Run the app again (go run main.go) and test it!

```bash
# In another terminal
curl "http://localhost:8080/search?q=healthy%20yogurt"
```

You should get a JSON array of matching food items back almost instantly!

Performance Considerations & Caching

Our API is already fast, but we can make it even faster for repeated queries. This is where a classic cache-aside pattern comes in.

The logic is simple:

  1. Before hitting RediSearch, check for the result in a simple Redis key (e.g., cache:healthy yogurt).
  2. If it exists (a cache hit), return the cached data immediately.
  3. If it doesn't exist (a cache miss), query RediSearch, store the result in the cache key with an expiration time (TTL), and then return it.
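One refinement: the raw query becomes part of the cache key, so "Yogurt" and "yogurt " would occupy separate cache entries. Normalizing the key first (an optional tweak, not shown in the handler below) keeps the hit rate up:

```go
package main

import (
	"fmt"
	"strings"
)

// cacheKeyFor normalizes a user query into a stable cache key so that
// trivially different spellings share one cache entry.
func cacheKeyFor(query string) string {
	normalized := strings.ToLower(strings.TrimSpace(query))
	// Collapse runs of internal whitespace into single spaces.
	normalized = strings.Join(strings.Fields(normalized), " ")
	return "cache:" + normalized
}

func main() {
	fmt.Println(cacheKeyFor("  Healthy   Yogurt "))
}
```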

Here's how you can modify searchHandler to implement this:

```go
// main.go (updated searchHandler; make sure "time" is in your imports)

func searchHandler(c *gin.Context) {
	query := c.Query("q")
	if query == "" {
		c.JSON(400, gin.H{"error": "Query parameter 'q' is required"})
		return
	}

	cacheKey := "cache:" + query

	// 1. Check the cache first
	cachedResult, err := rdb.Get(ctx, cacheKey).Result()
	if err == nil {
		// Cache hit!
		var results []FoodItem
		json.Unmarshal([]byte(cachedResult), &results)
		c.JSON(200, results)
		return
	}

	// 2. Cache miss: query RediSearch
	searchQuery := fmt.Sprintf("@name|brand:(%s*)", query)
	res, err := rdb.FTSearchWithArgs(ctx, "idx:foods", searchQuery, &redis.FTSearchOptions{
		Limit: 10,
	}).Result()

	if err != nil {
		c.JSON(500, gin.H{"error": "Failed to perform search"})
		return
	}

	// Same parsing logic as before
	var results []FoodItem
	for _, doc := range res.Docs {
		var item FoodItem
		if err := json.Unmarshal([]byte(doc.Fields["$"]), &item); err == nil {
			results = append(results, item)
		}
	}

	// 3. Store the result in the cache with a TTL
	jsonBytes, _ := json.Marshal(results)
	if err := rdb.Set(ctx, cacheKey, jsonBytes, 5*time.Minute).Err(); err != nil {
		// Log the error but don't fail the request
		log.Printf("Failed to cache result: %v", err)
	}

	c.JSON(200, results)
}
```

Benchmarking

Let's prove it's fast. Using a tool like wrk:

```bash
# -t = threads, -c = connections, -d = duration
wrk -t8 -c100 -d30s "http://localhost:8080/search?q=yogurt"
```

The first run already shows low latency. Run it again once the result is cached and throughput climbs even higher, with latencies lower still, typically well under 10ms. This demonstrates the power of the cache-aside strategy.

Alternative Approaches

  • Elasticsearch/OpenSearch: These are extremely powerful, feature-rich search engines. They are a great choice for complex search needs (e.g., aggregations, complex filtering, relevance tuning). However, they come with higher operational complexity and resource usage compared to Redis.
  • Database Full-Text Search (PostgreSQL, MySQL): Most modern databases have built-in FTS capabilities. This can be a good option if you want to keep your stack simple. Performance might not match an in-memory solution like Redis for very large datasets or high-throughput scenarios.

Conclusion

We've successfully built a high-performance search API that can handle millions of records with incredibly low latency. By combining Go's efficiency with Redis and the RediSearch module, we created a solution that is both powerful and relatively simple to manage.

You now have a robust pattern for any application that needs fast search. You can expand on this by adding more complex queries, pagination, and filtering by numeric fields. The foundation is solid.

Next Steps

  • Implement pagination for the search results.
  • Add filtering by calories, protein, etc.
  • Explore more advanced RediSearch features like fuzzy matching and geo-filtering.
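For the pagination item, RediSearch's LIMIT clause takes an offset and a count (LimitOffset and Limit in go-redis's FTSearchOptions). A tiny helper (illustrative, with a hypothetical name) maps a page number to those values:

```go
package main

import "fmt"

// pageToLimit converts a 1-based page number and page size into the
// offset/count pair expected by RediSearch's LIMIT clause.
func pageToLimit(page, pageSize int) (offset, count int) {
	if page < 1 {
		page = 1 // clamp bad input to the first page
	}
	return (page - 1) * pageSize, pageSize
}

func main() {
	offset, count := pageToLimit(3, 10)
	// Page 3 of 10-item pages starts at offset 20.
	fmt.Println(offset, count)
}
```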


Article Tags

go · redis · performance · backend

WellAlly's core development team, comprised of healthcare professionals, software engineers, and UX designers committed to revolutionizing digital health management.

Expertise

Healthcare Technology · Software Development · User Experience · AI & Machine Learning


© 2024 康心伴 WellAlly · Professional Health Management