
Gemini 2.0 Flash Image Generation and Editing

Next.js quickstart for generating and editing images with Google Gemini 2.0 Flash. It lets users generate images from text prompts or edit existing images through natural language instructions, maintaining conversation context for iterative refinements. Try out the hosted demo at Hugging Face Spaces.

Demo video: demo.mov

Get your GEMINI_API_KEY here and start building.

How It Works:

  1. Create Images: Generate images from text prompts using Gemini 2.0 Flash
  2. Edit Images: Upload an image and provide instructions to modify it
  3. Conversation History: Maintain context through a conversation with the AI for iterative refinements
  4. Download Results: Save your generated or edited images

Basic request

Developers who want to call the Gemini API directly can use the Google Generative AI JavaScript SDK:

const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

async function generateImage() {
  const contents =
    "Hi, can you create a 3d rendered image of a pig " +
    "with wings and a top hat flying over a happy " +
    "futuristic scifi city with lots of greenery?";

  // Set responseModalities to include "Image" so the model can generate images
  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: {
      responseModalities: ["Text", "Image"]
    }
  });

  try {
    const response = await model.generateContent(contents);
    for (const part of response.response.candidates[0].content.parts) {
      // Based on the part type, either show the text or save the image
      if (part.text) {
        console.log(part.text);
      } else if (part.inlineData) {
        const imageData = part.inlineData.data;
        const buffer = Buffer.from(imageData, "base64");
        fs.writeFileSync("gemini-native-image.png", buffer);
        console.log("Image saved as gemini-native-image.png");
      }
    }
  } catch (error) {
    console.error("Error generating content:", error);
  }
}

// Run the example
generateImage();
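Editing and iterative refinement follow the same request shape. The sketch below is a minimal, unverified example: it assumes the same @google/generative-ai SDK and experimental model, a local input.png, and a hypothetical saveImageParts helper. It starts a chat session, sends an existing image together with an editing instruction, then asks for a follow-up change, letting the chat object carry the conversation history.

const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

async function editImage() {
  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: {
      responseModalities: ["Text", "Image"]
    }
  });

  // A chat session keeps earlier prompts and images as context
  const chat = model.startChat();

  // Send an existing image (base64-encoded) together with an editing instruction
  const imageBase64 = fs.readFileSync("input.png").toString("base64"); // assumed local file
  const first = await chat.sendMessage([
    { inlineData: { data: imageBase64, mimeType: "image/png" } },
    "Add a red top hat to the subject of this image."
  ]);
  saveImageParts(first, "edited-1.png");

  // Iterative refinement: only the new instruction is needed, the history supplies the rest
  const second = await chat.sendMessage("Now make the background a sunset.");
  saveImageParts(second, "edited-2.png");
}

// Hypothetical helper: print text parts and write image parts from a response to disk
function saveImageParts(result, filename) {
  for (const part of result.response.candidates[0].content.parts) {
    if (part.text) {
      console.log(part.text);
    } else if (part.inlineData) {
      fs.writeFileSync(filename, Buffer.from(part.inlineData.data, "base64"));
      console.log(`Image saved as ${filename}`);
    }
  }
}

editImage().catch(console.error);

Whether every chat turn returns image parts depends on the experimental model, so treat this as a sketch of the request shape rather than guaranteed behavior.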

Features

  • 🎨 Text-to-image generation with Gemini 2.0 Flash
  • πŸ–ŒοΈ Image editing through natural language instructions
  • πŸ’¬ Conversation history for context-aware image refinements
  • πŸ“± Responsive UI built with Next.js and shadcn/ui
  • πŸ”„ Seamless workflow between creation and editing modes
  • ⚑ Uses the Google Generative AI JavaScript SDK with Gemini 2.0 Flash

Getting Started

Local Development

First, set up your environment variables:

cp .env.example .env

Add your Google AI Studio API key to the .env file:

Get your GEMINI_API_KEY here.

GEMINI_API_KEY=your_google_api_key

Then, install dependencies and run the development server:

npm install
npm run dev

Open http://localhost:3000 with your browser to see the application.
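The API key is only read on the server. As an illustration of how a Next.js App Router route could wrap the SDK call, here is a minimal sketch; the file path app/api/generate/route.js and the response shape are assumptions for this example, not necessarily what the quickstart ships.

// Hypothetical route handler: app/api/generate/route.js
import { GoogleGenerativeAI } from "@google/generative-ai";

// The key comes from .env on the server, so it is never exposed to the browser
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

export async function POST(request) {
  const { prompt } = await request.json();

  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: { responseModalities: ["Text", "Image"] }
  });

  const result = await model.generateContent(prompt);
  const parts = result.response.candidates[0].content.parts;
  const imagePart = parts.find((p) => p.inlineData);

  // Return the base64 image data (or null if the model only returned text)
  return Response.json({ image: imagePart ? imagePart.inlineData.data : null });
}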

Deployment

Vercel

Deploy with Vercel

Docker

  1. Build the Docker image:

docker build -t nextjs-gemini-image-editing .

  2. Run the container with your Google API key:

docker run -p 3000:3000 -e GEMINI_API_KEY=your_google_api_key nextjs-gemini-image-editing

Or using an environment file:

# Run container with env file
docker run -p 3000:3000 --env-file .env nextjs-gemini-image-editing

Open http://localhost:3000 with your browser to see the application.

Technologies Used

  • Next.js with shadcn/ui for the frontend
  • Google Generative AI JavaScript SDK (Gemini 2.0 Flash) for image generation and editing
  • Docker support for containerized deployment

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
