A Next.js quickstart for generating and editing images with Google Gemini 2.0 Flash. It lets users generate images from text prompts or edit existing images through natural language instructions, maintaining conversation context for iterative refinements. Try the hosted demo on Hugging Face Spaces.
Demo video: demo.mov
Get your GEMINI_API_KEY from Google AI Studio and start building.
How It Works:
- Create Images: Generate images from text prompts using Gemini 2.0 Flash
- Edit Images: Upload an image and provide instructions to modify it
- Conversation History: Maintain context through a conversation with the AI for iterative refinements
- Download Results: Save your generated or edited images
For developers who want to call the Gemini API directly, you can use the Google Generative AI JavaScript SDK:
```javascript
const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

async function generateImage() {
  const contents =
    "Hi, can you create a 3d rendered image of a pig " +
    "with wings and a top hat flying over a happy " +
    "futuristic scifi city with lots of greenery?";

  // Set responseModalities to include "Image" so the model can generate images
  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: {
      responseModalities: ["Text", "Image"],
    },
  });

  try {
    const response = await model.generateContent(contents);
    for (const part of response.response.candidates[0].content.parts) {
      // Based on the part type, either show the text or save the image
      if (part.text) {
        console.log(part.text);
      } else if (part.inlineData) {
        const imageData = part.inlineData.data;
        const buffer = Buffer.from(imageData, "base64");
        fs.writeFileSync("gemini-native-image.png", buffer);
        console.log("Image saved as gemini-native-image.png");
      }
    }
  } catch (error) {
    console.error("Error generating content:", error);
  }
}

generateImage();
```
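The same SDK can also drive the editing flow described under "How It Works", by sending an image alongside the text instruction. This is a minimal sketch rather than the app's actual implementation: the file names (`input.png`, `edited-image.png`) and the prompt are placeholders, and it assumes the experimental model accepts an inline image part next to the instruction:

```javascript
const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

async function editImage() {
  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: { responseModalities: ["Text", "Image"] },
  });

  // Send the edit instruction together with the source image as an inline part
  const imageBase64 = fs.readFileSync("input.png").toString("base64");
  const result = await model.generateContent([
    { text: "Add a wizard hat to the pig in this image." },
    { inlineData: { mimeType: "image/png", data: imageBase64 } },
  ]);

  // Save any image parts the model returns
  for (const part of result.response.candidates[0].content.parts) {
    if (part.text) {
      console.log(part.text);
    } else if (part.inlineData) {
      fs.writeFileSync("edited-image.png", Buffer.from(part.inlineData.data, "base64"));
      console.log("Edited image saved as edited-image.png");
    }
  }
}

editImage().catch(console.error);
```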
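For the iterative, conversation-aware refinements the app offers, the SDK's chat interface keeps earlier turns in context. A sketch under the assumption that the experimental model carries image parts across chat turns (prompts and the `refined-image.png` file name are placeholders):

```javascript
const { GoogleGenerativeAI } = require("@google/generative-ai");
const fs = require("fs");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

async function refineImage() {
  const model = genAI.getGenerativeModel({
    model: "gemini-2.0-flash-exp",
    generationConfig: { responseModalities: ["Text", "Image"] },
  });

  // A chat session accumulates earlier turns (and their image parts) as context
  const chat = model.startChat({ history: [] });

  // First turn: generate an initial image
  await chat.sendMessage("Create a watercolor painting of a lighthouse at dusk.");

  // Second turn: refine the previous result with only a natural language instruction
  const refined = await chat.sendMessage("Make the sky stormier and add seagulls.");

  for (const part of refined.response.candidates[0].content.parts) {
    if (part.inlineData) {
      fs.writeFileSync("refined-image.png", Buffer.from(part.inlineData.data, "base64"));
      console.log("Refined image saved as refined-image.png");
    }
  }
}

refineImage().catch(console.error);
```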
Features:
- 🎨 Text-to-image generation with Gemini 2.0 Flash
- 🖌️ Image editing through natural language instructions
- 💬 Conversation history for context-aware image refinements
- 📱 Responsive UI built with Next.js and shadcn/ui
- 🔄 Seamless workflow between creation and editing modes
- ⚡ Uses the Gemini 2.0 Flash JavaScript SDK
First, set up your environment variables:

```bash
cp .env.example .env
```

Then add your Google AI Studio API key (the GEMINI_API_KEY from above) to the `.env` file:

```
GEMINI_API_KEY=your_google_api_key
```

Finally, install dependencies and run the development server:

```bash
npm install
npm run dev
```
Open http://localhost:3000 in your browser to see the application.
You can also run the application with Docker:

- Build the Docker image:

```bash
docker build -t nextjs-gemini-image-editing .
```

- Run the container with your Google API key:

```bash
docker run -p 3000:3000 -e GEMINI_API_KEY=your_google_api_key nextjs-gemini-image-editing
```

Or using an environment file:

```bash
# Run container with env file
docker run -p 3000:3000 --env-file .env nextjs-gemini-image-editing
```
Open http://localhost:3000 in your browser to see the application.
Built with:
- Next.js - React framework for the web application
- Google Gemini 2.0 Flash - AI model for image generation and editing
- shadcn/ui - Reusable components built using Radix UI and Tailwind CSS
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.