Commit ce75130

Fix build error + update README

1 parent 39cb4bd commit ce75130

4 files changed: +66 −61 lines changed

README.md

+22 −17

````diff
@@ -2,11 +2,9 @@
 
 Nextjs quickstart for to generating and editing images with Google Gemini 2.0 Flash. It allows users to generate images from text prompts or edit existing images through natural language instructions, maintaining conversation context for iterative refinements. Try out the hosted demo at [Hugging Face Spaces](https://huggingface.co/spaces/philschmid/image-generation-editing).
 
-
 https://github.com/user-attachments/assets/8ffa5ee3-1b06-46a9-8b5e-761edb0e00c3
 
-
-Get your `GEMINI_API_KEY` key [here](https://ai.google.dev/gemini-api/docs/api-key) and start building.
+Get your `GEMINI_API_KEY` key [here](https://ai.google.dev/gemini-api/docs/api-key) and start building.
 
 **How It Works:**
 
@@ -15,7 +13,7 @@ Get your `GEMINI_API_KEY` key [here](https://ai.google.dev/gemini-api/docs/api-k
 3. **Conversation History**: Maintain context through a conversation with the AI for iterative refinements
 4. **Download Results**: Save your generated or edited images
 
-## Basic request
+## Basic request
 
 For developers who want to call the Gemini API directly, you can use the Google Generative AI JavaScript SDK:
 
@@ -26,29 +24,30 @@ const fs = require("fs");
 const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
 
 async function generateImage() {
-  const contents = "Hi, can you create a 3d rendered image of a pig " +
-    "with wings and a top hat flying over a happy " +
-    "futuristic scifi city with lots of greenery?";
+  const contents =
+    "Hi, can you create a 3d rendered image of a pig " +
+    "with wings and a top hat flying over a happy " +
+    "futuristic scifi city with lots of greenery?";
 
-  // Set responseModalities to include "Image" so the model can generate
+  // Set responseModalities to include "Image" so the model can generate
   const model = genAI.getGenerativeModel({
     model: "gemini-2.0-flash-exp",
     generationConfig: {
-      responseModalities: ['Text', 'Image']
-    },
+      responseModalities: ["Text", "Image"]
+    }
   });
 
   try {
     const response = await model.generateContent(contents);
-    for (const part of response.response.candidates[0].content.parts) {
+    for (const part of response.response.candidates[0].content.parts) {
       // Based on the part type, either show the text or save the image
       if (part.text) {
         console.log(part.text);
       } else if (part.inlineData) {
         const imageData = part.inlineData.data;
-        const buffer = Buffer.from(imageData, 'base64');
-        fs.writeFileSync('gemini-native-image.png', buffer);
-        console.log('Image saved as gemini-native-image.png');
+        const buffer = Buffer.from(imageData, "base64");
+        fs.writeFileSync("gemini-native-image.png", buffer);
+        console.log("Image saved as gemini-native-image.png");
       }
     }
   } catch (error) {
@@ -76,7 +75,7 @@ First, set up your environment variables:
 cp .env.example .env
 ```
 
-Add your Google AI Studio API key to the `.env` file:
+Add your Google AI Studio API key to the `.env` file:
 
 _Get your `GEMINI_API_KEY` key [here](https://ai.google.dev/gemini-api/docs/api-key)._
 
@@ -93,7 +92,13 @@ npm run dev
 
 Open [http://localhost:3000](http://localhost:3000) with your browser to see the application.
 
-### Docker Deployment
+## Deployment
+
+### Vercel
+
+[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fgoogle-gemini%2Fgemini-image-editing-nextjs-quickstart&env=GEMINI_API_KEY&envDescription=Create%20an%20account%20and%20generate%20an%20API%20key&envLink=https%3A%2F%2Faistudio.google.com%2Fapp%2Fu%2F0%2Fapikey&demo-url=https%3A%2F%2Fhuggingface.co%2Fspaces%2Fphilschmid%2Fimage-generation-editing)
+
+### Docker
 
 1. Build the Docker image:
 
@@ -120,7 +125,7 @@ Open [http://localhost:3000](http://localhost:3000) with your browser to see the
 
 - [Next.js](https://nextjs.org/) - React framework for the web application
 - [Google Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/) - AI model for image generation and editing
-- [shadcn/ui](https://ui.shadcn.com/) - Re-usable components built using Radix UI and Tailwind CSS
+- [shadcn/ui](https://ui.shadcn.com/) - Re-usable components built using Radix UI and Tailwind CSS
 
 ## License
 
````
components/ImageUpload.tsx

+3 −3

````diff
@@ -4,7 +4,7 @@ import { useCallback, useState, useEffect } from "react";
 import { useDropzone } from "react-dropzone";
 import { Button } from "./ui/button";
 import { Upload as UploadIcon, Image as ImageIcon, X } from "lucide-react";
-import Image from "next/image";
+
 interface ImageUploadProps {
   onImageSelect: (imageData: string) => void;
   currentImage: string | null;
@@ -58,10 +58,10 @@ export function ImageUpload({ onImageSelect, currentImage }: ImageUploadProps) {
     onDrop,
     accept: {
       "image/png": [".png"],
-      "image/jpeg": [".jpg", ".jpeg"],
+      "image/jpeg": [".jpg", ".jpeg"]
     },
     maxSize: 10 * 1024 * 1024, // 10MB
-    multiple: false,
+    multiple: false
   });
 
   const handleRemove = () => {
````
package-lock.json

+40 −40

Some generated files are not rendered by default.

package.json

+1 −1

````diff
@@ -17,7 +17,7 @@
     "class-variance-authority": "^0.7.1",
     "clsx": "^2.1.1",
     "lucide-react": "^0.475.0",
-    "next": "15.1.0",
+    "next": "^15.2.2",
     "next-themes": "^0.4.4",
     "react": "^19.0.0",
     "react-dom": "^19.0.0",
````
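This dependency change swaps an exact pin (`15.1.0`) for a caret range (`^15.2.2`), which accepts later minor and patch releases within major version 15. A simplified sketch of that rule — `satisfiesCaret` is illustrative only; real resolution uses npm's semver library and also handles prerelease tags:

```javascript
// Simplified caret-range check for "x.y.z" versions with major >= 1.
function satisfiesCaret(version, range) {
  const [v, r] = [version, range.slice(1)].map(s => s.split(".").map(Number));
  if (v[0] !== r[0]) return false;        // major must match
  if (v[1] !== r[1]) return v[1] > r[1];  // a higher minor is allowed
  return v[2] >= r[2];                    // same minor: patch must not regress
}

console.log(satisfiesCaret("15.2.4", "^15.2.2")); // → true
console.log(satisfiesCaret("15.1.0", "^15.2.2")); // → false: the old pin is outside the new range
console.log(satisfiesCaret("16.0.0", "^15.2.2")); // → false: major bumps are excluded
```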
