|
7 | 7 | "source": [
|
8 | 8 | "# Image generation with universal control using Flex.2 and OpenVINO\n",
|
9 | 9 | "\n",
|
| 10 | + "<div class=\"alert alert-block alert-danger\"> <b>Important note:</b> This notebook requires Python >= 3.10. Please make sure that your environment fulfills this requirement before running it. </div>\n", |
| 11 | + "\n", |
10 | 12 | "Flex.2 is a flexible text-to-image diffusion model based on the Flux model architecture with built-in support for inpainting and universal control: the model accepts pose, line, and depth inputs.\n",
|
11 | 13 | "\n",
|
| 14 | + "\n", |
| 15 | + "\n", |
12 | 16 | "More details about the model can be found in the [model card](https://huggingface.co/ostris/Flex.2-preview).\n",
|
13 | 17 | "\n",
|
14 | 18 | "In this tutorial we consider how to convert and optimize the Flex.2 model using OpenVINO.\n",
|
15 | 19 | "\n",
|
16 |
| - ">**Note**: Some demonstrated models can require at least 32GB RAM for conversion and running." |
| 20 | + ">**Note**: Some of the demonstrated models may require at least 32 GB of RAM for conversion and inference.\n", |
| 21 | + "#### Table of contents:\n", |
| 22 | + "\n", |
| 23 | + "- [Prerequisites](#Prerequisites)\n", |
| 24 | + "- [Convert model with OpenVINO](#Convert-model-with-OpenVINO)\n", |
| 25 | + " - [Convert model using Optimum Intel](#Convert-model-using-Optimum-Intel)\n", |
| 26 | + " - [Compress model weights](#Compress-model-weights)\n", |
| 27 | + "- [Run OpenVINO model inference](#Run-OpenVINO-model-inference)\n", |
| 28 | + " - [Select inference device](#Select-inference-device)\n", |
| 29 | + "- [Interactive demo](#Interactive-demo)\n", |
| 30 | + "\n", |
| 31 | + "\n", |
| 32 | + "### Installation Instructions\n", |
| 33 | + "\n", |
| 34 | + "This is a self-contained example that relies solely on its own code.\n", |
| 35 | + "\n", |
| 36 | + "We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.\n", |
| 37 | + "For details, please refer to [Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/README.md#-installation-guide).\n", |
| 38 | + "\n", |
| 39 | + "<img referrerpolicy=\"no-referrer-when-downgrade\" src=\"https://static.scarf.sh/a.png?x-pxid=5b5a4db0-7875-4bfb-bdbd-01698b5b1a77&file=../notebooks/flex.2-image-generation/flex.2-image-generation.ipynb\" />\n" |
17 | 40 | ]
|
18 | 41 | },
|
19 | 42 | {
|
20 | 43 | "attachments": {},
|
21 | 44 | "cell_type": "markdown",
|
22 | 45 | "metadata": {},
|
23 | 46 | "source": [
|
24 |
| - "## Prerequisites" |
| 47 | + "## Prerequisites\n", |
| 48 | + "[back to top ⬆️](#Table-of-contents:)" |
25 | 49 | ]
|
26 | 50 | },
|
27 | 51 | {
|
|
106 | 130 | {
|
107 | 131 | "cell_type": "code",
|
108 | 132 | "execution_count": 3,
|
109 |
| - "metadata": {}, |
| 133 | + "metadata": { |
| 134 | + "test_replace": {"ostris/Flex.2-preview": "katuni4ka/tiny-random-flex.2-preview"} |
| 135 | + }, |
110 | 136 | "outputs": [
|
111 | 137 | {
|
112 | 138 | "data": {
|
|
148 | 174 | "# Read more about telemetry collection at https://github.com/openvinotoolkit/openvino_notebooks?tab=readme-ov-file#-telemetry\n",
|
149 | 175 | "from notebook_utils import collect_telemetry\n",
|
150 | 176 | "\n",
|
151 |
| - "collect_telemetry(\"flex2-image-generation.ipynb\")\n", |
| 177 | + "collect_telemetry(\"flex.2-image-generation.ipynb\")\n", |
152 | 178 | "\n",
|
153 | 179 | "\n",
|
154 | 180 | "model_path = model_base_path / \"INT8\" if to_compress.value else model_base_path / \"FP16\"\n",
|
|
173 | 199 | "metadata": {},
|
174 | 200 | "source": [
|
175 | 201 | "## Run OpenVINO model inference\n",
|
| 202 | + "[back to top ⬆️](#Table-of-contents:)\n", |
176 | 203 | "\n",
|
177 | 204 | "Flex.2 is based on the Flux.1 model, but to enable image control and inpainting it uses its own customized pipeline. `ov_flex2_helper.py` contains the `OVFlex2Pipeline` class, an adaptation of the pipeline for use with OpenVINO. It is based on the Optimum Intel inference API and preserves the functional features of the original pipeline.\n",
|
178 | 205 | "\n",
|
179 | 206 | "### Select inference device\n",
|
| 207 | + "[back to top ⬆️](#Table-of-contents:)\n", |
180 | 208 | "\n",
|
181 | 209 | "Select the device from the dropdown list to run inference with OpenVINO."
|
182 | 210 | ]
|
|
344 | 372 | "cell_type": "markdown",
|
345 | 373 | "metadata": {},
|
346 | 374 | "source": [
|
347 |
| - "## Interactive demo" |
| 375 | + "## Interactive demo\n", |
| 376 | + "[back to top ⬆️](#Table-of-contents:)" |
348 | 377 | ]
|
349 | 378 | },
|
350 | 379 | {
|
|
387 | 416 | "pygments_lexer": "ipython3",
|
388 | 417 | "version": "3.11.4"
|
389 | 418 | },
|
| 419 | + "openvino_notebooks": { |
| 420 | + "imageUrl": "https://github.com/user-attachments/assets/6a9ab66a-387a-4538-8625-2bb3a16072b5", |
| 421 | + "tags": { |
| 422 | + "categories": [ |
| 423 | + "Model Demos", |
| 424 | + "AI Trends" |
| 425 | + ], |
| 426 | + "libraries": [], |
| 427 | + "other": [ |
| 428 | + "Stable Diffusion" |
| 429 | + ], |
| 430 | + "tasks": [ |
| 431 | + "Image-to-Image" |
| 432 | + ] |
| 433 | + } |
| 434 | + }, |
390 | 435 | "widgets": {
|
391 | 436 | "application/vnd.jupyter.widget-state+json": {
|
392 | 437 | "state": {
|
|