|
6 | 6 | "source": [
|
7 | 7 | "# Bioimage Model Zoo Core Example notebook\n",
|
8 | 8 | "\n",
|
9 |
| - "This notebook shows how to interact with the `bioimageio.core` programmatically to explore, load, use, and export content from the [BioImage Model Zoo](https://bioimage.io).\"" |
| 9 | + "This notebook shows how to interact with the `bioimageio.core` programmatically to explore, load, use, and export content from the [BioImage Model Zoo](https://bioimage.io).\n", |
| 10 | + "\n", |
| 11 | + "For a local use of this notebook the minimum package requirements can be found in `bioimageio.core_usage_requirements.txt` located in the same folder as the notebook." |
10 | 12 | ]
|
11 | 13 | },
|
12 | 14 | {
|
|
20 | 22 | "cell_type": "markdown",
|
21 | 23 | "metadata": {},
|
22 | 24 | "source": [
|
23 |
| - "If the notebook is being run on Google Colab, install necessary dependencies" |
| 25 | + "### 0.1. If running on Google Colab, install necessary dependencies" |
24 | 26 | ]
|
25 | 27 | },
|
26 | 28 | {
|
|
32 | 34 | "import os\n",
|
33 | 35 | "\n",
|
34 | 36 | "if os.getenv(\"COLAB_RELEASE_TAG\"):\n",
|
35 |
| - " %pip install bioimageio.core torch onnxruntime" |
| 37 | + " %pip install bioimageio.core==0.6.7 torch==2.3.1 onnxruntime==1.18.0" |
36 | 38 | ]
|
37 | 39 | },
|
38 | 40 | {
|
39 | 41 | "cell_type": "markdown",
|
40 | 42 | "metadata": {},
|
41 | 43 | "source": [
|
42 |
| - "Enable pretty validation errors" |
| 44 | + "### 0.2. Enable pretty_validation_errors\n", |
| 45 | + "\n", |
| 46 | + "This function displays validation errors in a human readable format." |
43 | 47 | ]
|
44 | 48 | },
|
45 | 49 | {
|
|
48 | 52 | "metadata": {},
|
49 | 53 | "outputs": [],
|
50 | 54 | "source": [
|
51 |
| - "# enable pretty validation errors in ipynb\n", |
52 | 55 | "from bioimageio.spec.pretty_validation_errors import (\n",
|
53 | 56 | " enable_pretty_validation_errors_in_ipynb,\n",
|
54 | 57 | ")\n",
|
|
59 | 62 | "cell_type": "markdown",
|
60 | 63 | "metadata": {},
|
61 | 64 | "source": [
|
62 |
| - "Load general dependencies" |
| 65 | + "### 0.3. Load general dependencies" |
63 | 66 | ]
|
64 | 67 | },
|
65 | 68 | {
|
|
70 | 73 | "source": [
|
71 | 74 | "# Load general dependencies\n",
|
72 | 75 | "from imageio.v2 import imread\n",
|
73 |
| - "from pprint import pprint\n", |
74 | 76 | "from bioimageio.spec.utils import download\n",
|
| 77 | + "from pprint import pprint\n", |
75 | 78 | "import matplotlib.pyplot as plt\n",
|
76 | 79 | "import numpy as np\n",
|
77 | 80 | "\n",
|
78 |
| - "# function to display input and prediction output images\n", |
| 81 | + "# Function to display input and prediction output images\n", |
79 | 82 | "def show_images(sample_tensor, prediction_tensor):\n",
|
80 | 83 | " input_array = sample_tensor.members['input0'].data\n",
|
| 84 | + " \n", |
81 | 85 | " # Check for the number of channels to enable display\n",
|
82 | 86 | " input_array = np.squeeze(input_array)\n",
|
83 | 87 | " if len(input_array.shape)>2:\n",
|
84 | 88 | " input_array = input_array[0]\n",
|
85 | 89 | "\n",
|
86 | 90 | " output_array = prediction_tensor.members['output0'].data\n",
|
| 91 | + " \n", |
| 92 | + " # Check for the number of channels to enable display\n", |
87 | 93 | " output_array = np.squeeze(output_array)\n",
|
88 | 94 | " if len(output_array.shape)>2:\n",
|
89 | 95 | " output_array = output_array[0]\n",
|
90 | 96 | "\n",
|
91 | 97 | " plt.figure()\n",
|
92 |
| - " ax1 =plt.subplot(1,2,1)\n", |
| 98 | + " ax1 = plt.subplot(1,2,1)\n", |
93 | 99 | " ax1.set_title(\"Input\")\n",
|
94 | 100 | " ax1.axis('off')\n",
|
95 | 101 | " plt.imshow(input_array)\n",
|
|
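The channel handling inside `show_images` above can be sketched as a standalone helper. This is a minimal sketch in plain NumPy; the function name `to_displayable` is made up for illustration and is not part of `bioimageio.core`:

```python
import numpy as np

def to_displayable(array):
    """Reduce an array to 2D for display, mirroring the squeeze logic above.

    Singleton batch/channel axes are dropped; if more than two axes remain,
    only the first channel is kept (a simplification for visualization).
    """
    array = np.squeeze(array)  # e.g. (1, 3, 64, 64) -> (3, 64, 64)
    if array.ndim > 2:
        array = array[0]       # keep only the first channel of a multi-channel image
    return array

image = to_displayable(np.zeros((1, 3, 64, 64)))
print(image.shape)  # -> (64, 64)
```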
134 | 140 | "cell_type": "markdown",
|
135 | 141 | "metadata": {},
|
136 | 142 | "source": [
|
137 |
| - "bioimage.io resources may be identified via their bioimage.io ID, e.g. \"affable-shark\" or the [DOI](https://doi.org/) of their [Zenodo](https://zenodo.org/) backup.\n", |
| 143 | + "`bioimage.io` resources may be identified via their bioimage.io __ID__, e.g. \"affable-shark\" or the [__DOI__](https://doi.org/) of their [__Zenodo__](https://zenodo.org/) backup.\n", |
138 | 144 | "\n",
|
139 |
| - "Both of these options may be version specific (\"affable-shark\" or a version specific [Zenodo](https://zenodo.org/) backup [DOI](https://doi.org/)).\n", |
| 145 | + "Both of these options may be version specific (\"affable-shark/1\" or a version specific [__Zenodo__](https://zenodo.org/) backup [__DOI__](https://doi.org/)).\n", |
140 | 146 | "\n",
|
141 |
| - "Alternativly any RDF source may be loaded by providing a local path or URL." |
| 147 | + "Alternatively, any rdf.yaml source, either as a single file or inside a .zip archive, may be loaded by providing its __local path__ or __URL__." |
142 | 148 | ]
|
143 | 149 | },
|
144 | 150 | {
|
|
147 | 153 | "metadata": {},
|
148 | 154 | "outputs": [],
|
149 | 155 | "source": [
|
150 |
| - "BMZ_MODEL_ID = \"\" #\"affable-shark\"\n", |
| 156 | + "BMZ_MODEL_ID = \"\" #\"affable-shark\"\n", |
151 | 157 | "BMZ_MODEL_DOI = \"\" #\"10.5281/zenodo.6287342\"\n",
|
152 | 158 | "BMZ_MODEL_URL = \"https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/affable-shark/draft/files/rdf.yaml\""
|
153 | 159 | ]
|
|
156 | 162 | "cell_type": "markdown",
|
157 | 163 | "metadata": {},
|
158 | 164 | "source": [
|
159 |
| - "load_description is a function of the `bioimageio.spec` package, but as it is a sub-package of `bioimageio.core` it can be called as `bioimageio.core.load_description`.\n", |
| 165 | + "`load_description` is a function of the `bioimageio.spec` package, but as `bioimageio.spec` is a sub-package of `bioimageio.core`, it can also be called as `bioimageio.core.load_description`.\n", |
| 166 | + "\n", |
160 | 167 | "To learn more about the functionalities of the `bioimageio.spec` package, see the [bioimageio.spec package example notebook](https://github.com/bioimage-io/spec-bioimage-io/blob/main/example/load_model_and_create_your_own.ipynb), also available as a [Google Colab](https://colab.research.google.com/github/bioimage-io/spec-bioimage-io/blob/main/example/load_model_and_create_your_own.ipynb) notebook."
|
161 | 168 | ]
|
162 | 169 | },
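As a hedged sketch of the step described above (assuming the pinned package versions from section 0.1 and a working network connection; the guard lets the snippet degrade gracefully where `bioimageio.core` is not installed):

```python
# Sketch: load a resource description by its bioimage.io ID.
# "affable-shark" is the example ID used throughout this notebook.
try:
    from bioimageio.core import load_description
    model = load_description("affable-shark")  # fetches and parses the rdf.yaml
    status = "loaded"
except Exception:  # package not installed, or no network access
    model, status = None, "unavailable"

print(status)
```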
|
|
169 | 176 | "name": "stderr",
|
170 | 177 | "output_type": "stream",
|
171 | 178 | "text": [
|
172 |
| - "computing SHA256 of 2717bc821b53e7554f84f82d494f5019-zero_mean_unit_variance.ijm (result: 767f2c3a50e36365c30b9e46e57fcf82e606d337e8a48d4a2440dc512813d186): 100%|██████████| 1/1 [00:00<00:00, 1219.27it/s]\n", |
173 |
| - "computing SHA256 of 0fc5a081dd022def1829f39f58b667b5-test_input_0.npy (result: c29bd6e16e3f7856217b407ba948222b1c2a0da41922a0f79297e25588614fe2): 100%|██████████| 3/3 [00:00<00:00, 3134.76it/s]\n", |
174 |
| - "computing SHA256 of 512ac1bb6de19fba42ae180732aa2f74-sample_input_0.tif (result: a24b3c708b6ca6825494eb7c5a4d221335fb3eef5eb9d03f4108907cdaad2bf9): 100%|██████████| 1/1 [00:00<00:00, 868.03it/s] \n", |
175 |
| - "computing SHA256 of 6908a856d9b4bab0b2bacbceb17c9142-test_output_0.npy (result: 510181f38930e59e4fd8ecc03d6ea7c980eb6609759655f2d4a41fe36108d5f5): 100%|██████████| 5/5 [00:00<00:00, 5011.12it/s]\n", |
176 |
| - "computing SHA256 of 011c6c06668d71d113e21511f83b6586-sample_output_0.tif (result: e8f99aabe8405427f515eba23a49f58ba50302f57d1fdfd07026e1984f836c5e): 100%|██████████| 5/5 [00:00<00:00, 5240.26it/s]\n", |
177 |
| - "computing SHA256 of 112365584e54b0efe14453dac6c454d4-weights.onnx (result: df913b85947f5132bcdaf81d91af0963f60d44f4caf8a4fec672d96a2f327b44): 100%|██████████| 884/884 [00:00<00:00, 4757.85it/s]\n", |
178 |
| - "computing SHA256 of 136455aa974fc10d712b23af9f7d6329-unet.py (result: 7f5b15948e8e2c91f78dcff34fbf30af517073e91ba487f3edb982b948d099b3): 100%|██████████| 1/1 [00:00<00:00, 1234.34it/s]\n", |
179 |
| - "computing SHA256 of 35c39b9c4c5b926f62831f664f775d65-environment.yaml (result: e79043966078d1375f470dd4173eda70d1db66f70ceb568cf62a4fdc50d95c7f): 100%|██████████| 1/1 [00:00<00:00, 1949.93it/s]\n", |
180 |
| - "computing SHA256 of 377a31b4ba5b547fc064a1d4982a377b-weights.pt (result: 608f52cd7f5119f7a7b8272395b0c169714e8be34536eaf159820f72a1d6a5b7): 100%|██████████| 884/884 [00:00<00:00, 14829.93it/s]\n", |
181 |
| - "computing SHA256 of 921b32437eee259df0fded18e52e7c01-weights-torchscript.pt (result: 8410950508655a300793b389c815dc30b1334062fc1dadb1e15e55a93cbb99a0): 100%|██████████| 885/885 [00:00<00:00, 16039.65it/s]" |
| 179 | + "computing SHA256 of 2717bc821b53e7554f84f82d494f5019-zero_mean_unit_variance.ijm (result: 767f2c3a50e36365c30b9e46e57fcf82e606d337e8a48d4a2440dc512813d186): 100%|██████████| 1/1 [00:00<00:00, 1675.04it/s]\n", |
| 180 | + "computing SHA256 of 0fc5a081dd022def1829f39f58b667b5-test_input_0.npy (result: c29bd6e16e3f7856217b407ba948222b1c2a0da41922a0f79297e25588614fe2): 100%|██████████| 3/3 [00:00<00:00, 5748.25it/s]\n", |
| 181 | + "computing SHA256 of 512ac1bb6de19fba42ae180732aa2f74-sample_input_0.tif (result: a24b3c708b6ca6825494eb7c5a4d221335fb3eef5eb9d03f4108907cdaad2bf9): 100%|██████████| 1/1 [00:00<00:00, 2359.00it/s]\n", |
| 182 | + "computing SHA256 of 6908a856d9b4bab0b2bacbceb17c9142-test_output_0.npy (result: 510181f38930e59e4fd8ecc03d6ea7c980eb6609759655f2d4a41fe36108d5f5): 100%|██████████| 5/5 [00:00<00:00, 7302.06it/s] \n", |
| 183 | + "computing SHA256 of 011c6c06668d71d113e21511f83b6586-sample_output_0.tif (result: e8f99aabe8405427f515eba23a49f58ba50302f57d1fdfd07026e1984f836c5e): 100%|██████████| 5/5 [00:00<00:00, 7319.90it/s] \n", |
| 184 | + "computing SHA256 of 112365584e54b0efe14453dac6c454d4-weights.onnx (result: df913b85947f5132bcdaf81d91af0963f60d44f4caf8a4fec672d96a2f327b44): 100%|██████████| 884/884 [00:00<00:00, 16526.77it/s]\n", |
| 185 | + "computing SHA256 of 136455aa974fc10d712b23af9f7d6329-unet.py (result: 7f5b15948e8e2c91f78dcff34fbf30af517073e91ba487f3edb982b948d099b3): 100%|██████████| 1/1 [00:00<00:00, 1776.49it/s]\n", |
| 186 | + "computing SHA256 of 35c39b9c4c5b926f62831f664f775d65-environment.yaml (result: e79043966078d1375f470dd4173eda70d1db66f70ceb568cf62a4fdc50d95c7f): 100%|██████████| 1/1 [00:00<00:00, 2462.89it/s]\n", |
| 187 | + "computing SHA256 of 377a31b4ba5b547fc064a1d4982a377b-weights.pt (result: 608f52cd7f5119f7a7b8272395b0c169714e8be34536eaf159820f72a1d6a5b7): 100%|██████████| 884/884 [00:00<00:00, 16446.86it/s]\n", |
| 188 | + "computing SHA256 of 921b32437eee259df0fded18e52e7c01-weights-torchscript.pt (result: 8410950508655a300793b389c815dc30b1334062fc1dadb1e15e55a93cbb99a0): 100%|██████████| 885/885 [00:00<00:00, 15082.58it/s]" |
182 | 189 | ]
|
183 | 190 | },
|
184 | 191 | {
|
|
232 | 239 | "cell_type": "markdown",
|
233 | 240 | "metadata": {},
|
234 | 241 | "source": [
|
235 |
| - "### 1.3 Inspect the model metadata" |
| 242 | + "### 1.3 Inspect the model metadata\n", |
| 243 | + "\n", |
| 244 | + "Model metadata includes author names, affiliations, license, and documentation." |
236 | 245 | ]
|
237 | 246 | },
|
238 | 247 | {
|
|
408 | 417 | "\n",
|
409 | 418 | "This test should be run before using the model to ensure that it works properly.\n",
|
410 | 419 | "\n",
|
411 |
| - "`bioimageio.core.test_model` returns a dictionary with 'status'='passed'/'failed' and other detailed information.\n", |
| 420 | + "----\n", |
412 | 421 | "\n",
|
413 |
| - "A model description is validated with our format specification. \n", |
414 |
| - "To inspect the corresponding validation summary access the `validation_summary` attribute.\n", |
| 422 | + "`bioimageio.core.test_model` returns a validation summary with 'status'='passed'/'failed' and other detailed information that can be inspected by calling `.display()` on it.\n", |
415 | 423 | "\n",
|
416 | 424 | "The validation summary will indicate:\n",
|
417 |
| - "- the version of the `bioimageio.spec` library used to run the validation\n", |
| 425 | + "- the versions of the `bioimageio.spec` and `bioimageio.core` libraries used to run the validation\n", |
418 | 426 | "- the status of several validation steps\n",
|
419 | 427 | " - ✔️: Success\n",
|
420 | 428 | " - 🔍: information about the validation context\n",
|
421 | 429 | " - ⚠: Warning\n",
|
422 |
| - " - ❌: Error\n" |
| 430 | + " - ❌: Error" |
423 | 431 | ]
|
424 | 432 | },
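The self-test described above might be invoked as follows. This is a sketch under the same assumptions as before (package installed, model downloadable); otherwise the snippet simply reports that the test was skipped:

```python
# Sketch: run the bioimage.io self-test on a model and inspect the summary.
try:
    from bioimageio.core import load_description, test_model
    summary = test_model(load_description("affable-shark"))
    result = summary.status  # 'passed' or 'failed'
    # summary.display() would render the full validation summary in a notebook
except Exception:  # bioimageio.core unavailable, or download failed
    result = "skipped"

print(result)
```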
|
425 | 433 | {
|
|
431 | 439 | "name": "stderr",
|
432 | 440 | "output_type": "stream",
|
433 | 441 | "text": [
|
434 |
| - "\u001b[32m2024-06-12 19:00:31.827\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mbioimageio.core._resource_tests\u001b[0m:\u001b[36m_test_model_inference\u001b[0m:\u001b[36m122\u001b[0m - \u001b[1mstarting 'Reproduce test outputs from test inputs'\u001b[0m\n", |
435 |
| - "\u001b[32m2024-06-12 19:00:33.804\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mbioimageio.core._resource_tests\u001b[0m:\u001b[36m_test_model_inference_parametrized\u001b[0m:\u001b[36m192\u001b[0m - \u001b[1mTesting inference with 4 different input tensor sizes\u001b[0m\n" |
| 442 | + "\u001b[32m2024-06-13 12:20:28.532\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mbioimageio.core._resource_tests\u001b[0m:\u001b[36m_test_model_inference\u001b[0m:\u001b[36m122\u001b[0m - \u001b[1mstarting 'Reproduce test outputs from test inputs'\u001b[0m\n", |
| 443 | + "\u001b[32m2024-06-13 12:20:30.337\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36mbioimageio.core._resource_tests\u001b[0m:\u001b[36m_test_model_inference_parametrized\u001b[0m:\u001b[36m192\u001b[0m - \u001b[1mTesting inference with 4 different input tensor sizes\u001b[0m\n" |
436 | 444 | ]
|
437 | 445 | },
|
438 | 446 | {
|
|
505 | 513 | "cell_type": "markdown",
|
506 | 514 | "metadata": {},
|
507 | 515 | "source": [
|
508 |
| - "`bioimageio.core` implements functionality to run a prediction with models described in the `bioimage.io` format.\n", |
| 516 | + "`bioimageio.core` implements the functionality to run a prediction with models described in the `bioimage.io` format.\n", |
509 | 517 | "\n",
|
510 |
| - "This includes functions to run predictions on `numpy.ndarray`s/`xarray.DataArrays` as input and convenience functions to run predictions for images stored on disc.\n", |
| 518 | + "This includes functions to run predictions on `numpy.ndarray`/`xarray.DataArray` inputs and convenience functions to run predictions for images stored on disk.\n", |
511 | 519 | "\n",
|
512 | 520 | "### 3.1. Load the test image and convert into a tensor"
|
513 | 521 | ]
|
|
584 | 592 | {
|
585 | 593 | "data": {
|
586 | 594 | "text/plain": [
|
587 |
| - "Sample(members={'raw': <bioimageio.core.tensor.Tensor object at 0x15729a010>}, stat=None, id='sample-from-numpy')" |
| 595 | + "Sample(members={'raw': <bioimageio.core.tensor.Tensor object at 0x147024410>}, stat=None, id='sample-from-numpy')" |
588 | 596 | ]
|
589 | 597 | },
|
590 | 598 | "execution_count": 11,
|
|
604 | 612 | "cell_type": "markdown",
|
605 | 613 | "metadata": {},
|
606 | 614 | "source": [
|
607 |
| - "`bioimageio.core` provides a helper function `create_sample_for_model` to automatically create the `Sample` for the given model." |
| 615 | + "`bioimageio.core` provides the helper function `create_sample_for_model` to automatically create the `Sample` for the given model." |
608 | 616 | ]
|
609 | 617 | },
|
610 | 618 | {
|
|
622 | 630 | {
|
623 | 631 | "data": {
|
624 | 632 | "text/plain": [
|
625 |
| - "Sample(members={'input0': <bioimageio.core.tensor.Tensor object at 0x157214ad0>}, stat={}, id='my_demo_sample')" |
| 633 | + "Sample(members={'input0': <bioimageio.core.tensor.Tensor object at 0x16c0e7990>}, stat={}, id='my_demo_sample')" |
626 | 634 | ]
|
627 | 635 | },
|
628 | 636 | "execution_count": 12,
|
|
659 | 667 | {
|
660 | 668 | "data": {
|
661 | 669 | "text/plain": [
|
662 |
| - "Sample(members={'input0': <bioimageio.core.tensor.Tensor object at 0x157244e10>}, stat={}, id='test-input')" |
| 670 | + "Sample(members={'input0': <bioimageio.core.tensor.Tensor object at 0x16de5b8d0>}, stat={}, id='test-input')" |
663 | 671 | ]
|
664 | 672 | },
|
665 | 673 | "execution_count": 13,
|
|
787 | 795 | "### 3.3. Recover input and output tensors as numpy arrays"
|
788 | 796 | ]
|
789 | 797 | },
|
| 798 | + { |
| 799 | + "cell_type": "markdown", |
| 800 | + "metadata": {}, |
| 801 | + "source": [ |
| 802 | + "This example code shows how to recover the image information from the input and output tensors as numpy arrays." |
| 803 | + ] |
| 804 | + }, |
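The recovery loops in the cell below can be condensed into a small helper. This sketch assumes only that `members` behaves like a mapping from tensor names to objects exposing a `.data` array; the `FakeTensor` stand-in and the helper name `members_to_arrays` are made up for illustration:

```python
import numpy as np

class FakeTensor:
    """Stand-in for bioimageio.core.tensor.Tensor: just wraps a .data array."""
    def __init__(self, data):
        self.data = data

def members_to_arrays(members):
    # Convert every tensor in a sample/prediction into a plain numpy array.
    return {name: np.asarray(tensor.data) for name, tensor in members.items()}

members = {"input0": FakeTensor(np.zeros((1, 1, 64, 64)))}
arrays = members_to_arrays(members)
print(arrays["input0"].shape)  # -> (1, 1, 64, 64)
```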
790 | 805 | {
|
791 | 806 | "cell_type": "code",
|
792 |
| - "execution_count": 19, |
| 807 | + "execution_count": 17, |
793 | 808 | "metadata": {},
|
794 | 809 | "outputs": [
|
795 | 810 | {
|
796 | 811 | "data": {
|
797 | 812 | "text/plain": [
|
798 |
| - "<matplotlib.image.AxesImage at 0x16b069dd0>" |
| 813 | + "<matplotlib.image.AxesImage at 0x1763ce410>" |
799 | 814 | ]
|
800 | 815 | },
|
801 |
| - "execution_count": 19, |
| 816 | + "execution_count": 17, |
802 | 817 | "metadata": {},
|
803 | 818 | "output_type": "execute_result"
|
804 | 819 | },
|
|
817 | 832 | "np_input_list = []\n",
|
818 | 833 | "np_output_list = []\n",
|
819 | 834 | "\n",
|
| 835 | + "# Iterate over the tensors in the input sample\n", |
820 | 836 | "for ipt in range(len(sample.members.keys())):\n",
|
821 | 837 | " input_array = sample.members[f\"input{ipt}\"].data\n",
|
822 | 838 | "\n",
|
|
828 | 844 | " np_input_list.append(input_array)\n",
|
829 | 845 | "\n",
|
830 | 846 | "\n",
|
| 847 | + "# Iterate over the tensors in the output prediction\n", |
831 | 848 | "for out in range(len(prediction.members.keys())):\n",
|
832 | 849 | "    output_array = prediction.members[f\"output{out}\"].data\n",
|
833 | 850 | "\n",
|
|