|
420 | 420 | "metadata": {},
|
421 | 421 | "outputs": [],
|
422 | 422 | "source": [
|
423 |     | - "output_train = pd.read_hdf(\n",
424 |     | - "    os.path.join(output_path, f\"gnn_{task}\", \"output_exporter.hdf5\"), key=\"training\"\n",
425 |     | - ")\n",
426 |     | - "output_test = pd.read_hdf(\n",
427 |     | - "    os.path.join(output_path, f\"gnn_{task}\", \"output_exporter.hdf5\"), key=\"testing\"\n",
428 |     | - ")\n",
| 423 | + "output_train = pd.read_hdf(os.path.join(output_path, f\"gnn_{task}\", \"output_exporter.hdf5\"), key=\"training\")\n", |
| 424 | + "output_test = pd.read_hdf(os.path.join(output_path, f\"gnn_{task}\", \"output_exporter.hdf5\"), key=\"testing\")\n", |
429 | 425 | "output_train.head()"
|
430 | 426 | ]
|
431 | 427 | },
|
|
436 | 432 | "source": [
|
437 | 433 | "The dataframes contain `phase`, `epoch`, `entry`, `output`, `target`, and `loss` columns, and can be easily used to visualize the results.\n",
|
438 | 434 | "\n",
|
439 |     | - "For example, the loss across the epochs can be plotted for the training and the validation sets:\n"
| 435 | + "For classification tasks, the `output` column contains a list with the predicted probability of each class, and each list sums to 1 (for more details, please see the documentation for the [softmax function](https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html)). Note that the order of the classes in the list depends on the `classes` attribute of the DeeprankDataset instances. If `classes` is not specified (as in this example), it defaults to [0, 1].\n",
| 436 | + "\n", |
| 437 | + "The loss across the epochs can be plotted for the training and the validation sets:\n" |
440 | 438 | ]
|
441 | 439 | },
|
442 | 440 | {
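As a hedged sketch of the plotting step described above (not part of the notebook diff), the loss curves could be drawn from `output_train` roughly as follows. The column names `phase`, `epoch`, and `loss` come from the cell text; the exact phase labels `"training"` and `"validation"` are assumptions and may differ in the actual exporter output.

```python
# Hedged sketch: plot mean loss per epoch for the training and validation phases.
# Column names (phase, epoch, loss) come from the description above; the phase
# labels ("training", "validation") are assumptions and may differ.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for phase in ("training", "validation"):
    df_phase = output_train[output_train["phase"] == phase]
    # Average the per-entry loss within each epoch before plotting.
    loss_per_epoch = df_phase.groupby("epoch")["loss"].mean()
    ax.plot(loss_per_epoch.index, loss_per_epoch.values, label=phase)
ax.set_xlabel("epoch")
ax.set_ylabel("loss")
ax.legend()
plt.show()
```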
|
|
671 | 669 | "metadata": {},
|
672 | 670 | "outputs": [],
|
673 | 671 | "source": [
|
674 |     | - "output_train = pd.read_hdf(\n",
675 |     | - "    os.path.join(output_path, f\"cnn_{task}\", \"output_exporter.hdf5\"), key=\"training\"\n",
676 |     | - ")\n",
677 |     | - "output_test = pd.read_hdf(\n",
678 |     | - "    os.path.join(output_path, f\"cnn_{task}\", \"output_exporter.hdf5\"), key=\"testing\"\n",
679 |     | - ")\n",
| 672 | + "output_train = pd.read_hdf(os.path.join(output_path, f\"cnn_{task}\", \"output_exporter.hdf5\"), key=\"training\")\n", |
| 673 | + "output_test = pd.read_hdf(os.path.join(output_path, f\"cnn_{task}\", \"output_exporter.hdf5\"), key=\"testing\")\n", |
680 | 674 | "output_train.head()"
|
681 | 675 | ]
|
682 | 676 | },
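As a further hedged sketch (again not part of the diff), the testing dataframe could be summarized into a simple accuracy figure for a classification task. This assumes the `output` column holds a per-class probability list ordered as [0, 1], that `target` holds the true class label, and that the dataframe contains one row per entry; none of these details are confirmed by the diff itself.

```python
# Hedged sketch: derive predicted classes from the per-class probability lists
# in the `output` column and compare them to `target`. Assumes the class order
# in each list matches the dataset's `classes` attribute ([0, 1] by default)
# and that output_test has one row per entry.
import numpy as np

probs = np.vstack(output_test["output"].to_numpy())   # shape: (n_entries, n_classes)
predicted = probs.argmax(axis=1)
accuracy = (predicted == output_test["target"].to_numpy()).mean()
print(f"test accuracy: {accuracy:.3f}")
```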
|
|
767 | 761 | "name": "python",
|
768 | 762 | "nbconvert_exporter": "python",
|
769 | 763 | "pygments_lexer": "ipython3",
|
770 |     | - "version": "3.10.13"
| 764 | + "version": "3.10.12" |
771 | 765 | },
|
772 | 766 | "orig_nbformat": 4
|
773 | 767 | },
|
|