
Feat/Add Multi Layer Perceptron Demo code #18

Merged: 4 commits merged into main on Jun 18, 2024

Conversation

jcabrero (Member):

This adds the multi-layer perceptron code.

EXTRA: Linting and formatting code.

@jcabrero jcabrero force-pushed the feat/add_mlp_demo branch from 1ffffeb to bda7c50 Compare June 14, 2024 12:04
@jcabrero jcabrero requested a review from mathias-nillion June 14, 2024 13:31
@mathias-nillion (Contributor) left a comment:

LGTM! Nice work

# Multi-Layer Perceptron Demo
**This folder was generated using `nada init`**

To execute this tutorial, you may potentially need to install the `requirements.txt` appart from nada-ai:
Contributor:
nit: typo appart -> apart

@@ -0,0 +1,7 @@
name = "text_classification"
Contributor:
nit: name -> multi_layer_perceptron

@@ -0,0 +1,7 @@
scikit-learn~=1.4.2
Contributor:
nit: do we need sklearn, pandas, requests and matplotlib here?

Member Author:
matplotlib is required for the graphs that are plotted in the demo

@@ -116,7 +116,7 @@ def __init__(self) -> None:
self.linear_1 = torch.nn.Linear(4, 2)
self.relu = torch.nn.ReLU()

def forward(self, x: na.NadaArray) -> na.NadaArray:
def forward(self, x: np.ndarray) -> np.ndarray:
Contributor:
nit: these arrays are torch.tensor instead of np.ndarray, no?

Member Author:
agreed
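To make the signature point concrete: the model under review is a tiny MLP, `torch.nn.Linear(4, 2)` followed by `torch.nn.ReLU`, and per the review its `forward` takes and returns `torch.Tensor` values, not `np.ndarray`. A standalone sketch of the same arithmetic in plain numpy (the weights and input below are made up for illustration):

```python
import numpy as np

# Made-up parameters for a Linear(4, 2) layer; in the demo these are
# learned torch parameters, and x / the return value are torch.Tensor.
W = np.array([[ 0.5, -1.0,  0.25, 0.0],
              [-0.5,  2.0, -0.25, 1.0]])  # shape (2, 4)
b = np.array([0.1, -0.1])

def forward(x: np.ndarray) -> np.ndarray:
    z = W @ x + b              # linear_1: 4 inputs -> 2 outputs
    return np.maximum(z, 0.0)  # relu

out = forward(np.array([1.0, 1.0, 1.0, 1.0]))
# out[0] is clipped to 0 by ReLU; out[1] stays positive
```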

requests~=2.31.0
torchvision~=0.18.1
Contributor:
nit: do we need torchvision and pillow here?

Member Author:
agreed

@@ -0,0 +1,497 @@
{
Contributor:
(just gonna comment on first line b/c notebooks x github is annoying)
we're doing inference_result[...] / (2 ** na.get_log_scale()) --> could do rational_to_float instead?
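For context on the suggestion: dividing the raw inference result by `2 ** na.get_log_scale()` decodes a fixed-point ("rational") integer back to a float, which is what a `rational_to_float` helper would wrap. A minimal standalone sketch with no nada-ai dependency (the helper name and the scale value below are illustrative):

```python
def rational_to_float(quantized: int, log_scale: int) -> float:
    """Decode a fixed-point integer back to a float by undoing the scale."""
    return quantized / (2 ** log_scale)

LOG_SCALE = 16  # assumed scale; na.get_log_scale() supplies the real value
raw = round(0.8125 * 2 ** LOG_SCALE)   # encode 0.8125 as fixed-point
decoded = rational_to_float(raw, LOG_SCALE)
# 0.8125 round-trips exactly because it is representable at this scale
```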

Contributor:
Also: that first cell is insane. Can only imagine the pain you had to go through to come up with that fix - my condolences

@@ -0,0 +1,799 @@
{
Contributor:
btw just saw the confusion matrix --> our trained model is basically return 1
we might wanna look into addressing that class imbalance

Member Author:
Agreed. Most medical examples are quite imbalanced. The original AI demo was balanced though. Maybe we could take it from there.
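One common mitigation for the imbalance discussed above is weighting the training loss by inverse class frequency, so the minority class contributes more per sample. A minimal sketch with made-up labels (in the demo this would feed e.g. a weighted loss function):

```python
from collections import Counter

labels = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # illustrative 80/20 imbalance
counts = Counter(labels)
n, k = len(labels), len(counts)

# weight[c] = n / (k * count[c]): rare classes get proportionally
# larger weights, pushing the model away from "always predict 1"
weights = {c: n / (k * counts[c]) for c in counts}
```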

@mathias-nillion (Contributor):

@jcabrero general comment: I think (correct me if I'm wrong?) this is our most demanding example on the network. Has it been tested on testnet?

@jcabrero (Member Author):

> @jcabrero general comment: I think (correct me if i'm wrong?) this is our most demanding example on the network. has it been tested on testnet?

Probably. However, on the Nillion devnet it executes with efficiency similar to the other examples (same execution time).

@jcabrero jcabrero merged commit c6bbf0f into main Jun 18, 2024
4 checks passed
@jcabrero jcabrero deleted the feat/add_mlp_demo branch June 18, 2024 08:39
2 participants