Feat/Add Multi Layer Perceptron Demo code #18
Conversation
1ffffeb to bda7c50
LGTM! Nice work
# Multi-Layer Perceptron Demo
**This folder was generated using `nada init`**

To execute this tutorial, you may potentially need to install the `requirements.txt` appart from nada-ai:
nit: typo appart -> apart
@@ -0,0 +1,7 @@
name = "text_classification"
nit: name -> multi_layer_perceptron
@@ -0,0 +1,7 @@
scikit-learn~=1.4.2
nit: do we need sklearn, pandas, requests and matplotlib here?
matplotlib is required for the graphs that are plotted in the demo
@@ -116,7 +116,7 @@ def __init__(self) -> None:
self.linear_1 = torch.nn.Linear(4, 2)
self.relu = torch.nn.ReLU()

def forward(self, x: na.NadaArray) -> na.NadaArray:
def forward(self, x: np.ndarray) -> np.ndarray:
nit: these arrays are `torch.Tensor` instead of `np.ndarray`, no?
agreed
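The fix the reviewers agreed on could look like the sketch below: the values flowing through `torch.nn` modules are tensors, so `forward` should be annotated with `torch.Tensor` rather than `np.ndarray`. The class name here is illustrative, not necessarily the one used in the PR.

```python
import torch

class MLP(torch.nn.Module):  # hypothetical name; mirrors the layers shown in the diff
    def __init__(self) -> None:
        super().__init__()
        self.linear_1 = torch.nn.Linear(4, 2)
        self.relu = torch.nn.ReLU()

    # Annotated with torch.Tensor, per the review comment
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.linear_1(x))

out = MLP()(torch.zeros(1, 4))
assert tuple(out.shape) == (1, 2)
```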
requests~=2.31.0
torchvision~=0.18.1
nit: do we need torchvision and pillow here?
agreed
@@ -0,0 +1,497 @@
{
(just gonna comment on first line b/c notebooks x github is annoying) we're doing `inference_result[...] / (2 ** na.get_log_scale())` --> could do `rational_to_float` instead?
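For context, both options compute the same fixed-point descaling: a quantized integer is converted back to a float by dividing by `2 ** log_scale`. A stdlib-only sketch of that arithmetic (the name `rational_to_float_sketch` and the scale of 16 are assumptions for illustration, not the nada-ai API):

```python
LOG_SCALE = 16  # assumed quantization scale; in the notebook this comes from na.get_log_scale()

def rational_to_float_sketch(quantized: int, log_scale: int = LOG_SCALE) -> float:
    """Decode a fixed-point (rational) integer back to a float,
    i.e. the same arithmetic as inference_result / (2 ** log_scale)."""
    return quantized / (2 ** log_scale)

# 1.5 quantized at scale 16 is 1.5 * 2**16 = 98304
assert rational_to_float_sketch(98304) == 1.5
```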
Also: that first cell is insane. Can only imagine the pain you had to go through to come up with that fix - my condolences
@@ -0,0 +1,799 @@
{
btw just saw the confusion matrix --> our trained model is basically `return 1`; we might wanna look into addressing that class imbalance
Agreed. Most medical examples are quite imbalanced. The original AI demo was balanced though. Maybe we could take it from there.
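One stdlib-only way to address the imbalance the reviewers flag is random oversampling of the minority class before training. The helper name and toy dataset below are illustrative, not code from the PR:

```python
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Duplicate randomly chosen minority-class samples until both classes are equally sized."""
    rng = random.Random(seed)
    counts = Counter(labels)
    minority = min(counts, key=counts.get)
    deficit = max(counts.values()) - counts[minority]
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    extra = [rng.choice(minority_idx) for _ in range(deficit)]
    return list(samples) + [samples[i] for i in extra], list(labels) + [minority] * deficit

# Toy imbalanced binary dataset: four positives, one negative
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [1, 1, 1, 1, 0]
Xb, yb = oversample(X, y)
assert Counter(yb) == Counter({0: 4, 1: 4})
```

Alternatives include class-weighted losses (e.g. `weight=` in `torch.nn.CrossEntropyLoss`) if duplicating rows is undesirable.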
@jcabrero general comment: I think (correct me if I'm wrong?) this is our most demanding example on the network.
Probably. However, on Nillion-Devnet it executes with similar efficiency to other examples (same execution time).
This adds the multi layer perceptron code.
EXTRA: Linting and formatting code.