# Bugfix/type safety #19

Changes from 6 commits
```diff
@@ -0,0 +1,11 @@
+# Complex model
+
+This example shows how you can build and run an arbitrarily complex AI model - much like you can in PyTorch!
+
+The model architecture is defined in `src/my_model.py`. You will notice that it is syntactically nearly identical to the equivalent PyTorch model.
+
+This model is then used in the main Nada program - defined in `src/complex_model.py`. What this script does is simply:
+- Load the model provided by Party0 via `my_model = MyModel()` and `my_model.load_state_from_network("my_model", parties[0], na.SecretRational)`
+- Load in the (3, 4, 3) input data matrix called "my_input" provided by Party1 via `na.array((3, 4, 3), parties[1], "my_input", na.SecretRational)`
+- Run inference via `result = my_model(my_input)`
+- Return the inference result to Party1 via `return result.output(parties[1], "my_output")`
```
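Outside the Nillion runtime, the PyTorch-like flow described above can be sketched in plain NumPy. This is only an illustration of the shape of the program: the layer layout, output size, and `MyModel` internals here are invented for the sketch and are not the actual `src/my_model.py`.

```python
import numpy as np


class MyModel:
    """Illustrative stand-in for a PyTorch-style model: flatten the input,
    then apply a single linear layer. The real src/my_model.py may differ."""

    def __init__(self, in_shape=(3, 4, 3), out_features=2, seed=0):
        rng = np.random.default_rng(seed)
        in_features = int(np.prod(in_shape))  # 3 * 4 * 3 = 36
        self.weight = rng.normal(size=(out_features, in_features))
        self.bias = np.zeros(out_features)

    def __call__(self, x):
        # Flatten the (3, 4, 3) input, then compute W @ x + b
        flat = np.asarray(x).reshape(-1)
        return self.weight @ flat + self.bias


my_model = MyModel()
my_input = np.ones((3, 4, 3))  # stands in for Party1's "my_input"
result = my_model(my_input)
print(result.shape)  # (2,)
```

In the actual program the same call shape (`result = my_model(my_input)`) runs on secret-shared values instead of cleartext NumPy arrays.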
```diff
@@ -0,0 +1,11 @@
+# Linear regression
+
+This example shows how you can run a linear regression model using Nada AI. It highlights that, although Nada AI's design closely parallels PyTorch's, it also integrates with other frameworks - in this case `scikit-learn`.
+
+You will find the Nada program in `src/linear_regression.py`.
+
+What this script does is simply:
+- Load the model provided by Party0 via `my_model = LinearRegression(in_features=10)` and `my_model.load_state_from_network("my_model", parties[0], na.SecretRational)`
+- Load in the 10 input features as a 1-d array called "my_input" provided by Party1 via `my_input = na.array((10,), parties[1], "my_input", na.SecretRational)`
+- Run inference via `result = my_model.forward(my_input)`
+- Return the inference result to Party1 via `return [Output(result.value, "my_output", parties[1])]`
```
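Mathematically, the inference step above is just a dot product of the 10 weights with the 10 input features, plus the intercept. A minimal NumPy sketch (the weight and input values are made up for illustration; in the real flow the parameters are secret-shared by Party0):

```python
import numpy as np

# 10 illustrative model coefficients w_1 .. w_10 and an intercept
weights = np.arange(1.0, 11.0)
bias = 0.5

# Party1's 10 input features as a 1-d array, mirroring the (10,) shape above
my_input = np.ones(10)

# A linear regression forward pass: dot product plus intercept
result = float(weights @ my_input + bias)
print(result)  # 55.5
```

This is exactly the computation `my_model.forward(my_input)` performs, except over secret-shared fixed-point values rather than floats.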
```
@@ -1,23 +1,18 @@
import asyncio
import os
import sys
import time

import nada_algebra as na
import nada_algebra.client as na_client
import numpy as np
import py_nillion_client as nillion
from dotenv import load_dotenv
from sklearn.linear_model import LinearRegression

from nada_ai.client import SklearnClient

# Add the parent directory to the system path to import modules from it
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))

import nada_algebra.client as na_client
# Import helper functions for creating nillion client and getting keys
from nillion_python_helpers import (create_nillion_client, getNodeKeyFromFile,
                                    getUserKeyFromFile)
from sklearn.linear_model import LinearRegression

from nada_ai.client import SklearnClient

# Load environment variables from a .env file
load_dotenv()
```
```diff
@@ -96,7 +91,7 @@ async def main():
     party_id = client.party_id
     user_id = client.user_id
     party_names = na_client.parties(2)
-    program_name = "main"
+    program_name = "linear_regression"
     program_mir_path = f"./target/{program_name}.nada.bin"

     if not os.path.exists("bench"):
```

```diff
@@ -162,7 +157,7 @@ async def main():
         nillion.Secrets({}),
     )
     # Rescale the obtained result by the quantization scale
-    outputs = [na_client.float_from_rational(result["my_output_0"])]
+    outputs = [na_client.float_from_rational(result["my_output"])]
     print(f"🖥️ The result is {outputs}")

     expected = fit_model.predict(np.ones((NUM_FEATS,)).reshape(1, -1))
```

> note: this occurs b/c we decided to mirror NumPy's behaviour of matrix ops not necessarily returning other matrices

> It's okay. I think that the more we mimic NumPy the better.
```diff
@@ -1,4 +1,5 @@
 import nada_algebra as na
+from nada_dsl import Output

 from nada_ai.linear_model import LinearRegression

```
```diff
@@ -8,7 +9,7 @@ def nada_main():
     parties = na.parties(2)

     # Step 2: Instantiate linear regression object
-    my_model = LinearRegression(10)
+    my_model = LinearRegression(in_features=10)

     # Step 3: Load model weights from Nillion network by passing model name (acts as ID)
     # In this example Party0 provides the model and Party1 runs inference
```
```diff
@@ -22,4 +23,4 @@ def nada_main():
     result = my_model.forward(my_input)

     # Step 6: We can use result.output() to produce the output for Party1 and variable name "my_output"
-    return result.output(parties[1], "my_output")
+    return [Output(result.value, "my_output", parties[1])]
```

> Suggestion here. We have the

> I will try this but if it works then IMO the type hinting for

> I agree with changing the type hinting not to have the NadaArray only. I think it is good to have a single point of output that handles everything no matter what you put inside.

> fixed this but flagging to address that type hint in a separate nada-numpy PR at some point
````diff
@@ -1,8 +1,6 @@
 # Multi-Layer Perceptron Demo
 **This folder was generated using `nada init`**

-To execute this tutorial, you may potentially need to install the `requirements.txt` apart from nada-ai:
+To execute this tutorial, you may need to install the `requirements.txt`:
 ```bash
 pip install -r requirements.txt
 ```
````
```diff
@@ -1,7 +1,7 @@
-name = "text_classification"
+name = "multi_layer_perceptron"
 version = "0.1.0"
 authors = [""]

 [[programs]]
-path = "src/main.py"
+path = "src/multi_layer_perceptron.py"
 prime_size = 128
```
```diff
@@ -1,5 +1,3 @@
-"""MLP Nada program"""
-
 import nada_algebra as na
 from my_nn import MyNN

```
```diff
@@ -0,0 +1,5 @@
+# This directory is kept purposely, so that no compilation errors arise.
+# Ignore everything in this directory
+*
+# Except this file
+!.gitignore
```
> note: this is not strictly mandatory for us but it is in torch. so just copied this here so that the syntax is exactly equal

> Makes sense, but at the same time I think having a direct coupling makes it much easier. Setting this as a reminder to update the Nillion Docs in this regard.

> wdym by this?