
Commit 266d351 (parent: f75d273)
Author: Peiyuan Liao
Commit message: working on demo

File tree

10 files changed: +264, -161 lines


eth/demo.ipynb

Lines changed: 220 additions & 156 deletions
Large diffs are not rendered by default.

eth/keys/out_private.json

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-"7001944839470927745373394821470350369266097788845460368251493193132763972406"
+"21127377611760855151397025938040417124468957529580509010001250098001910171200"

eth/keys/out_public.json

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
 [
-"3696244767974077653455679795135848733601425960305578671455690036664496274007",
-"10438300322976132768889289517836059848150371544640472911377544330755492363717"
+"19517200380561602233013947304080897707587995089006775240619814703482168364893",
+"10758991262778566768807205545860719510109920355369254969648901060785189870847"
 ]

eth/model/W.npy

0 Bytes
Binary file not shown.

eth/model/b.npy

0 Bytes
Binary file not shown.

eth/model_shuffled/W.npy

0 Bytes
Binary file not shown.

eth/model_shuffled/b.npy

0 Bytes
Binary file not shown.

eth/package.json

Lines changed: 2 additions & 1 deletion
@@ -35,6 +35,7 @@
   },
   "scripts": {
     "compile": "npx hardhat compile",
-    "deploy-localhost": "npx hardhat run --network localhost scripts/deploy.js"
+    "deploy-localhost": "npx hardhat run --network localhost scripts/deploy.js",
+    "prepare-demo": "npx hardhat compile && npx hardhat run --network localhost scripts/deploy.js && npx hardhat add_bounty"
   }
 }

eth/prepare.py

Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
+
+
+import numpy as np
+import pandas as pd
+from sklearn.model_selection import train_test_split
+from sklearn.datasets import load_iris # Load Iris Data
+
+iris = load_iris() # Creating pd DataFrames
+
+iris_df = pd.DataFrame(data= iris.data, columns= iris.feature_names)
+target_df = pd.DataFrame(data= iris.target, columns= ['species'])
+
+def converter(specie):
+    if specie == 0:
+        return 'setosa'
+    elif specie == 1:
+        return 'versicolor'
+    else:
+        return 'virginica'
+
+target_df['species'] = target_df['species'].apply(converter)# Concatenate the DataFrames
+iris_df = pd.concat([iris_df, target_df], axis= 1)
+
+# Converting Objects to Numerical dtype
+iris_df.drop('species', axis= 1, inplace= True)
+target_df = pd.DataFrame(columns= ['species'], data= iris.target)
+iris_df = pd.concat([iris_df, target_df], axis= 1)# Variables
+X= iris_df.drop(labels= 'sepal length (cm)', axis= 1)
+y= iris_df['sepal length (cm)']
+
+# Splitting the Dataset
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.13, random_state= 101)
+
+X = X_test.values[:]
+Yt_expected = y_test.values[:].reshape(-1, 1)
+
+np.save('dataset/X.npy',X)
+np.save('dataset/Y.npy',Yt_expected)
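
A minimal sketch of how the arrays written by prepare.py line up with the dimensions declared in eth/settings.json, assuming the check is run from the eth/ directory so the relative dataset/ and settings.json paths resolve:

import json

import numpy as np

# Arrays written by prepare.py: 20 held-out Iris rows, with
# 'sepal length (cm)' split off as the regression target.
X = np.load('dataset/X.npy')
Y = np.load('dataset/Y.npy')

# settings.json declares m samples, p input features and n outputs.
with open('settings.json') as f:
    settings = json.load(f)

assert X.shape == (settings['m'], settings['p'])  # (20, 4)
assert Y.shape == (settings['m'], settings['n'])  # (20, 1)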

eth/settings.json

Lines changed: 1 addition & 1 deletion
@@ -16,5 +16,5 @@
   "m": 20,
   "p": 4,
   "n": 1,
-  "mse_target": 0.11232375
+  "mse_target": 0.07864
 }
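
The lower mse_target presumably reflects the retrained weights in eth/model/. A minimal sketch of the check this target implies, assuming the model stored in W.npy and b.npy is an affine map (predictions are X @ W + b); the actual evaluation lives in eth/demo.ipynb, whose diff is not rendered above.

import numpy as np

# Assumed shapes, following settings.json: X is (m, p) = (20, 4),
# W is (p, n) = (4, 1), b is (n,) = (1,), Y is (m, n) = (20, 1).
X = np.load('dataset/X.npy')
Y = np.load('dataset/Y.npy')
W = np.load('model/W.npy')
b = np.load('model/b.npy')

Y_pred = X @ W + b
mse = float(np.mean((Y_pred - Y) ** 2))

# Treating mse_target as the threshold a submission must beat is an
# assumption; the value after this commit is 0.07864.
print(f"mse = {mse:.5f} (target 0.07864)")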
