Feature/generate classification function #13

Merged (5 commits, Sep 3, 2020)
1 change: 1 addition & 0 deletions README.md
@@ -31,6 +31,7 @@ make_blobs | Generate isotropic Gaussian blobs for clustering.
make_moons | Make two interleaving half circles. | [link](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html)
make_s_curve | Generate an S curve dataset. | [link](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_s_curve.html)
make_regression | Generate a random regression problem. | [link](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_regression.html)
make_classification | Generate a random n-class classification problem. | [link](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html)

**Disclaimer**: SyntheticDatasets.jl borrows code and documentation from
[scikit-learn](https://scikit-learn.org/stable/modules/classes.html#samples-generator) in the dataset module, but *it is not an official part
71 changes: 71 additions & 0 deletions src/sklearn.jl
@@ -140,4 +140,75 @@ function generate_regression(; n_samples::Int = 100,

return convert(features, labels)

end

"""
    generate_classification(; n_samples::Int = 100,
                              n_features::Int = 20,
                              n_informative::Int = 2,
                              n_redundant::Int = 2,
                              n_repeated::Int = 0,
                              n_classes::Int = 2,
                              n_clusters_per_class::Int = 2,
                              weights::Union{Nothing, Array{Float64,1}} = nothing,
                              flip_y::Float64 = 0.01,
                              class_sep::Float64 = 1.0,
                              hypercube::Bool = true,
                              shift::Union{Nothing, Float64, Array{Float64,1}} = 0.0,
                              scale::Union{Nothing, Float64, Array{Float64,1}} = 1.0,
                              shuffle::Bool = true,
                              random_state::Union{Int, Nothing} = nothing)

Generate a random n-class classification problem. Julia interface to scikit-learn's `make_classification`.

# Arguments
- `n_samples::Int = 100`: The number of samples.
- `n_features::Int = 20`: The total number of features. These comprise `n_informative` informative features, `n_redundant` redundant features, `n_repeated` duplicated features, and `n_features - n_informative - n_redundant - n_repeated` useless features drawn at random.
- `n_informative::Int = 2`: The number of informative features. Each class is composed of a number of Gaussian clusters, each located around the vertices of a hypercube in a subspace of dimension `n_informative`. For each cluster, informative features are drawn independently from N(0, 1) and then randomly linearly combined within each cluster in order to add covariance. The clusters are then placed on the vertices of the hypercube.
- `n_redundant::Int = 2`: The number of redundant features, generated as random linear combinations of the informative features.
- `n_repeated::Int = 0`: The number of duplicated features, drawn randomly from the informative and the redundant features.
- `n_classes::Int = 2`: The number of classes (or labels) of the classification problem.
- `n_clusters_per_class::Int = 2`: The number of clusters per class.
- `weights::Union{Nothing, Array{Float64,1}} = nothing`: The proportions of samples assigned to each class. If `nothing`, classes are balanced. If `length(weights) == n_classes - 1`, the weight of the last class is inferred automatically.
- `flip_y::Float64 = 0.01`: The fraction of samples whose class is assigned randomly. Larger values introduce noise in the labels and make the classification task harder. Note that with the default setting `flip_y > 0`, `y` may contain fewer than `n_classes` distinct labels in some cases.
- `class_sep::Float64 = 1.0`: The factor multiplying the hypercube size. Larger values spread out the clusters/classes and make the classification task easier.
- `hypercube::Bool = true`: If `true`, the clusters are put on the vertices of a hypercube. If `false`, the clusters are put on the vertices of a random polytope.
- `shift::Union{Nothing, Float64, Array{Float64,1}} = 0.0`: Shift features by the specified value. If `nothing`, features are shifted by a random value drawn in `[-class_sep, class_sep]`.
- `scale::Union{Nothing, Float64, Array{Float64,1}} = 1.0`: Multiply features by the specified value. If `nothing`, features are scaled by a random value drawn in `[1, 100]`. Note that scaling happens after shifting.
- `shuffle::Bool = true`: Shuffle the samples and the features.
- `random_state::Union{Int, Nothing} = nothing`: Determines random number generation for dataset creation. Pass an `Int` for reproducible output across multiple function calls.

Reference: [link](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html)
"""
function generate_classification(; n_samples::Int = 100,
n_features::Int = 20,
n_informative::Int = 2,
n_redundant::Int = 2,
n_repeated::Int = 0,
n_classes::Int = 2,
n_clusters_per_class::Int = 2,
weights::Union{Nothing, Array{Float64,1}} = nothing,
flip_y::Float64 = 0.01,
class_sep::Float64 = 1.0,
hypercube::Bool = true,
shift::Union{Nothing, Float64, Array{Float64,1}} = 0.0,
scale::Union{Nothing, Float64, Array{Float64,1}} = 1.0,
shuffle::Bool = true,
random_state::Union{Int, Nothing} = nothing)

(features, labels) = datasets.make_classification( n_samples = n_samples,
n_features = n_features,
n_informative = n_informative,
n_redundant = n_redundant,
n_repeated = n_repeated,
n_classes = n_classes,
n_clusters_per_class = n_clusters_per_class,
weights = weights,
flip_y = flip_y,
class_sep = class_sep,
hypercube = hypercube,
shift = shift,
scale = scale,
shuffle = shuffle,
random_state = random_state)

return convert(features, labels)
end
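Since `generate_classification` forwards its keyword arguments straight to scikit-learn, the wrapped behavior can be sanity-checked against the Python library directly. A minimal Python sketch (assuming scikit-learn is installed; the parameter values here are illustrative, not taken from this PR):

```python
# Sketch of the underlying scikit-learn call that generate_classification wraps.
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=100,
    n_features=20,
    n_informative=2,
    n_redundant=2,
    n_classes=2,
    random_state=42,
)

# X is the feature matrix, y the integer class labels.
assert X.shape == (100, 20)
assert y.shape == (100,)
assert set(y) <= {0, 1}
```

Note that scikit-learn requires `n_classes * n_clusters_per_class <= 2^n_informative`, since each cluster occupies a distinct hypercube vertex; the defaults (2 * 2 <= 2^2) satisfy this exactly.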
8 changes: 8 additions & 0 deletions test/runtests.jl
@@ -36,4 +36,12 @@ using Test
@test size(data)[1] == samples
@test size(data)[2] == features + 1

data = SyntheticDatasets.generate_classification(n_samples = samples,
n_features = features,
n_classes = 1)

@test size(data)[1] == samples
@test size(data)[2] == features + 1

end
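The test asserts that the returned table has `samples` rows and `features + 1` columns, i.e. the labels are appended as one extra column alongside the features. The same shape invariant can be mirrored against scikit-learn in a hedged Python sketch (illustrative only; `np.column_stack` stands in for the package's internal DataFrame assembly):

```python
# Mirrors the Julia test's shape check: features plus one label column.
import numpy as np
from sklearn.datasets import make_classification

samples, features = 100, 20
X, y = make_classification(n_samples=samples, n_features=features,
                           n_classes=2, random_state=0)

data = np.column_stack([X, y])  # labels stored as the final column
assert data.shape == (samples, features + 1)
```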