Commit 9d583b6

justinormont authored and TomFinley committed

Move HTTP links to HTTPS (when the content matches)

1 parent: a901048

44 files changed, 91 additions and 91 deletions (only part of the diff is shown below)


README.md (2 additions, 2 deletions)

@@ -56,7 +56,7 @@ We welcome contributions! Please review our [contribution guide](CONTRIBUTING.md
 
 Please join our community on Gitter [![Join the chat at https://gitter.im/dotnet/mlnet](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/dotnet/mlnet?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
-This project has adopted the code of conduct defined by the [Contributor Covenant](http://contributor-covenant.org/) to clarify expected behavior in our community.
+This project has adopted the code of conduct defined by the [Contributor Covenant](https://contributor-covenant.org/) to clarify expected behavior in our community.
 For more information, see the [.NET Foundation Code of Conduct](https://dotnetfoundation.org/code-of-conduct).
 
 ## Examples

@@ -94,7 +94,7 @@ ML.NET is licensed under the [MIT license](LICENSE).
 
 ## .NET Foundation
 
-ML.NET is a [.NET Foundation](http://www.dotnetfoundation.org/projects) project.
+ML.NET is a [.NET Foundation](https://www.dotnetfoundation.org/projects) project.
 
 There are many .NET related projects on GitHub.

build.proj (1 addition, 1 deletion)

@@ -81,7 +81,7 @@
 DestinationFile="$(MSBuildThisFileDirectory)test/data/external/winequality-white.csv" />
 
 <TestFile Condition="'$(IncludeBenchmarkData)' == 'true'" Include="$(MSBuildThisFileDirectory)/test/data/external/WikiDetoxAnnotated160kRows.tsv"
-Url="http://aka.ms/tlc-resources/benchmarks/WikiDetoxAnnotated160kRows.tsv"
+Url="https://aka.ms/tlc-resources/benchmarks/WikiDetoxAnnotated160kRows.tsv"
 DestinationFile="$(MSBuildThisFileDirectory)test/data/external/WikiDetoxAnnotated160kRows.tsv" />
 </ItemGroup>

docs/building/unix-instructions.md (1 addition, 1 deletion)

@@ -45,7 +45,7 @@ On macOS a few components are needed which are not provided by a default develop
 * gcc
 * All the requirements necessary to run .NET Core 2.0 applications. To view macOS prerequisites click [here](https://docs.microsoft.com/en-us/dotnet/core/macos-prerequisites?tabs=netcore2x).
 
-One way of obtaining CMake and gcc is via [Homebrew](http://brew.sh):
+One way of obtaining CMake and gcc is via [Homebrew](https://brew.sh):
 ```sh
 $ brew install cmake
 $ brew install gcc

docs/code/IdvFileFormat.md (2 additions, 2 deletions)

@@ -116,8 +116,8 @@ The enum for compression kind is one byte, and follows this scheme:
 Compression Kind | Code
 ---------------------------------------------------------------|-----
 None | 0
-DEFLATE (i.e., [RFC1951](http://www.ietf.org/rfc/rfc1951.txt)) | 1
-zlib (i.e., [RFC1950](http://www.ietf.org/rfc/rfc1950.txt)) | 2
+DEFLATE (i.e., [RFC1951](https://www.ietf.org/rfc/rfc1951.txt)) | 1
+zlib (i.e., [RFC1950](https://www.ietf.org/rfc/rfc1950.txt)) | 2
 
 None means no compression. DEFLATE is the default scheme. There is a tendency
 to conflate zlib and DEFLATE, so to be clear: zlib can be (somewhat inexactly)
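
As an aside, the byte codes in that table map directly onto a tiny enum; a minimal sketch with hypothetical names (the actual type in the ML.NET codebase may be named and organized differently):

```csharp
// Hypothetical illustration of the one-byte compression-kind codes described above.
public enum CompressionKindDemo : byte
{
    None = 0,    // no compression
    Deflate = 1, // RFC 1951; the default scheme
    Zlib = 2     // RFC 1950; a DEFLATE stream wrapped with a zlib header and checksum
}
```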

docs/release-notes/0.2/release-0.2.md (1 addition, 1 deletion)

@@ -39,7 +39,7 @@ Below are some of the highlights from this release.
 their taste in movies.
 
 * ML.NET 0.2 exposes `KMeansPlusPlusClusterer` which implements [K-Means++
-clustering](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf)
+clustering](https://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf)
 with [Yinyang K-means
 acceleration](https://www.microsoft.com/en-us/research/publication/yinyang-k-means-a-drop-in-replacement-of-the-classic-k-means-with-consistent-speedup/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2Fdefault.aspx%3Fid%3D252149).
 [This

docs/release-notes/0.3/release-0.3.md (2 additions, 2 deletions)

@@ -39,15 +39,15 @@ Below are some of the highlights from this release.
 * FFM is a streaming learner so it does not require the entire dataset to
 fit in memory.
 * You can learn more about FFM
-[here](http://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf) and some of the
+[here](https://www.csie.ntu.edu.tw/~cjlin/papers/ffm.pdf) and some of the
 speedup approaches that are used in ML.NET
 [here](https://github.com/wschin/fast-ffm/blob/master/fast-ffm.pdf).
 
 * Added [LightGBM](https://github.com/Microsoft/LightGBM) as a learner for
 binary classification, multiclass classification, and regression (#392)
 
 * LightGBM is a tree based gradient boosting machine. It is under the
-umbrella of the [DMTK](http://github.com/microsoft/dmtk) project at
+umbrella of the [DMTK](https://github.com/microsoft/dmtk) project at
 Microsoft.
 * The LightGBM repository shows various [comparison
 experiments](https://github.com/Microsoft/LightGBM/blob/6488f319f243f7ff679a8e388a33e758c5802303/docs/Experiments.rst#comparison-experiment)

docs/release-notes/0.4/release-0.4.md (2 additions, 2 deletions)

@@ -64,7 +64,7 @@ Below are some of the highlights from this release.
 * Several options for pretrained embeddings are available:
 [GloVe](https://nlp.stanford.edu/projects/glove/),
 [fastText](https://en.wikipedia.org/wiki/FastText), and
-[SSWE](http://anthology.aclweb.org/P/P14/P14-1146.pdf). The pretrained model is downloaded automatically on first use.
+[SSWE](https://anthology.aclweb.org/P/P14/P14-1146.pdf). The pretrained model is downloaded automatically on first use.
 * Documentation can be found
 [here](https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.transforms.wordembeddings?view=ml-dotnet).
 

@@ -85,4 +85,4 @@ Shoutout to [dsyme](https://github.com/dsyme),
 [jwood803](https://github.com/jwood803),
 [sharwell](https://github.com/sharwell),
 [JoshuaLight](https://github.com/JoshuaLight), and the ML.NET team for their
-contributions as part of this release!
+contributions as part of this release!

src/Microsoft.ML.Core/Utilities/DoubleParser.cs (1 addition, 1 deletion)

@@ -345,7 +345,7 @@ public static bool TryParse(out Double value, string s, int ichMin, int ichLim,
 // Taking the high 64 bits of the 128 bit result should give us enough bits to get the
 // right answer most of the time. Note, that it's not guaranteed that we always get the
 // right answer. Guaranteeing that takes much more work.... See the paper by David Gay at
-// http://www.ampl.com/REFS/rounding.pdf.
+// https://www.ampl.com/REFS/rounding.pdf.
 Contracts.Assert((num & TopTwoBits) != 0);
 Contracts.Assert((mul & TopBit) != 0);

src/Microsoft.ML.Core/Utilities/ReservoirSampler.cs (2 additions, 2 deletions)

@@ -47,7 +47,7 @@ public interface IReservoirSampler<T>
 /// This class produces a sample without replacement from a stream of data of type <typeparamref name="T"/>.
 /// It is instantiated with a delegate that gets the next data point, and builds a reservoir in one pass by calling <see cref="Sample"/>
 /// for every data point in the stream. In case the next data point does not get 'picked' into the reservoir, the delegate is not invoked.
-/// Sampling is done according to the algorithm in this paper: <a href="http://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53">http://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53</a>.
+/// Sampling is done according to the algorithm in this paper: <a href="https://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53">https://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53</a>.
 /// </summary>
 public sealed class ReservoirSamplerWithoutReplacement<T> : IReservoirSampler<T>
 {

@@ -120,7 +120,7 @@ public IEnumerable<T> GetSample()
 /// This class produces a sample with replacement from a stream of data of type <typeparamref name="T"/>.
 /// It is instantiated with a delegate that gets the next data point, and builds a reservoir in one pass by calling <see cref="Sample"/>
 /// for every data point in the stream. In case the next data point does not get 'picked' into the reservoir, the delegate is not invoked.
-/// Sampling is done according to the algorithm in this paper: <a href="http://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53">http://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53</a>.
+/// Sampling is done according to the algorithm in this paper: <a href="https://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53">https://epubs.siam.org/doi/pdf/10.1137/1.9781611972740.53</a>.
 /// </summary>
 public sealed class ReservoirSamplerWithReplacement<T> : IReservoirSampler<T>
 {
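
For readers skimming the diff, the idea behind such a reservoir is easy to show with the classic unweighted Algorithm R; this is only a sketch of the general technique under that assumption, not the algorithm from the cited SIAM paper or ML.NET's actual implementation:

```csharp
using System;
using System.Collections.Generic;

public static class ReservoirDemo
{
    // Classic Algorithm R: keep a uniform sample of k items from a stream of unknown length.
    public static List<T> SampleWithoutReplacement<T>(IEnumerable<T> stream, int k, Random rand)
    {
        var reservoir = new List<T>(k);
        long seen = 0;
        foreach (var item in stream)
        {
            seen++;
            if (reservoir.Count < k)
            {
                reservoir.Add(item); // fill the reservoir first
            }
            else
            {
                // Replace a random slot with probability k / seen.
                long j = (long)(rand.NextDouble() * seen);
                if (j < k)
                    reservoir[(int)j] = item;
            }
        }
        return reservoir;
    }
}
```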

src/Microsoft.ML.Core/Utilities/Stats.cs (2 additions, 2 deletions)

@@ -199,7 +199,7 @@ public static int SampleFromPoisson(IRandom rand, double lambda)
 }
 
 // Mean refers to the mu parameter. Scale refers to the b parameter.
-// http://en.wikipedia.org/wiki/Laplace_distribution
+// https://en.wikipedia.org/wiki/Laplace_distribution
 public static Float SampleFromLaplacian(IRandom rand, Float mean, Float scale)
 {
 Float u = rand.NextSingle();

@@ -215,7 +215,7 @@ public static Float SampleFromLaplacian(IRandom rand, Float mean, Float scale)
 
 /// <summary>
 /// Sample from a standard Cauchy distribution:
-/// http://en.wikipedia.org/wiki/Lorentzian_function
+/// https://en.wikipedia.org/wiki/Lorentzian_function
 /// </summary>
 /// <param name="rand"></param>
 /// <returns></returns>
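
Both referenced distributions can be sampled by inverting the CDF of a single uniform draw; a minimal sketch of those textbook formulas (illustrative only, not the exact code in Stats.cs, which uses the repository's IRandom and Float types):

```csharp
using System;

public static class DistributionSamplingDemo
{
    // Laplace(mean, scale): draw u in [-0.5, 0.5) and invert the CDF.
    public static double SampleLaplace(Random rand, double mean, double scale)
    {
        double u = rand.NextDouble() - 0.5;
        // Ignores the measure-zero edge case u == -0.5 for brevity.
        return mean - scale * Math.Sign(u) * Math.Log(1 - 2 * Math.Abs(u));
    }

    // Standard Cauchy (Lorentzian): tangent of a uniformly random angle.
    public static double SampleCauchy(Random rand)
    {
        return Math.Tan(Math.PI * (rand.NextDouble() - 0.5));
    }
}
```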

src/Microsoft.ML.Core/Utilities/Stream.cs (2 additions, 2 deletions)

@@ -551,7 +551,7 @@ public static long WriteStringStream(this BinaryWriter writer, IEnumerable<strin
 /// <summary>
 /// Writes what Microsoft calls a UTF-7 encoded number in the binary reader and
 /// writer string methods. For non-negative integers this is equivalent to LEB128
-/// (see http://en.wikipedia.org/wiki/LEB128).
+/// (see https://en.wikipedia.org/wiki/LEB128).
 /// </summary>
 public static void WriteLeb128Int(this BinaryWriter writer, ulong value)
 {

@@ -1136,4 +1136,4 @@ public static void CheckOptionalUserDirectory(string file, string userArgument)
 }
 #pragma warning restore MSML_ContractsNameUsesNameof
 }
-}
+}
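
LEB128 stores an unsigned integer seven bits per byte, least-significant group first, with the high bit of each byte marking that more bytes follow. A minimal sketch of that round trip (illustrative only; the ML.NET extension methods differ in naming and error handling):

```csharp
using System.IO;

public static class Leb128Demo
{
    // Write 'value' as unsigned LEB128: 7 payload bits per byte, high bit = "more bytes follow".
    public static void WriteLeb128(BinaryWriter writer, ulong value)
    {
        while (value >= 0x80)
        {
            writer.Write((byte)(value | 0x80)); // low 7 bits plus continuation flag
            value >>= 7;
        }
        writer.Write((byte)value); // final byte, continuation flag clear
    }

    public static ulong ReadLeb128(BinaryReader reader)
    {
        ulong result = 0;
        int shift = 0;
        byte b;
        do
        {
            b = reader.ReadByte();
            result |= (ulong)(b & 0x7F) << shift;
            shift += 7;
        } while ((b & 0x80) != 0);
        return result;
    }
}
```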

src/Microsoft.ML.Core/Utilities/SummaryStatistics.cs (3 additions, 3 deletions)

@@ -93,7 +93,7 @@ public override string ToString()
 /// Accumulates one more value, optionally weighted.
 /// This accumulation procedure is based on the following,
 /// with adjustments as appropriate for weighted instances:
-/// http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+/// https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
 /// </summary>
 /// <param name="v">The value</param>
 /// <param name="w">The weight given to this value</param>

@@ -174,7 +174,7 @@ public sealed class SummaryStatisticsUpToSecondOrderMoments : SummaryStatisticsB
 /// A class for one-pass accumulation of weighted summary statistics, up
 /// to the fourth moment. The accumulative algorithms used here may be
 /// reviewed at
-/// http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+/// https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
 /// All quantities are weighted, except for <c>RawCount</c>.
 /// </summary>
 public sealed class SummaryStatistics : SummaryStatisticsBase

@@ -311,7 +311,7 @@ public override string ToString()
 /// Accumulates one more value, optionally weighted.
 /// This accumulation procedure is based on the following,
 /// with adjustments as appropriate for weighted instances:
-/// http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
+/// https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
 /// </summary>
 /// <param name="v">The value</param>
 /// <param name="w">The weight given to this value</param>
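
The referenced article centers on one-pass ("online") updates such as Welford's algorithm; a minimal unweighted sketch of the mean/variance part (the ML.NET class additionally tracks weights and third/fourth moments):

```csharp
public sealed class OnlineVarianceDemo
{
    private long _n;
    private double _mean;
    private double _m2; // sum of squared deviations from the running mean

    // One-pass update; numerically stabler than accumulating sum and sum of squares.
    public void Add(double x)
    {
        _n++;
        double delta = x - _mean;
        _mean += delta / _n;
        _m2 += delta * (x - _mean); // uses the updated mean
    }

    public double Mean => _mean;
    public double SampleVariance => _n > 1 ? _m2 / (_n - 1) : 0.0;
}
```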

src/Microsoft.ML.Core/Utilities/SupervisedBinFinder.cs (1 addition, 1 deletion)

@@ -15,7 +15,7 @@ namespace Microsoft.ML.Runtime.Internal.Utilities
 /// the target function "minimum description length".
 /// The algorithm is outlineed in an article
 /// "Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning"
-/// [Fayyad, Usama M.; Irani, Keki B. (1993)] http://ijcai.org/Past%20Proceedings/IJCAI-93-VOL2/PDF/022.pdf
+/// [Fayyad, Usama M.; Irani, Keki B. (1993)] https://ijcai.org/Past%20Proceedings/IJCAI-93-VOL2/PDF/022.pdf
 ///
 /// The class can be used several times sequentially, it is stateful and not thread-safe.
 /// Both Single and Double precision processing is implemented, and is identical.

src/Microsoft.ML.Data/Evaluators/AucAggregator.cs (2 additions, 2 deletions)

@@ -408,7 +408,7 @@ public UnweightedAuPrcAggregator(IRandom rand, int reservoirSize)
 
 /// <summary>
 /// Compute the AUPRC using the "lower trapesoid" estimator, as described in the paper
-/// <a href="http://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf">http://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf</a>.
+/// <a href="https://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf">https://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf</a>.
 /// </summary>
 protected override Double ComputeWeightedAuPrcCore(out Double unweighted)
 {

@@ -482,7 +482,7 @@ public WeightedAuPrcAggregator(IRandom rand, int reservoirSize)
 
 /// <summary>
 /// Compute the AUPRC using the "lower trapesoid" estimator, as described in the paper
-/// <a href="http://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf">http://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf</a>.
+/// <a href="https://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf">https://www.ecmlpkdd2013.org/wp-content/uploads/2013/07/aucpr_2013ecml_corrected.pdf</a>.
 /// </summary>
 protected override Double ComputeWeightedAuPrcCore(out Double unweighted)
 {

src/Microsoft.ML.Data/Model/Pfa/ICanSavePfa.cs (2 additions, 2 deletions)

@@ -35,7 +35,7 @@ public interface ISaveAsPfa : ICanSavePfa
 }
 
 /// <summary>
-/// This data model component is savable as PFA. See http://dmg.org/pfa/ .
+/// This data model component is savable as PFA. See https://dmg.org/pfa/ .
 /// </summary>
 public interface ITransformCanSavePfa : ISaveAsPfa, IDataTransform
 {

@@ -111,4 +111,4 @@ public interface IDistCanSavePfa : ISingleCanSavePfa, IValueMapperDist
 void SaveAsPfa(BoundPfaContext ctx, JToken input,
 string score, out JToken scoreToken, string prob, out JToken probToken);
 }
-}
+}

src/Microsoft.ML.FastTree/QuantileStatistics.cs (2 additions, 2 deletions)

@@ -67,7 +67,7 @@ public QuantileStatistics(Float[] data, Float[] weights = null, bool isSorted =
 
 /// <summary>
 /// There are many ways to estimate quantile. This implementations is based on R-8, SciPy-(1/3,1/3)
-/// http://en.wikipedia.org/wiki/Quantile#Estimating_the_quantiles_of_a_population
+/// https://en.wikipedia.org/wiki/Quantile#Estimating_the_quantiles_of_a_population
 /// </summary>
 public Float GetQuantile(Float p)
 {

@@ -131,7 +131,7 @@ private Float GetRank(Float p)
 }
 
 // This implementations is based on R-8, SciPy-(1/3,1/3)
-// http://en.wikipedia.org/wiki/Quantile#Estimating_the_quantiles_of_a_population
+// https://en.wikipedia.org/wiki/Quantile#Estimating_the_quantiles_of_a_population
 var h = (_weights == null) ? (weightedLength + oneThird) * p + oneThird : weightedLength * p;
 
 if (_weights == null)
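
The unweighted branch of that code corresponds to the R-8 rule: compute a fractional 1-based rank h = (N + 1/3)·p + 1/3 and interpolate between the two neighboring order statistics. A minimal sketch under that unweighted assumption (the ML.NET version also handles per-example weights):

```csharp
using System;

public static class QuantileDemo
{
    // R-8 quantile estimate for sorted data, using the 1-based rank h = (N + 1/3) * p + 1/3.
    public static double QuantileR8(double[] sorted, double p)
    {
        int n = sorted.Length;
        double h = (n + 1.0 / 3.0) * p + 1.0 / 3.0;

        // Clamp the rank to the valid range.
        if (h <= 1) return sorted[0];
        if (h >= n) return sorted[n - 1];

        int lo = (int)Math.Floor(h);   // 1-based index of the lower neighbor
        double frac = h - lo;
        return sorted[lo - 1] + frac * (sorted[lo] - sorted[lo - 1]);
    }
}
```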

src/Microsoft.ML.FastTree/Training/EnsembleCompression/LassoBasedEnsembleCompressor.cs (2 additions, 2 deletions)

@@ -12,7 +12,7 @@ namespace Microsoft.ML.Runtime.FastTree.Internal
 /// This implementation is based on:
 /// Friedman, J., Hastie, T. and Tibshirani, R. (2008) Regularization
 /// Paths for Generalized Linear Models via Coordinate Descent.
-/// http://www-stat.stanford.edu/~hastie/Papers/glmnet.pdf
+/// https://www-stat.stanford.edu/~hastie/Papers/glmnet.pdf
 /// </summary>
 /// <remarks>Author was Yasser Ganjisaffar during his internship.</remarks>
 public class LassoBasedEnsembleCompressor : IEnsembleCompressor<short>

@@ -556,4 +556,4 @@ public Ensemble GetCompressedEnsemble()
 return _compressedEnsemble;
 }
 }
-}
+}
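
The cited glmnet paper solves the lasso by cyclic coordinate descent with soft-thresholding; a rough sketch of one pass for standardized features (an assumption made here for brevity; the actual ensemble compressor operates on tree outputs and adds many practical refinements):

```csharp
using System;

public static class LassoDemo
{
    // One pass of coordinate descent for min_b (1/2n)||y - X b||^2 + lambda ||b||_1,
    // assuming each column of x has mean 0 and (1/n) * x_j^T x_j = 1 (standardized features).
    public static void CoordinateDescentPass(double[][] x, double[] y, double[] beta, double lambda)
    {
        int n = y.Length;
        int d = beta.Length;

        // Full residual r = y - X * beta.
        var r = new double[n];
        for (int i = 0; i < n; i++)
        {
            double pred = 0;
            for (int j = 0; j < d; j++)
                pred += x[i][j] * beta[j];
            r[i] = y[i] - pred;
        }

        for (int j = 0; j < d; j++)
        {
            // rho = (1/n) * x_j^T (partial residual excluding feature j).
            double rho = 0;
            for (int i = 0; i < n; i++)
                rho += x[i][j] * (r[i] + x[i][j] * beta[j]) / n;

            // Soft-thresholding update for the j-th coefficient.
            double newBeta = Math.Sign(rho) * Math.Max(Math.Abs(rho) - lambda, 0);

            // Keep the residual consistent with the coefficient change.
            double diff = newBeta - beta[j];
            if (diff != 0)
            {
                for (int i = 0; i < n; i++)
                    r[i] -= x[i][j] * diff;
                beta[j] = newBeta;
            }
        }
    }
}
```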

src/Microsoft.ML.FastTree/doc.xml (4 additions, 4 deletions)

@@ -31,7 +31,7 @@
 <para>For more information see:</para>
 <list type="bullet">
 <item><description><a href='https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting'>Wikipedia: Gradient boosting (Gradient tree boosting).</a></description></item>
-<item><description><a href='http://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.aos/1013203451'>Greedy function approximation: A gradient boosting machine.</a></description></item>
+<item><description><a href='https://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.aos/1013203451'>Greedy function approximation: A gradient boosting machine.</a></description></item>
 </list>
 </remarks>
 </member>

@@ -96,7 +96,7 @@
 Each tree in a decision forest outputs a Gaussian distribution.</para>
 <para>For more see: </para>
 <list type='bullet'>
-<item><description><a href='http://en.wikipedia.org/wiki/Random_forest'>Wikipedia: Random forest</a></description></item>
+<item><description><a href='https://en.wikipedia.org/wiki/Random_forest'>Wikipedia: Random forest</a></description></item>
 <item><description><a href='http://jmlr.org/papers/volume7/meinshausen06a/meinshausen06a.pdf'>Quantile regression forest</a></description></item>
 <item><description><a href='https://blogs.technet.microsoft.com/machinelearning/2014/09/10/from-stumps-to-trees-to-forests/'>From Stumps to Trees to Forests</a></description></item>
 </list>

@@ -138,7 +138,7 @@
 Insurance Premium Prediction via Gradient Tree-Boosted Tweedie Compound Poisson Models.</a> from Yang, Quan, and Zou.
 <para>For an introduction to Gradient Boosting, and more information, see:</para>
 <para><a href='https://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting'>Wikipedia: Gradient boosting (Gradient tree boosting)</a></para>
-<para><a href='http://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.aos/1013203451'>Greedy function approximation: A gradient boosting machine</a></para>
+<para><a href='https://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.aos/1013203451'>Greedy function approximation: A gradient boosting machine</a></para>
 </remarks>
 </member>
 

@@ -191,4 +191,4 @@
 </member>
 
 </members>
-</doc>
+</doc>

src/Microsoft.ML.KMeansClustering/KMeansPlusPlusTrainer.cs (3 additions, 3 deletions)

@@ -671,7 +671,7 @@ private static void ComputeAccelerationMemoryRequirement(long accelMemBudgetMb,
 }
 
 /// <summary>
-/// KMeans|| Implementation, see http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf
+/// KMeans|| Implementation, see https://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf
 /// This algorithm will require:
 /// - (k * overSampleFactor * rounds * diminsionality * 4) bytes for the final sampled clusters.
 /// - (k * overSampleFactor * numThreads * diminsionality * 4) bytes for the per-round sampling.

@@ -1357,7 +1357,7 @@ private static void Initialize(
 int neededPerThreadWorkStates = numThreads == 1 ? 0 : numThreads;
 
 // Accelerating KMeans requires the following data structures.
-// The algorithm is based on the YinYang KMeans algorithm [ICML'15], http://research.microsoft.com/apps/pubs/default.aspx?id=252149
+// The algorithm is based on the YinYang KMeans algorithm [ICML'15], https://research.microsoft.com/apps/pubs/default.aspx?id=252149
 // These data structures are allocated only as allowed by the _accelMemBudgetMb parameter
 // if _accelMemBudgetMb is zero, then the algorithm below reduces to the original KMeans++ implementation
 int bytesPerCluster =

@@ -1483,7 +1483,7 @@ public struct RowStats
 /// it expects to be able to sample numSamples * numThreads.
 ///
 /// This is based on the 'A-Res' algorithm in 'Weighted Random Sampling', 2005; Efraimidis, Spirakis:
-/// http://utopia.duth.gr/~pefraimi/research/data/2007EncOfAlg.pdf
+/// https://utopia.duth.gr/~pefraimi/research/data/2007EncOfAlg.pdf
 /// </summary>
 public static RowStats ParallelWeightedReservoirSample(
 IHost host, int numThreads,
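
The A-Res scheme referenced in that last hunk assigns each item a key u^(1/w) for u ~ Uniform(0, 1) and keeps the k items with the largest keys; a minimal single-threaded sketch (the ML.NET method parallelizes this across threads and uses its own abstractions; PriorityQueue requires .NET 6+):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class WeightedReservoirDemo
{
    // A-Res (Efraimidis & Spirakis): weighted sampling without replacement of k items from a stream.
    // Each item gets key = u^(1/w); the k items with the largest keys form the sample.
    public static List<T> Sample<T>(IEnumerable<(T Item, double Weight)> stream, int k, Random rand)
    {
        // Min-heap keyed by each item's key; the root is the weakest of the current candidates.
        var heap = new PriorityQueue<T, double>();

        foreach (var (item, weight) in stream)
        {
            if (weight <= 0)
                continue; // items with non-positive weight are never selected

            double key = Math.Pow(rand.NextDouble(), 1.0 / weight);
            heap.Enqueue(item, key);
            if (heap.Count > k)
                heap.Dequeue(); // evict the smallest key
        }

        return heap.UnorderedItems.Select(e => e.Element).ToList();
    }
}
```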
