This repository was archived by the owner on Dec 19, 2018. It is now read-only.
This project has two purposes. First, I'd like to share some of my experience with NLP tasks such as segmentation and word vectors. Second, and more importantly, many people are probably searching for pre-trained word vector models for non-English languages. Alas! English has received far more attention than any other language. Check [this](https://github.com/3Top/word2vec-api) to see how easily you can get a variety of pre-trained English word vectors without effort. I think it's time to turn our eyes to a multilingual version of this.
**Nearing the end of this work, I learned that there is already a similar project named `polyglot`. I strongly encourage you to check out [this great project](https://sites.google.com/site/rmyeid/projects/polyglot). How embarrassing! Nevertheless, I decided to release this project. You will see that my work has its own flavor, after all.**
## Requirements
* nltk >= 1.11.1
* regex >= 2016.6.24
* lxml >= 3.3.3
* Check [this](https://en.wikipedia.org/wiki/Word_embedding) to learn what a word embedding is.
* Check [this](https://en.wikipedia.org/wiki/Word2vec) to quickly get a picture of Word2vec.
* Check [this](https://github.com/facebookresearch/fastText) to install fastText.
* Watch [this](https://www.youtube.com/watch?v=T8tQZChniMk&index=2&list=PL_6hBtWGKk2KdY3ANaEYbxL3N5YhRN9i0) to really understand what's happening under the hood of Word2vec.
* Go get various English word vectors [here](https://github.com/3Top/word2vec-api) if needed.
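For intuition about what word vectors let you do, here is a toy sketch with made-up 3-dimensional vectors (illustrative values only, unrelated to any model in this repo): semantically similar words should have a higher cosine similarity than unrelated ones.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" with fabricated values:
vectors = {
    "king":  [0.8, 0.1, 0.6],
    "queen": [0.7, 0.2, 0.7],
    "apple": [0.1, 0.9, 0.2],
}
print(cosine_similarity(vectors["king"], vectors["queen"]))  # high (similar words)
print(cosine_similarity(vectors["king"], vectors["apple"]))  # noticeably lower
```

Real models work the same way, just with hundreds of dimensions learned from a corpus rather than hand-picked values.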
## Workflow
* STEP 1. Download the [Wikipedia database backup dump](https://dumps.wikimedia.org/backup-index.html) for the language you want (for example, for the English wiki, go to `https://dumps.wikimedia.org/enwiki/`, click the latest timestamp, and download the `enwiki-YYYYMMDD-pages-articles-multistream.xml.bz2` file).
* STEP 2. Extract the running texts to the `data/` folder.
* STEP 3. Run `build_corpus.py`.
* STEP 4-1. Run `make_wordvector.sh` to get Word2Vec word vectors.
* STEP 4-2. Run `fasttext.sh` to get fastText word vectors.
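As a minimal sketch of what STEP 2's text extraction involves (this is an assumption about the approach, not the repo's own script): a `pages-articles` dump is one large XML file in the MediaWiki export schema, and the wikitext lives in `<text>` elements that can be streamed with `iterparse` without loading the whole dump into memory.

```python
import bz2
import os
import tempfile
import xml.etree.ElementTree as ET

def iter_article_texts(path):
    """Yield the raw wikitext of each <text> element in a MediaWiki
    pages-articles dump (plain .xml or .bz2-compressed)."""
    opener = bz2.open if path.endswith(".bz2") else open
    with opener(path, "rb") as f:
        for _, elem in ET.iterparse(f):
            # Match <text> regardless of the export-schema namespace version.
            if elem.tag == "text" or elem.tag.endswith("}text"):
                if elem.text:
                    yield elem.text
                elem.clear()  # free memory while streaming large dumps

# Tiny demonstration on a fabricated two-page dump:
sample = (b'<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.10/">'
          b"<page><title>A</title><revision><text>First article.</text></revision></page>"
          b"<page><title>B</title><revision><text>Second article.</text></revision></page>"
          b"</mediawiki>")
with tempfile.NamedTemporaryFile(suffix=".xml", delete=False) as tmp:
    tmp.write(sample)
texts = list(iter_article_texts(tmp.name))
os.unlink(tmp.name)
print(texts)  # ['First article.', 'Second article.']
```

Note that the yielded text is still raw wikitext (templates, links, markup), which is why a cleaning pass such as `build_corpus.py` is needed before training.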
## Pre-trained models
Two types of pre-trained models are provided. `w` and `f` represent `word2vec` and `fastText` respectively.
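If a downloaded model is in the standard plain-text word-vector format (the format fastText's `.vec` output uses; whether a given archive here uses it is an assumption, so check each file), it can be read with no dependencies: a header line `vocab_size dim`, then one `word v1 v2 ...` line per word.

```python
import os
import tempfile

def load_vec_text(path):
    """Parse the plain-text word2vec/fastText format: a header line
    "vocab_size dim", then one "word v1 v2 ..." line per word."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        vocab_size, dim = map(int, f.readline().split())
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], [float(x) for x in parts[1:]]
            if len(values) == dim:  # skip malformed lines defensively
                vectors[word] = values
    return vectors

# Demonstration on a fabricated two-word file:
with tempfile.NamedTemporaryFile("w", suffix=".vec", delete=False,
                                 encoding="utf-8") as tmp:
    tmp.write("2 3\nking 0.1 0.2 0.3\nqueen 0.4 0.5 0.6\n")
vecs = load_vec_text(tmp.name)
os.unlink(tmp.name)
print(vecs["king"])  # [0.1, 0.2, 0.3]
```

For binary `word2vec` models, a library such as gensim is the usual route instead of hand-parsing.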
| Language | ISO 639-1 | Vector Size | Corpus Size | Vocabulary Size |