I'm trying to run nlp = en_coref_md.load() and running into trouble. I get the following traceback.
---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-43-5dbb824063e9> in <module>()
----> 1 nlp = en_coref_md.load()
~\Anaconda3\lib\site-packages\en_coref_md\__init__.py in load(**overrides)
12 disable = overrides.get('disable', [])
13 overrides['disable'] = disable + ['neuralcoref']
---> 14 nlp = load_model_from_init_py(__file__, **overrides)
15 coref = NeuralCoref(nlp.vocab)
16 coref.from_disk(nlp.path / 'neuralcoref')
~\Anaconda3\lib\site-packages\spacy\util.py in load_model_from_init_py(init_file, **overrides)
188 if not model_path.exists():
189 raise IOError(Errors.E052.format(path=path2str(data_path)))
--> 190 return load_model_from_path(data_path, meta, **overrides)
191
192
~\Anaconda3\lib\site-packages\spacy\util.py in load_model_from_path(model_path, meta, **overrides)
171 component = nlp.create_pipe(name, config=config)
172 nlp.add_pipe(component, name=name)
--> 173 return nlp.from_disk(model_path)
174
175
~\Anaconda3\lib\site-packages\spacy\language.py in from_disk(self, path, exclude, disable)
789 # Convert to list here in case exclude is (default) tuple
790 exclude = list(exclude) + ["vocab"]
--> 791 util.from_disk(path, deserializers, exclude)
792 self._path = path
793 return self
~\Anaconda3\lib\site-packages\spacy\util.py in from_disk(path, readers, exclude)
628 # Split to support file names like meta.json
629 if key.split(".")[0] not in exclude:
--> 630 reader(path / key)
631 return path
632
~\Anaconda3\lib\site-packages\spacy\language.py in <lambda>(p)
779 deserializers["meta.json"] = lambda p: self.meta.update(srsly.read_json(p))
780 deserializers["vocab"] = lambda p: self.vocab.from_disk(p) and _fix_pretrained_vectors_name(self)
--> 781 deserializers["tokenizer"] = lambda p: self.tokenizer.from_disk(p, exclude=["vocab"])
782 for name, proc in self.pipeline:
783 if name in exclude:
tokenizer.pyx in spacy.tokenizer.Tokenizer.from_disk()
tokenizer.pyx in spacy.tokenizer.Tokenizer.from_bytes()
~\Anaconda3\lib\re.py in compile(pattern, flags)
231 def compile(pattern, flags=0):
232 "Compile a regular expression pattern, returning a pattern object."
--> 233 return _compile(pattern, flags)
234
235 def purge():
~\Anaconda3\lib\re.py in _compile(pattern, flags)
299 if not sre_compile.isstring(pattern):
300 raise TypeError("first argument must be string or compiled pattern")
--> 301 p = sre_compile.compile(pattern, flags)
302 if not (flags & DEBUG):
303 if len(_cache) >= _MAXCACHE:
~\Anaconda3\lib\sre_compile.py in compile(p, flags)
560 if isstring(p):
561 pattern = p
--> 562 p = sre_parse.parse(p, flags)
563 else:
564 pattern = None
~\Anaconda3\lib\sre_parse.py in parse(str, flags, pattern)
853
854 try:
--> 855 p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
856 except Verbose:
857 # the VERBOSE flag was switched on inside the pattern. to be
~\Anaconda3\lib\sre_parse.py in _parse_sub(source, state, verbose, nested)
414 while True:
415 itemsappend(_parse(source, state, verbose, nested + 1,
--> 416 not nested and not items))
417 if not sourcematch("|"):
418 break
~\Anaconda3\lib\sre_parse.py in _parse(source, state, verbose, nested, first)
525 break
526 elif this[0] == "\\":
--> 527 code1 = _class_escape(source, this)
528 else:
529 code1 = LITERAL, _ord(this)
~\Anaconda3\lib\sre_parse.py in _class_escape(source, escape)
334 if len(escape) == 2:
335 if c in ASCIILETTERS:
--> 336 raise source.error('bad escape %s' % escape, len(escape))
337 return LITERAL, ord(escape[1])
338 except ValueError:
error: bad escape \p at position 275
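For what it's worth, the \p in that message looks like a regex-module-style Unicode property escape, which Python's built-in re module rejects outright, so the failure itself is easy to reproduce in isolation:

import re

re.compile(r"[\p{L}]")  # re.error: bad escape \p at position 1

I'm guessing the serialized tokenizer patterns were written for a different regex engine or spaCy version than the one reading them.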
Previously, I was getting a different error message:
RuntimeWarning: spacy.tokens.span.Span size changed, may indicate binary incompatibility. Expected 72 from C header, got 80 from PyObject
but I managed to get around that by re-installing neuralcoref without binaries. Based on similar threads, I've also tried different combinations of spacy and neuralcoref versions, as well as re-installing the models directly from their URLs, but without luck. I'm on a Windows 10 PC, using a Jupyter notebook within Anaconda with Python 3.6.5.
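(For reference, by "without binaries" I mean a reinstall along these lines:

pip uninstall neuralcoref
pip install neuralcoref --no-binary neuralcoref

which forces pip to build the package from source instead of using a prebuilt wheel.)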
Thanks in advance! This is my first post on a GitHub thread.
I had this issue as well, which I resolved by uninstalling and reinstalling en_core_web_md; however, that left me with RuntimeWarning: spacy.tokens.span.Span size changed, may indicate binary incompatibility. Expected 72 from C header, got 80 from PyObject.
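For anyone trying the same fix: assuming the model was originally installed through spaCy's downloader, the reinstall is roughly:

pip uninstall en_core_web_md
python -m spacy download en_core_web_md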
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
The "bad escape" error does seem to point to incompatible models in spaCy. You can verify this by running python -m spacy validate (see here).
The "binary incompatibilty" warning points to an incompatibility between spaCy and neuralcoref. If you can downgrade spaCy (and its models!) to 2.1.0 that error should be resolved. If you like to use a more recent version of spaCy, you'd have to build neuralcoref from source.
We plan on keeping the releases of neuralcoref and spaCy more in-sync in the future. Merging this with Issue #197.