Add NuExtract notebook #2311
Conversation
>> To simplify the user experience, we will use OpenVINO Generate API for generation of instruction-following inference pipeline.
I think this description is slightly misleading; can it be rephrased and simplified, e.g.:
To simplify the user experience, we will use the OpenVINO Generate API to organize the inference pipeline.
Fixed
We follow the rule of using the latest released version of our tools as the lower bound, so please update nncf:
nncf>=2.12
I do not see any usage of datasets in the notebook; please remove it.
Also, optimum and onnx are dependencies of optimum-intel, so there is no need to specify them explicitly; let optimum-intel select the required versions.
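A sketch of how the notebook's install cell might look after applying this feedback; apart from the nncf lower bound stated above, the package spellings and extras are assumptions, not the actual notebook cell:

```
%pip install -q "nncf>=2.12"                # latest released version as the lower bound
%pip install -q "optimum-intel[openvino]"   # optimum and onnx are pulled in transitively
# "datasets" is dropped entirely: it is not used in the notebook
```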
Fixed
In the code you use only optimum for conversion and compression; there is no direct nncf usage, so please remove this section.
Removed
Line #1. from IPython.display import display, Markdown
I do not think anyone needs all three variants of the model at the same time; from a UX perspective it is enough to select the precision once for both conversion and usage. I also recommend reusing the conversion code and widgets from llm_config.py:
https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/utils/llm_config.py#L495-L610
Please check the llm-chatbot-genai notebook for details.
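The select-precision-once idea can be sketched without the notebook infrastructure. This is an illustrative stand-in, not the actual llm_config.py code: the precision names, the optimum-cli flags, and the output-directory convention below are assumptions.

```python
# Illustrative sketch: choose the precision once and derive both the export
# command and the model directory from that single choice (names and flags
# are assumptions, not the real llm_config.py widgets).
PRECISIONS = {
    "FP16": ["--weight-format", "fp16"],
    "INT8": ["--weight-format", "int8"],
    "INT4": ["--weight-format", "int4"],
}

def export_command(model_id: str, precision: str) -> list[str]:
    """Build a hypothetical optimum-cli export command for one chosen precision."""
    out_dir = f"{model_id.split('/')[-1]}-{precision.lower()}-ov"
    return (["optimum-cli", "export", "openvino", "--model", model_id]
            + PRECISIONS[precision] + [out_dir])

# The same choice then drives both conversion and later pipeline loading:
cmd = export_command("numind/NuExtract", "INT4")
print(" ".join(cmd))
```

Because the directory name is derived from the chosen precision, the inference cells can reuse the same selection instead of offering all three variants again.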
Fixed
Line #1. core = ov.Core()
Could you please use device_widget: https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/utils/notebook_utils.py#L31
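For context, a rough stdlib-only approximation of the selection behaviour that helper provides; the real device_widget returns an ipywidgets dropdown built from ov.Core().available_devices, so the function below is only a sketch of its logic, not its API.

```python
# Stdlib-only approximation of the device selection that
# notebook_utils.device_widget provides (the real helper returns an
# ipywidgets dropdown over ov.Core().available_devices plus AUTO).
def select_device(available: list[str], default: str = "CPU") -> str:
    """Pick the default device if present, otherwise fall back sensibly."""
    options = list(available) + ["AUTO"]  # the real widget also offers AUTO
    return default if default in options else options[0]

print(select_device(["CPU", "GPU"]))          # -> CPU
print(select_device(["GPU"], default="NPU"))  # -> GPU
```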
Fixed
Explicit tokenizer conversion was used in older notebooks for the transition from optimum to genai, because users may already have models converted by a previous notebook version that did not use openvino tokenizers and did not handle their conversion (e.g. the model was converted before openvino tokenizers were integrated into optimum, or there was a dependency mismatch between ov and tokenizers). This is a new notebook, written from the start to use only genai, which requires the tokenizer, so the situation where a user has the model but not the tokenizer never happens; please remove this code.
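The point can be illustrated by the files one export is expected to leave in the model directory. The file list below is an assumption based on typical optimum-intel exports with openvino tokenizers installed, not taken from this notebook:

```python
# Hypothetical layout check: a single optimum-intel export (with openvino
# tokenizers available) is assumed to write the tokenizer IRs next to the
# model, so a separate explicit tokenizer-conversion step never adds anything.
EXPECTED = {
    "openvino_model.xml", "openvino_model.bin",            # the LLM itself
    "openvino_tokenizer.xml", "openvino_detokenizer.xml",  # needed by genai
}

def missing_genai_files(present: set[str]) -> set[str]:
    """Return which files a genai pipeline would still be missing."""
    return EXPECTED - present

# If the export produced everything, nothing is left to convert:
print(missing_genai_files(set(EXPECTED)))  # -> set()
```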
Removed
Ticket: CVS-149016