
Add NuExtract notebook #2311

Merged
eaidova merged 17 commits into openvinotoolkit:latest on Aug 22, 2024

Conversation

yatarkan
Collaborator

@yatarkan yatarkan commented Aug 21, 2024

Ticket: CVS-149016

@yatarkan yatarkan added the new notebook new jupyter notebook label Aug 21, 2024
@yatarkan yatarkan requested review from eaidova, a team and aleksandr-mokrov and removed request for a team August 21, 2024 22:06

Collaborator

@eaidova eaidova Aug 22, 2024


> To simplify the user experience, we will use OpenVINO Generate API for generation of instruction-following inference pipeline.

I think this description is slightly misleading; can it be rephrased and simplified, e.g.:

To simplify the user experience, we will use the OpenVINO Generate API to organize the inference pipeline.
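For context, the instruction-following pipeline discussed here feeds NuExtract a structured prompt that combines a JSON template with the source text before generation. A minimal sketch of that prompt construction (format hedged from the NuExtract model card; the exact markers may differ):

```python
import json

def build_nuextract_prompt(template: dict, text: str) -> str:
    # NuExtract-style prompt: a JSON template describing the fields to
    # extract, followed by the input text. Hedged sketch; verify the
    # exact markers against the model card before relying on them.
    return (
        "<|input|>\n### Template:\n"
        + json.dumps(template, indent=4)
        + "\n### Text:\n"
        + text
        + "\n<|output|>\n"
    )

prompt = build_nuextract_prompt({"name": "", "location": ""}, "John lives in Paris.")
```

The Generate API pipeline would then consume a prompt like this as a plain string.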


Collaborator Author


Fixed

Collaborator

@eaidova eaidova Aug 22, 2024


We follow the rule of using the latest released version of our tools as the lower bound, so please update nncf:

nncf>=2.12

I do not see any usage of datasets in the notebook, please remove it.

Also, optimum and onnx are dependencies of optimum-intel; there is no need to specify them explicitly, let optimum-intel select the required versions.
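A hypothetical trimmed requirements cell reflecting these points (the package set is assumed from this comment, not taken from the notebook itself):

```
nncf>=2.12
optimum-intel[openvino]
# datasets removed (unused in the notebook);
# optimum and onnx arrive transitively via optimum-intel
```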


Collaborator Author


Fixed

Collaborator

@eaidova eaidova Aug 22, 2024


In the code you use only optimum for conversion and compression; there is no direct nncf usage, so please remove this section.
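For illustration, both conversion and weight compression can be driven by optimum-cli alone, with no direct nncf call (the model id and output directory below are assumptions):

```
optimum-cli export openvino --model numind/NuExtract --weight-format int4 NuExtract-ov-int4
```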


Collaborator Author


Removed

Collaborator

@eaidova eaidova Aug 22, 2024


Line #1.    from IPython.display import display, Markdown

I do not think anybody needs all 3 variants of the model at the same time; from a UX perspective it is enough to select the precision once for both conversion and usage. I also recommend reusing the conversion code and widgets from llm_config.py:

https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/utils/llm_config.py#L495-L610

Please check the llm-chatbot-genai notebook for details.
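The single-selection idea can be sketched as one precision choice driving the export arguments (and, by extension, the load path). The helper names here are illustrative, not the actual llm_config.py API:

```python
# One precision choice controls conversion, instead of converting
# all three variants up front. Mapping and helper are assumptions.
WEIGHT_FORMATS = {"FP16": "fp16", "INT8": "int8", "INT4": "int4"}

def export_args(model_id: str, precision: str, out_dir: str) -> list[str]:
    # Build the `optimum-cli export openvino` invocation for the
    # selected precision (illustrative sketch only).
    return [
        "optimum-cli", "export", "openvino",
        "--model", model_id,
        "--weight-format", WEIGHT_FORMATS[precision],
        out_dir,
    ]

args = export_args("numind/NuExtract", "INT4", "NuExtract-ov-int4")
```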


Collaborator Author


Fixed

Collaborator

@eaidova eaidova Aug 22, 2024


Line #1.    core = ov.Core()

Could you please use device_widget: https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/utils/notebook_utils.py#L31
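In spirit, device_widget offers the devices the runtime reports plus AUTO, with a preselected default. A dependency-free sketch of that selection logic (this is not the actual notebook_utils implementation):

```python
def select_device(available_devices, default="CPU"):
    # Mimic the widget's option list: all detected devices plus AUTO,
    # preselecting `default` when it is available.
    options = list(available_devices) + ["AUTO"]
    current = default if default in options else "AUTO"
    return options, current

options, device = select_device(["CPU", "GPU"])
```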


Collaborator Author


Fixed

Collaborator

@eaidova eaidova Aug 22, 2024


Explicit tokenizer conversion was used in older notebooks for the transition from optimum to genai, because users may already have models converted by a previous notebook version that did not use openvino tokenizers and did not take care of converting them (e.g. converted before openvino tokenizers was integrated into optimum, or with a dependency mismatch between ov and tokenizers). This is a new notebook, written from the start to use only genai, which requires the tokenizer, so the situation where a user has the model but not the tokenizer never happens; please remove this code.
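Concretely, an optimum-intel export already places the converted tokenizer and detokenizer next to the model, so a GenAI-only notebook can simply expect all three artifacts to exist together. A small sketch of that check (file names as produced by current optimum-intel exports, hedged):

```python
from pathlib import Path

# Artifacts openvino_genai expects in one exported folder (assumed names).
GENAI_FILES = (
    "openvino_model.xml",
    "openvino_tokenizer.xml",
    "openvino_detokenizer.xml",
)

def has_genai_artifacts(model_dir: str) -> bool:
    # True when the folder contains the model plus the converted
    # tokenizer/detokenizer that the GenAI pipeline needs.
    d = Path(model_dir)
    return all((d / name).exists() for name in GENAI_FILES)

ok = has_genai_artifacts("nonexistent-dir")
```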


Collaborator Author


Removed

@yatarkan yatarkan requested a review from eaidova August 22, 2024 13:34
@eaidova eaidova merged commit 9270176 into openvinotoolkit:latest Aug 22, 2024
8 of 17 checks passed
Labels
new notebook new jupyter notebook