Template generator frontend #86

Open
nopeslide wants to merge 3 commits into master from template_generator

Conversation

@nopeslide (Collaborator) commented Mar 5, 2021

I added a frontend for the onnx generator to make adding new operators as simple as possible.

usage example:

$ ./gen_template.sh 
usage: ./gen_template.sh [domain] [operator] [version]
possible domains:
ai.onnx.preview.training
ai.onnx.ml
ai.onnx
$ ./gen_template.sh ai.onnx
usage: ./gen_template.sh [domain] [operator] [version]
possible operators:
ai.onnx GreaterOrEqual
ai.onnx Celu
ai.onnx ConcatFromSequence
ai.onnx SoftmaxCrossEntropyLoss
ai.onnx SplitToSequence
ai.onnx SequenceErase
ai.onnx SequenceAt
ai.onnx SequenceInsert
ai.onnx SequenceConstruct
ai.onnx SequenceEmpty
ai.onnx Det
[...]
$ ./gen_template.sh ai.onnx Log
usage: ./gen_template.sh [domain] [operator] [version]
possible versions:
ai.onnx Log 1
ai.onnx Log 6
$ ./gen_template.sh ai.onnx Log 1
### GENERATING TEMPLATE FOR ai.onnx Log 1 ###

selecting domains
onnx operator schemas have 3 domains: ai.onnx.preview.training, ai.onnx.ml, ai.onnx
including domain 'ai.onnx'
including onnx operator schemas
included 'ai.onnx' operator schema 'Log' by pattern '^Log$'
excluding onnx operator schemas
selecting onnx operator schema versions
'ai.onnx' operator schema 'Log' has 2 version(s): 1, 6
included 'ai.onnx' operator schema 'Log' version 1
generating onnx operator headers
generating onnx operator type resolvers
generating onnx operator sets
generating onnx operator template
Writing files
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/operator__ai_onnx__log__1.h
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/resolve_operator__ai_onnx__log__1.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/prepare_operator__ai_onnx__log__1.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/free_operator__ai_onnx__log__1.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/execute_operator__ai_onnx__log__1.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/execute_operator__ai_onnx__log__1__T_tensor_double.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/execute_operator__ai_onnx__log__1__T_tensor_float.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/execute_operator__ai_onnx__log__1__T_tensor_float16.c
writing file /home/drechsler/git/cONNXr/src/operators/ai.onnx/Log/1/info_operator__ai_onnx__log__1.c
wrote 9 of 9 files (0 already existed, 0 skipped)

### GENERATING NEW OPERATOR SET ###

selecting domains
including onnx operator schemas
excluding onnx operator schemas
selecting onnx operator schema versions
generating onnx operator headers
generating onnx operator type resolvers
generating onnx operator sets
generating onnx operator template
Writing files
overwriting file /home/drechsler/git/cONNXr/src/operators/operator_set.c
overwriting file /home/drechsler/git/cONNXr/src/operators/ai.onnx/opdomain_operator__ai_onnx.c
wrote 2 of 274 files (228 already existed, 44 skipped)

- added a few new list/skip/force options to the generator (see the example invocations below)
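
For illustration, here is roughly how those options look when calling the generator directly. The flag names come from the generator's usage string quoted further down in this thread, but the exact entry point, the <path> argument, and the pattern semantics are assumptions:

$ onnx_generator --list-domains -- <path>
$ onnx_generator --list-operators --domains ai.onnx -- <path>
$ onnx_generator --skip-pattern '<regex>' -- <path>
$ onnx_generator --force-pattern '<regex>' -- <path>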
@nopeslide force-pushed the template_generator branch from 93ef504 to 5a5d362 on March 5, 2021 12:41
@nopeslide (Collaborator, Author) commented Mar 5, 2021

@Coderitter-GmbH
I hope you can use this to port your operator: move your operator files somewhere else, generate an operator template with this frontend, and fill in the missing pieces.

@alrevuelta (Owner)

Love the idea. I'm getting an error with:

$ ./gen_template.sh ai.onnx Log 1

find: -printf: unknown primary or operator
usage: onnx_generator [-h] [-v] [-f] [--list-domains] [--list-operators] [--list-versions] [--skip-pattern SKIP_PATTERN [SKIP_PATTERN ...]] [--force-pattern FORCE_PATTERN [FORCE_PATTERN ...]]
                      [--force-header] [--force-resolve] [--force-sets] [--force-template] [--force-info] [-n] [--no-header] [--header <path>] [--no-resolve] [--resolve <path>] [--no-sets]
                      [--sets <path>] [--no-template] [--template <path>] [--no-info] [--info <path>] [--domains <domain> [<domain> ...]] [--version <version>] [-i <regex> [<regex> ...]]
                      [-e <regex> [<regex> ...]]
                      [--] <path>

Aside from that, looks good.

@Coderitter-GmbH (Contributor)

It works for me so far. The only thing I have to complain about is that it is a bit slow.

@nopeslide (Collaborator, Author) commented Mar 8, 2021

> Love the idea. I'm getting an error with:
>
> $ ./gen_template.sh ai.onnx Log 1
>
> find: -printf: unknown primary or operator
> usage: onnx_generator [-h] [-v] [-f] [--list-domains] [--list-operators] [--list-versions] [...]
>
> Aside from that, looks good.

meh. I assume this is a difference in the find implementation between GNU/Linux and macOS.
(btw I would not need to do this if I generated the whole operator set once ;D I have the feeling this will become a running gag)
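
For context: GNU find supports -printf, while the BSD find shipped with macOS does not, so a -printf call needs a portable replacement. A minimal sketch of the kind of change involved, not necessarily the actual fix in gen_template.sh:

# GNU-only: print just the file names under src/operators
find src/operators -name '*.c' -printf '%f\n'
# portable alternative that BSD/macOS find also understands
find src/operators -name '*.c' -exec basename {} \;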

@nopeslide (Collaborator, Author)

> It works for me so far. The only thing I have to complain about is that it is a bit slow.

It takes about 3 seconds on my laptop. How long does it take on your side?
./gen_template.sh ai.onnx Log 1 3.03s user 0.19s system 90% cpu 3.555 total
Performance should not be an issue, since it only needs to be run when a new operator template is required.

@Coderitter-GmbH (Contributor) commented Mar 8, 2021

time ./gen_template.sh
real 0m3,865s
user 0m4,108s
sys 0m1,675s

It is true that 4 seconds is not a huge deal. I just didn't expect it and thought that the script might be hanging. I was too impatient.

And I found another little problem: Linux (or at least Ubuntu) uses python3 instead of python. Sure, I can create a symlink, but that's not the default way as far as I know.

@nopeslide (Collaborator, Author) commented Mar 8, 2021

> time ./gen_template.sh
> real 0m3,865s
> user 0m4,108s
> sys 0m1,675s
>
> It is true that 4 seconds is not a huge deal. I just didn't expect it and thought that the script might be hanging. I was too impatient.

It runs the generator multiple times; each run loads the whole onnx operator set description, runs a ton of regexes, etc. It's more a beast than a beauty :D
Just so you get a feeling for how much is happening under the hood:

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          10            280            338           1593
Markdown                         1              8              0             58
-------------------------------------------------------------------------------
SUM:                            11            288            338           1651
-------------------------------------------------------------------------------

@nopeslide (Collaborator, Author)

> And I found another little problem: Linux (or at least Ubuntu) uses python3 instead of python. Sure, I can create a symlink, but that's not the default way as far as I know.

That was a typo, fixed it.
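
For context, a common portable pattern for this kind of wrapper script is to pick whichever interpreter is available; this is only a hypothetical sketch of the general fix, not the actual change in gen_template.sh:

# prefer python3, fall back to python (hypothetical snippet)
PYTHON=$(command -v python3 || command -v python)
"$PYTHON" path/to/onnx_generator.py "$@"   # placeholder path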

@nopeslide (Collaborator, Author)

@alrevuelta fixed it, can you try again?

@Coderitter-GmbH (Contributor) commented Mar 8, 2021

For ConvTranspose it generates 4 execute .c files:

execute_operator__ai_onnx__convtranspose__11__T_tensor_double.c
execute_operator__ai_onnx__convtranspose__11__T_tensor_float.c
execute_operator__ai_onnx__convtranspose__11__T_tensor_float16.c 
execute_operator__ai_onnx__convtranspose__11.c

What is the purpose of execute_operator__ai_onnx__convtranspose__11.c, or is that a bug?

[nopeslide: sorry I accidentally edited your post]

@nopeslide (Collaborator, Author)

This executer is not referenced by the code generator atm.
I was thinking about using it in the generated resolver, as a replacement for the currently used global stub, when no typed executer can be found.
The generated files should only be a hint on how to implement the operator; as long as you define the prepare_* symbol, you can implement the operator however you wish.
The prepare_* function should allocate everything and set the executer that will be called.

@alrevuelta what do you think about using the "typeless" function as the default function if no type-specific one can be found?

@alrevuelta (Owner)

> (btw I would not need to do this if I generated the whole operator set once ;D I have the feeling this will become a running gag)

Yep, I agree. I suggested waiting a bit until we are sure that the generated files are what we need and that we are not missing anything. But if you are sure about it, feel free to generate all of them. It would definitely help newcomers, and we would only need to call this script when onnx releases a new opset.
One of the reasons I wanted to wait is this: operators like OneHot generate around 2000 files because there are many input/output/type permutations. I wouldn't like this generated code, as it doesn't really make sense.

./gen_template.sh ai.onnx OneHot 11

> @alrevuelta fixed it, can you try again?

All good, working fine now on Mac, thanks :D

> @alrevuelta what do you think about using the "typeless" function as the default function if no type-specific one can be found?

By "typeless" function you mean execute_operator__ai_onnx__convtranspose__11, right? I don't have an opinion on that, but is it needed? If no specific type is found, how is that function useful?

@nopeslide (Collaborator, Author)

You call "typeless" function to execute_operator__ai_onnx__convtranspose__11 right?

yes exactly.

I don't have an opinion on that, but is it needed? If no specific type is found, how is that function useful?

fair point, can't come up with a good example, but will try. have the feeling it may be useful in some cases.
