
Find Alternative to Template Repo #7

Closed
nicolas-kuechler opened this issue Aug 11, 2021 · 29 comments · Fixed by #14

@nicolas-kuechler (Owner) commented Aug 11, 2021

  • Should make it possible to pull changes added to the template repo into the main repo.
  • But the "template" repo should be protected from accidental commits.
@nicolas-kuechler (Owner Author)

Also consider the role of repotemplate.py and what the process of using the suite should look like.

@nicolas-kuechler (Owner Author)

For many experiments, it is likely sufficient to write a suite design file, group_vars for a new host_type, roles to init the host_types, and to set some variables in host_type all.
One option worth checking would be to define these project-specific things in the code repo of a particular project and then find a way to interface with the repository containing the suite.

An example would be to integrate the suite repo as a submodule in the code repo.
(I think we would need to be able to look for group_vars and roles outside of the suite repo when calling the playbook)

If a project requires a project-specific feature that requires to adapt more than the things named above, then we could create a feature branch in the suite repo with these changes and select this feature branch in the submodule.
(We could also use a private fork of the suite repo with these branches as a submodule to differentiate on what we provide for the public and our internal features)

@Miro-H (Collaborator) commented Sep 9, 2021

I agree. I suggest we try the following procedure for anyone using the AWS Experiment Suite

  1. Fork the repo
  2. Create a new repo, add the fork as submodule
  3. Use a group_vars folder in the new repo; the forked repo is configured to load variables from that "external" group_vars manually. This doesn't work perfectly, since you cannot change the group_vars directory for Ansible without changing the inventory location. But we already load those variables manually for every host_type, so we can do it for all as well.
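As a sketch, loading an external group_vars directory manually could look like the following task; the variable names `external_group_vars` and `host_type` are assumptions for illustration, not the suite's actual names:

```yaml
# Hypothetical task: load per-host_type variables from a group_vars
# directory that lives outside the suite repo. Variable names assumed.
- name: Load external host_type variables
  ansible.builtin.include_vars:
    dir: "{{ external_group_vars }}/{{ host_type }}"
```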

How to find the external group_vars folder

Regarding how we find the external group_vars folder, I see the following options:

  1. We hardcode this layout together with the fork, so it's always at ../group_vars.
  2. We use an environment variable with the path. The user would have to add this once to .bashrc (or similar) during the setup (if there's more to do, we could write a small install script)

Workflows

We'd need to describe the workflows, but local changes and getting updates from upstream would both work.

Outline:

  • Local change: like working with submodules: go to the submodule, check out a branch, make modifications, commit to the submodule (aka the fork of our repo).
  • Getting upstream changes: go to the fork, pull updates into the fork, go to the submodule, update the submodule to the newest commit.
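A minimal sketch of both workflows, simulated with local bare repositories standing in for the GitHub remotes; all repo, branch, and directory names here are placeholders, not the actual suite layout:

```shell
#!/bin/sh
# Simulate the submodule workflows: "upstream.git" stands in for the
# official suite repo and "fork.git" for the lab fork that a project
# embeds as a submodule. All names are placeholders.
set -eu
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
sandbox=$(mktemp -d) && cd "$sandbox"

# Stand-ins for the remote repos, seeded with one commit
git init -q --bare -b main upstream.git
git init -q --bare -b main fork.git
git clone -q upstream.git seed
(cd seed && git commit -q --allow-empty -m "suite v1" &&
 git push -q origin HEAD:main &&
 git push -q "$sandbox/fork.git" HEAD:main)

# Project repo embedding the fork as a submodule
git init -q -b main project && cd project
git commit -q --allow-empty -m "project skeleton"
# protocol.file.allow is only needed because the "remote" is a local path
git -c protocol.file.allow=always submodule add "$sandbox/fork.git" suite
git commit -q -m "add suite fork as submodule"

# Local change: go to the submodule, branch, commit, push to the fork
(cd suite && git checkout -q -b my-feature &&
 git commit -q --allow-empty -m "project-specific tweak" &&
 git push -q origin my-feature)

# Upstream update: a new commit lands upstream ...
(cd ../seed && git commit -q --allow-empty -m "suite v2" &&
 git push -q origin HEAD:main)
# ... the fork pulls it, and the project bumps its submodule pointer
(cd suite && git checkout -q main &&
 git fetch -q "$sandbox/upstream.git" main && git merge -q FETCH_HEAD &&
 git push -q origin main)
git add suite && git commit -q -m "bump suite submodule"
git -C suite log --oneline
```

The feature branch lives only in the fork, while the project repo records which submodule commit it was benchmarked against.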

@nicolas-kuechler (Owner Author)

I agree. I suggest we try the following procedure for anyone using the AWS Experiment Suite

  1. Fork the repo

I'm not sure I understand you correctly.
Concretely, for the PPS Lab we would maintain a single fork, right?
(because you are limited to one fork of a repo per user/organization, so we could not have a fork for every project)
And then in this forked repo, there are feature branches that could contain project-specific changes?
When we open source the project repo (containing the fork as submodule), is then also the whole "private" PPS-Lab specific fork public or only the corresponding branch?

Basically, I would move the aws-simple-ansible repo to my personal GitHub account and make it public, and then we fork it for the PPSLab organization? (In theory, you could have a separate fork in the Marble environment, but as I understood it, the plan is that future FHE projects also reside inside the PPS-Lab organization.)

since you cannot change the group_vars directory for ansible without changing the inventory location

Loading everything manually certainly works. However, I wanted to point out that in the new version, the inventory plugin file is derived from a template (to set the filter for suite tags and prj_id already there). As a result, it would also be possible to place this inventory file in another location (through the template).

@Miro-H (Collaborator) commented Sep 9, 2021

Sorry, I was a bit unclear.

Concretely, for the PPS Lab we would maintain a single fork, right?

Yes

And then in this forked repo, there are feature branches that could contain project-specific changes?

Yes, that would make sense.

When we open source the project repo (containing the fork as submodule), is then also the whole "private" PPS-Lab specific fork public or only the corresponding branch?

That is a good question. I doubt that you can have a private submodule in a public repo, but I haven't checked. I would have assumed that when we publish some project repo, we do so without the internal testing and benchmarking tools. Those are configured in our GitHub Actions to use our AWS credentials and so on anyway.
But if someone wanted to publish them for better reproducibility of results, I think they could push that branch to a public fork of aws-simple-ansible and then link to that one in the published repo.

Basically, I would move the aws-simple-ansible repo to my personal GitHub account and make it public, and then we fork it for the PPSLab organization?

Exactly! I would also fork it just once and try to use the same repo/infrastructure for the whole PPS.

However, I wanted to point out that in the new version, the inventory plugin file is derived from a template.

I still have to look at the new version. When I do, I'll check if that would be a cleaner solution. When I tried before, I also had to pull out ansible.cfg. I fear other default directories (that we currently don't use) might also be affected by moving the inventory file. AFAIK, you can only set the path for roles explicitly in the Ansible config; the others are relative to the inventory file.

@nicolas-kuechler (Owner Author)

When we open source the project repo (containing the fork as submodule), is then also the whole "private" PPS-Lab specific fork public or only the corresponding branch?

That is a good question. I doubt that you can have a private submodule in a public repo, but I haven't checked. I would have assumed that when we publish some project repo, we do so without the internal testing and benchmarking tools. Those are configured in our GitHub Actions and so on anyway.
But if someone wanted to publish them for better reproducibility of results, I think they could push that branch to a public fork of aws-simple-ansible and then link to that one in the published repo.

The specific requirement I have in mind is the following:
Often, there is now an artifact evaluation process in which reviewers try to reproduce the results of a paper.
For Zeph (our last paper), we gave reviewers credentials for our AWS account and access to a public repository that contained a dockerized version of the ansible-playbook to run the benchmarks on AWS: zeph artifact repo.
Basically, we want to open source everything that allows somebody to reproduce the paper results on their AWS account.

I guess if it is a concern that the pps-fork of the lab goes public (not sure), we could also unlink the submodule and "copy" the code required into the project repository. Creating another "public" fork of the aws-simple-ansible repo is not so nice because of the restriction of one fork per organization.

@nicolas-kuechler (Owner Author)

One problem that I found is that you cannot have a private fork of a public repository.

There are some workarounds, but I'm not sure whether they are practical: example
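For reference, the workaround usually suggested for this (a bare mirror pushed into a fresh private repo) looks roughly like the sketch below, simulated with local bare repos in place of the GitHub remotes; all repo names are placeholders:

```shell
#!/bin/sh
# Simulate the "private fork" workaround: mirror-clone the public repo,
# push everything into a fresh private repo, then keep the public repo
# as a read-only "upstream" remote. All repo names are placeholders.
set -eu
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
sandbox=$(mktemp -d) && cd "$sandbox"

# Stand-in for the public repo (e.g. aws-simple-ansible) with one commit
git init -q --bare -b main public.git
git clone -q public.git seed
(cd seed && git commit -q --allow-empty -m "public v1" &&
 git push -q origin HEAD:main)

# 1. Mirror the public repo into a new, empty private repo
#    (on GitHub, the empty private repo is created via the web UI)
git init -q --bare -b main private.git
git clone -q --bare public.git mirror-tmp.git
git -C mirror-tmp.git push -q --mirror "$sandbox/private.git"
rm -rf mirror-tmp.git

# 2. Clone the private repo for development and add the public repo
#    as an extra remote to fetch future updates from
git clone -q private.git work
cd work
git remote add upstream "$sandbox/public.git"
git fetch -q upstream
git merge -q upstream/main
git log --oneline
```

Since the private repo is not a GitHub fork, this also sidesteps the one-fork-per-organization limit.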

@nicolas-kuechler (Owner Author)

An alternative is that we only have a single public repository in the pps lab: aws-simple-ansible and we use feature branches on this public repository.
In specific projects, we include this project as a submodule.

Or we have one remote that is public and one remote that is private for development.
For projects that are not open yet, we use feature branches in the private repo but then when we publish the code we also push the feature branch to the public remote and link this in the project repo.

@Miro-H (Collaborator) commented Sep 10, 2021

we could also unlink the submodule and "copy" the code required into the project repository. Creating another "public" fork of the aws-simple-ansible repo is not so nice because of the restriction of one fork per organization.

The issue that I see with that is that it will be very tedious to get updates from aws-simple-ansible into that repo, where we "inlined" the submodule. However, if we use it only for the artifact-evaluation use case, this could be a viable solution, since it is more supposed to be a snapshot of the system that the paper was written about. So it wouldn't (or at least not often) need updates from aws-simple-ansible.

@Miro-H (Collaborator) commented Sep 10, 2021

One problem that I found is that you cannot have a private fork of a public repository.

Oh yes, I forgot about that. I used something similar to what you posted in a lab. It was OK to work with. We just need to carefully document the processes for updating and pushing changes. And push-protect aws-simple-ansible so that people, especially those of us with rights to commit to both repos, don't accidentally push to the wrong one.

But with that solution, the one-fork limitation is also bypassed. So we could do this for a private and a public repo in PPS and then use the branch-migration workflow to publish the evaluation code of a repo.

@Miro-H (Collaborator) commented Sep 10, 2021

An alternative is that we only have a single public repository in the pps lab: aws-simple-ansible and we use feature branches on this public repository.

Ah, you already suggested something similar to what I said above. I would still keep a public aws-simple-ansible that's independent of PPS, though. This way, people can use the same workflow as we do to run experiments with the repo. Moreover, they wouldn't get all the feature branches that they don't need.

@Miro-H (Collaborator) commented Sep 10, 2021

In summary, the best solution I currently see is the following:

  • Create nicolas-kuechler/aws-simple-ansible as the official public tool repo that everyone can base their experiments on and get general updates from.
  • Create one private fork pps-lab/aws-simple-ansible-internal (using the technique you linked) for internal development.
  • Create one public fork pps-lab/aws-simple-ansible to publish code for artifact evaluation. Here, we can either do the same as for the private fork to stick to a single workflow, or do a real fork. Not sure what's better. Probably, others with public projects would want to use normal forking anyway, as it's a cleaner solution.
  • Development workflow: Create a feature branch on pps-lab/aws-simple-ansible-internal. Create a submodule in the repo to test, which checks out this feature branch for benchmarking. That way, you can work in a single repo and the ansible controller can just run the tests from this submodule.
  • Publishing workflow: push the feature branch to pps-lab/aws-simple-ansible, change the submodule in your repo to point to the feature branch in pps-lab/aws-simple-ansible, publish your repo.
  • Updating workflows: 1. Update the public fork: make updates in nicolas-kuechler/aws-simple-ansible, fetch the updates into the fork as with normal forks, merge them into all the feature branches (this might be a bit tedious), and update all submodules in the project repos. 2. Update the private fork: make updates in nicolas-kuechler/aws-simple-ansible, fetch and merge the updates from the public remote using a set of commands that we provide, merge them into all the feature branches (this might be a bit tedious), and update all submodules in the project repos.

@nicolas-kuechler What do you think? Anything missing?

Edit: One thing that maybe doesn't have a nice workflow is pushing some nice general feature from a fork to the official repo. Such things you'd probably need to migrate manually.

@nicolas-kuechler (Owner Author)

I'm wondering whether the distinction between a private and a public fork within the pps-lab is really necessary.
I have the feeling that the development workflow with three repositories is more confusing than the workflow with two.

If we just use the public fork, we potentially leak what kind of project we are working on, but I don't think that this is too problematic, in particular because it's not on the main branch but just on some feature branch.

Btw. do you have a recommendation for a good name for the project? (I'm not happy with aws-simple-ansible and want to change it)

@nicolas-kuechler (Owner Author)

Where do you see the drawbacks of the following approach:

  • we have a public repo nicolas-kuechler/aws-simple-ansible and a private fork pps-lab/aws-simple-ansible (using the proposed technique)

  • in the private fork pps-lab/aws-simple-ansible we have GitHub actions that create a release of each feature branch

  • in the project repo we have a GitHub action that fetches a release of the experiment suite

  • development happens on a feature branch of the private fork pps-lab/aws-simple-ansible

The reason I'm a bit cautious about the submodule approach is that I've heard it is not that nice to work with.

@Miro-H (Collaborator) commented Sep 10, 2021

I'm wondering whether the distinction between private and public fork within the pps-lab is really necessary.

That is something you (and Anwar probably) have to decide. I agree, working with a single branch is easier. But if we document it well, it shouldn't be more than a few commands that one has to enter from time to time to publish a repo. I think it would be doable.

Btw. do you have a recommendation for a good name for the project?

I was also already thinking about something better, the name is not ideal.

Brain dump:

  • ansible-experiment-suite has the slight issue that its acronym, AES, is already overused.
  • AutoSEF – Automated Scalable Experiment Framework
  • SAEX or simple-auto-exp for Simple Automated Experiments

I'll tell you if I have some better ideas for a new name.

@Miro-H (Collaborator) commented Sep 10, 2021

Where do you see the drawbacks of the following approach

If I understand correctly, that would mean that you have to work on two repos:

  1. You work on the project repo on whatever you develop
  2. You work on the feature branch of pps-lab/aws-simple-ansible to develop the benchmarking.

How do you develop benchmarks? Do you always have to make changes, push them to the private fork, and then trigger the GitHub action in your project repo to see if they work?

With submodules, I think you should still be able to run them locally (in case you have the necessary credentials), since everything is in one place. Otherwise, you can make your changes in the submodule's subfolder and push them to the feature branch from there, without needing two different repos.

Also, the feature branch and the project repo would be nicely linked with the submodule. Otherwise, this connection is more hidden in the GitHub actions?

I don't see a major disadvantage, but I also don't really see an advantage over submodules. It tries to solve a very similar problem but imo is a bit less clean and more hacky.

@nicolas-kuechler (Owner Author)

Apart from group_vars, I guess the roles to init a host type must also be outside of the repo,
e.g., setup-common, setup-client, setup-server.

  • How do you envision the workflow with repotemplate.py (i.e., how to init experiment designs)?

  • For the public repo with the suite, how do we ensure that the demos can still run? (Because of the folder structure, we would expect that group_vars, design files, host type init roles, etc. are in the parent of the repo if we rely on a hardcoded relative path.)
    I guess we would have to make the path for them somehow configurable. Then we could have a folder in the root of the repo that contains these files for an example, together with the demo scripts client.py and server.py.

@nicolas-kuechler (Owner Author)

Regarding the naming, a lot of the terminology is from experimental design (DOE or DOX) wiki link, hence we could also name it accordingly.

  • DOE-Suite (DOES)
  • Auto-DOE-Suite (ADOS)
  • DOX-Suite (DOXS)
  • Auto-DOX (ADOX)
  • Auto-DOX-Suite (ADOXS)

@Miro-H (Collaborator) commented Sep 13, 2021

Apart from group_vars, I guess also the roles to init a host type must be outside of the repo.
e.g., setup-common, setup-client, setup-server

Yes. One can configure multiple paths for Ansible to search for roles (src). That shouldn't be an issue.
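As a sketch, assuming the external roles live next to the suite checkout (both paths below are placeholders), the relevant setting would be roles_path in ansible.cfg, which takes a colon-separated list:

```ini
# Hypothetical ansible.cfg fragment; both paths are assumptions.
[defaults]
# Search the project's own host_type roles first, then the suite's roles.
roles_path = ../does_config/roles:./roles
```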

How do you envision the workflow with repotemplate.py (i.e., how to init experiment designs)?

I was asking myself if we really still need those folders outside of the submodule? Because if every project works on its own feature branch, then it could also add its own files directly to the experiment suite folder structure?

But otherwise, repotemplate.py currently works in the folder it's executed in. Thus, if you execute it outside of the repo, it will configure the files there (given you have the right folders). The only problem is the templates that it uses: either those need to be outside too, or we need to store the path to them somehow. We could set an environment variable to the experiment suite repo path, or assume it is always in a subfolder with the experiment repo name.

For the public repo with the suite, how do we ensure that the demos can still run?

One thing that I tried out locally was setting the custom path in group_vars/all/main.yml in the experiment repo. Then we could instruct people to change this path. Or we use environment variables.

Some of the differences can be solved with the inventory file, since we can use a different one in the repo than what users would use with their files outside of the repo. But we have to check if that still works now that the inventory file is generated.

I'm not so sure anymore that it is much cleaner when we take those files out of the repo. What do you think?

@Miro-H (Collaborator) commented Sep 13, 2021

Regarding the naming, a lot of the terminology is from experimental design (DOE or DOX) wiki link, hence we could also name it accordingly.

Makes sense to me. Personally, I prefer DOE over DOX because the latter reminds me of a file extension (e.g., docx). DOES would be a nice acronym.

@nicolas-kuechler (Owner Author)

I was asking myself if we really still need those folders outside of the submodule? Because if every project works on its own feature branch, then it could also add its own files directly to the experiment suite folder structure?
...
I'm not so sure anymore that it is much cleaner when we take those files out of the repo. What do you think?

I still think personally that it is cleaner to have the additional files outside of the repo.

Project A
- experiments (or maybe without this grouping folder)
   - DOES-PPS (the pps fork of the DOES repo)
   - DOES Config (experiment designs, state, host type roles, etc.)
- code from project A etc.

I don't think we should have a feature branch per project; instead, we should only have such a feature branch if we require specific functionality within DOES.
For example, it would make sense to have a feature branch in which we replace AWS and the task spooler with running experiments on Leonhard.
However, this feature branch could then be used by any project that wants to run experiments on Leonhard.

If you just want to run a regular experiment, then using the main branch of DOES-PPS should be sufficient.

@nicolas-kuechler (Owner Author) commented Sep 13, 2021

Makes sense to me. Personally, I prefer DOE over DOX because the latter reminds me of a file extension (e.g., docx). DOES would be a nice acronym.

Okay, then let's do something with:
nicolas-kuechler/doe-suite and then DOE(S) - Design of Experiments Suite

The fork could then be named: pps-lab/doe-suite-pps

@Miro-H (Collaborator) commented Sep 13, 2021

I don't think we should have a feature branch per project and instead, we should only have such a feature branch if we require a specific functionality within DOES.

Ah, I see. Yes, that does sound cleaner.

With this workflow, we could give DOES almost the same folder structure:

- config (example experiment designs, state, host type roles, etc.)
- src (the actual ansible project)

And then one variable in src/group_vars/all that defines the path. In DOES it defaults to ../config and when you fork the repo we instruct users to change it to ../../does_config. Or we actually check for those two paths by default and any other would need to be configured.

@nicolas-kuechler (Owner Author) commented Sep 13, 2021

I like the folder structure, but not sure about this point:

And then one variable in src/group_vars/all that defines the path. In DOES it defaults to ../config and when you fork the repo we instruct users to change it to ../../does_config. Or we actually check for those two paths by default and any other would need to be configured.

Ideally, I think it would be good if by default you didn't have to change anything in src (i.e., not adjust a path),
because to publish this you would have to commit the change, and then we end up in the situation where you need a separate branch per project.
Edit: I see that you proposed to update this in the fork and not for every project. Still, I think I would ideally like to avoid this update.

Checking both feels a bit brittle and not really clear to the user, because you would try to access ../../does_config and base the decision on whether this folder is present or not.

I have three options in mind and I'm not sure if they work and what is the best choice:

  • V1: we pass the config directory as a required cmd argument.
    pipenv run ansible-playbook experiment.yml -e "suite=example id=new config=./../doe_config"

  • V2: we expect that there is a doe_config folder in the current working directory when we call the ansible-playbook from another folder: ansible-playbook src/experiment.yml ..., or ansible-playbook doe-suite/src/experiment.yml ...

  • V3: we pass a cmd argument flag demo or something like this which means that we use ../does_config and otherwise we use ../../does_config

I think I like V1 the best because it is explicit even though a bit more verbose.

@Miro-H (Collaborator) commented Sep 13, 2021

Edit: I see that you proposed to update this in the fork and not for every project. Still, I think ideally I would like to avoid this update.

Yes, the idea was to use the same structure for all your projects. But I reckon that changing the fork is not the nicest setup.

To your proposed versions:

  • V1: I'm not sure people will appreciate the long command to start the tests. I propose V4 below, which might improve this.
  • V2: I'd need to check whether we can get the CWD in Ansible, but I think this should be possible. This seems a viable option to me.
  • V3: The issues that you raised before still apply here. It's a bit of a rigid structure.

Proposal for V4:

  • V4: Use an environment variable for the path. Then, users can either use it in the command:
    DOE_CONFIG=./../doe_config pipenv run ansible-playbook experiment.yml -e "suite=example id=new"
    
    and others can add DOE_CONFIG=./../doe_config to a .env file in the folder (read by pipenv) or set the variable globally, and then use the short command.
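A sketch of how V4 could be consumed on the Ansible side; the variable name DOE_CONFIG is from the proposal above, while the fallback path and variable name below are assumptions:

```yaml
# Hypothetical group_vars/all fragment: take the config directory from
# the DOE_CONFIG environment variable, falling back to the in-repo
# default when the variable is unset or empty.
doe_config_dir: "{{ lookup('env', 'DOE_CONFIG') | default('../doe_config', true) }}"
```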

I favor V2 and V4 at the moment.

@nicolas-kuechler (Owner Author)

I think V4 is a good idea.

I was thinking a bit more about how to place/structure the demo project within the does repo.
When we have does_config directly on the top level together with src, then I think it's a bit unclear where to put the code of the demo project. So maybe we should "fully embed" a demo project in the does repo:

  • does
    • src
      • experiment.yml
      • ... # roles, etc.
    • demo-project # structure of a demo project
      • does_config
        • designs
          • example.yml
        • table
        • group_vars # host type specific group vars
        • roles # host type specific init roles
      • does_results
        • example_1631302286
          • state # move the state closer to the results (and don't have it in doe_config)
          • experiment1
          • ...
      • does_submodule_placeholder # in an actual project, here would be the submodule for does
      • client.py # code of the demo project
      • server.py # code of the demo project
      • simple.py # code of the demo project

What do you think?

@Miro-H (Collaborator) commented Sep 22, 2021

What do you think?

Conceptually, this sounds good to me. Do I understand correctly that does_submodule_placeholder in a real project would point to does, i.e., most of the ansible code would be in does_submodule_placeholder/src?

I would have to try out whether this works with Ansible. I guess we should use src as the inventory directory. For the host types, we'd load group_vars manually but extend the search path for roles.

@nicolas-kuechler (Owner Author) commented Sep 22, 2021

Conceptually, this sounds good to me. Do I understand correctly that does_submodule_placeholder in a real project would point to does, i.e., most of the ansible code would be in does_submodule_placeholder/src?

yes!

An environment variable points to demo-project; we expect that there is a does_config folder, and we create a does_results folder.
In the does_config folder, ansible should find the remaining things that it requires.
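A sketch of how the playbook could enforce this layout; the environment variable name and the task structure are assumptions, not the suite's actual implementation:

```yaml
# Hypothetical pre-tasks: resolve the project directory from an
# environment variable, require a does_config folder, and create
# does_results if it does not exist yet.
- name: Resolve the project directory from the environment
  ansible.builtin.set_fact:
    prj_dir: "{{ lookup('env', 'DOES_PROJECT_DIR') }}"

- name: Check that the external config directory exists
  ansible.builtin.assert:
    that:
      - prj_dir | length > 0
      - (prj_dir + '/does_config') is directory
    fail_msg: "DOES_PROJECT_DIR must point to a folder containing does_config"

- name: Create the results directory
  ansible.builtin.file:
    path: "{{ prj_dir }}/does_results"
    state: directory
```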

Miro-H added a commit that referenced this issue Oct 5, 2021
@Miro-H (Collaborator) commented Oct 5, 2021

@nicolas-kuechler FYI: I am finally trying this approach on the mh/ansible-controller branch. There are some bugs left, but I think this should work.

Miro-H added a commit that referenced this issue Oct 7, 2021
nicolas-kuechler pushed a commit that referenced this issue Oct 8, 2021
* [WIP] ansible controller: untested implementation of setup and config; need to solve repo forking first before testing

* Fixing setup and SSH keygen. Still WIP

* replace deprecated ec2 role with ec2_instance

* WIP bugfixing aws-controller

* WIP bugfixing aws-controller, updating git/aws key generation

* Fix regression bugs from rebase (mainly the tagging of instances)

* Fix awscli, boto3, and ansible installation

* Add untested roles to install seal and abc

* Restructure repo as discussed in #7

* Removed deprecated examples

* Fix demo example

* Update repotemplate for project dir environment variable

* Fix result storage

* Remove pps specific roles

* Fix minor bugs from restructuring

* Fix bug from rebase

Co-authored-by: Miro Haller <[email protected]>
@nicolas-kuechler nicolas-kuechler linked a pull request Oct 8, 2021 that will close this issue