is it weird that `rustic-doc-setup` (specifically) curls to bash?
so i was adding rustic to my emacs config and -- not to brag, this is real context (and a curse, not a blessing) -- "my emacs" is actually a declarative tool-chain that produces an AppImage with no* runtime deps. i build everything from coreutils to inkscape into it, packages are AOT native-compiled, autoloads concatenated, etc., and (ideally) i can copy it onto an air-gapped x86 distro and run it as-is.
so, with the runtime being a bit wild, i need rustic-doc-setup to know where the built-in rust docs are. i see it's plopping a script into .local/bin and figure it's self-extracting, a little odd but at least it's not downloading a vscode ext... but nah, it's actually downloading that script from its own repo? that means i already have it when i "build" the package from master! it's probably already in my AppImage? (i checked, it's actually not, but it'd be trivial to include, embedded in an .el [so that existing tooling like file globs picks it up] or otherwise -- sketch below)
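for illustration, a minimal sketch of the "embedded in an .el" idea -- every name here (`rustic-doc--convert-script`, `rustic-doc--install-script`) is hypothetical, not rustic's actual API:

```elisp
;; hypothetical sketch: ship the script body inside the package itself
(defconst rustic-doc--convert-script
  "#!/usr/bin/env bash
# ... the committed body of rustic-doc-convert.sh ...
"
  "Contents of the conversion script, embedded at package-build time.")

(defun rustic-doc--install-script (dest)
  "Write the embedded script to DEST and mark it executable."
  (make-directory (file-name-directory dest) t)
  (with-temp-file dest
    (insert rustic-doc--convert-script))
  (set-file-modes dest #o755))

;; e.g. (rustic-doc--install-script
;;        (expand-file-name "~/.local/bin/rustic-doc-convert.sh"))
```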
anyway, i put off fixing that (can do it if/when i decide i like the docs and add a build step to format them at build time); for now i just set up the toolchain to patch the path in rustic-doc-convert.sh after it's downloaded (which happens every time, it's always overwritten)
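for the curious, that workaround looks roughly like this -- a sketch assuming the download has finished by the time `:after` advice runs (if it's async you'd hook elsewhere), with both paths as placeholders:

```elisp
;; hypothetical workaround: rewrite the hardcoded doc path in the
;; freshly downloaded script every time `rustic-doc-setup' runs
(defun my/patch-rustic-doc-convert (&rest _)
  "Point the conversion script at the AppImage's bundled rust docs."
  (let ((script (expand-file-name "~/.local/bin/rustic-doc-convert.sh")))
    (when (file-exists-p script)
      (with-temp-file script            ; read-modify-write the script
        (insert-file-contents script)
        (goto-char (point-min))
        (while (search-forward "/placeholder/original/doc/path" nil t)
          (replace-match "/placeholder/appimage/doc/path"))))))

(advice-add 'rustic-doc-setup :after #'my/patch-rustic-doc-convert)
```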
i've seen projects do stuff like this where they have a "code" repo and a "data" repo, and use a resource system to clone the data repos; that kinda makes sense on separation-of-concerns grounds, but this is all package code from the same repo, and i don't quite get the motive?
to keep up to date with rust docs? (haven't checked LOC churn)
to meet confused package-repository inclusion requirements?
artifact of initial merge into the repo?
???
shouldn't this code just be kept up to date through inclusion in regular package updates, as standalone or self-extracting data, or at least with a sidecar SHA256 in the repo that can be used for integrity verification or freshness checks? (without a signature i think the two use-cases are mutually exclusive, but both are good ideas)
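the sidecar check could be as small as this -- a sketch where `rustic-doc--script-sha256` is a hypothetical constant regenerated whenever the script changes (`secure-hash` itself is built into emacs):

```elisp
;; hypothetical integrity check against a checksum committed in the repo
(defconst rustic-doc--script-sha256
  "<sha256 hex digest, regenerated whenever the script changes>"
  "Expected SHA-256 of rustic-doc-convert.sh, committed alongside it.")

(defun rustic-doc--script-verified-p (file)
  "Return non-nil if FILE's SHA-256 matches the committed checksum."
  (and (file-exists-p file)
       (string= rustic-doc--script-sha256
                (with-temp-buffer
                  (insert-file-contents-literally file)
                  (secure-hash 'sha256 (current-buffer))))))
```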
if you're reading this i'd love to know your take on whether this strikes you as straightforward best practice, and if you're one of the devs i'd love to know more about what motivated this architecture and what pros/cons were at play
not judging, love the packages, just trying to understand c:
* curls to bash metaphorically -- i know curl is not actually involved