HIRING FOR FAIR DATA PROJECT: Job requirements #16
Labels
- back-end / API design: Design of the back-end / API for developers and power users
- data processing / FAIR data: Processing raw data into FAIR datasets
- front-end design: Design of the front-end tool for typical end users
- sustainability: Long-term sustainability of the project
The tasks we need completed fall into three broad categories: back-end, front-end, and metadata/data processing.
Back-end
The archivist package (SUSTAINABILITY: Automated data collection for the Canadian COVID-19 Data Archive #2) probably falls under back-end tasks as well.

Front-end
Metadata/data processing
One issue with the classification above is that the sub-tasks, and the skills required to complete them, don't divide cleanly into these three categories. For example, setting up Dataverse and setting up Geodisy require a similar skillset, and the entire stack must be integrated. Furthermore, someone with deep knowledge of the data and subject area may be the best person to develop a metadata taxonomy and add metadata, but not the best person to write the code needed to integrate each dataset into a data processing pipeline. As such, it may be best not to think of the three sections as three separate jobs of roughly equal size, and instead to develop job descriptions based on the general skillsets required.
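To illustrate the division of labour described above, here is a minimal sketch (all names and structures are hypothetical, not part of any existing pipeline in this project): a subject-matter expert authors the metadata record for a dataset, while a developer writes the parsing code that plugs that dataset into the pipeline. The two contributions meet at a shared interface but can be produced by different people.

```python
# Hypothetical sketch: metadata authored by a subject-matter expert,
# parsing code written by a developer, joined at a shared interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Dataset:
    name: str
    metadata: dict                 # filled in by the subject-matter expert
    parse: Callable[[str], list]   # written by the developer

def run_pipeline(datasets, raw_inputs):
    """Parse each raw input with its dataset's parser and attach its metadata."""
    results = []
    for ds in datasets:
        records = ds.parse(raw_inputs[ds.name])
        results.append({"metadata": ds.metadata, "records": records})
    return results

# Example: a CSV-like dataset whose metadata was authored separately
# from the code that parses it.
cases = Dataset(
    name="cases",
    metadata={"title": "Daily case counts", "license": "CC-BY-4.0"},
    parse=lambda raw: [line.split(",") for line in raw.strip().splitlines()],
)

out = run_pipeline([cases], {"cases": "date,count\n2021-01-01,5"})
```

The point of the sketch is only that the `metadata` dict and the `parse` function are independent deliverables, which is why a single job description covering both may not fit anyone well.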
Thoughts, @colliand?