<p>This page evaluates the extent to which the author-published research artefacts meet the criteria of badges related to reproducibility from various organisations and journals.</p>
<p><em>Caveat: Please note that these criteria are based on available information about each badge online, and that there are likely differences in our procedure (e.g. we allowed troubleshooting for execution and reproduction, and were not under tight time pressure to complete). We cannot guarantee that the badges below would have been awarded in practice by these journals.</em></p>
<p>Exploring methods for overlaying figures. Not timed, as this is not about the reproduction of this study, but about how we are going to do this each time when reproducing.</p>
<p>Decided that it’s not helpful to do this - I would spend more time fiddling around with getting the figures to resize and overlay correctly - and that the simplest option here is to compare by eye.</p>
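<p>For reference, a minimal sketch of the kind of overlay that was explored, using Pillow to blend an original and a reproduced figure at 50% transparency (the filenames are hypothetical placeholders, not the study’s actual outputs):</p>
<pre><code class="language-python"># Minimal sketch of the overlay approach explored above (not used in the end).
# The filenames are hypothetical placeholders for the original and reproduced figures.
from PIL import Image

original = Image.open("original_figure.png").convert("RGBA")
reproduced = Image.open("reproduced_figure.png").convert("RGBA")

# Resize the reproduced figure to match the original, then blend at 50% opacity
reproduced = reproduced.resize(original.size)
overlay = Image.blend(original, reproduced, alpha=0.5)
overlay.save("overlay_comparison.png")
</code></pre>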
<li>Required some minor changes to environment in order for scripts to run</li>
<li>However, not certain whether the badges would allow minor troubleshooting (or major troubleshooting) in order for a script to run?</li>
<li>By my criteria (which allow troubleshooting) this would be fine. The step up from this criterion is reproduction (for which I again allow troubleshooting) - so it makes sense that this level is about getting it to run, whilst the next step is getting sufficiently similar results. May just need to add a <strong>caveat</strong> that this is with troubleshooting allowed (which may not be journal policy) - in the same way as the existing caveat that this is my interpretation of the badges from available information, and I cannot guarantee whether they would or would not have been awarded.</li>
<li><strong>Chat with Tom:</strong> Fine to just caveat.</li>
</ul></li>
<li><code>hour</code>: Reproduced within approximately one hour (excluding compute time)
<ul>
<li>Took longer than an hour, but I wasn’t trying to get it done in that time</li>
<li>If I hadn’t spent time reading and documenting and fiddling with the seeds, then I anticipate I could’ve run it within an hour</li>
<li>However, <strong>I am going to follow our process and fail it</strong> (for consistency with how we are working and timing)</li>
<li><strong>Chat with Tom:</strong> Fine to just caveat.</li>
</ul></li>
<li><code>documentation_readme</code>: “Artefacts are clearly documented and accompanied by a README file with step-by-step instructions on how to reproduce results in the manuscript”
<ul>
<li>I wouldn’t say it explicitly meets this criterion</li>
<li>Although it was simple enough that I could do it anyway - it directed me to the notebook, I ran that, job done.</li>
<li><strong>Uncertain on this one</strong></li>
<li>Uncertainty is fine - just make a choice, and justify and document that choice in the logbook</li>
<li><code>2.5.5 Components - entry and exit points</code>
<ul>
<li><strong>Get a second opinion</strong> on this - have I been too harsh?</li>
<li><strong>Chat with Tom</strong> - fine, just describe and justify.</li>
</ul></li>
<li><code>5.1 Software or programming language</code>: “Where frameworks and libraries have been used provide all details including version numbers.”
<ul>
<li>This part of the criterion is <strong>not</strong> in the report, but is in the linked code.</li>
<li>Passed them anyway, as other parts of this criterion (OS, version, build DES software, Python) were provided - and also, I wouldn’t think you would put this extra level of information in the report? (A rough sketch of recording these details is given after this list.)</li>
<li><strong>Chat with Tom</strong> - take a consistent approach (only basing this on the article), and therefore mark as partially met if missing this (as the purpose is to learn what people do and do not do)</li>
</ul></li>
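<p>As a rough illustration of the level of detail this criterion asks for, the sketch below prints the Python version and the versions of a few libraries (the package names here are examples only, not the study’s actual dependencies):</p>
<pre><code class="language-python"># Rough sketch of recording framework and library versions for reporting.
# Package names are illustrative only - swap in the libraries actually used.
import platform
from importlib.metadata import version

print("Python:", platform.python_version())
for package in ["simpy", "pandas", "matplotlib"]:
    print(f"{package}: {version(package)}")
</code></pre>
<p>Equivalent information can also be captured with <code>pip freeze</code> or an environment file provided alongside the code.</p>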
<li><code>5.3 Model execution</code>: “State the event processing mechanism used e.g. three phase, event, activity, process interaction.”
<ul>
<li><strong>Feeling quite unclear on this</strong>. Did some research into it but that hasn’t really cleared it up…</li>
<li>Based answer on Tom’s, but don’t feel confident guessing at this for future ones</li>
<h3 class="anchored" data-anchor-id="comparing-best-practice-audit-results-with-monks-and-harper">Comparing best practice audit results with Monks and Harper</h3>
<p>Compared my decisions for the best practice audit against those made in Monks and Harper.</p>
<p>Their GitHub is <a href="https://github.com/TomMonks/des_sharing_lit_review">TomMonks/des_sharing_lit_review</a>, which provides the file <code>bp_audit.zip</code>. I used the provided code to clean this and saved it as <code>bp_audit_clean.csv</code>, which you can then view here:</p>
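<p>As a minimal sketch, the cleaned file can be loaded for viewing with pandas (assuming <code>bp_audit_clean.csv</code> is in the working directory):</p>
<pre><code class="language-python"># Minimal sketch: load the cleaned best practice audit results for comparison.
# Assumes bp_audit_clean.csv is in the current working directory.
import pandas as pd

bp_audit = pd.read_csv("bp_audit_clean.csv")
print(bp_audit.head())
</code></pre>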
<li>I said yes as it’s within Zenodo, but they said no</li>
<li>I have amended my answer as, on reflection, I agree that this should be part of the artefacts themselves and not just meta-data on Zenodo (since the artefacts are what you download), but I have kept this as “partially met”</li>
<li><strong>Get a second opinion on this</strong></li>