Commit cc684f3

cms-2016-pileup-dataset: add file information and fix usage links
Adds documentation, adds file information, and fixes usage links and other minor metadata such as the publishing year.
1 parent 329cdad commit cc684f3

340 files changed (+121393, -29 lines)


.gitignore

Lines changed: 5 additions & 0 deletions
@@ -74,3 +74,8 @@ cms-2016-collision-datasets/inputs/hlt-config-store
 cms-2016-collision-datasets/inputs/das-json-store
 cms-2016-collision-datasets/inputs/das-json-config-store
 cms-2016-collision-datasets/outputs/*.json
+cms-2016-pileup-dataset/inputs/config-store
+cms-2016-pileup-dataset/inputs/das-json-store
+cms-2016-pileup-dataset/inputs/mcm-store
+cms-2016-pileup-dataset/outputs/
+cms-2016-pileup-dataset/cookies.txt

README.rst

Lines changed: 1 addition & 0 deletions
@@ -50,6 +50,7 @@ Specific data ingestion and curation campaigns:
 - `cms-2015-collision-datasets-hi-ppref <cms-2015-collision-datasets-hi-ppref>`_ - helper scripts for CMS 2015 heavy ion release (proton-proton reference collision datasets)
 - `cms-2015-simulated-datasets <cms-2015-simulated-datasets>`_ -- helper scripts for the CMS 2015 open data release (simulated datasets)
 - `cms-2016-collision-datasets <cms-2016-collision-datasets>`_ -- helper scripts for the CMS 2016 open data release (collision datasets)
+- `cms-2016-pileup-dataset <cms-2016-pileup-dataset>`_ -- helper scripts for the CMS 2016 open data release (pileup dataset)
 - `cms-2016-simulated-datasets <cms-2016-simulated-datasets>`_ -- helper scripts for the CMS 2016 open data release (simulated datasets)
 - `cms-YYYY-luminosity <cms-YYYY-luminosity>`_ -- helper scripts for the CMS luminosity information records (any year)
 - `cms-YYYY-run-numbers <cms-YYYY-run-numbers>`_ -- helper scripts for enriching CMS dataset run numbers (any year)

cms-2016-pileup-dataset/README.md

Lines changed: 73 additions & 0 deletions
@@ -0,0 +1,73 @@

# cms-2016-pileup-dataset

This directory contains helper scripts used to prepare the pile-up dataset for
the CMS 2016 open data release.

- `code/` folder contains the Python code;
- `inputs/` folder contains input text files with the list of datasets and
  other input files;
- `outputs/` folder contains the generated JSON records to be included as the
  CERN Open Data portal fixtures.

Every step necessary to produce the final `*.json` files is handled by the
`code/interface.py` script. Details about it can be queried with the command:

```console
$ python3 code/interface.py --help
```

Please make sure to get the VOMS proxy file before running these scripts:

```console
$ voms-proxy-init --voms cms --rfc --valid 190:00
```

Please make sure to set the EOS instance to EOSPUBLIC before running these scripts:

```console
$ export EOS_MGM_URL=root://eospublic.cern.ch
```

Please make sure to have a valid `userkey.nodes.pem` certificate present in
`$HOME/.globus`. If it is not there, run the following on top of the regular
CMS certificate documentation:

```console
$ cd $HOME/.globus
$ ls userkey.nodes.pem
$ openssl pkcs12 -in myCert.p12 -nocerts -nodes -out userkey.nodes.pem # if not present
$ cd -
```

The first step is to create the EOS file index cache:

```console
$ python3 ./code/interface.py --create-eos-indexes inputs/CMS-2016-premix.txt
```

This requires the index data files to be placed in their final location. However, for
early testing on LXPLUS, all steps can be run without the EOS file index cache by
adding the command-line option `--ignore-eos-store` to the commands below.

We can now build sample records by doing:

```console
$ python3 ./code/interface.py --create-das-json-store --ignore-eos-store inputs/CMS-2016-pileup-dataset.txt

$ auth-get-sso-cookie -u https://cms-pdmv.cern.ch/mcm -o cookies.txt
$ python3 ./code/interface.py --create-mcm-store --ignore-eos-store inputs/CMS-2016-pileup-dataset.txt

$ python3 ./code/interface.py --get-conf-files --ignore-eos-store inputs/CMS-2016-pileup-dataset.txt

$ python3 ./code/interface.py --create-records --ignore-eos-store inputs/CMS-2016-premix.txt
```

Each step builds a subdirectory with a cache (`das-json-store`, `mcm-store` and
`config-store`). These caches are large; do not upload them to the repository,
and respect the `.gitignore`.

The output JSON files for the dataset records will be generated in the
`outputs` directory.

The three configuration files from `./inputs/config-store` are to be copied to
`/eos/opendata/cms/configuration-files/MonteCarlo2016/`. Don't forget to add
the `*.py` extension, as sketched below.
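As a rough illustration of that copy step, here is a minimal Python sketch; it assumes the EOS destination is available as a regular filesystem path (as on LXPLUS) and that the files in `./inputs/config-store` do not yet carry an extension. On a plain client, `xrdcp` or `eos cp` would be used for the transfer instead.

```python
# Minimal sketch: copy each configuration file from the local config store to
# the EOS destination, appending the missing *.py extension on the way.
import glob
import os
import shutil

SRC = "./inputs/config-store"
DEST = "/eos/opendata/cms/configuration-files/MonteCarlo2016/"

for path in glob.glob(os.path.join(SRC, "*")):
    target = os.path.join(DEST, os.path.basename(path) + ".py")
    shutil.copy(path, target)
    print(path, "->", target)
```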

cms-2016-pileup-dataset/code/dataset_records.py

Lines changed: 24 additions & 25 deletions
@@ -43,7 +43,7 @@
 recommended_cmssw = "CMSSW_10_6_30"
 collision_energy = "13TeV"
 collision_type = "pp"
-year_published = "2023"
+year_published = "2024"

 LINK_INFO = {}

@@ -191,7 +191,7 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):
         step = {}
         process = ''
         output_dataset = get_output_dataset_from_mcm(dataset, mcm_step_dir)
-        if output_dataset:
+        if output_dataset:
             step['output_dataset'] = output_dataset[0]
         release = get_cmssw_version_from_mcm(dataset, mcm_step_dir)
         if release:
@@ -213,7 +213,7 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):
         generator_names = get_generator_name(dataset, mcm_step_dir)
         if generator_names:
             step['generators'] = generator_names
-
+
         m = re.search('-(.+?)-', step_dir)
         if m:
             step_name = m.group(1)
@@ -243,8 +243,8 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):

         step['type'] = process

-        # Extend LHE steps
-        if step_name.endswith('LHEGEN'):
+        # Extend LHE steps
+        if step_name.endswith('LHEGEN'):
             step['type'] = "LHE GEN"
             for i, configuration_files in enumerate(step['configuration_files']):
                 if configuration_files['title'] == 'Generator parameters':
@@ -265,7 +265,7 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):
     else:
         if 'generators' in step:
             generators_present = True
-
+
     return info

 def populate_containerimages_cache():
@@ -283,7 +283,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
     dataset = get_dataset(dataset_full_name)
     dataset_format = get_dataset_format(dataset_full_name)
     year_created = '2016'
-    year_published = '2023' #
+    year_published = '2024' #
     run_period = ['Run2016G', 'Run2016H'] #

     additional_title = 'Simulated dataset ' + dataset + ' in ' + dataset_format + ' format for ' + year_created + ' collision data'
@@ -324,7 +324,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
     rec['distribution']['formats'] = [dataset_format.lower(), 'root']
     rec['distribution']['number_events'] = 27646400 # this is computed from the number of files (17279 * 1600 = 27646400) was: get_number_events(dataset_full_name, das_dir)
     rec['distribution']['number_files'] = 17279 # known but maybe get from eos - was: get_number_files(dataset_full_name, das_dir)
-    rec['distribution']['size'] = 0 # FIXME: check from eos - was: get_size(dataset_full_name, das_dir)
+    rec['distribution']['size'] = 55600743325296 # known via grep '"size"' inputs/eos-file-indexes/*.json | awk '{print $NF}' | tr ',' ' ' | paste -s -d+ | bc

     doi = get_doi(dataset_full_name, doi_info)
     if doi:
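The hard-coded size above can be cross-checked with a small Python sketch equivalent to the shell pipeline quoted in the comment; it assumes each index file in `inputs/eos-file-indexes/` is a JSON array of file entries carrying a `size` field, as produced by the `--create-eos-indexes` step.

```python
# Minimal sketch: sum the "size" values over all EOS file index JSON files,
# mirroring the grep/awk/bc pipeline mentioned in the comment above.
import glob
import json

total = 0
for path in glob.glob("inputs/eos-file-indexes/*.json"):
    with open(path) as fd:
        total += sum(entry.get("size", 0) for entry in json.load(fd))

print(total)  # expected to match the 55600743325296 value used in the record
```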
@@ -334,7 +334,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm

     rec_files = get_dataset_index_files(dataset_full_name, eos_dir)
     if rec_files:
-        rec['files'] = []
+        rec['files'] = []
         for index_type in ['.json', '.txt']:
             index_files = [f for f in rec_files if f[0].endswith(index_type)]
             for file_number, (file_uri, file_size, file_checksum) in enumerate(index_files):
@@ -373,7 +373,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
     rec['pileup'] = {}
     if pileup_dataset_recid:
         rec['pileup']['description'] = "<p>To make these simulated data comparable with the collision data, <a href=\"/docs/cms-guide-pileup-simulation\">pile-up events</a> are added to the simulated event in the DIGI2RAW step.</p>"
-        rec['pileup']['links'] = [
+        rec['pileup']['links'] = [
             {
                 "recid": str(pileup_dataset_recid),
                 "title": pileup_dataset_name
@@ -400,7 +400,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
     # recomended global tag and cmssw release recommended for analysis
     rec['system_details'] = {}
     rec['system_details']['global_tag'] = recommended_gt
-    rec['system_details']['release'] = recommended_cmssw
+    rec['system_details']['release'] = recommended_cmssw
     if recommended_cmssw in CONTAINERIMAGES_CACHE.keys():
         rec["system_details"]["container_images"] = CONTAINERIMAGES_CACHE[recommended_cmssw]

@@ -431,15 +431,15 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
     rec['usage']['description'] = "These simulated data are not meant to be analysed on their own. The dataset can be used to add pile-up events to newly simulated event samples using CMS experiment software, available through the CMS Open Data container or the CMS Virtual Machine. See the instructions for setting up one of the two alternative environments and getting started in"
     rec['usage']['links'] = [
         {
-            "description": "Running CMS analysis code using Docker",
-            "url": "/docs/cms-guide-docker"
-        },
+            "description": "Running CMS analysis code using Docker",
+            "url": "/docs/cms-guide-docker#images"
+        },
         {
-            "description": "How to install the CMS Virtual Machine",
-            "url": "/docs/cms-virtual-machine-2016-2018"
-        },
+            "description": "How to install the CMS Virtual Machine",
+            "url": "/docs/cms-virtual-machine-cc7"
+        },
         {
-            "description": "Getting started with CMS open data",
+            "description": "Getting started with CMS open data",
             "url": "/docs/cms-getting-started-miniaod"
         }
     ]
@@ -455,7 +455,7 @@ def create(dataset, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_d
     if os.path.exists(filepath) and os.stat(filepath).st_size != 0:
         print("==> " + dataset + "\n==> Already exist. Skipping...\n")
         return
-
+
     Record= create_record(dataset, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_dir)

     with open(filepath, 'w') as file:
@@ -478,7 +478,7 @@ def create_records(dataset_full_names, doi_file, recid_file, eos_dir, das_dir, m
         #build the record only for the PREMIX dataset
         if 'PREMIX' in dataset_full_name:
             create(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_dir, records_dir)
-
+
         #records.append(create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_dir))
     #return records

@@ -517,10 +517,10 @@ def get_step_generator_parameters(dataset, mcm_dir, recid, force_lhe=0):
         if mcdb_id > 1:
             print("Got mcdb > 1: " + str(mcdb_id))
             configuration_files['title'] = 'Generator parameters'
-            configuration_files['url'] = "/eos/opendata/cms/lhe_generators/2015-sim/mcdb/{mcdb_id}_header.txt".format(mcdb_id=mcdb_id)
-            return [configuration_files]
-        else:
-            dir='./lhe_generators/2016-sim/gridpacks/' + str(recid) + '/'
+            configuration_files['url'] = "/eos/opendata/cms/lhe_generators/2015-sim/mcdb/{mcdb_id}_header.txt".format(mcdb_id=mcdb_id)
+            return [configuration_files]
+        else:
+            dir='./lhe_generators/2016-sim/gridpacks/' + str(recid) + '/'
             files = []
             files = [f for f in os.listdir(dir) if os.path.isfile(os.path.join(dir, f))]
             confarr=[]
@@ -543,4 +543,3 @@ def get_step_generator_parameters(dataset, mcm_dir, recid, force_lhe=0):
             return [configuration_files]
         except:
             pass
-

cms-2016-pileup-dataset/code/eos_store.py

Lines changed: 2 additions & 2 deletions
@@ -90,7 +90,7 @@ def get_dataset_volume_files(dataset, volume):
     "Return file list with information about name, size, location for the given dataset and volume."
     files = []
     dataset_location = get_dataset_location(dataset)
-    output = subprocess.check_output('eos find --size --checksum ' + dataset_location + '/' + volume, shell=True)
+    output = subprocess.check_output('eos oldfind --size --checksum ' + dataset_location + '/' + volume, shell=True)
     output = str(output.decode("utf-8"))
     for line in output.split('\n'):
         if line and line != 'file-indexes':
@@ -141,7 +141,7 @@ def create_index_files(dataset, volume, eos_dir):
         copy_index_file(dataset, volume, filename, eos_dir)


-def main(datasets = [], eos_dir = './inputs/eos-file-indexes'):
+def main(datasets = [], eos_dir = './inputs/eos-file-indexes/'):
     "Do the job."

     if not os.path.exists(eos_dir):

cms-2016-pileup-dataset/code/interface.py

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@
 @click.option('--create-eos-indexes/--no-create-eos-indexes', default=False,
               show_default=True,
               help="Create EOS rich index files")
-@click.option('--eos-dir', default='./inputs/eos-file-indexes',
+@click.option('--eos-dir', default='./inputs/eos-file-indexes/',
               show_default=True,
               help='Output directory for the EOS file indexes')
 @click.option('--ignore-eos-store/--no-ignore-eos-store',
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+/Neutrino_E-10_gun/RunIISummer20ULPrePremix-UL16_106X_mcRun2_asymptotic_v13-v1/PREMIX 10.7483/OPENDATA.CMS.VWUA.G7SB
