cms-2016-pileup-dataset: add file information and fix usage links
Adds documentation, adds file information, fixes usage links and other
minor metadata information such as publishing year.
tiborsimko committed Mar 18, 2024
1 parent 329cdad commit a17cec2
Showing 8 changed files with 108 additions and 29 deletions.
5 changes: 5 additions & 0 deletions .gitignore
@@ -74,3 +74,8 @@ cms-2016-collision-datasets/inputs/hlt-config-store
cms-2016-collision-datasets/inputs/das-json-store
cms-2016-collision-datasets/inputs/das-json-config-store
cms-2016-collision-datasets/outputs/*.json
cms-2015-pileup-dataset/inputs/config-store
cms-2015-pileup-dataset/inputs/das-json-store
cms-2015-pileup-dataset/inputs/mcm-store
cms-2015-pileup-dataset/outputs/
cms-2015-pileup-dataset/cookies.txt
1 change: 1 addition & 0 deletions README.rst
@@ -50,6 +50,7 @@ Specific data ingestion and curation campaigns:
- `cms-2015-collision-datasets-hi-ppref <cms-2015-collision-datasets-hi-ppref>`_ - helper scripts for CMS 2015 heavy ion release (proton-proton reference collision datasets)
- `cms-2015-simulated-datasets <cms-2015-simulated-datasets>`_ -- helper scripts for the CMS 2015 open data release (simulated datasets)
- `cms-2016-collision-datasets <cms-2016-collision-datasets>`_ -- helper scripts for the CMS 2016 open data release (collision datasets)
- `cms-2016-pileup-dataset <cms-2016-pileup-dataset>`_ -- helper scripts for the CMS 2016 open data release (pileup dataset)
- `cms-2016-simulated-datasets <cms-2016-simulated-datasets>`_ -- helper scripts for the CMS 2016 open data release (simulated datasets)
- `cms-YYYY-luminosity <cms-YYYY-luminosity>`_ -- helper scripts for the CMS luminosity information records (any year)
- `cms-YYYY-run-numbers <cms-YYYY-run-numbers>`_ -- helper scripts for enriching CMS dataset run numbers (any year)
73 changes: 73 additions & 0 deletions cms-2016-pileup-dataset/README.md
@@ -0,0 +1,73 @@
# cms-2016-pileup-dataset

This directory contains helper scripts used to prepare the CMS 2016 open data
release of the pile-up dataset.

- `code/` folder contains the Python code;
- `inputs/` folder contains input text files with the list of datasets for each
  year, as well as other input files;
- `outputs/` folder contains generated JSON records to be included as the CERN
Open Data portal fixtures.

Every step necessary to produce the final `*.json` files is handled by the
`code/interface.py` script. Details about it can be queried with the command:

```console
$ python3 code/interface.py --help
```

Please make sure to get the VOMS proxy file before running these scripts:

```console
$ voms-proxy-init --voms cms --rfc --valid 190:00
```

Please make sure to set the EOS instance to EOSPUBLIC before running these scripts:

```console
$ export EOS_MGM_URL=root://eospublic.cern.ch
```
Please make sure to have a valid `userkey.nodes.pem` certificate present in
`$HOME/.globus`. If it is not present, run the following in addition to the
steps from the regular CMS certificate documentation:

```console
$ cd $HOME/.globus
$ ls userkey.nodes.pem
$ openssl pkcs12 -in myCert.p12 -nocerts -nodes -out userkey.nodes.pem # if not present
$ cd -
```
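As a quick sanity check that the converted key is usable, one can ask OpenSSL to parse it (a sketch; the helper name is ours and it assumes `openssl` is on the PATH):

```shell
# check_nodes_key FILE: verify FILE is a valid, passphrase-free RSA key.
# A -nodes key should parse without any passphrase prompt.
check_nodes_key() {
    openssl rsa -in "$1" -check -noout
}

# Example:
# check_nodes_key "$HOME/.globus/userkey.nodes.pem"
```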

The first step is to create the EOS file index cache:

```console
$ python3 ./code/interface.py --create-eos-indexes inputs/CMS-2016-premix.txt
```

This requires the index data files to be placed in their final location. However, for
early testing on LXPLUS, all steps can be run without the EOS file index cache
by adding the command-line option `--ignore-eos-store` to the commands below.
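As an aside, the total dataset size later recorded in the output records can be cross-checked from this cache with a short Python sketch (an assumption here: each index file is a JSON array of objects carrying a `size` field, which should be verified against the actual index format):

```python
import glob
import json

def total_index_size(eos_dir="./inputs/eos-file-indexes"):
    """Sum the 'size' fields over all EOS index JSON files in eos_dir."""
    total = 0
    for path in glob.glob(eos_dir + "/*.json"):
        with open(path) as fh:
            # Each index file is assumed to be a JSON array of file entries.
            for entry in json.load(fh):
                total += entry.get("size", 0)
    return total
```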

We can now build sample records by doing:

```console
$ python3 ./code/interface.py --create-das-json-store --ignore-eos-store inputs/CMS-2016-pileup-dataset.txt

$ auth-get-sso-cookie -u https://cms-pdmv.cern.ch/mcm -o cookies.txt
$ python3 ./code/interface.py --create-mcm-store --ignore-eos-store inputs/CMS-2016-pileup-dataset.txt

$ python3 ./code/interface.py --get-conf-files --ignore-eos-store inputs/CMS-2016-pileup-dataset.txt

$ python3 ./code/interface.py --create-records --ignore-eos-store inputs/CMS-2016-premix.txt
```

Each step builds a cache subdirectory (`das-json-store`, `mcm-store` and
`config-store`). These caches are large; do not upload them to the repository,
and respect the `.gitignore`.

The output JSON files for the dataset records will be generated in the
`outputs` directory.

The three configuration files from `./inputs/config-store` are to be copied to
`/eos/opendata/cms/configuration-files/MonteCarlo2016/`. Don't forget to add
the `.py` extension.
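That copy step can be sketched as a small shell helper (the function name is ours; run it on a machine where EOS is mounted):

```shell
# copy_config_files SRC DEST: copy every file from SRC to DEST,
# appending the .py extension to each file name.
copy_config_files() {
    src=$1
    dest=$2
    for f in "$src"/*; do
        cp "$f" "$dest/$(basename "$f").py"
    done
}

# Example:
# copy_config_files ./inputs/config-store /eos/opendata/cms/configuration-files/MonteCarlo2016
```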
49 changes: 24 additions & 25 deletions cms-2016-pileup-dataset/code/dataset_records.py
@@ -43,7 +43,7 @@
recommended_cmssw = "CMSSW_10_6_30"
collision_energy = "13TeV"
collision_type = "pp"
-year_published = "2023"
+year_published = "2024"

LINK_INFO = {}

@@ -191,7 +191,7 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):
step = {}
process = ''
output_dataset = get_output_dataset_from_mcm(dataset, mcm_step_dir)
-if output_dataset:
+if output_dataset:
step['output_dataset'] = output_dataset[0]
release = get_cmssw_version_from_mcm(dataset, mcm_step_dir)
if release:
@@ -213,7 +213,7 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):
generator_names = get_generator_name(dataset, mcm_step_dir)
if generator_names:
step['generators'] = generator_names

m = re.search('-(.+?)-', step_dir)
if m:
step_name = m.group(1)
@@ -243,8 +243,8 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):

step['type'] = process

-# Extend LHE steps
-if step_name.endswith('LHEGEN'):
+# Extend LHE steps
+if step_name.endswith('LHEGEN'):
step['type'] = "LHE GEN"
for i, configuration_files in enumerate(step['configuration_files']):
if configuration_files['title'] == 'Generator parameters':
@@ -265,7 +265,7 @@ def get_all_generator_text(dataset, das_dir, mcm_dir, conf_dir, recid_info):
else:
if 'generators' in step:
generators_present = True

return info

def populate_containerimages_cache():
@@ -283,7 +283,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
dataset = get_dataset(dataset_full_name)
dataset_format = get_dataset_format(dataset_full_name)
year_created = '2016'
-year_published = '2023' #
+year_published = '2024' #
run_period = ['Run2016G', 'Run2016H'] #

additional_title = 'Simulated dataset ' + dataset + ' in ' + dataset_format + ' format for ' + year_created + ' collision data'
@@ -324,7 +324,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
rec['distribution']['formats'] = [dataset_format.lower(), 'root']
rec['distribution']['number_events'] = 27646400 # this is computed from the number of files (17279 * 1600 = 27646400) was: get_number_events(dataset_full_name, das_dir)
rec['distribution']['number_files'] = 17279 # known but maybe get from eos - was: get_number_files(dataset_full_name, das_dir)
-rec['distribution']['size'] = 0 # FIXME: check from eos - was: get_size(dataset_full_name, das_dir)
+rec['distribution']['size'] = 55600743325296 # known via grep '"size"' inputs/eos-file-indexes/*.json | awk '{print $NF}' | tr ',' ' ' | paste -s -d+ | bc

doi = get_doi(dataset_full_name, doi_info)
if doi:
@@ -334,7 +334,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm

rec_files = get_dataset_index_files(dataset_full_name, eos_dir)
if rec_files:
-rec['files'] = []
+rec['files'] = []
for index_type in ['.json', '.txt']:
index_files = [f for f in rec_files if f[0].endswith(index_type)]
for file_number, (file_uri, file_size, file_checksum) in enumerate(index_files):
@@ -373,7 +373,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
rec['pileup'] = {}
if pileup_dataset_recid:
rec['pileup']['description'] = "<p>To make these simulated data comparable with the collision data, <a href=\"/docs/cms-guide-pileup-simulation\">pile-up events</a> are added to the simulated event in the DIGI2RAW step.</p>"
-rec['pileup']['links'] = [
+rec['pileup']['links'] = [
{
"recid": str(pileup_dataset_recid),
"title": pileup_dataset_name
@@ -400,7 +400,7 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
# recomended global tag and cmssw release recommended for analysis
rec['system_details'] = {}
rec['system_details']['global_tag'] = recommended_gt
-rec['system_details']['release'] = recommended_cmssw
+rec['system_details']['release'] = recommended_cmssw
if recommended_cmssw in CONTAINERIMAGES_CACHE.keys():
rec["system_details"]["container_images"] = CONTAINERIMAGES_CACHE[recommended_cmssw]

@@ -431,15 +431,15 @@ def create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm
rec['usage']['description'] = "These simulated data are not meant to be analysed on their own. The dataset can be used to add pile-up events to newly simulated event samples using CMS experiment software, available through the CMS Open Data container or the CMS Virtual Machine. See the instructions for setting up one of the two alternative environments and getting started in"
rec['usage']['links'] = [
{
-"description": "Running CMS analysis code using Docker",
-"url": "/docs/cms-guide-docker"
-},
+"description": "Running CMS analysis code using Docker",
+"url": "/docs/cms-guide-docker#images"
+},
{
-"description": "How to install the CMS Virtual Machine",
-"url": "/docs/cms-virtual-machine-2016-2018"
-},
+"description": "How to install the CMS Virtual Machine",
+"url": "/docs/cms-virtual-machine-cc7"
+},
{
-"description": "Getting started with CMS open data",
+"description": "Getting started with CMS open data",
"url": "/docs/cms-getting-started-miniaod"
}
]
@@ -455,7 +455,7 @@ def create(dataset, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_d
if os.path.exists(filepath) and os.stat(filepath).st_size != 0:
print("==> " + dataset + "\n==> Already exist. Skipping...\n")
return

Record= create_record(dataset, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_dir)

with open(filepath, 'w') as file:
@@ -478,7 +478,7 @@ def create_records(dataset_full_names, doi_file, recid_file, eos_dir, das_dir, m
#build the record only for the PREMIX dataset
if 'PREMIX' in dataset_full_name:
create(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_dir, records_dir)

#records.append(create_record(dataset_full_name, doi_info, recid_info, eos_dir, das_dir, mcm_dir, conffiles_dir))
#return records

@@ -517,10 +517,10 @@ def get_step_generator_parameters(dataset, mcm_dir, recid, force_lhe=0):
if mcdb_id > 1:
print("Got mcdb > 1: " + str(mcdb_id))
configuration_files['title'] = 'Generator parameters'
-configuration_files['url'] = "/eos/opendata/cms/lhe_generators/2015-sim/mcdb/{mcdb_id}_header.txt".format(mcdb_id=mcdb_id)
-return [configuration_files]
-else:
-dir='./lhe_generators/2016-sim/gridpacks/' + str(recid) + '/'
+configuration_files['url'] = "/eos/opendata/cms/lhe_generators/2015-sim/mcdb/{mcdb_id}_header.txt".format(mcdb_id=mcdb_id)
+return [configuration_files]
+else:
+dir='./lhe_generators/2016-sim/gridpacks/' + str(recid) + '/'
files = []
files = [f for f in os.listdir(dir) if os.path.isfile(os.path.join(dir, f))]
confarr=[]
@@ -543,4 +543,3 @@ def get_step_generator_parameters(dataset, mcm_dir, recid, force_lhe=0):
return [configuration_files]
except:
pass

4 changes: 2 additions & 2 deletions cms-2016-pileup-dataset/code/eos_store.py
@@ -90,7 +90,7 @@ def get_dataset_volume_files(dataset, volume):
"Return file list with information about name, size, location for the given dataset and volume."
files = []
dataset_location = get_dataset_location(dataset)
-output = subprocess.check_output('eos find --size --checksum ' + dataset_location + '/' + volume, shell=True)
+output = subprocess.check_output('eos oldfind --size --checksum ' + dataset_location + '/' + volume, shell=True)
output = str(output.decode("utf-8"))
for line in output.split('\n'):
if line and line != 'file-indexes':
@@ -141,7 +141,7 @@ def create_index_files(dataset, volume, eos_dir):
copy_index_file(dataset, volume, filename, eos_dir)


-def main(datasets = [], eos_dir = './inputs/eos-file-indexes'):
+def main(datasets = [], eos_dir = './inputs/eos-file-indexes/'):
"Do the job."

if not os.path.exists(eos_dir):
2 changes: 1 addition & 1 deletion cms-2016-pileup-dataset/code/interface.py
@@ -15,7 +15,7 @@
@click.option('--create-eos-indexes/--no-create-eos-indexes', default=False,
show_default=True,
help="Create EOS rich index files")
-@click.option('--eos-dir', default='./inputs/eos-file-indexes',
+@click.option('--eos-dir', default='./inputs/eos-file-indexes/',
show_default=True,
help='Output directory for the EOS file indexes')
@click.option('--ignore-eos-store/--no-ignore-eos-store',
1 change: 1 addition & 0 deletions cms-2016-pileup-dataset/inputs/doi-sim.txt
@@ -0,0 +1 @@
/Neutrino_E-10_gun/RunIISummer20ULPrePremix-UL16_106X_mcRun2_asymptotic_v13-v1/PREMIX 10.7483/OPENDATA.CMS.VWUA.G7SB
2 changes: 1 addition & 1 deletion cms-2016-pileup-dataset/inputs/recid_info.py
@@ -1,3 +1,3 @@
RECID_INFO = {
-    "/Neutrino_E-10_gun/RunIISummer20ULPrePremix-UL16_106X_mcRun2_asymptotic_v13-v1/PREMIX": 30566,
+    "/Neutrino_E-10_gun/RunIISummer20ULPrePremix-UL16_106X_mcRun2_asymptotic_v13-v1/PREMIX": 30595,
}
