Commit b9cdf98

Merge pull request uscms-software-and-computing#95 from LinaresToine/Intern_Antonio
First draft Antonio Intern
2 parents 6358709 + 9b08779

File tree

2 files changed: +42 −0 lines changed

[binary image added: 34.7 KB]
pages/interns/LinaresToine.yml

+42
@@ -0,0 +1,42 @@
---
layout: intern
pagetype: intern
shortname: LinaresToine
permalink: /interns/LinaresToine.html
intern-name: Antonio Linares
title: Antonio Linares - USCMS S&C Intern
active: True
dates:
  start: 2023-05-01
  end: 2024-04-30
photo: /assets/images/team/Antonio-Linares.jpg
institution: University of Wisconsin Madison

project_title: CMS T0 Operator
project_goal: >
  "My name is Antonio Linares. I studied Physics at Universidad de los Andes in Bogotá, Colombia.
  During my bachelor's degree I realized I enjoy Computer Science, so after graduation I steered
  my career in that direction. My desire to learn more about computer systems while staying in
  direct touch with Physics motivated me to intern as a Tier 0 Operator for CMS at Fermilab.
  Since starting the internship, I have learned a lot about computing. To put it in perspective,
  allow me to describe the job.

  During data taking, the LHC produces 8 to 10 GB of data per second. This massive rate puts a
  lot of pressure on the available resources, since all data must be saved, reconstructed, and
  then sent to several destinations. The main challenge in saving the data is storage: at 10
  GB/s, any technical or availability problem with the disk or tape resources may result in data
  loss. To keep up with the high data-taking rate, Tier 0 must generate all the output data in a
  timely manner so that the input LHC data can be deleted. This is done through reconstruction
  workflows running on thousands of computers at CERN. Any technical issue that arises during
  reconstruction keeps the input data from being deleted, potentially jeopardizing the storage
  available for new data. All the output data is then processed again to produce more compressed
  files and allow their distribution to the desired destinations.

  Tier 0 is thus a job consisting heavily of operations: monitoring the performance of the
  computing resources, the progress of the reconstruction workflows, the available storage, and
  the data-taking rates, among other things. Testing new software for data reconstruction is
  also often necessary during data taking, which likewise falls to Tier 0. Finally, when
  everything is running smoothly or no data is being taken, Tier 0 focuses on developing tools
  to facilitate its tasks. All of this means that a Tier 0 Operator ends up understanding the
  roots of how the CMS system is set up, which translates into a great deal of computing
  knowledge."
mentors:
  - Jennifer Adelman-McCarthy (FNAL)
---
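
The 8 to 10 GB/s figure quoted in the bio is worth putting into numbers. A back-of-envelope sketch of the daily volume Tier 0 must clear (the rate comes from the bio; the hours of data taking per day are an illustrative assumption, not a value from the file above):

```python
# Rough estimate of the storage pressure described in the bio above.
RATE_GB_PER_S = 10          # peak data-taking rate quoted in the bio
HOURS_OF_DATA_TAKING = 12   # assumed hours of stable beams per day (illustrative)

seconds = HOURS_OF_DATA_TAKING * 3600
volume_tb = RATE_GB_PER_S * seconds / 1000  # GB -> TB

print(f"~{volume_tb:.0f} TB of raw data per day at {RATE_GB_PER_S} GB/s")
# -> ~432 TB per day: the input buffer fills quickly unless reconstruction
#    workflows finish on time and the raw data can be deleted.
```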
