===Data systems===

====Data generation====
Earlier particle detector read-out and event detection systems were based on parallel shared [[Bus (computing)|buses]] such as [[VMEbus]] or [[FASTBUS]]. Since such a bus architecture cannot keep up with the data requirements of the LHC detectors, all the ATLAS data acquisition systems rely on high-speed point-to-point links and switching networks. Even with advanced [[electronics]] for data reading and storage, the ATLAS detector generates too much raw data to read out or store everything: about 25 [[megabyte|MB]] per raw event, multiplied by 40 million [[beam crossing]]s per second (40 [[Hertz#SI multiples|MHz]]) in the center of the detector. This produces a total of 1 [[Byte#Multiple-byte units|petabyte]] of raw data per second. By not writing the empty segments of each event (zero suppression), which carry no physical information, the average size of an event is reduced to 1.6 [[megabyte|MB]], for a total of 64 [[terabyte]]s of data per second.<ref name=fact_sheets/><ref name=the_bible/><ref name="TPoveralldetector"/>

====Trigger system====
The [[trigger (particle physics)|trigger]] system<ref name=fact_sheets/><ref name=the_bible/><ref name="TPoveralldetector"/><ref>{{cite journal|title=ATLAS Trigger and Data Acquisition: Capabilities and commissioning|author=D. A. Scannicchio|journal=Nuclear Instruments and Methods in Physics Research Section A|year=2010|volume=617|issue=1/3|doi=10.1016/j.nima.2009.06.114|pages=306–309|bibcode=2010NIMPA.617..306S}}</ref> uses fast event reconstruction to identify, in real time, the most interesting [[event (particle physics)|events]] to retain for detailed analysis. In the second data-taking period of the LHC, Run-2, there were two distinct trigger levels:<ref>{{cite journal|title=ATLAS Run-2 status and performance|author=ATLAS collaboration|journal=Nuclear and Particle Physics Proceedings|year=2016|volume=270|doi=10.1016/j.nuclphysbps.2016.02.002|pages=3–7|bibcode=2016NPPP..270....3P|url=https://cds.cern.ch/record/2048973}}</ref>
# The Level 1 trigger (L1), implemented in custom hardware at the detector site. The decision to save or reject an event is made in less than 2.5 μs. It uses reduced-granularity information from the calorimeters and the muon spectrometer, and reduces the rate of events in the read-out from 40 [[Hertz#SI multiples|MHz]] to 100 [[Hertz#SI multiples|kHz]]. The L1 rejection factor is therefore 400.
# The High Level Trigger (HLT), implemented in software, uses a computing farm of approximately 40,000 [[Central processing unit|CPUs]]. To decide which of the 100,000 events per second coming from L1 to save, specific analyses of each collision are carried out within 200 μs. The HLT reconstructs limited regions of the detector, so-called Regions of Interest (RoI), with the full detector granularity, including tracking, which allows energy deposits to be matched to tracks. The HLT rejection factor is 100: after this step, the rate of events is reduced from 100 [[Hertz#SI multiples|kHz]] to 1 [[Hertz#SI multiples|kHz]] (see the rate sketch after this list).
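The throughput figures quoted in this section can be cross-checked with a short back-of-the-envelope calculation. The sketch below is purely illustrative and is not part of any ATLAS software: it only multiplies the event sizes and rates given above, and the roughly 1.6 GB/s figure for data written to storage is implied by those numbers rather than stated in the cited sources.

<syntaxhighlight lang="python">
# Back-of-the-envelope check of the ATLAS data rates quoted above.
# All figures come from the text; units are decimal (1 MB = 10**6 bytes).

MB = 10**6                       # bytes
RAW_EVENT_SIZE = 25 * MB         # raw event size before zero suppression
ZS_EVENT_SIZE = 1.6 * MB         # average event size after zero suppression
BUNCH_CROSSING_RATE = 40e6       # 40 MHz beam crossings
L1_OUTPUT_RATE = 100e3           # 100 kHz after the Level 1 trigger
HLT_OUTPUT_RATE = 1e3            # 1 kHz after the High Level Trigger

raw_rate = RAW_EVENT_SIZE * BUNCH_CROSSING_RATE       # ~1 PB/s of raw data
zs_rate = ZS_EVENT_SIZE * BUNCH_CROSSING_RATE         # 64 TB/s after zero suppression
l1_rejection = BUNCH_CROSSING_RATE / L1_OUTPUT_RATE   # rejection factor 400
hlt_rejection = L1_OUTPUT_RATE / HLT_OUTPUT_RATE      # rejection factor 100
storage_rate = ZS_EVENT_SIZE * HLT_OUTPUT_RATE        # data rate finally written out

print(f"raw data rate:        {raw_rate / 1e15:.1f} PB/s")
print(f"zero-suppressed rate: {zs_rate / 1e12:.0f} TB/s")
print(f"L1 rejection factor:  {l1_rejection:.0f}")
print(f"HLT rejection factor: {hlt_rejection:.0f}")
print(f"rate to storage:      {storage_rate / 1e9:.1f} GB/s")
</syntaxhighlight>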
The remaining data, corresponding to about 1,000 events per second, are stored for further analysis.<ref name="CERN">{{cite news |work=[[ATLAS collaboration]] Research News |title=Trigger and Data Acquisition System|url=https://atlas.cern/discover/detector/trigger-daq |date=October 2019}}</ref>

====Analysis process====
ATLAS permanently records more than 10 [[Byte#Multiple-byte units|petabyte]]s of data per year.<ref name=fact_sheets/> Offline [[event reconstruction]] is performed on all permanently stored events, turning the pattern of signals from the detector into physics objects, such as [[Particle jet|jets]], [[photon]]s, and [[lepton]]s. [[Grid computing]] is used extensively for event reconstruction, allowing the parallel use of university and laboratory computer networks throughout the world for the [[central processing unit|CPU]]-intensive task of reducing large quantities of raw data into a form suitable for physics analysis. The [[software]] for these tasks has been under development for many years, and refinements are ongoing, even after data collection has begun. Individuals and groups within the collaboration are continuously writing their own [[computational physics|code]] to perform further analyses of these objects, searching the patterns of detected particles for particular physical models or hypothetical particles. This activity requires processing 25 [[Byte#Multiple-byte units|petabyte]]s of data every week.<ref name=fact_sheets/>
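The reconstruction workload described above is naturally parallel: each recorded event can be processed independently of every other, which is what makes grid computing effective here. The toy sketch below illustrates only that pattern; the <code>reconstruct</code> function and the fake events are invented for illustration, and ATLAS's actual reconstruction runs in the collaboration's own software framework on the computing grid, not in a script like this.

<syntaxhighlight lang="python">
# Toy illustration of the "embarrassingly parallel" pattern behind grid-based
# event reconstruction: events are independent, so the work can be farmed out
# to many workers. This is an analogy only, not ATLAS software.
from multiprocessing import Pool

def reconstruct(raw_event):
    """Stand-in for turning detector signals into physics objects
    (jets, photons, leptons, ...). Here it just summarises the input."""
    return {"n_signals": len(raw_event), "total_amplitude": sum(raw_event)}

if __name__ == "__main__":
    # Fake "raw events": short lists of detector signal amplitudes.
    raw_events = [[i % 7, i % 13, i % 29] for i in range(100_000)]

    with Pool() as pool:                       # one worker per available CPU
        physics_objects = pool.map(reconstruct, raw_events)

    print(len(physics_objects), "events reconstructed")
</syntaxhighlight>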