What? It's your chance to showcase your work project and to present it within the bigger context of your experiment / department. You can make a poster on your own or with a small group. Of course, all students and supervisors are welcome to join us!
Please note that we only have 20-30 places available.
When? Thursday 25th July 2024, from 5pm to approx. 6:30pm
How? Please register on this event page. Registration opens July 9th at 15h00.
Where? Mezzanine of the Main Building - 500/1-201
More details:
Preparation of your Poster:
Posters can be as big as you like as long as they fit on the panel. We recommend printing in A0 (84.09 x 118.9 cm) or A1 (59.46 x 84.09 cm). The panels we use for the poster session are the same as the summer student notice board outside the Auditorium, so please just make sure it fits!
Where to print your Poster?
Your poster can be printed at the CERN Printshop. Once you have created your poster, you will just have to convert it into a PDF file and send it to the Printshop via the online submission form. You will then be informed when the poster has been printed and is ready for collection.
Note that if you want to print 2 or more copies you will have to provide a budget code (you will need to ask your supervisor for your group budget code).
The CERN Printshop is located on the ground floor of building 510 (opposite the Main Building): 510 R-007. The Printshop reception is open Monday-Friday: by appointment only in the morning, and every afternoon from 13h00 to 16h30.
Please make sure that you submit your poster request during normal working hours, and do not leave it until the last moment! Note that for large conferences, the queue ahead of you can be very long.
For those participants who have not sent the topic of their posters yet, please send it as soon as possible to the Summer Student Team!
We look forward to seeing you there. Don't forget to invite your supervisor and colleagues to join us!
This project focuses on developing a graph neural network (GNN) to classify b-jets at the LHCb experiment. Accurate identification of jet flavours is crucial for event reconstruction, and deep learning, specifically GNNs, can improve the accuracy and efficiency of this classification. The GNN will also be applied to the classification of c-jets and fat jets.
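As an illustration only (the poster's actual architecture, inputs, and framework are not specified here), a jet classifier of this kind could be sketched with PyTorch Geometric, treating jet constituents as graph nodes:

```python
# Minimal GNN jet-classifier sketch, assuming PyTorch Geometric; layer sizes,
# feature count, and graph construction are illustrative placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class JetGNN(torch.nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)  # message passing over constituents
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)            # aggregate nodes into one jet vector
        return self.head(x)                       # b-jet vs. light-jet logits

model = JetGNN()
x = torch.rand(10, 4)                             # 10 constituents, 4 features each
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]]) # toy connectivity
batch = torch.zeros(10, dtype=torch.long)         # all nodes belong to one jet
logits = model(x, edge_index, batch)
```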
To ensure high-quality data acquisition at ATLAS, the detector status is monitored by a team of shifters in the control room. We aim to improve the quality of anomaly detection and decrease the workload of the control room staff by developing a machine learning model to watch the incoming time-series data on the status of the detectors and flag anomalies. The goal is for this model to run online, alerting staff of problems in real-time so the appropriate corrections can be made.
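As a toy illustration of the idea (the actual system is a trained ML model, not this simple heuristic), a rolling z-score can flag points that deviate strongly from the recent baseline of a detector time series:

```python
# Toy anomaly-flagging sketch for time-series data; window and threshold
# values are illustrative assumptions, not the ATLAS model's parameters.
import numpy as np

def flag_anomalies(series, window=100, threshold=5.0):
    """Return indices where a point deviates strongly from the recent baseline."""
    flags = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Example: a flat baseline with one injected spike
data = np.random.normal(0.0, 1.0, 1000)
data[500] += 20.0
print(flag_anomalies(data))  # flags the injected spike near index 500
```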
The doubly charmed baryon (Xi_cc) can be considered as a system composed of a tightly coupled c-c pair and a light quark. The discovery and measurement of Xi_cc in experiment can benefit the study of three-body bound systems, the quark model, and other theoretical prediction models. In the poster, we introduce the motivation for the Xi_cc search, the present status of the search, and the methods we use to reduce background events and discover Xi_cc.
Vector boson scattering (VBS) is an important test to experimentally probe the electroweak symmetry breaking mechanism. In order to measure the boson polarisation in the VBS signal, the event selection needs to be optimized to maximize the sensitivity. In this poster, we report the first results of the analysis and the current modelling.
This poster focuses on improving vertex reconstruction in high-pile-up scenarios at the HL-LHC using hit-time information. The study compares algorithms with and without time data, demonstrating improved accuracy and reduced CPU usage. The findings highlight the benefits of integrating time data into vertex-finding processes.
The Schottky noise signals have been a powerful diagnostics tool in many storage rings and synchrotrons (ISR, SPS, Tevatron, RHIC). In the LHC, Schottky-based diagnostics are problematic due to the presence of strong coherent components in the measured spectra. The goal of this project is to analyze LHC Schottky spectra using machine-learning (ML) techniques, such as denoising autoencoders and variational autoencoders with an adjusted loss function.
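A minimal denoising-autoencoder sketch in PyTorch, one of the techniques named above; the spectrum length, layer sizes, and plain MSE loss are placeholder assumptions (the project uses an adjusted loss function):

```python
# Denoising autoencoder sketch: reconstruct a clean spectrum from a noisy one.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_bins=1024, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_bins))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
loss_fn = nn.MSELoss()                             # stand-in for the adjusted loss
spectrum = torch.rand(8, 1024)                     # placeholder "clean" spectra
noisy = spectrum + 0.1 * torch.randn_like(spectrum)
loss = loss_fn(model(noisy), spectrum)             # learn to remove the noise
loss.backward()
```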
Simulation of showers in calorimeters is typically the most time-consuming part of detector simulation. Therefore, machine learning models have been developed to enable fast shower simulation. The project involves integrating ML model inference for the FCC-ee CLD detector with the production simulation framework DD4hep and validating their performance.
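As one hedged possibility for the inference step (assuming the trained model is exported to ONNX; the real integration lives inside the DD4hep simulation chain rather than a standalone script):

```python
# Sketch of fast-shower model inference via ONNX Runtime; the model file name
# and input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("fastshower_model.onnx")  # hypothetical model file
inputs = {session.get_inputs()[0].name:
          np.random.rand(1, 100).astype(np.float32)}     # placeholder conditioning
shower = session.run(None, inputs)[0]                    # generated shower response
```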
Status of 3D tracking with the Downstream Stations of the SND@LHC detector
An investigation into the impact of different ID choices on the scale factors and the efficiency of a specific trigger. This investigation builds on studies from the E/Gamma group and utilises the tag-and-probe framework. The final aim of this project is to find a universal ID type and provide evidence of its effectiveness.
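In the tag-and-probe method, the efficiency is typically estimated from the counts of probes that pass or fail the selection, and the data/simulation scale factor is their ratio (a standard formulation, not specific to this poster):

```latex
\varepsilon = \frac{N_{\mathrm{pass}}}{N_{\mathrm{pass}} + N_{\mathrm{fail}}},
\qquad
\mathrm{SF} = \frac{\varepsilon_{\mathrm{data}}}{\varepsilon_{\mathrm{MC}}}
```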
An overview of the ATLAS Trigger System and the Trigger Menu, focusing on some of the operational tasks that ensure high-quality data and the different procedures used to optimise the system.
This project aims to improve the production and precision of isotopically enriched radioactive ion beams using advanced laser ionization techniques. At CERN-ISOLDE, Resonance Ionization Laser Ion Sources (RILIS) are employed for their high selectivity and efficiency. The Laser Ion Source and Trap (LIST) enhances beam purity by spatially separating the laser ionization region from the hot atomization cavity, reducing isobaric contamination.
The Perpendicularly Illuminated Laser Ion Source and Trap (PI-LIST) addresses Doppler broadening by creating a crossed laser and atom beam environment, achieving sub-Doppler resolution of 100-200 MHz. Despite these advancements, francium contamination due to its low ionization potential remains a challenge. To mitigate this, COMSOL simulations are conducted to explore solutions such as repeller coatings, improved thermal insulation, and materials with lower work functions.
Crystal collimation is a type of advanced beam cleaning where bent silicon crystals are used to steer beam halo particles toward an absorber. Crystals are materials with a highly organized atomic structure, so when charged particles interact with a crystal at the right impact conditions, they become trapped in the potential well generated by neighboring crystalline planes. These particles are forced to follow the direction of the atomic lattice, oscillating in the relatively small space between the planes. This phenomenon is referred to as “channeling.” The current work done in the Non-linear Dynamics and Collimation (NDC) group uses bent crystals to channel beam halo particles in the Large Hadron Collider. It is particularly useful for channeling lead ion beams, as the crystal applies the same steering angle to intact lead ions and fragments. For this reason, crystal collimation is being implemented for the High-Luminosity LHC (HL-LHC) project to improve the ion collimation cleaning efficiency.

For my project, I am using the simulation program SixTrack to simulate the results of a recent Machine Development (MD) study in the LHC from May 15th, 2024. This poster reviews the script and how I have confirmed that it is a valid method for analyzing the results of different SixTrack simulations.
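For reference, channeling is only possible below the critical (Lindhard) angle, a standard result not specific to this poster; here U_0 is the depth of the planar potential well and pv the particle's momentum times velocity:

```latex
\theta_c = \sqrt{\frac{2\,U_0}{pv}}
```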
Monitoring of CMS distributed storage is paramount to ensure efficient use of the disk and tape resources allocated to the experiment. The CMS Monitoring team currently uses MongoDB to store the tabular monitoring data and DataTables to visualize it. However, this pipeline can be improved and streamlined to enhance performance.
This project aims to implement OpenSearch for storage and Grafana for access to overcome the performance issues and to unify the tabular data monitoring pipeline with other dataset monitoring pipelines.
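As a rough illustration of the proposed pipeline, a monitoring record could be indexed into OpenSearch with the opensearch-py client and then queried from Grafana; the host, index name, and document fields below are assumptions, not the team's actual schema:

```python
# Index one hypothetical storage-monitoring record into OpenSearch.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
doc = {"site": "T1_US_FNAL",                     # placeholder site name
       "disk_used_tb": 1234.5,                   # placeholder metric
       "timestamp": "2024-07-25T12:00:00Z"}
client.index(index="cms-storage-monitoring", body=doc)  # Grafana queries this index
```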
The nature of the neutrino is one of the most fundamental questions in modern neutrino physics: is it a Dirac or a Majorana particle? In this project, we focus on studying the possibility of observing neutrinoless double beta decay, a process associated with a Majorana neutrino. Specifically, we will investigate this decay process where the neutrons are detected by the Zero Degree Calorimeter (ZDC) detector on both sides of Interaction Point 1 (ATLAS).
This project utilizes data collected by the LHCb detector during Run 3 to study the production cross-section of $J/\psi$ at an energy of $\sqrt{s} = 13.6$ TeV. This research serves a dual purpose: verifying NRQCD predictions for heavy quarkonium production and conducting performance validation for the detector. The analysis focuses on the decay channel $J/\psi \rightarrow \mu^+ \mu^-$, which has a relatively large branching fraction, to calculate the double-differential production cross-section of $J/\psi$ across different transverse momentum and rapidity intervals.
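In the standard formulation (the exact symbols used in the analysis may differ), the double-differential cross-section in each ($p_T$, $y$) bin is obtained from the signal yield, the integrated luminosity, the efficiency, and the branching fraction:

```latex
\frac{d^2\sigma}{dp_T\, dy} =
\frac{N_{\mathrm{sig}}}{\mathcal{L}_{\mathrm{int}}\, \varepsilon\,
\mathcal{B}(J/\psi \to \mu^+ \mu^-)\, \Delta p_T\, \Delta y}
```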
In this project, I optimized our approach by parallelizing the execution of tests without having to rebuild the Docker image in every run. The Docker image we build contains all necessary dependencies, GPU drivers (if applicable), and the Xsuite packages in the versions we need to test. Previously, Docker images were built separately on each test runner machine, a process that was inefficient and resource-intensive. Now, we build Docker images once on a single machine and distribute these artifacts to other machines for testing. This not only enhances efficiency but also allows us to store images as artifacts that can be easily downloaded and reused for debugging purposes. We utilize CERN-hosted VMs with GPUs on OpenStack, emphasizing a scalable and flexible testing environment through rapidly configurable virtual machines. We run tests nightly, and the infrastructure also allows for tests to be triggered on demand, providing flexibility and timely feedback on the system's current state.
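A minimal sketch of the build-once, distribute-everywhere idea using docker save/load; the image tag, artifact path, and test command are illustrative, and the real setup distributes the artifacts through the CI system rather than one script:

```python
# Build the image once, export it as an artifact, then load it on test runners.
import subprocess

IMAGE = "xsuite-tests:nightly"       # hypothetical image tag
ARTIFACT = "xsuite-tests.tar"        # hypothetical artifact file

# Build machine: build once and export the image as a reusable artifact.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
subprocess.run(["docker", "save", "-o", ARTIFACT, IMAGE], check=True)

# Test runners: load the prebuilt artifact instead of rebuilding from scratch.
subprocess.run(["docker", "load", "-i", ARTIFACT], check=True)
subprocess.run(["docker", "run", "--rm", IMAGE, "pytest"], check=True)
```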
The SND experiment focuses on the detection of neutrinos from particle collisions produced at the LHC, in a currently unexplored pseudo-rapidity range.
The goal of this work is to eliminate most of the electronic noise while maximizing the efficiency.
Currently SND detects only about 100 neutrinos per year; therefore, even a minimal improvement in the noise filtering or in the general conditions of the experiment can increase the number of detected neutrinos and bring the experiment closer to its goal.
Using ZFit, we set an indirect constraint on the charm-Yukawa coupling strength via a maximum-likelihood fit to the Higgs to gamma gamma decay. Higgs production depends explicitly on the charm-Yukawa coupling strength, which affects the shape of the differential cross-section. The likelihood is modelled using a multivariate Gaussian which depends on the charm-Yukawa coupling strength.
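As a flavour of the fitting machinery, here is a minimal unbinned maximum-likelihood fit in zfit; it fits a plain Gaussian to toy data rather than the multivariate Gaussian likelihood of the actual analysis, and all names and ranges are placeholders:

```python
# Minimal zfit maximum-likelihood fit on toy data.
import numpy as np
import zfit

obs = zfit.Space("x", limits=(-5, 5))
mu = zfit.Parameter("mu", 0.0, -1.0, 1.0)
sigma = zfit.Parameter("sigma", 1.0, 0.1, 5.0)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(0.1, 1.2, 10000))
nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)   # negative log-likelihood
result = zfit.minimize.Minuit().minimize(nll)          # minimize with Minuit
print(result.params)
```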
This poster investigates AtlFast3 (AF3) and Geant4 (FullSim) Monte Carlo simulations for charged and doubly charged Higgs boson searches in the same-sign WW and WZ channels. The validity of AF3 is investigated, with the final aim of the project being to optimize the analysis as a charged Higgs boson search in the vector-boson fusion (VBF) channel.
The Contact UP Application is developed to address the manual and error-prone process of entering contact information from business cards. Using an OCR (Optical Character Recognition) pipeline, the application accurately extracts and processes key details such as names, organization names, job titles, phone numbers, emails, and websites. This automation significantly enhances efficiency and accuracy in managing contact information for CERN members.
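A hedged sketch of the OCR step, assuming a Tesseract-based pipeline (the application's actual OCR engine and extraction rules are not detailed here); the regexes are simple illustrations of field extraction:

```python
# Extract raw text from a business-card image, then pull out a few fields.
import re
import pytesseract
from PIL import Image

text = pytesseract.image_to_string(Image.open("business_card.png"))

email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)     # first email-like token
phone = re.search(r"\+?\d[\d\s().-]{7,}", text)          # first phone-like token
print("email:", email.group(0) if email else None)
print("phone:", phone.group(0) if phone else None)
```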
Discover the critical role of time in enhancing the LHCb ECAL Upgrade II Clustering. Explore innovative machine learning techniques for precise photon reconstruction, energy measurement, and improved particle identification. Learn about advancements in scintillating modules and radiation tolerance.
CERN's North Area consists of many secondary beamlines, some of which originate from the same production target and primary proton beam. To serve multiple secondary lines at the same time, the so-called wobbling stations are used. Two of these beamlines are the H2 and H4 beamlines, which are connected to the T2 target wobbling station. BDSIM provides the opportunity to simulate different wobbling settings, using Monte Carlo techniques that allow accurate predictions of the real beamlines. So far, a model of the wobbling station has been created and is being tested with different settings by changing the target and TAX placement and the input currents. A sampler has been placed before and after the TAX, which filters everything that enters the beamlines, and is able to distinguish the protons from all the other particles. With the T2 wobbling model having been tested and validated, it is now possible to use its output in simulations of the beamlines downstream.
The Standard Model predicts Lepton Flavour Universality (LFU), where all lepton flavours should interact with equal strength. Testing LFU involves comparing the ratios of branching fractions in leptonic and semileptonic decays, with any deviation suggesting new physics beyond the Standard Model. This project aims to explore how LFU can be tested using semileptonic baryon decays. By generating and analyzing plots of various variables, we can understand the decay shapes, identify the daughter particles, and develop fits that clearly distinguish between signal and background in the respective resonances.
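One commonly quoted ratio of this type for semileptonic baryon decays (shown only as an example; the specific channels studied in this project may differ) is:

```latex
R(\Lambda_c^+) =
\frac{\mathcal{B}(\Lambda_b^0 \to \Lambda_c^+ \tau^- \bar{\nu}_\tau)}
     {\mathcal{B}(\Lambda_b^0 \to \Lambda_c^+ \mu^- \bar{\nu}_\mu)}
```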
The Future Circular Electron-Positron Collider (FCC-ee) needs to produce record luminosities to meet its physics goals. Due to limitations such as beam lifetime, top-up injection is required to maintain high beam intensity and luminosity. In this study, the beam behaviour, efficiency, and optimisation of the top-up injection scheme will be examined via particle tracking.
The Search for Hidden Particles (SHiP) experiment aims to investigate hidden particles beyond the Standard Model of particle physics. One of the critical components of this experiment is the decay vessel, where particles decay into measurable products. Designing and optimizing this decay vessel is crucial to ensure its structural integrity, efficiency, and effectiveness in the experiment. This project involves the creation of the optimized vessel's design and describes the subsequent modal analysis to obtain approximate dispersion curves useful for seismic analysis and non-destructive testing.
This project consists of developing a web application for the LAr Operations team of the ATLAS detector at the LHC.
The purpose is to facilitate real-time visualisation of data, allowing the team to predict problems in the front-end hardware, perform preventive replacements, and avoid data loss. The application is developed using Django (Python), Bootstrap, and AJAX, and is also integrated with the DCS (Distributed Control System) and COOL (run Conditions) databases.
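A hypothetical sketch of one such endpoint: a Django view returning recent readings as JSON for the AJAX front end to poll; the model name and fields are placeholders, not the application's real schema:

```python
# Django view serving the latest detector readings as JSON.
from django.http import JsonResponse
from .models import ChannelReading  # hypothetical model

def latest_readings(request):
    readings = ChannelReading.objects.order_by("-timestamp")[:100]
    payload = [{"channel": r.channel, "value": r.value,
                "timestamp": r.timestamp.isoformat()} for r in readings]
    return JsonResponse({"readings": payload})
```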
In this study, we compare the TCAD simulation of a silicon sensor with an equivalent COMSOL simulation. Furthermore, we aim to understand to what extent the results can be approximated without using the doping profile and other complexities. We use Garfield++ to simulate signal induction and analyze the resulting signal shapes. Specifically, we focus on resistive silicon detectors (or AC-LGADs) for which TCAD and COMSOL simulations are already available. We aim to integrate the doping profile into the COMSOL simulation, export the data to Garfield++, and compare the signal shapes.
ROOT is an open-source C++ data analysis framework widely used in High Energy Physics (HEP) to efficiently analyze petabytes of data. A typical HEP analysis generally consists of filtering data, producing new columns from existing data and computing histograms. The goal of this project is to develop efficient batch histogramming implementations in ROOT that make use of GPUs.
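For context, the filter/define/histogram pattern looks like this with ROOT's RDataFrame from Python (file, tree, and column names are placeholders; the GPU-backed histogramming this project develops is not shown):

```python
# Typical HEP analysis pattern: filter events, define a column, fill a histogram.
import ROOT

df = ROOT.RDataFrame("Events", "data.root")        # hypothetical tree and file
h = (df.Filter("nMuon >= 2")                       # select events
       .Define("pt_lead", "Muon_pt[0]")            # derive a new column
       .Histo1D(("pt_lead", "Leading muon p_{T}", 100, 0.0, 200.0), "pt_lead"))
h.Draw()                                           # lazily triggers the event loop
```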