10th MEFT Workshop
Anfiteatro PA1
Instituto Superior Técnico
MEFT Workshop is a two-day conference in which the students of Physics Engineering at Técnico share their Master's thesis topics with colleagues, professors, and friends. In this event, each student is allowed a total of 10 minutes: a video (4 minutes) and a short pitch (3 minutes), followed by 3 minutes of questions from the chairpersons and the audience. This is, above all, a symbolic event, which marks the beginning of our scientific careers. Every student is encouraged to take ownership of their project and embrace the scientific journey ahead while sharing this special moment with colleagues, friends, and professors.
-
Opening Session
-
2
Presentation 01: Modelling of a CO₂ coaxial plasma torch driven by microwave power pulsing
As humanity is faced with the urgency of the climate crisis, already suffering from its consequences such as rising temperatures and severe weather events, one of the century's great challenges is to reduce emissions and find new, renewable energy sources.
The conversion of CO₂, the most emitted greenhouse gas, into oxygen and carbon monoxide, a valuable fuel, through the innovative use of non-thermal plasmas is the subject of this research. These plasmas have the exciting characteristic of generating highly energetic free electrons, which facilitate molecular bond breaking while keeping the gas at relatively low temperatures.
The core of this work is the development of a theoretical model of a CO₂ coaxial plasma torch driven by microwave power pulsing, expanding upon a pre-existing 2D hydrodynamic model for helium, adapting it to the more complex structure of the CO₂ molecule, and accounting for additional processes such as electron-ion recombination and the transport of neutral species. The simulation results will be compared with experimental data from the system established at the Karlsruhe Institute of Technology.
Speaker: Mariana Ribeiro (Instituto Superior Técnico) -
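For reference, the net conversion targeted by plasma-based CO₂ dissociation can be summarised by the overall reaction below; the quoted enthalpy is the standard value per converted molecule.

```latex
% Overall plasma-driven CO2 conversion and its standard reaction enthalpy:
\mathrm{CO_2} \;\longrightarrow\; \mathrm{CO} + \tfrac{1}{2}\,\mathrm{O_2},
\qquad \Delta H \approx 2.9~\mathrm{eV/molecule} \;(\approx 283~\mathrm{kJ/mol})
```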
3
Presentation 02: Modeling of atomic oxygen in the effluent of a CO₂ microwave discharge
Atmospheric carbon dioxide (CO₂) has been increasing since the beginning of the industrial revolution, mainly due to the combustion of fossil fuels. This makes it very important to find alternative energy sources that do not emit CO₂, as well as ways to remove it artificially from the atmosphere. One solution to both of these problems is the Fischer–Tropsch cycle, but that requires the dissociation of CO₂ into carbon monoxide, which is challenging to do efficiently.
This project studies a system that attempts this using low-temperature plasmas sustained by microwave discharges. More specifically, it tries to explain the source of the atomic oxygen that forms in the post-discharge region. The study is mostly computational, using a kinetic description of the free electrons and a volume-averaged chemistry solver.
Speaker: Rui Martins (Instituto Superior Técnico) -
4
Presentation 03: Monte Carlo study of electron-electron collisions in low-temperature plasmas
Low-temperature plasmas are characterized by a strong non-equilibrium nature, which can be used to favour certain chemical processes, making these plasmas suitable for a wide range of applications in materials processing, plasma medicine, biology and agriculture, among others. In these systems, the dominant energy-transfer mechanisms are collisions between electrons and neutrals. However, at higher ionization degrees (~10⁻⁵ - 10⁻³), electron-electron collisions become more frequent and more relevant to the description of the electron kinetics of the plasma.
In this work, we aim to study the effects of electron-electron collisions in low-temperature plasmas using simulations based on a Monte Carlo approach. This corresponds to solving the electron Boltzmann equation stochastically instead of solving it directly. The study is made more difficult by the long-range nature of these collisions, which makes them fundamentally different from electron-neutral collisions. Moreover, the Null Collision Method that is frequently used in Monte Carlo codes becomes very inefficient when considering electron-electron collisions. These are some of the issues that this work aims to address, namely through the development of a more efficient null collision method.
Additionally, the effects of electron-electron collisions will also be studied in different physical systems of interest such as atomic and molecular gases as well as considering different electric and magnetic field configurations.
The preliminary results already obtained illustrate the tendency of electron-electron collisions to push the electrons towards thermalization and can provide insight into the possible effects of anisotropic scattering.
Speaker: Gonçalo Cardoso (Instituto Superior Técnico) -
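For readers unfamiliar with the Null Collision Method mentioned above, the following is a minimal Python sketch of the idea; the collision-frequency model and all numbers are invented placeholders, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def nu_real(energy_eV):
    # Invented electron-neutral collision-frequency model (placeholder), in s^-1.
    return 1e9 * np.sqrt(energy_eV)

NU_MAX = 1e10  # constant majorant frequency; must bound nu_real everywhere

def advance_to_next_event(energy_eV):
    """Sample one free flight and classify the event as real or null."""
    dt = rng.exponential(1.0 / NU_MAX)             # exponential flight time
    if rng.random() < nu_real(energy_eV) / NU_MAX:
        return dt, "real"                          # process a genuine collision
    return dt, "null"                              # fictitious event: nothing changes
```

The inefficiency noted in the abstract arises because the electron-electron collision frequency depends on the instantaneous electron distribution itself, so a fixed majorant must be chosen very conservatively and most sampled events end up being null.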
5
Presentation 04: The importance of plasma flows in improving plasma performance
Nuclear fusion is a promising solution to the world energy problems, but it requires a mixture of deuterium and tritium to be heated to temperatures exceeding 100 million degrees Celsius. Regimes with improved confinement, such as the H-mode, are being considered for a future fusion reactor. The existence of a strong shear (gradient) in the plasma flow is thought to be fundamental for the turbulence suppression, explaining the transition to H-mode. The origin of the flow shear is still not fully understood and therefore flow measurements are required to better understand the turbulence suppression process.
This proposal focuses on the characterization of the edge flow profile when approaching the transition to H-mode at JET (Joint European Torus, EUROfusion, UK), contributing to a better understanding of the transition physics and its triggering mechanisms. To this end, detailed measurements were obtained by Doppler backscattering, a microwave diagnostic that measures the propagation velocity of the turbulent structures. A large dataset with varied plasma parameters is available at JET, which should allow for the characterization of the flow shear and its dependence on the plasma parameters. This work will help to clarify the importance of shear flows in H-mode access.
Speaker: Mário Vaz (Instituto Superior Técnico) -
6
Presentation 05: Plasmas for Fertilizers
This work explores the possibility of synthesising NH₃ through plasma processes induced by an electrical discharge in a gas. The two-term approximation of the Boltzmann equation and the system of rate-balance equations are used as a self-consistent theoretical model of this system, solved with a computational implementation that couples the two through convergence cycles. By exploring how the results depend on the main creation mechanisms included in the model, and by comparing the currently available experimental data with the simulation results, we hope to better understand the dynamics of the system and optimize ammonia production. Further work is needed on simulation development, plasma diagnostics, and adjustment of the chemistry model, as well as on the possibility of applying machine-learning tools to the generated data (simulated or experimental).
Speaker: Diogo Simões (Instituto Superior Técnico) -
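The "convergence cycles" coupling the two solvers can be pictured as a damped fixed-point iteration. The sketch below uses toy stand-in functions (hypothetical, for illustration only) in place of the actual two-term Boltzmann and rate-balance solvers.

```python
import numpy as np

# Toy stand-ins for the two coupled solvers; in the real code these would be
# a two-term Boltzmann solver and a rate-balance chemistry solver.
def solve_boltzmann(densities):
    return 1.0 / (1.0 + densities)        # hypothetical "EEDF summary"

def solve_rate_balance(eedf):
    return 2.0 * eedf + 0.1               # hypothetical steady-state densities

def self_consistent_solution(densities, tol=1e-10, relax=0.5, max_iter=200):
    """Damped fixed-point iteration: Boltzmann <-> chemistry until convergence."""
    for _ in range(max_iter):
        eedf = solve_boltzmann(densities)
        new_densities = solve_rate_balance(eedf)
        if np.max(np.abs(new_densities - densities)) < tol:
            return eedf, densities
        densities = densities + relax * (new_densities - densities)
    raise RuntimeError("convergence cycle did not converge")

eedf, n = self_consistent_solution(np.array([1.0]))
print(n)  # fixed point of the coupled toy system
```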
10:30
Coffee Break
-
7
Presentation 06: Plasma-assisted removal of polymer residues from 2D materials
Two-dimensional (2D) materials stand out as promising candidates for electronic and sensor applications, owing to their exceptional characteristics including flexibility, transparency, high carrier mobility, and tuneable bandgap. Despite significant progress in 2D material growth, the use of incompatible substrates requires a transfer process, during which the material is contaminated by organic polymers.
Cleaning 2D layers poses a challenge due to potential quality compromise. Current methods struggle with film damage during the polymer removal process. Plasma etching emerges as an efficient, fast, and selective cleaning method.
This project focuses on optimizing plasma cleaning techniques for 2D materials (e.g., graphene, transition metal dichalcogenides). By exploring diverse chemistries and process conditions, I aim to enhance cleaning efficiency while preserving material integrity, contributing to the advancement of 2D material applications in electronics.
Speaker: Carlos Cunha (Instituto Superior Técnico) -
8
Presentation 07: Learning Efficient Reduced Models of Nonlinear Plasma Dynamics from Data
Plasma exists over a wide range of temporal and spatial scales, from fusion reactors to distant galaxies. A better understanding of plasma physics is needed both for our fundamental understanding of the universe and to enable the development of new technologies. However, plasma modelling can be very challenging due to its multi-scale nature.
Machine learning is enabling new ways to uncover models from data, but this comes at a great cost: the models may be very hard to understand. Using symbolic and sparse regression, it is now possible to extract interpretable models exclusively from data.
The goal is to apply sparse regression techniques to plasma data generated in particle-in-cell (PIC) simulations, in order to uncover reduced models of nonlinear plasma dynamics, and to compare them with known nonlinear plasma models, analysing the results and discussing their implications.
Speaker: Alexandre Sequeira (Instituto Superior Técnico) -
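One widely used sparse-regression recipe for this kind of model discovery is sequentially thresholded least squares, the core of the SINDy method; a minimal self-contained version is sketched below (the construction of the candidate-term library from PIC data is not shown).

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares (the core of SINDy).

    Theta : (n_samples, n_features) library of candidate terms
    dXdt  : (n_samples,) measured time derivative of one state variable
    Returns a sparse coefficient vector selecting few library terms.
    """
    xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0          # prune terms with small coefficients
        big = ~small
        if big.any():            # refit least squares on the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], dXdt, rcond=None)[0]
    return xi
```

Here `Theta` would hold candidate terms (e.g. 1, n, n², nv, ...) evaluated on the simulation data, and the surviving coefficients define the reduced model.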
9
Presentation 08: Particle Drifts and Radiation Reaction in Astrophysics
In the presence of strong electromagnetic fields, the electromagnetic radiation emitted by an accelerated charged particle has a significant impact on its own trajectory. In such cases, it becomes crucial to include a radiation reaction force in the equations of motion to accurately describe the dynamics of charged particles.
In this work, we study how radiation reaction alters the motion of individual charges, specifically focusing on how the different particle drifts are affected by this mechanism. We derive estimates of the modified particle drifts and compare them with numerical simulations.
Finally, we investigate how radiation reaction affects the distribution function of a relativistic plasma. As the plasma particles cool down from radiation reaction, the plasma develops a kinetically unstable momentum distribution that has the conditions necessary for coherent radiation. We focus on studying whether this behaviour generalizes to more complex astrophysical configurations, where we expect particle drifts to play an important role.
Speaker: Francisco Assunção (Instituto Superior Técnico) -
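For context, two of the standard guiding-centre drifts that radiation reaction is expected to modify are recalled below (textbook expressions, SI units).

```latex
% E-cross-B drift and grad-B drift of a charged particle's guiding centre:
\mathbf{v}_{E} = \frac{\mathbf{E}\times\mathbf{B}}{B^{2}},
\qquad
\mathbf{v}_{\nabla B} = \frac{m\,v_{\perp}^{2}}{2qB}\,
\frac{\mathbf{B}\times\nabla B}{B^{2}}
```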
10
Presentation 09: Nonlinear Optics with Ultrashort Mid-Infrared Laser Pulses
The amplification of intense laser pulses is a complex process, as the pulses can easily damage the optical components they pass through. Moreover, the creation of intense ultrashort pulses in the mid-infrared has long been limited by the lack of good gain materials. Optical Parametric Chirped Pulse Amplification (OPCPA) is a technique that combines OPA and CPA, solving both of these problems. CPA, awarded the Nobel Prize in Physics in 2018, allows amplifying intense pulses by stretching them temporally, mitigating the damage caused during the manipulation of these pulses, and compressing them back after amplification. OPA is an amplification method based on nonlinear effects. It has the advantage of high gain in short lengths and over large bandwidths, without heat deposition. The main disadvantage of OPCPA is the complexity of the setups needed, compared to simpler methods such as mode locking. The aim of this project is to study the processes involved in OPCPA and how to optimise them, as well as to build the system needed for amplifying a 1030 nm laser pulse and subsequently a 3000 nm laser pulse.
Speaker: David Matias Cristino (Instituto Superior Técnico) -
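The stretching step at the heart of CPA amounts to multiplying the pulse spectrum by a quadratic spectral phase. A minimal numerical illustration follows (all values are arbitrary, chosen only to make the effect visible).

```python
import numpy as np

# 20 fs Gaussian pulse on a time grid in fs.
t = np.linspace(-4000, 4000, 8192)
dt = t[1] - t[0]
E = np.exp(-t**2 / (2 * 20.0**2))

# Quadratic spectral phase (group-delay dispersion), illustrative value in fs^2.
w = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)   # angular frequency, rad/fs
phi2 = 2e4
E_stretched = np.fft.ifft(np.fft.fft(E) * np.exp(0.5j * phi2 * w**2))

# The stretched pulse is roughly phi2/sigma_t long and chirped; the compressor
# applies -phi2 after amplification to recover the short pulse.
print(np.sum(np.abs(E)**2) * dt,
      np.sum(np.abs(E_stretched)**2) * dt)  # equal: stretching conserves energy
```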
11
Presentation 10: Ultra-fast Lasers Towards TRIR Spectroscopy
In this work, we present Time-Resolved InfraRed (TRIR) absorption spectroscopy as a tool to study ultrafast molecular dynamics, and the process to develop an ultra-fast material study workstation. This research emphasises the critical role of ultra-fast lasers in capturing the transient states of molecules, essential for understanding their dynamic behaviour. The focus is on the design, implementation, calibration, and benchmarking of this system. Studying molecular dynamics is vital for unravelling the complexities of chemical reactions and physical phenomena at a molecular level, impacting fields ranging from biochemistry to material sciences. TRIR spectroscopy stands out as a significant method in observing and interpreting ultrafast molecular events, which are pivotal in advancing scientific and industrial applications.
Speaker: João Marques (Instituto Superior Técnico) -
12
Presentation 11: Propagation of laser pulses with cylindrical symmetry
With the increase in demand for ultra-short, high-power pulses, understanding spatio-temporal couplings (STCs) is now more important than ever. Interaction with certain optical instruments (prisms, gratings, etc.) means very short pulses can no longer be reasonably described by the simple Gaussian equation. We must therefore take the changes that occur in a pulse into account by introducing a coupling between the temporal coordinate and a spatial coordinate.
While first-order couplings are already widely considered and taken advantage of when engineering laser systems (examples include the "flying focus" and the "attosecond lighthouse"), second-order couplings are relatively unexplored.
The increased complexity of second-order couplings means we cannot describe certain couplings across all domains, forcing us instead to rely on numerical computations. Therefore, my first goal in this project is to simulate ultra-short pulses, with and without STCs, using Wolfram Mathematica. Using different visualization and analysis mechanisms, it will be possible to find differences and similarities between the different coupling types, and hopefully find novel applications for second-order couplings that are not available at lower orders.
Speaker: Miguel Mendes (Instituto Superior Técnico) -
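As a concrete picture of the lowest-order coupling, the field below has a pulse-front tilt: the arrival time of the intensity peak varies linearly across the beam. (Python is used here for illustration; the project itself uses Wolfram Mathematica.)

```python
import numpy as np

# Gaussian pulse with a first-order spatio-temporal coupling (pulse-front tilt):
#   E(x, t) ~ exp(-(t - p*x)^2 / tau^2) * exp(-x^2 / w0^2)
x = np.linspace(-3, 3, 200)          # transverse coordinate (mm)
t = np.linspace(-100, 100, 400)      # time (fs)
X, T = np.meshgrid(x, t)

tau, w0, p = 20.0, 1.0, 15.0         # duration (fs), waist (mm), tilt (fs/mm)
E = np.exp(-((T - p * X) ** 2) / tau**2) * np.exp(-(X**2) / w0**2)
# With p = 0 the intensity peak arrives simultaneously across the beam;
# p != 0 tilts the pulse front, the simplest first-order coupling.
```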
13
Presentation 12: Shining light on solids: what can high harmonic generation teach us about light and matter?
Speaker: William Narciso (Instituto Superior Técnico)
-
14
Presentation 13: Numerical and experimental study of a fixed oscillating water column
The generation of energy from ocean waves is currently still in the research and development phase. This is partly because the technology is not yet mature, and partly because large-scale initiatives are not economically viable. The economic feasibility of the energy conversion system can be increased by integrating wave energy converters into breakwaters. By pooling the costs of construction, installation, maintenance and operation, electricity prices can be reduced.
The design of an oscillating water column integrated into a breakwater is the main topic of this study. A numerical model will be developed to estimate the electricity production of the Mutriku wave power plant, the first commercial, multi-turbine wave power plant in Europe, which holds the records for the highest energy production and the longest lifetime, having been in operation since 2011.
A 1:20 model will be tested in the Instituto Superior Técnico wave tank under regular and irregular wave conditions. To validate the numerical model, a comparison will be made between the numerical and experimental results using different types of turbines. The numerical model will later be used for the Turbowave PCP of EVE, Basque Country, Spain.
Speaker: Catarina Cartaxo (Instituto Superior Técnico) -
12:40
Lunch Break
-
15
Presentation 14: Optimization of the local reconstruction in a highly granular calorimeter using a heterogeneous computing model
The second phase of the LHC will collect an unprecedented amount of proton-proton data at the highest centre-of-mass energies ever achieved. The machine is expected to deliver an average of 140 simultaneous collisions per bunch crossing at a luminosity of around 5×10³⁴ cm⁻²s⁻¹. This poses a challenge to the detectors, which will have to cope with a harsh radiation environment and a high flux of particles. It is also a challenge for the computing system, which will have to reconstruct high-multiplicity events, identify the hard collision of interest, and provide the performance required to improve the measurement of the Higgs boson properties to percent-level precision.
Part of the upgrades of the CMS experiment focuses on the use of highly granular detectors, including precision timing in the calorimeters and dedicated timing layers placed in front of both the barrel and endcap calorimeters. As part of this strategy, the upgraded endcap calorimeter (HGCAL) will provide measurements of energy and time for particles using approx. 6M channels. The next years will be dedicated to the construction and commissioning of HGCAL.
In this project, the algorithms for the calibration of the energy and time measurements in HGCAL will be explored using data from beam tests, cosmic muons, laser or charge injection, to establish a fast local reconstruction algorithm. The algorithms will be implemented using the heterogeneous computing paradigm, such that they can run on both CPUs and GPUs and meet the foreseen budget of O(50 ms) to decode the binary data and perform the local reconstruction for a high-level trigger.
Speaker: Daniela Cardoso (Instituto Superior Técnico) -
16
Presentation 15: Exploring Hidden Societal Biases in Twitter Cascades: A Sociophysics Study
Human biases influence behavior and society, sometimes leading to discrimination and poor judgment. While algorithms were initially thought to be free from human biases, it is now understood that they can amplify existing biases, especially when trained on human-generated data. To address this, methods for identifying and mitigating biases in machine learning (ML) algorithms have been developed, focusing on auditing training datasets or model outputs. However, these tools struggle to identify unknown biases. Our project aims to uncover hidden biases using statistical anomaly-detection methods, focusing on social media, particularly Twitter. We will analyse Twitter data, including user information such as gender, nationality, and follower count, to detect patterns of biased information sharing. By examining the growth-model parameters of Twitter cascades (sequences of retweets), we aim to find statistical differences that indicate bias. This project offers practical experience in handling large datasets and statistical models and, potentially, new insights into identifying online biases.
Speaker: Tomás Silva (Instituto Superior Técnico) -
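A minimal sketch of the comparison step, assuming cascade growth-rate parameters have already been fitted per user group; the data here are synthetic placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical fitted growth-rate parameters of cascades, split by a user
# attribute (e.g., inferred gender); values here are synthetic placeholders.
rates_group_a = rng.lognormal(mean=0.0, sigma=0.5, size=500)
rates_group_b = rng.lognormal(mean=0.1, sigma=0.5, size=500)

# A two-sample Kolmogorov-Smirnov test flags a statistical difference in the
# parameter distributions, which could indicate biased sharing dynamics.
stat, p_value = ks_2samp(rates_group_a, rates_group_b)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
```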
17
Presentation 16: 3D imaging from underground with muon tomography
Muon tomography uses the natural flux of muons created by cosmic rays in the Earth's atmosphere to image large structures, being sensitive to their shape and density. The LouMu team operates an RPC-based muon telescope at an underground gallery at the Lousal mine, testing muography as a new geophysical survey technique. A first target was a known regional geological fault crossing the gallery, which was successfully imaged from two different telescope positions.
The goal of this project is to use the two existing large muon data sets for a full 3D reconstruction of the ground above the gallery, for the best characterization of the Corona fault and the surrounding rock, and to search for possible secondary structures. The results are necessary to fully compare the usefulness of muon tomography with other geophysical survey methods.
Speaker: Isabel Alexandre (Instituto Superior Técnico) -
18
Presentation 17: Planning for muon tomography campaigns in urban settings
Muon tomography takes advantage of the natural flux of muons created by the interaction of cosmic rays with the Earth's atmosphere to image large structures, being sensitive to their shape and density. Being a non-invasive method, muography emerges as a prime candidate for use in urban settings, giving us an unprecedented upwards look into the hidden spaces beneath our cities. This work will focus on studying the possibility of using muography along the Monsanto – Santa Apolónia drainage tunnel below Lisbon, which provides a valuable case study of the cross profile of the city, since it intersects different geological formations and provides access for the installation of the detector, enabling the study of the soil properties and of both above-ground and underground constructions.
As a result, the main objectives of this study include determining what kinds of natural and man-made structures exist in the path of the drainage tunnel and which of them can be detected; estimating the time scale needed for the data acquisition to provide enough statistical significance to establish the existence of those objects; and preparing for a potential measurement campaign at the test site.
Speaker: Francisco Ferreira (Instituto Superior Técnico) -
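The acquisition-time estimate reduces to Poisson counting statistics. A back-of-envelope sketch with invented rates (not the survey's actual numbers):

```python
# Hypothetical muon rates in one angular bin of the telescope (counts/day):
rate_bg = 200.0          # expected rate without the target structure
deficit = 0.05           # 5% flux deficit caused by a dense buried object

# Require a z-sigma deficit: deficit * rate * t / sqrt(rate * t) >= z,
# i.e. t >= z^2 / (rate * deficit^2).
z = 5.0
t_days = z**2 / (rate_bg * deficit**2)
print(f"~{t_days:.0f} days of data for a {z:.0f}-sigma detection")
```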
19
Presentation 18: Factorization of multiple in-medium gluon emissions
Despite some results indicating that factorization survives in the most phenomenologically relevant scenarios, the underlying assumptions may be too restrictive and erase most of the relevant dynamics. Motivated by the ongoing effort to verify the factorizability of successive in-medium emissions, this work aims to obtain the matrix elements for the emission of two gluons by an energetic quark in the presence of a deconfined medium, and to numerically calculate the resulting differential cross-section. This is made possible by recent results obtained by the Pheno Group at LIP on the formulation of the problem and its potential numerical implementation, allowing for greater generality.
Speaker: Afonso Guerreiro (Instituto Superior Técnico) -
20
Presentation 19: Glasma Role in Jet Quenching Effects
Quantum Chromodynamics (QCD) is a rich area of Particle Physics with much still to be understood. Namely, analytical derivations from first principles of the emergent phenomena of hadronisation and associated colour confinement have yet to be found. This drives a search for inputs from the experimental exploration of QCD, such as the heavy ion collisions performed at RHIC and the LHC.
In these collisions, the enormous energy densities reached upon impact allow reaching exotic phases of quark-gluon matter whose study may contribute to advancing our knowledge of QCD. One such phase is the Quark-Gluon Plasma (QGP), where quarks and gluons (partons) exist unconfined but strongly coupled to each other, and which is well described by relativistic hydrodynamics as a low-viscosity fluid. Some of the properties of the QGP have been inferred from studying its effect on rare high-momentum quarks and gluons which traverse it before generating showers of hadrons - jets - which are detected. The loss of energy these jets experience traversing the QGP, when compared to their counterparts formed in vacuum, is dubbed jet quenching.
This work aims to study the contribution from the pre-QGP medium, named Glasma, to the observed jet-quenching via use of the Colour Glass Condensate (CGC) formalism developed to describe the environment within the nucleons of atomic nuclei; this suggests it is adequate to describe the earliest stages of the medium formed in the collision. In this formalism, the saturation of low-momentum gluons in the nucleons allows them to be treated as classical colour fields which obey the Classical Yang-Mills equations.
Over the course of this work, we expect to: perform a literature review on jet quenching and the CGC; become acquainted with techniques to calculate CGC quantities relevant to jet quenching; reproduce the results of two papers calculating transport coefficients in the CGC at proper time 0 and at proper times greater than 0 after the collision; and combine these results to obtain a consistent description of the transport coefficients between proper times 0 and 0.1 fm/c.
Speaker: José dos Santos (Instituto Superior Técnico) -
21
Presentation 20: Improving the vacuum baseline for in-medium jet physics studies
In ultra-relativistic heavy-ion collisions, it is possible to reach extreme conditions of temperature and density that allow recreating the primordial state of the Universe in which the fundamental degrees of freedom of Quantum Chromodynamics (quarks and gluons) are deconfined: the Quark-Gluon Plasma (QGP). The study of this hot and dense medium is at the forefront of physics research at the most energetic heavy-ion colliders: RHIC (BNL) and the LHC (CERN). Jets - collimated bunches of particles that result from the branching of highly energetic partons - are produced concurrently with the QGP in the collision and are thus modified with respect to their vacuum counterparts (e.g., jets produced in proton-proton collisions). Such modifications, resulting from the interaction between a jet and the QGP, are collectively referred to as "jet quenching" and provide detailed insights into the properties of the QGP.
Current theory-to-data comparisons in heavy-ion jet physics rely primarily on adaptations of widely used Monte Carlo proton-proton event generators that incorporate QGP-induced effects. The accuracy of these tools is thus a critical ingredient in the inference of QGP properties. By now, there have been several experimental observations of a poor description of proton-proton results by jet quenching Monte Carlo generators. One of the missing ingredients in these codes is next-to-leading-order corrections in the matrix-element generation.
This work aims to investigate the differences arising from using a LO or an NLO vacuum baseline for jet quenching studies. This will be done by combining simulation tools, such as PYTHIA and MADGRAPH, with analytical computations of medium-induced modifications.
Speaker: Diogo Costa (Instituto Superior Técnico) -
16:00
Coffee Break
-
22
Presentation 21: Searching for Beyond the Standard Model particles decaying to muon pairs in SND@LHC
Speaker: Henrique Santos (Instituto Superior Técnico)
-
23
Presentation 22: Anomaly Detection in Searches for New Physics at the LHC
Anomaly Detection has recently emerged as a novel path to explore the Large Hadron Collider's (LHC) data in the search for phenomena beyond the Standard Model (SM) of Particle Physics. Technically, it relies on machine learning algorithms with the ability to model the SM background expectation and detect potential New Physics events that differ from that background. This approach complements the program of searches at the LHC, so far unsuccessful in finding evidence for beyond-SM phenomena, with the great advantage of signal-agnosticism: instead of searching for a specific signal of a given theory model, the search is broadened to any suspicious event that does not look like a SM one. This paradigm change is valuable in probing the increasing panoply of theories proposed to address the SM shortcomings - the lack of dark matter candidates, the matter/antimatter asymmetry, and the hierarchy problem, among others. This proposal consists of studying different anomaly detection algorithms and testing their performance in a variety of flagship cases of New Physics searches at the LHC. In addition, more robust implementations will be investigated to ascertain the feasibility of data-driven models, which are affected by the presence of systematic uncertainties in experimental data.
Speaker: Inês Pinto (Instituto Superior Técnico) -
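As one concrete example of a signal-agnostic algorithm of this family, the sketch below scores events with an isolation forest; features and numbers are synthetic placeholders, not LHC data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Synthetic stand-in for reconstructed event features (e.g., jet pT, MET, ...):
background = rng.normal(0.0, 1.0, size=(10000, 4))   # SM-like events
signal = rng.normal(3.0, 1.0, size=(50, 4))          # a few anomalous events
X = np.vstack([background, signal])

# Train only on the (background-dominated) data, then score everything.
model = IsolationForest(n_estimators=200, random_state=0).fit(background)
scores = model.score_samples(X)                      # lower score = more anomalous
candidates = X[scores < np.quantile(scores, 0.01)]   # flag the 1% most anomalous
print(len(candidates))
```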
24
Presentation 23: New Physics Searches at the LHC using Anomaly Detection
The Standard Model (SM) of Particle Physics is notably descriptive and has predicted new particles well in advance of their discovery. Still, there is ample evidence of the need for New Physics beyond the Standard Model (BSM). Conventionally, searches are driven by specific signals and theory assumptions, preventing a complete exclusion of new phenomena. A new paradigm is to use Anomaly Detection techniques to conduct more generic analyses, able to discover any event unforeseen by the SM.
In the spirit of implementing model-agnostic searches for New Physics, in this work we will systematically compare various Anomaly Detection methods. The purpose is to analyze the performance of the different methods when applied to the same benchmark signals using identical evaluation metrics. This involves exploring variations and combinations of methods found in the literature. During this project we will focus our attention on clustering and graph-network methods.
Speaker: Inês Moreira (Instituto Superior Técnico) -
25
Presentation 24: Searching for Higgs boson anomalous couplings with the ATLAS detector
There are many observed phenomena in Nature which the Standard Model of Particle Physics (SM), despite its successes, is not able to describe. One of the major questions left unaddressed by the SM is the observed asymmetry between matter and antimatter in the Universe. Violation of charge-parity (CP) symmetry in the Higgs boson sector is a well-motivated way to address the discrepancy between theory and observation, and requires precise measurements of the Higgs boson couplings. In fact, after the Higgs boson discovery, one of the main goals of the Large Hadron Collider (LHC) is to precisely measure its properties, in the quest to find signs of new physics.
The Higgs boson is a particularly good probe for new physics given its unique characteristics and connection to the electroweak symmetry breaking mechanism. In this project, the student will join the LIP ATLAS team to search for anomalous couplings between the Higgs boson and the W boson, in the WH production channel. The student will explore new likelihood-free inference methods and apply them to this search for the first time, taking advantage of state-of-the-art machine learning to place powerful constraints on new physics.
Speaker: Marta Silva (Instituto Superior Técnico) -
26
Presentation 25: Search for CP-violating components in leptonic WH production with the ATLAS detector
A big open question in our understanding of the universe is why there is more matter than antimatter in the universe today, while theory predicts that the Big Bang produced the same amount of each. A condition for this difference to exist is the violation of charge-parity symmetry. According to some new theories, this violation could originate in interactions with the Higgs boson that the Standard Model of Particle Physics does not account for.
The goal of this project is to look for potential anomalous couplings between the Higgs and W bosons that could suggest this is a source of the Charge-Parity violation required to explain the matter-antimatter asymmetry, using data from the ATLAS experiment.
Speaker: Beatriz Rosalino (Instituto Superior Técnico) -
27
Presentation 26: Minimal U(1) two-Higgs-doublet models for quark and lepton flavour
The Standard Model (SM) of particle physics describes three of the four fundamental forces in nature: the electromagnetic (EM), weak, and strong interactions. Besides its theoretical elegance, the SM provides a unified framework to explain a plethora of natural phenomena, leading to predictions that stand up to rigorous experimental testing. Nonetheless, some questions remain unanswered within this standard theory.
In my presentation, I will address neutrino oscillations by considering effective neutrino masses described by the Weinberg operator. I will also explore the longstanding flavour puzzle, i.e., the lack of a unique guiding principle to explain the observed fermion masses and mixing patterns. A popular approach to tackling this matter involves requiring some elements in lepton and quark mass matrices to vanish by imposing Abelian flavour symmetries. However, as the SM does not allow the implementation of this approach, I focus on one of its simplest extensions: the two-Higgs-doublet model (2HDM).
Speaker: José Rocha (Instituto Superior Técnico)
-
28
Presentation 27: Hubble-induced phase transitions: The Gauss-Bonnet case
Phase transitions in the early universe are critical events that played a crucial role in shaping the structure and properties of the cosmos.
Understanding these transitions is essential for constructing a comprehensive picture of the early universe's evolution.
In this work, we present a detailed description of the symmetry-breaking dynamics of a beyond-the-Standard-Model extension. This comes in the form of a scalar field non-minimally coupled to the Gauss-Bonnet invariant, which induces a phase transition following a period of inflation. In addition to a classical treatment, characterized by the evolution of the field towards large expectation values, we quantize this field and observe the rapid amplification of the infrared perturbation modes. This observation carries intriguing consequences, particularly enabling the use of classical lattice simulations, a focal point of the subsequent phase of this project.
The attractiveness of this toy model lies in its independence from the specifics of the model of inflation and the energy scales at which the phase transition occurs.
Notably, a stochastic gravitational-wave background created at an energy scale lower than that of inflation can more easily fall within the observable window of next-generation gravitational-wave observatories.
Speaker: Tomás Mendes (Técnico Lisboa) -
29
Presentation 28: Electrically Charged Hyperboloidal Evolution
Starting from an asymptotically flat spacetime, we slice it using hyperboloidal slices, in contrast to the usual Cauchy slices. These allow one to reach future null infinity in a smooth way, which serves as a good approximation to the position of an observer on Earth with respect to sources of gravitational radiation. So far, the work has simulated a complex scalar field, i.e., a particle-antiparticle pair, which serves as the source term for Maxwell's equations in their covariant form. Convergence has been tested, and future plans to add a charged black hole are discussed.
Speaker: João Álvares -
30
Presentation 29: Numerical Studies of Hawking Radiation via hyperboloidal slices
Hawking radiation, proposed by physicist Stephen Hawking in 1974, is a theoretical result stating that black holes emit particles in accordance with Planck's distribution of thermal radiation. This outcome arises from coupling classical spacetime to quantum fields. The emission of particles is not a standalone characteristic of black holes; rather, it is a consequence of gravitational collapse. By dividing the dynamic scenario into an initial flat, stationary region, an intermediate dynamic collapsing region, and a final stationary Schwarzschild region, one finds that the vacuum state of the initial region differs from that of the final region. This leads to particle creation from an initial vacuum state for an observer in the future.
The project focuses on a classical and numerical study of Hawking radiation by propagating a massless scalar field from past null infinity to future null infinity. The field is subsequently extracted and Fourier analysis is performed to identify negative-energy mode contributions and thus ascertain particle creation. A distinct feature of this setup is the evolution of the field on hyperboloidal slices - spacelike surfaces extending towards null infinity - obtained by using a compactified radial coordinate and redefining the time coordinate. This approach allows the exploration of asymptotic regions of spacetime within the computational domain and correct radiation extraction.
The final objective of the research is to evolve the field on a dynamical background corresponding to gravitational collapse. Initial steps involve testing the code on stationary backgrounds such as Minkowski, Minkowski with a potential, and Schwarzschild spacetimes, where particle creation is not expected.
Speaker: Pedro Baptista -
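For orientation, one common construction of hyperboloidal slices in Minkowski spacetime combines a "height-function" shift of the time coordinate with a radial compactification; the exact choices used in this work may differ.

```latex
% Level sets of \tau are spacelike hyperboloids reaching future null infinity
% (S is a free scale); r is compactified through a factor \Omega(\rho) that
% vanishes at the outer grid boundary, which then represents null infinity:
\tau = t - \sqrt{S^{2} + r^{2}}, \qquad r = \frac{\rho}{\Omega(\rho)}
```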
31
Presentation 30: Improved Binary Black Hole Initial Data
The past decade has witnessed remarkable progress in the detection of gravitational waves, with milestones such as the direct observation of binary black hole mergers and binary neutron star coalescence. Numerical relativity stands as a crucial tool for understanding gravity, particularly in extreme scenarios where analytical solutions become impractical.
The critical aspect of constructing initial data for binary black hole simulations is addressed, recognizing the need to move beyond conventional conformally flat methods. The challenge lies in capturing realistic radiation content from the system’s past history at initial time. The proposed methodology involves leveraging the Post-Newtonian formalism to construct the initial data, creating a binary black hole toy code for verification, and implementing solutions in the SpECTRE code.
These steps aim to efficiently solve the Extended Conformal Thin-Sandwich (XCTS) equations fed by arbitrary realistic initial data, offering improved initial data for more accurate evolutions. The utilization of the SpECTRE code, renowned for its efficiency in handling complex astrophysical problems, enhances the computational capabilities required for these simulations. We ultimately seek to improve the initial data of binary black hole systems, fostering more precise simulations and contributing to the ongoing exploration of gravitational waves in the universe.
Speaker: João Rebelo -
32
Presentation 31: Extreme mass-ratio inspirals into bosonic stars
Massive bosonic fields can form confined structures, held together by their own gravity, known as boson stars, in the case of scalar fields, or Proca stars, in the case of vector fields. Such bosonic structures have been proposed as possible “dark matter stars” and, for ultralight fields, have been shown to be a good description of dark matter haloes. If such stars are sufficiently massive, small compact objects orbiting around them could be detected due to gravitational-wave emission. The aim of this thesis will be precisely to study binary systems in which a small stellar compact object orbits a supermassive bosonic star. In particular, the final goal will be to study extreme mass-ratio inspirals into Proca stars, and compare the results to the case where the central object is a boson star or a black hole.
Speaker: João Silva -
33
Presentation 32: Unveiling Sources of 1/f Noise in Magnetic Tunnelling Junctions
Magnetic Tunneling Junctions (MTJs) are fundamental components of spintronics, offering high sensitivity to magnetic fields and many potential applications. However, these devices are susceptible to various noise sources, the most problematic being the so-called 1/f noise that is particularly detrimental to device performance at low frequencies.
1/f noise is ubiquitous in electronic devices and has been extensively studied in many conventional electronic components. Even in semiconductors, its origins and dependence on materials and device geometry remain controversial. The origins of this noise in MTJs remain even less clear than in non-magnetic semiconductor devices.
The goal of this project is a systematic investigation of 1/f noise in MTJs, which is crucial to unveil its underlying mechanisms, allowing for the development of effective noise-mitigation strategies and enhancing the reliability of MTJ-based technologies.
Speaker: Rafael Dias -
34
Presentation 33: Bias-driven Non-Equilibrium Phase Transitions
In the field of condensed matter physics, recent research on quantum transport setups has been instrumental in advancing quantum computing and sensing technologies, as well as contributing to theoretical physics. This thesis delves into bias-driven non-equilibrium phase transitions, specifically examining the dynamics and characteristics of these transitions beyond equilibrium scenarios. The focus is on model quantum transport setups with quantum dots, which are nanoscale systems that exhibit quantum phenomena with pronounced clarity. Through the use of Keldysh path integral and stochastic equations, this study establishes a comprehensive theoretical framework for understanding quantum dots in non-equilibrium conditions. Key findings offer fresh insights into the behavior of quantum dots under bias-driven voltage, revealing the interplay between quantum coherence, electron-electron interactions, and environmental coupling.
Speaker: José Afonso -
35
Presentation 34: Learning Dynamics of Neural Networks
Nowadays, the increasing amount of data requires ever more sophisticated tools capable of processing it without the need for human assistance. Fields such as computer vision, speech recognition, and self-driving cars rely on artificial neural networks and their interconnected structure to find common patterns in data, owing to a structure that is inherently adaptable toward a specific goal set by the network's creator. However, a solid theoretical foundation is still lacking to explain many observed behaviours, such as the double-descent effect in large networks, the effectiveness of stochastic gradient descent for nonlinear problems, and the link between overparameterization and overfitting. Addressing these knowledge gaps is challenging but important. As a way to better understand how these tools learn, statistical treatments will be applied, as large neural networks can be viewed as dynamically interacting degrees of freedom. The setup will consist of the study of the Hessian matrix of the loss function of the student network in the teacher-student framework, which allows for intimate control of the learning process through easy access to the dataset and the student network. Moreover, it is expected that chaotic behaviour will be observed, which will be quantified through various metrics, such as Lyapunov exponents, Kolmogorov entropy, and attractor dimension.
Speaker: Miguel Moreira -
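A minimal instance of the teacher-student framework mentioned above, reduced to linear networks for brevity (illustrative only, not the thesis setup):

```python
import numpy as np

rng = np.random.default_rng(5)

# Teacher-student setup: a fixed "teacher" generates labels and a "student"
# with the same architecture learns them by SGD.
d = 20
w_teacher = rng.normal(size=d)
w_student = rng.normal(size=d)

lr = 0.05
for step in range(2000):
    x = rng.normal(size=d)
    y = w_teacher @ x                  # teacher label (noise-free)
    err = w_student @ x - y
    w_student -= lr * err * x          # SGD step on the squared loss

print("final distance to teacher:", np.linalg.norm(w_student - w_teacher))
```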
10:36
Coffee Break
-
36
Presentation 35: Assessing Quantum Computers' Performance in the NISQ era
Achieving undeniable quantum supremacy requires the precise manipulation of many high-quality qubits. Whether the endeavor of building such a quantum computer is possible remains a central question with profound implications for both physics and technology. Currently, we are living in the "noisy intermediate-scale quantum" (NISQ) era, characterized by small and imperfect quantum processors. Our immediate challenge is to improve their quality and scalability. To address this challenge, we need tools to compare various strategies and architectures for quantum computer construction. This task is complex, as numerous factors, such as the number of qubits, connectivity, quantum gates, and compatibility with classical software, influence a quantum computer’s performance.
Quantum Volume (QV) serves as a performance metric for quantum computers. It represents the largest random quantum circuit with the same number of qubits as layers that can be run with reliable results. In this way, it assesses a quantum computer’s capability without delving into its specific details. Computing QV involves determining the average over random circuits of the fidelity between an ideal state (resulting from a faultless computer) and an imperfect state (resulting from a faulty one).
This project aims to gain a deeper understanding of QV and explore how it depends on factors like expressibility (the range of achievable unitaries in a circuit), connectivity, and gate size. This analysis will help us decide when QV is suitable and when additional refinements are necessary for meaningful comparisons of quantum processors.
Speaker: Rodrigo Pereira -
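In practice, the QV test is usually phrased through the heavy-output probability of random square circuits. The sketch below imitates one circuit with a Haar-random unitary and a toy depolarizing noise model; everything here is a simplified stand-in, not the full protocol.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_unitary(dim):
    """Sample a Haar-random unitary via QR of a complex Gaussian matrix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))          # fix column phases for Haar measure

n_qubits = 4
dim = 2**n_qubits
U = haar_unitary(dim)                   # stand-in for one random square circuit
p_ideal = np.abs(U[:, 0])**2            # ideal output distribution from |0...0>

heavy = p_ideal > np.median(p_ideal)    # "heavy" outputs of this circuit

# Toy noise model: depolarizing mixture of the ideal distribution with uniform.
f = 0.7                                 # illustrative circuit fidelity
p_noisy = f * p_ideal + (1 - f) / dim
hop = p_noisy[heavy].sum()              # heavy-output probability
print(f"HOP = {hop:.3f}  (the QV test requires > 2/3, averaged over circuits)")
```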
37
Presentation 36: Liouvillian Tomography in Noisy Intermediate-scale Quantum Computers
Almost 40 years after Feynman first predicted the need for quantum computers, John Preskill coined the term noisy intermediate-scale quantum (NISQ) computers to describe the current stage of development of quantum computers. The NISQ era is characterized by an increased number of qubits available as well as by a significant lack of precision in manipulating these. Therefore, suitable diagnosis tools that can identify and characterize dissipation and decoherence in NISQ computers are needed to improve these devices.
In this work, we propose a protocol for the estimation of the time-dependent Liouvillian that describes the evolution of qubits under elementary pulses in a NISQ processor. Notably, this protocol can capture non-Markovian dynamics, marking a significant advantage over similar protocols for Liouvillian estimation.
Speaker: Diogo Aguiar -
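The baseline object being estimated is a Liouvillian; in the Markovian case it takes the standard Lindblad form below (the protocol above targets time-dependent and possibly non-Markovian generalisations of this generator).

```latex
% Lindblad-form Liouvillian: Hamiltonian H, jump operators L_k, rates gamma_k:
\frac{d\rho}{dt} = \mathcal{L}(\rho)
  = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_{k}\gamma_{k}\!\left( L_{k}\rho L_{k}^{\dagger}
  - \tfrac{1}{2}\left\{ L_{k}^{\dagger}L_{k},\,\rho \right\} \right)
```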
38
Presentation 37: Development of a new variational quantum algorithm for MaxCut: QAOA and QEMC hybrid algorithm
Amidst the quantum revolution, promises of groundbreaking advancements abound, yet the current reality falls short, as quantum computers grapple with the limitation of insufficient qubits. We are now in the NISQ era - Noisy Intermediate-Scale Quantum - marked by imperfect qubits and error-prone quantum gates. To harness utility from these machines, we embrace Hybrid Quantum-Classical Computing, a strategy mitigating noise through collaboration with classical processors.
In this presentation, I unveil my work on a novel hybrid quantum-classical algorithm tailored for the "Maximum Cut" problem. Beyond its relevance in machine learning, statistical physics, circuit design, and data clustering, the problem's NP-complete classification underscores its significance in tackling broader NP problems, like the knapsack problem and integer factorization. The latter plays a crucial role in internet security through public-key cryptography schemes, such as RSA encryption, which rely entirely on classical computers' inability to factor large numbers. Motivated by these implications, my research delves into merging two existing variational quantum algorithms into a single, optimized interpolation. The aim is to extract and combine the strengths of both algorithms, offering a promising avenue for addressing NP-complete problems efficiently.
Speaker: Afonso Azenha -
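The classical half of such hybrid loops scores candidate partitions with the MaxCut objective; a minimal sketch with a toy graph and a brute-force reference solution follows.

```python
def cut_value(edges, bits):
    """Classical MaxCut objective: count edges crossing the partition."""
    return sum(1 for u, v in edges if bits[u] != bits[v])

# Toy 5-node ring graph; in the hybrid loop this function would score the
# bitstrings sampled from the parameterized quantum circuit.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best = max(range(2**5),
           key=lambda s: cut_value(edges, [(s >> i) & 1 for i in range(5)]))
print(best, cut_value(edges, [(best >> i) & 1 for i in range(5)]))  # max cut = 4
```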
39
Presentation 38: Quantum simulation with cold and hot atomic vapors
The main topic of my thesis is the study of the dynamics of ultra-cold atoms: a cloud of atoms will be cooled via laser cooling to temperatures of the order of hundreds of mK and trapped using an external magnetic field, hence the name of the experimental apparatus: the magneto-optical trap. Using this apparatus, I will study the turbulent dynamics of this cloud; in particular, I will study the photon bubble instability and the resulting regime of photon bubble turbulence.
Speaker: Pedro Monteiro -
40
Presentation 39: Gravitational analogues in cold atomic gases: Shock waves and gravitational collapse
One can derive a fluid model for a gas of neutral atoms that has been cooled down and trapped in a magneto-optical trap. Two main forces act on the system: the central trapping force, and a repulsive collective force induced by multiple scattering and absorption of light, whose intensity can be regulated by the set-up parameters of the trap. This allows for its manipulation, for example its sudden reduction. Moreover, these forces constitute a balance similar to that of a spherically symmetric star, in the sense that a repulsive collective force and a pressure gradient counteract a central attractive force.
These factors motivate the main objective of the project, which is to explore gravitational analogues in the ultra-cold gas, namely phenomena such as collapse and shock waves, which are bound to occur in sudden compressions. To thoroughly analyse these complex phenomena, proper numerical methods must be employed.
Speaker: Francisco Raposo -
41
Presentation 40: Glassy Dynamics in Frustrated Magnetic Spinners
The far-from-equilibrium nature of glass-forming many-body systems challenges the development of controlled experiments to study their structural and dynamical properties. In the context of an ongoing collaboration, a tabletop setup consisting of frustrated magnetic spinners has been proposed, which allows experiments to be performed at the human scale in a controlled manner.
Combining methods from (soft) condensed matter physics, statistical mechanics and computational physics, the objective of this project is to develop a theoretical and numerical model of this system, and to parameterise and validate the model against the existing experimental results.
Speaker: Diogo Soares -
42
Presentation 41: Instrumentation, Control and Monitoring of Superheated Liquid Detectors for Detection of Nuclear Materials
The loss, theft, and smuggling of nuclear materials in an ever-increasingly modernized and connected world may pose a threat to the lives of many. Trafficking networks are finding new ways to smuggle radioactive materials across borders, while the development of detectors has seen little to no improvement.
This project aims to solve this problem through the development of a high-sensitivity, Freon-based-suspension Superheated Droplet Detector (SDD), aided by microphone-based acoustic and optical instrumentation capable of discriminating true nucleation events from background acoustic noise.
Speaker: Leonardo Rodrigues -
43
Presentation 42: Optimizing Gallium Oxide Thin Films for Electro-Optical Applications
Gallium oxide is an ultrawide bandgap semiconductor that has shown great promise in recent years due to its distinctive properties and wide range of potential opto-electronic applications. Ga₂O₃ thin films, which can be used in Deep-UV photodetectors, oxygen sensors or Resistive RAMs, for example, are inexpensive and easy to produce, benefit from commercial microfabrication techniques and can be applied in scalable designs. The purpose of this work is to use Radio-Frequency Magnetron Sputtering to obtain Ga₂O₃ thin films with different properties, characterizing them and optimizing this process. This analysis will be followed by the development of a device prototype based on a thin film, such as a solar-blind DUV photodetector or a waveguide in the DUV to Near InfraRed. These steps will be accompanied by the study, modification and improvement of a RFMS chamber developed for the deposition of Ga₂O₃ at high temperatures. As a whole, the thesis explores the entire end-to-end process, from the optimization of the deposition process, to the study and modification of the obtained thin films and, finally, the design and fabrication of a device prototype.
Speaker: Ana Sofia Sousa (Instituto Superior Técnico) -
12:36
Lunch Break
-
44
Presentation 43: Implementation of FISSIONIST's safety channels
FISSIONIST (FISSION reactor simulator at IST) is a full scope simulator of a low power fission reactor built by IST students, using a control system of the Portuguese Research Reactor (RPI). It is based on real control instrumentation of a fission reactor, connected to programmable impulse/current sources, which simulate the response of a neutron detector with a power range from less than 1 mW to more than 1 MW.
The aim of this project is the simulation, using a programmable current source, of the response of compensated ionization chambers connected to the FISSIONIST safety channels. Realistic physical models will be used to reflect the reactor's kinetic behavior, regarding its power and period, depending on the position and movements of the control rods, with the operating parameters of the RPI.
Speaker: Eduardo Ferreira -
45
Presentation 44: Retrieving Data from Damaged Audio Magnetic Tapes Using TMR Sensors
This project involves optimizing tunnel magnetoresistive (TMR) sensors to cater to the unique dimensions and types of magnetic audio tapes; exploring sensor geometries that mitigate damage to the tapes, improving the amount of information extracted; studying and improving the scanning methodology to enable reliable and accurate scans with a good signal-to-noise ratio; converting data from damaged magnetic tapes into a digital format; and using signal-reconstruction methodologies to improve imperfect or incomplete signals.
Speaker: João Chaves -
46
Presentation 45: Spintronics-based computing for AI
The development of new hardware components is essential for supporting Artificial Intelligence (AI) computational tasks. Neuromorphic computing is witnessing a shift through the integration of advanced spintronic devices to replace CMOS technology. One example of these devices is the multilevel magnetic tunnel junction (M2TJ), which shows very interesting features for deployment in large neural architectures. My project aims to fabricate M2TJs with enhanced levels of state control, specifically targeting 8 and 16 levels (3 and 4 bits), switchable with spin-orbit torque (SOT).
The central innovation lies in the development of M2TJs with one of the magnetic layers replaced by a Synthetic Layered Magnetic Multilayer Structure (SLMMS). This structure facilitates the realization of multiple stable magnetic states, crucial for achieving higher bit depth in memory cells. The focus on SOT as a switching mechanism is driven by its scalability and energy efficiency, especially critical as we downscale the physical dimensions of the M2TJs. A critical aspect of this research is achieving a resistance of less than 5 ohms in the electrical contact lines and ensuring that the voltage breakdown of individual elements exceeds 0.8V.
Furthermore, my project will explore the integration of these M2TJs into a new crossbar architecture. The crossbar design will efficiently manage the multistate cells while maintaining the scalability and density necessary for high-performance computing applications.
Speaker: Francisco Simões -
47
Presentation 46: Analog simulators of artificial life and AI accelerators based on optical neural networks
This research aims to translate Lenia—a computational model of artificial life based on cellular automata—into an optical system to meet the computational demands posed by AI hardware. We introduce nonlinearity into the optical system through the development of a physical nonlinear neural network. The initial phase involves the modeling of nonlinear material layers and comprehension of their modulation properties. To benchmark the physical neural network, we begin by addressing the realization of an optical AND gate, where binary states are encoded on an optical vortex basis. By training the design of phase masks using a gradient descent algorithm known as wavefront matching, two input beams traversing the phase masks can be mapped into an output beam according to the AND operation. Results with and without nonlinear layers are being explored to improve the accuracy of the nonlinear optical gate.
Speaker: Carolina Almeida -
48
Presentation 47: Smart Fingertip Tactile Sensors for Agrorobotics Applications
With a growing world population, food supply demand is sure to continue to grow as well. Therefore, more than ever, autonomous food-harvesting techniques have become a necessity to match these ever-growing demands. Existing solutions relying on optical inspection face limitations, particularly the dependence of image quality on factors such as lighting conditions. Tactile sensing-based classification is a promising alternative, albeit with its own challenges. This thesis aims to develop a prototype device that is able to autonomously classify harvested food (such as fruits and vegetables) by ripeness, hopefully contributing to a faster production process. The tactile sensing device will be built upon magnetoresistive magnetic-field sensors with a biomimetic approach, utilizing a magnetorheological elastomer as a soft skin. This architecture provides a large sensing area with significant spatial resolution, while still being thin and flexible. In a subsequent stage, a machine-learning classifier algorithm will be implemented to facilitate the differentiation of food products based on their ripeness. The combination of soft-skin tactile sensing and machine learning holds promise for overcoming the limitations of current food-harvesting techniques, ultimately aiding the food industry in meeting the escalating demands of a growing global population.
Speaker: Francisco Mêda -
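A minimal sketch of the classification stage, assuming (hypothetically) that each sample is a vector of readings from a 16-channel magnetoresistive array and that ripeness has three classes; the data here are random placeholders, and the classifier choice is illustrative rather than the thesis method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row holds readings from a 16-sensor
# magnetoresistive array while the fingertip presses a fruit;
# labels are ripeness classes (0 = unripe, 1 = ripe, 2 = overripe).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))        # placeholder sensor features
y = rng.integers(0, 3, size=300)      # placeholder ripeness labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```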
49
Presentation 48: Design and validation of microfluidic structures on a Lab-on-a-CD for biomedical diagnosis
Centrifugal microfluidic technology, applied in the context of Lab-on-a-CD platforms, represents a significant advancement in biomedical diagnosis. A partnership between INESC MN and VitalBio enabled my research into the use of centrifugal microfluidics for blood analysis and how it simplifies fluid handling in microscale channels, leading to more efficient, precise, and rapid results. This approach emphasizes fluid mechanics and material selection for CD fabrication, both crucial for effective microscale fluid manipulation and for cost-effective scalability of the technology. The development process prioritizes prototype refinement, integrating complex systems into a compact and functional format. Collectively, these developments represent a notable progression in medical diagnostic techniques, highlighting the role of microfluidic innovations in enhancing healthcare solutions.
Speaker: Joana Ramos -
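For context on the pumping mechanism, the sketch below evaluates the standard centrifugal pressure relation dp = rho * omega^2 * (r2^2 - r1^2) / 2 for a liquid plug on a spinning disc; the fluid, radii, and rotation speed are illustrative, not values from the project.

```python
import numpy as np

def centrifugal_pressure(rho, rpm, r1, r2):
    """Pressure driving a liquid plug between radii r1 and r2 [m] on a
    disc spinning at `rpm`, from dp = rho * omega^2 * (r2^2 - r1^2) / 2."""
    omega = 2 * np.pi * rpm / 60.0
    return 0.5 * rho * omega**2 * (r2**2 - r1**2)

# Example: water plug between 15 mm and 25 mm at 3000 rpm (~20 kPa).
print(f"{centrifugal_pressure(1000.0, 3000, 0.015, 0.025):.0f} Pa")
```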
50
Presentation 49: How does the brain control eye movements? An analysis-by-synthesis approach
The development of biologically accurate models has enabled biologists and neurologists to study and investigate the mechanisms of biological processes without having to interfere with, and potentially damage, the real system.
The human body comprises a vast number of complex systems, and this work aims to study one of these - the control of human eye motion.
To that end, a physical model of a biomimetic robotic eye equipped with six tendons that provide the eye with six degrees of freedom (DOF) and biologically realistic properties is used to study the control strategies employed by the brain to plan and execute fast and accurate (saccadic) eye movements. The control of saccades is formulated as an optimization process that takes into account factors ('costs') such as error, duration and energy consumption. To find the optimal control strategies, a Reinforcement Learning optimization algorithm is employed to create a controller interface capable of generating optimal trajectories that minimize the total movement cost.
With this formulation, we expect our model to replicate human-like saccadic motion without imposing additional constraints or a priori knowledge of the real system. The results will be compared to real measurements of human saccades to confirm the validity of the developed model.
Speaker: Miguel Teixeira -
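To make the cost-based formulation concrete, here is a minimal sketch on a toy first-order eye plant (not the six-tendon robotic model): the cost combines terminal error, duration, and control energy, and a simple random search stands in for the Reinforcement Learning optimizer; all weights and dynamics are illustrative assumptions.

```python
import numpy as np

def simulate(u, dt=0.005, tau=0.05):
    """Toy first-order eye plant: tau * dv/dt = u - v; angle integrates v."""
    theta, v = 0.0, 0.0
    for ui in u:
        v += dt * (ui - v) / tau
        theta += dt * v
    return theta

def cost(u, target, dt=0.005, w_err=100.0, w_dur=1.0, w_energy=0.01):
    """Total movement cost: terminal error + duration + control energy."""
    err = simulate(u, dt) - target
    return w_err * err**2 + w_dur * len(u) * dt + w_energy * np.sum(np.square(u)) * dt

# Random-search stand-in for the RL optimizer: sample control sequences
# and keep the cheapest one.
rng = np.random.default_rng(0)
target = np.deg2rad(10.0)                # a 10-degree saccade
best_u, best_c = None, np.inf
for _ in range(2000):
    u = rng.normal(scale=2.0, size=30)   # 150 ms of piecewise-constant control
    c = cost(u, target)
    if c < best_c:
        best_u, best_c = u, c
print(f"best cost: {best_c:.3f}")
```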
15:54
Coffee Break
-
51
Presentation 50: Evaluation of magnetic susceptibility in cardiac tissues using magnetic resonance imaging
Magnetic Resonance Imaging (MRI) acquired for clinical diagnosis contains both the amplitude and the phase of the signal; the former is normally used for the anatomical identification of structures and their alteration in disease. The signal is determined, among other variables, by the motion of the structures under analysis and is influenced by their magnetic susceptibility, which depends on the chemical components present.
Cardiac pathologies can cause the deposition or accumulation of substances in the heart, such as blood and its degradation products or calcification of the valvular apparatus. These have a magnetic susceptibility significantly different from that of other cardiac tissues, inducing alterations and artifacts in the images, particularly in the phase component of the signal.
To isolate the effect of magnetic susceptibility in cardiac applications, it is necessary to minimize the impact of motion by modifying the shape of the gradients, so as to avoid phase accumulation due to movement, and by synchronizing acquisition with the cardiac and respiratory cycles. Having acquired a series of T2*-sensitive images with different weightings, the signal still has to be analyzed to remove potential contamination from the presence of fat before estimating the magnetic susceptibility map.
In this project, we intend to implement a protocol for acquiring and analyzing cardiac MRI images to assess magnetic susceptibility in the heart, in order to identify blood deposition or calcification.
Speaker: Maria Beatriz Costa -
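One standard building block of such pipelines (not necessarily this project's exact implementation) is estimating the off-resonance field map from multi-echo phase via a voxel-wise linear fit, phase(TE) = phi0 + 2*pi*df*TE, assuming the phase has already been unwrapped; the field map then feeds the susceptibility estimation. A minimal sketch:

```python
import numpy as np

def fit_fieldmap(phase, echo_times):
    """Voxel-wise linear fit of unwrapped multi-echo phase,
    phase(TE) = phi0 + 2*pi*df*TE, returning the off-resonance map df [Hz].
    `phase` has shape (n_echoes, ny, nx); `echo_times` in seconds."""
    te = np.asarray(echo_times)
    n, ny, nx = phase.shape
    A = np.stack([np.ones_like(te), 2 * np.pi * te], axis=1)   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, phase.reshape(n, -1), rcond=None)
    return coeffs[1].reshape(ny, nx)                           # df in Hz

# Synthetic check: a uniform 25 Hz off-resonance is recovered.
te = [0.002, 0.004, 0.006, 0.008]
phase = np.array([2 * np.pi * 25.0 * t * np.ones((4, 4)) for t in te])
print(fit_fieldmap(phase, te).mean())   # ~25.0
```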
52
Presentation 51: Optimising the performance of a deep-learning based tool for automatic Cardiac MR planning
Cardiac Magnetic Resonance (CMR) is a non-invasive imaging modality that has become the gold-standard technique for evaluating myocardial function, quantifying myocardial volumes, and detecting myocardial scar. However, CMR requires highly trained operators with a good knowledge of cardiac anatomy and CMR exam planning. Due to the variability of cardiac morphology and body shape between patients, it can take considerable time for an operator to become proficient at planning the different cardiac planes. Examination times are also directly related to the training level of the operator; a proficient operator saves considerable scanner time. Thus, CMR is seen as a complex and time-consuming technique, limited to high-volume tertiary centres with operators specialised in CMR.
Recently, we have developed an automated deep-learning-based planning tool [1] that performs the planning of the different cardiac planes without requiring user input. The goals were to 1) simplify CMR exams and reduce the overall training required for CMR operators; 2) provide reproducible and efficient studies with reduced examination time in all centres; 3) guarantee high-quality diagnostic images; 4) reduce the reliance on operators with advanced CMR training. Our current implementation provides the positioning for one specific view (e.g. short-axis, two-chamber or four-chamber), but we believe that an alternative architecture considering all three views at once would improve on the current performance.
Data required for this project are available through a collaboration with Linköping University (Sweden).
Speaker: Juna Santos -
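A hypothetical sketch of the joint multi-view idea: one small encoder per localiser view, fused to regress plane parameters (normal plus offset) for all three cardiac planes at once. Layer sizes, the parameterisation, and the fusion scheme are illustrative assumptions, not the tool's published architecture.

```python
import torch
import torch.nn as nn

class MultiViewPlanner(nn.Module):
    """Hypothetical joint planner: one CNN encoder per localiser view,
    features concatenated, then a linear head regressing (nx, ny, nz, d)
    for the short-axis, two-chamber, and four-chamber planes together."""
    def __init__(self, n_planes=3):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.encoders = nn.ModuleList([encoder() for _ in range(3)])
        self.head = nn.Linear(3 * 32, n_planes * 4)

    def forward(self, views):                 # list of three (B,1,H,W) tensors
        feats = torch.cat([e(v) for e, v in zip(self.encoders, views)], dim=1)
        return self.head(feats).view(-1, 3, 4)

planner = MultiViewPlanner()
views = [torch.randn(2, 1, 64, 64) for _ in range(3)]
print(planner(views).shape)                   # torch.Size([2, 3, 4])
```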
53
Presentation 52: An electrical model of the heart
In this master thesis, we plan to develop a model for the propagation of electrical signals in the heart, including the atria and ventricles, to analyse the propagation properties of the cardiac signal and the changes in the electric potential caused by different pathologies of the heart tissue, such as arrhythmias, tissue death, spiral waves, and accessory pathways.
Speaker: João Olívia -
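The abstract does not name a specific formulation; a common minimal choice for excitable cardiac tissue is a FitzHugh-Nagumo reaction-diffusion model, sketched below in 2-D with illustrative parameters (the thesis model may differ).

```python
import numpy as np

# Minimal 2-D FitzHugh-Nagumo excitable medium: u is the membrane
# potential, w a recovery variable; diffusion couples neighbouring cells.
N, D, dt, dx = 128, 1.0, 0.01, 1.0
a, b, eps = 0.1, 0.5, 0.02
u = np.zeros((N, N)); w = np.zeros((N, N))
u[:, :5] = 1.0                        # stimulate the left edge

def laplacian(f):
    # Five-point stencil with periodic boundaries via np.roll.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

for step in range(5000):
    du = D * laplacian(u) + u * (1 - u) * (u - a) - w
    dw = eps * (u - b * w)
    u, w = u + dt * du, w + dt * dw
print(f"max potential after integration: {u.max():.2f}")
```

Pathologies such as spiral waves can then be studied by locally altering excitability or by breaking a propagating wavefront.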
Closing Remarks
-