The third workshop on Parton Distributions and Lattice Calculations (PDFLattice2024) will be hosted at JLab. Sessions will start at 9 a.m. on Monday, November 18, and end at 5:30 p.m. on Wednesday, November 20, 2024.
The workshop continues the series started at the University of Oxford in 2017 and at Michigan State University in 2019. It aims to bring together the global PDF analysis and lattice-QCD communities to explore ways to improve the current understanding of the distributions of partons in the nucleon.
The format of the workshop will be similar to that adopted in previous editions: a few talks with ample time for discussion. A poster session is also planned. The 2024 edition of the workshop will focus on uncertainty quantification in PDF determinations from global analyses and lattice computations. As in previous editions, the workshop's outcome will be documented in a community paper. The workshop is in-person only.
Links to previous editions:
Links to previous white papers:
I introduce the 2024 edition of the PDFLattice workshop by looking at the lessons learnt over the two previous editions in the series, at the goals of the current edition, and at the challenges that we would like to address in the future.
We present recent updates from the MSHT global determination of the unpolarized PDFs, focusing on developments in aN3LO QCD fits, recent tests of the underlying fixed-polynomial parameterisation approach, a discussion of the uncertainty definition in a global PDF fit, and a first direct comparison to the neural-network (NNPDF) fit.
I review the status of methodologies for quantifying credibility intervals on parton distribution functions in global QCD analyses of experimental measurements.
The extraction of unpolarised parton distribution functions (PDFs) from lattice quantum chromodynamics (QCD) is now a mature enterprise. Significant theoretical and computational advances over the last decade have enabled calculations that quantify or estimate all systematic uncertainties. Entering the industrial era of precision lattice calculations will require a clear understanding of systematic uncertainties and the role of uncertainty quantification. I will present a broad overview of uncertainty quantification in lattice QCD calculations relevant to the extraction of unpolarised PDFs, with an emphasis on unresolved issues and future challenges.
The current scientific standard in PDF uncertainty estimation relies either on repeated fits over artificially generated data to arrive at Monte Carlo samples of best fits, or on the Hessian method, which uses a quadratic expansion of the figure of merit, the $\chi^2$-function. Markov chain Monte Carlo methods allow one to access the uncertainties of PDFs in a statistically sound procedure, without quadratic approximations, while preserving the correspondence between each sample and its $\chi^2$-value. Rooted in Bayesian statistics, the $\chi^2$-function is repeatedly sampled to obtain a set of PDFs that represents the statistical distribution of the PDFs in their function space. After removing the dependence between the samples (the so-called autocorrelation), the set can be used to propagate the uncertainties to physical observables. The final result is an independent procedure for obtaining PDF uncertainties that can be confronted with the state of the art, in order to ultimately arrive at a better understanding of the proton's structure.
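A minimal sketch of this strategy (not the speaker's code; the toy $\chi^2$, step size, and thinning recipe are illustrative assumptions): a random-walk Metropolis sampler draws PDF parameters from $\exp(-\chi^2/2)$, and the chain is thinned by its estimated autocorrelation time before uncertainties are propagated.

```python
import numpy as np

def chi2(theta, data, cov_inv, model):
    """Toy chi^2 for a parametrized PDF model against data."""
    r = data - model(theta)
    return r @ cov_inv @ r

def metropolis(chi2_fn, theta0, n_steps=50_000, step=0.02, rng=None):
    """Random-walk Metropolis sampling of the density exp(-chi^2/2)."""
    rng = rng or np.random.default_rng(0)
    theta, c = np.array(theta0, float), chi2_fn(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        c_prop = chi2_fn(prop)
        if np.log(rng.uniform()) < 0.5 * (c - c_prop):  # accept with prob. exp(-(c_prop - c)/2)
            theta, c = prop, c_prop
        chain.append(theta.copy())
    return np.asarray(chain)

def thin_by_autocorrelation(chain, max_lag=1000):
    """Estimate the integrated autocorrelation time of the chain and thin accordingly."""
    x = chain[:, 0] - chain[:, 0].mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:][:max_lag]
    acf /= acf[0]
    cut = np.argmax(acf < 0.05) or max_lag   # first lag with negligible correlation
    tau = 1 + 2 * acf[1:cut].sum()
    return chain[::max(1, int(np.ceil(tau)))]

# usage: chain = metropolis(lambda th: chi2(th, data, cov_inv, model), theta0)
#        samples = thin_by_autocorrelation(chain)
```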
Parton distribution functions (PDFs) form an essential part of particle physics calculations. Currently, the most precise predictions for these non-perturbative functions are generated through fits to global data. A problem that several PDF fitting groups encounter is the presence of tensions between data sets that appear to pull the fits in different directions. In other words, the best fit depends on the choice of data sets. Several methods to capture the uncertainty in PDFs in the presence of seemingly inconsistent fits have been proposed and are currently in use. These methods are important to ensure that uncertainties in PDFs are not underestimated. Here we propose a novel method for estimating the uncertainty by introducing a generalized statistical model inspired by unsupervised machine learning techniques, namely the Gaussian mixture model (GMM). Using a toy model of PDFs, we demonstrate how the GMM can be used to faithfully reconstruct the likelihood associated with PDF fits, which can in turn be used to accurately determine the uncertainty on PDFs, especially in the presence of tension in the fitted data sets. We further show how this statistical model reduces to the usual chi-squared likelihood function for a consistent data set, and we provide measures to optimize the number of Gaussians in the GMM.
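As a hedged illustration of the GMM idea (toy numbers; scikit-learn as an assumed tool, not the authors' implementation), one can fit a mixture to replica best-fit parameters and select the number of Gaussians by an information criterion:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# fits: best-fit parameters of a toy PDF model from Monte Carlo replicas,
# shape (n_replicas, n_params); with data-set tension the cloud is multi-modal.
rng = np.random.default_rng(1)
fits = np.vstack([rng.normal(-1.0, 0.3, (500, 2)),   # pull from data set A
                  rng.normal(+1.0, 0.3, (300, 2))])  # pull from data set B

# Choose the number of Gaussians by BIC, then fit the mixture.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(fits).bic(fits)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)
gmm = GaussianMixture(n_components=best_k, random_state=0).fit(fits)

# gmm.score_samples gives the log-likelihood: a generalized, possibly
# multi-modal replacement for the single-Gaussian (chi-squared) approximation.
print(best_k, gmm.means_)
```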
One of the goals of modern nuclear physics is to capture a picture of the inside of a proton. The distributions of the positions and momenta of quarks inside the proton are described through quantum correlation functions (QCFs). These QCFs cannot be directly measured, but must be inferred through modeling and fitting to data from particle collider experiments. Our confidence and uncertainty in these QCFs are then reported via uncertainty bands, which are given by the distribution of free parameters that enter the models of the QCFs. What these uncertainty bands obscure is how much each source of uncertainty contributes to the final posterior uncertainty. These sources are the uncertainty of the data, the distribution of the data, and the inherent model uncertainty. In this talk, I will introduce the concept of resolutions of the QCFs, how they separate these different uncertainty sources, and why these resolutions are a useful diagnostic tool for understanding how precise our picture of the proton is.
In this talk I will discuss the impact of perturbative improvements, i.e., resummations and renormalon subtraction, in the lattice QCD calculation of PDFs. I will compare the large-momentum effective theory (LaMET) and short-distance factorization (SDF) approaches, which can provide model-independent estimates of the PDF $x$-dependence and lowest Mellin moments, respectively. The leading renormalon subtraction is important for improving the perturbative convergence of LaMET, while the resummations can provide a quantitative way to estimate the theory uncertainties in both approaches.
We present a comprehensive study of the electromagnetic form factors (EMFFs) of the pion and kaon, as well as the generalized parton distributions (GPDs) of the pion, using lattice QCD. For the form factors, we compute the pion and kaon EMFFs at high momentum transfers, $-t$, up to 10 and 28 GeV$^2$, respectively, achieving good agreement with experimental results up to $-t \lesssim 4$ GeV$^2$ and providing benchmarks for forthcoming experiments. We also test the QCD collinear factorization framework, relating form factors to meson distribution amplitudes, at next-to-next-to-leading order (NNLO) in perturbation theory. Additionally, we report a lattice calculation of $x$-dependent pion GPDs at zero skewness with multiple values of the momentum transfer. We determine the Lorentz-invariant amplitudes of the quasi-GPD matrix elements for both symmetric and asymmetric momentum transfers with similar values and show the equivalence of the two frames. Then, focusing on the asymmetric frame, we utilize a hybrid scheme to renormalize the quasi-GPD matrix elements obtained from the lattice calculations. After the Fourier transforms, the quasi-GPDs are matched to the light-cone GPDs within the framework of large-momentum effective theory with improved matching, including next-to-next-to-leading-order perturbative corrections and leading renormalon and renormalization-group resummations. We also present the three-dimensional image of the pion in impact-parameter space through a Fourier transform in the momentum transfer $-t$.
Discrete variations in modeling choices represent a common source of systematic error in extracting physics results from data, particularly in the analysis of lattice simulations. Model averaging allows for the accounting of such systematic effects in a statistically consistent way. I will review the formalism of model averaging from a Bayesian perspective, and highlight key formulas and applications to lattice gauge theory.
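For concreteness, one common variant of these formulas (AIC-based weights; a sketch under stated assumptions, not necessarily the speaker's exact conventions, and omitting data-cut corrections) can be written as:

```python
import numpy as np

def model_average(means, errors, chi2s, n_params):
    """Information-criterion model average of fit results.

    Weights follow the Akaike information criterion,
    w_i ~ exp(-(chi2_i + 2 k_i)/2), and the averaged variance picks up a
    systematic term from the spread of the individual model means.
    """
    means, errors = np.asarray(means, float), np.asarray(errors, float)
    ic = np.asarray(chi2s, float) + 2 * np.asarray(n_params, float)
    w = np.exp(-0.5 * (ic - ic.min()))   # subtract the minimum for stability
    w /= w.sum()
    mean = np.sum(w * means)
    var = np.sum(w * errors**2) + np.sum(w * means**2) - mean**2
    return mean, np.sqrt(var)

# usage: average three fits of the same observable with different models
print(model_average([1.02, 0.98, 1.10], [0.03, 0.04, 0.05],
                    chi2s=[12.1, 13.5, 18.0], n_params=[3, 4, 2]))
```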
Parton distribution functions (PDFs) at large $x$ are difficult to extract from experimental data, but are extremely important for understanding hadron structure as well as for searching for new physics beyond the Standard Model. We study large-$x$ PDFs within the framework of the large-momentum $P^z$ expansion of lattice quasi-PDFs. In the threshold limit, the matching kernel of the quasi-PDF can be factorized into the heavy-light Sudakov hard kernel and a space-like jet function, and their renormalization group equations allow us to resum the threshold logarithms in the spectator momentum. The pion valence PDFs calculated with the resummed matching kernel clearly expose the breakdown of perturbative matching for spectator momentum $(1-x)P^z \sim \Lambda_{\rm QCD}$, and at the same time validate the perturbative matching when both the spectator and active-quark momenta, $(1-x)P^z$ and $x P^z$, are much larger than $\Lambda_{\rm QCD}$, where good perturbative convergence is observed after the implementation of threshold resummation on top of leading renormalon resummation.
Measuring hadrons with highly boosted momenta is necessary for the calculation of parton physics and form factors from lattice QCD. However, the worsening signal-to-noise ratio has been one of the biggest problems in simulating boosted hadron states on the lattice, preventing access to form factors at very large $Q^2$ and to parton physics with largely suppressed $1/P_z^n$ power corrections. We propose a new set of interpolators for boosted hadrons that could significantly improve the signal-to-noise ratio compared to traditional interpolators. These new interpolators will be very helpful for PDF calculations on the lattice.
We present the determination of the nucleon's Sachs electric form factor using the hadronic tensor formalism and verify that it is consistent with that from the conventional three-point function calculation. We additionally obtain the transition form factor from the nucleon to its first radial excited state within a finite volume. Consequently, we identify the latter with the nucleon-to-Roper transition form factor $G_{E^*}(Q^2)$, determine the corresponding longitudinal helicity amplitude $S_{1/2}(Q^2)$, and compare our findings with experimental measurements, for the first time using the hadronic tensor formalism.
One of the core challenges for the investigation of hadronic structure through exclusive processes is the inverse problem. Traditionally, the problem is considered to be one of inverting the convolution of the Compton form factors (CFFs) with the Wilson coefficient function in order to extract generalized parton distributions (GPDs). However, the EXCLAIM collaboration has devised a method for extracting GPDs in a neural network-based approach that accounts for the fundamental properties of GPDs, as well as lattice data and CFFs, avoiding the difficulty of de-convolution. We present details of our novel machine learning architecture and preliminary results for GPD forms in this approach.
Generalized parton distributions (GPDs) are a key construct for understanding the spatial distribution of quarks and gluons inside nucleons. These distributions are accessed through deeply virtual exclusive processes, the cross sections of which are parameterized using a class of observables known as Compton form factors (CFFs). We present a spectator-model-based parameterization of twist-2 GPDs in the quark, anti-quark, and gluon sectors. Our model parameters are constrained using high-precision electron-nucleon elastic scattering data, deep inelastic scattering data, and recent lattice QCD moment data. The errors on the parameters are determined using Markov chain Monte Carlo methods. Using these generalized parton distributions, various CFFs are presented in kinematic regimes relevant for both fixed-target and Electron-Ion Collider settings.
From polarized deep inelastic scattering (DIS) asymmetries and cross-section data, it is possible to extract the polarized structure functions $g_1$ and $g_2$. In the parton-model picture of the proton, the structure function $g_1^p$ is expressed in terms of $\Delta \Sigma$ and $\Delta G$, the net quark and gluon helicity contributions to the proton spin. This spin structure function can also be used to address the controversies associated with the sign of $\Delta G$. We present a rigorous approach to studying the correlations of the spin structure function $g_1$ with $\Delta \Sigma$ and $\Delta G$. A likelihood analysis of the polarized DIS spin structure function data is performed using a Markov chain Monte Carlo (MCMC) sampling method. We show preliminary results together with a detailed explanation of the method employed.
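Schematically (a toy stand-in, not the actual analysis chain; the covariance values are hypothetical), the correlations in question are read off directly from the MCMC samples:

```python
import numpy as np

# chain: MCMC samples of the helicity parameters, e.g. columns
# (Delta_Sigma, Delta_G), drawn from the g_1 likelihood (synthetic here).
rng = np.random.default_rng(2)
cov = [[0.04, -0.05], [-0.05, 0.25]]   # hypothetical anti-correlation
chain = rng.multivariate_normal([0.30, 0.20], cov, size=20_000)

corr = np.corrcoef(chain, rowvar=False)
print(f"corr(Delta Sigma, Delta G) = {corr[0, 1]:+.2f}")
```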
Generalized parton distributions (GPDs) provide information about the internal structure of the proton, but GPDs do not enter cross sections directly; instead, they enter through, e.g., the Compton form factors (CFFs), which are the observables of deeply virtual Compton scattering (DVCS). Additionally, one of the most physically interesting quantities in hadron structure is the angular momentum that quarks and gluons contribute to the proton; angular momentum is extracted from an $x$-weighted integral of the GPDs (the second Mellin moment), not from the GPDs themselves. Hence, we propose a machine learning framework that allows us to construct a one-to-one map between CFFs, the observables of exclusive experiments, and Mellin moments.
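A minimal sketch of such a map (hypothetical inputs and outputs, with a generic multilayer perceptron standing in for the collaboration's architecture):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in: inputs are CFF values sampled on a kinematic grid, targets
# are the corresponding second Mellin moments; both are synthetic, and the
# map is invertible by construction here.
rng = np.random.default_rng(5)
cffs = rng.normal(size=(4000, 8))             # e.g. Re/Im parts at a few kinematics
moments = cffs @ rng.normal(size=(8, 2)) + 0.1 * np.tanh(cffs[:, :2])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(cffs, moments)                        # learns the CFF -> moment map
print("training R^2:", net.score(cffs, moments))
```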
AI/ML-informed symbolic regression is the next stage of scientific modeling. We utilize the highly customizable symbolic regression package PySR to model the $x$ and $t$ dependence of the flavor isovector combination $H^{u-d}(x,t,\zeta,Q^2)$ at $\zeta=0$ and $Q^2 = 4$ GeV$^2$. These PySR models were trained on GPD pseudodata provided by both lattice QCD and the phenomenological sources GGL, GK, and VGG. Symbolic convergence and consistency of the lattice-trained PySR models are demonstrated through the convergence of their Taylor expansion coefficients and their first moment in the forward limit, $A_{10}(t=0)$. In addition to PySR penalizing models with higher complexity and mean-squared error, we implement schemes that provide Force-Factorized and Semi-Reggeized PySR GPDs. We show that PySR disfavors a Force-Factorized model for the non-factorizing GGL and GK sources, and that the PySR Best Fit and Force-Factorized GPDs perform comparably well for the approximately factorizing VGG source.
*This work was supported by the DOE
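A minimal PySR sketch for a fit of this kind (toy pseudodata and illustrative hyperparameters; the actual training setup and operator choices are the authors'):

```python
import numpy as np
from pysr import PySRRegressor

# X = (x, t) grid; y = pseudodata for H^{u-d}(x, t) at zeta = 0, Q^2 = 4 GeV^2.
# The target here is a hypothetical factorizing toy, f(x) * exp(t * g(x)).
rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(0.05, 0.8, 2000),    # momentum fraction x
                     rng.uniform(-2.0, 0.0, 2000)])   # momentum transfer t
y = X[:, 0] ** -0.5 * (1 - X[:, 0]) ** 3 * np.exp(X[:, 1] * (1 - X[:, 0]))

model = PySRRegressor(
    niterations=100,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp", "log"],
    maxsize=25,            # complexity penalty: smaller expression trees preferred
)
model.fit(X, y)
print(model.sympy())       # best symbolic candidate for H(x, t)
```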
We present recent updates from the CTEQ-JLab (CJ) global analysis of parton distribution functions. In particular, we focus on higher-twist effects and off-shell nucleon modifications in deuteron targets. We show how theoretical biases may be introduced by the treatment and implementation choices for these corrections. The impact of their interplay on the extraction of the $d$-quark PDF and structure functions at large $x$ will be discussed.
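Schematically, the two corrections in question are often written in forms like the following (a hedged sketch; conventions and parameterizations differ between analyses and are not necessarily the CJ collaboration's exact choices):

```latex
% Higher twist as a multiplicative power correction to the leading-twist
% structure function:
F_2(x,Q^2) \;=\; F_2^{\rm LT}(x,Q^2)\left(1 + \frac{C_{\rm HT}(x)}{Q^2}\right),
% and an off-shell modification of the bound-nucleon PDF, expanded about
% the mass shell p^2 = M^2:
q(x,p^2) \;\simeq\; q(x)\left[1 + \delta f(x)\,\frac{p^2 - M^2}{M^2}\right].
```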
We present an analysis of machine learning techniques applied to the determination of ratios of nucleon three-point and two-point correlation functions in Euclidean time. These quantities are typically computed in lattice QCD and are of interest for determining various measures of hadronic structure, such as parton distribution functions (PDFs) and generalised parton distributions (GPDs). Where direct lattice calculations are computationally expensive and time-consuming, machine learning offers the opportunity for significant gains in computing efficiency, ideally without the loss of key physics and while maintaining accuracy. In our analysis, we train various regression models and explore their performance on a sample of lattice QCD data. The correlation functions were computed at various values of $P_z$, the momentum of the nucleon, and $z$, the gauge-link length, and for several light ($u$ and $d$) quark masses. We study the correlations between the choices of $P_z$ and $z$, and use this to make predictions of high-$P_z$ correlators for use in the LaMET framework. In particular, we analyse the evolution of uncertainties across multiple data partitions and parameter choices, quantifying the systematic uncertainties.
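A hedged sketch of such a regression study (synthetic data with a hypothetical feature layout; gradient boosting standing in for the various models explored):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical layout: features are (P_z, z/a, light-quark mass); the target
# is a correlator ratio R = C_3pt / C_2pt (synthetic stand-in below).
rng = np.random.default_rng(4)
X = np.column_stack([rng.choice([1, 2, 3], 5000),        # P_z in lattice units
                     rng.integers(0, 12, 5000),          # gauge-link length z/a
                     rng.choice([0.01, 0.02], 5000)])    # light-quark mass
y = np.exp(-0.3 * X[:, 1]) * (1 + 0.1 * X[:, 0]) + 0.02 * rng.standard_normal(5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", reg.score(X_te, y_te))   # proxy for predicting high-P_z ratios
```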
We calculate Euclidean hadronic matrix elements of two spatially separated local quark currents in position space, in order to determine the $x$ dependence of parton distribution functions (PDFs). This approach is often referred to as the lattice cross section (LCS) method. We extend previous works on this topic by considering valence quark PDFs of the proton and adapt the formalism to our choice of operators. The calculation of the required four-point functions is carried out on an $n_f = 2+1$ gauge ensemble with lattice spacing $a = 0.0856~\mathrm{fm}$ and pseudoscalar masses $m_\pi = 355~\mathrm{MeV}$ and $m_K = 441~\mathrm{MeV}$.
QCD is a difficult theory of hadrons because it is formulated entirely in terms of unobservable partons, the quarks and gluons. In order to access parton distributions, hadronic observables such as experimental cross sections or lattice QCD matrix elements must be treated with factorization approximations that separate hadronic and partonic distance scales. These observables are sensitive to different regimes in the momentum fraction $x$. This complementarity could be beneficial in extractions of PDFs, TMDs, and GPDs. In this talk I will highlight a few specific cases where modern lattice QCD can have significant impact.
Uncertainty quantification (UQ) plays a crucial role in the predictive power of nonperturbative quantum correlation functions in high-precision phenomenology. My research explores novel approaches to UQ in the context of parton distribution functions (PDFs), using machine learning techniques to map between observables and underlying theoretical models and to navigate the complex parametric landscape of phenomenological scenarios, such as those beyond the Standard Model (BSM). By leveraging variational autoencoders (VAEs) and contrastive learning with similarity metrics, I investigate how the inherent uncertainties in phenomenological fits of collinear PDFs impact the landscape of new-physics models. My approach integrates explainability methods to trace underlying theory assumptions back to the input feature space, specifically the $x$-dependence of PDFs. This allows for the identification of salient features that shape fits and model interpretations, providing new insights into the role of theory assumptions in comprehensive phenomenological fits. Furthermore, the lessons from uncertainty quantification in PDFs can inform studies of multi-dimensional quantum correlation functions such as generalized parton distributions (GPDs), connecting these tools to a broader phenomenological framework in QCD. My work aims to enhance the incorporation of lattice inputs and refine our understanding of nonperturbative QCD through next-generation machine learning models, ultimately pushing the frontier of particle physics discovery.
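As a toy illustration of the VAE component (a minimal PyTorch sketch under stated assumptions; layer sizes, latent dimension, and the $x$-grid size are hypothetical, not the speaker's architecture):

```python
import torch
import torch.nn as nn

class PDFVAE(nn.Module):
    """Toy VAE over PDF replicas sampled on an x-grid."""
    def __init__(self, n_x=50, n_latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_x, 64), nn.ReLU())
        self.mu = nn.Linear(64, n_latent)
        self.logvar = nn.Linear(64, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_x))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def elbo(x, x_hat, mu, logvar):
    """Negative ELBO: reconstruction error plus KL to the unit Gaussian prior."""
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon + kl

# usage: replicas is a (n_samples, n_x) tensor of PDF values on the x-grid
# x_hat, mu, logvar = PDFVAE()(replicas); loss = elbo(replicas, x_hat, mu, logvar)
```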
The structure of hadrons relevant for deep-inelastic scattering is completely characterised by the Compton amplitude. It is possible to calculate the Compton amplitude directly by taking advantage of the familiar Feynman-Hellmann approach applied in the context of lattice QCD. In principle, the $x$-dependent structure functions can be recovered from the amplitude, or the amplitude itself can be incorporated into global QCD analyses. In this contribution, I will highlight the QCDSF Collaboration's recent developments in computing the Compton amplitude and extracting the (moments of) structure functions.
Unlike the extraction of parton distribution functions (PDFs) from experimentally measured cross sections, it is crucial to adopt an appropriate renormalization scheme for lattice-QCD-calculable parton correlation functions, so that we can properly identify the infrared-safe matching coefficients needed to extract PDFs from these renormalized lattice correlation functions. In this talk, I will present a prescription to minimize the sensitivity to the specific choice of renormalization scheme for lattice correlation functions. Through numerical calculations and predictions, I will demonstrate the feasibility of extracting PDFs (including gluon PDFs and sea-quark PDFs) with considerable precision under this scheme. The feasibility of extracting higher-precision PDFs using the latest N3LO matching coefficients will also be demonstrated.
We review the status of global analyses of nuclear PDFs with an emphasis on the impact of LHC data over the last decade and open questions that may be addressed by lattice QCD in the coming years.
Inclusive DIS at large Bjorken $x$ is revisited to highlight the importance of tracking off-light-cone effects in the proof of factorization theorems, even collinear ones. In DIS at threshold, in particular, the relevant physics develops around two opposite light-cone directions, just as in TMD SIDIS, and the Collins-Soper kernel emerges as a universal function in the rapidity evolution of the relevant correlators. The new factorization theorem thus offers a novel avenue for lattice calculations of the Collins-Soper kernel with collinear operators, and bridges different fields and communities.
We present the first lattice QCD calculation of the rapidity anomalous dimension of transverse-momentum-dependent distributions (TMDs), i.e. the Collins-Soper (CS) kernel, employing the recently proposed Coulomb-gauge-fixed quasi-TMD formalism as well as a chiral-symmetry-preserving lattice discretization. This unitary lattice calculation is conducted using the domain wall fermion discretization scheme, a fine lattice spacing of approximately 0.08 fm, and physical values for the light and strange quark masses. The CS kernel is determined by analyzing the ratios of pion quasi-TMD wave functions (quasi-TMDWFs) at next-to-leading-logarithmic (NLL) perturbative accuracy. Thanks to the absence of Wilson lines, the Coulomb-gauge-fixed quasi-TMDWF demonstrates a remarkably slower decay of the signal with increasing quark separation. This allows us to access the non-perturbative CS kernel up to transverse separations of 1 fm. For small transverse separations, our results agree well with perturbative predictions. At larger transverse separations, our non-perturbative CS kernel clearly favors certain global fits.
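Schematically, at leading order and with matching factors suppressed, the CS kernel follows from the $P^z$ dependence of quasi-TMDWF ratios (a hedged sketch of the standard extraction formula, not the NLL expression used in the talk):

```latex
% LO extraction of the CS kernel from quasi-TMDWF ratios at two boosts,
% at fixed transverse separation b_perp:
\gamma_\zeta(b_\perp,\mu) \;\approx\;
  \frac{1}{\ln\!\left(P_1^z/P_2^z\right)}\,
  \ln\frac{\tilde{\phi}(x, b_\perp, P_1^z)}{\tilde{\phi}(x, b_\perp, P_2^z)}
```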
A likelihood analysis of the observables in deeply virtual exclusive processes is presented, deriving a joint likelihood of the structure functions that parametrize the deeply virtual Compton scattering cross section in QCD for each observed combination of the kinematic variables defining the reaction.
The analysis includes twist-three corrections to the cross-section formalism. In our approach, the derived likelihoods are explored using Markov chain Monte Carlo (MCMC) methods, through which we derive uncertainties and covariances. Finally, we explore methods that may reduce the magnitude of error bars and contours in the future.
Obtaining the $x$-dependent GPDs is crucial for understanding hadron tomography, but it has been hindered by low sensitivity to the $x$-dependence in most known experimental processes, such as DVCS and TCS. In this talk, I will compare those processes with new ones that offer enhanced sensitivity to the $x$-dependence. Using a pixelated GPD construction, we can visualize the point-by-point sensitivity. I will also demonstrate how lattice data can provide complementary sensitivity to the $x$-dependence, and advocate for a combined analysis.
Generalized parton distributions (GPDs) are functions of four variables, one of which is a renormalization scale. The functional dependence on this renormalization scale is fully determined by a renormalization group equation, or "evolution equation", that can be derived from perturbative QCD. A fast numerical implementation of the scale evolution is vital to any global phenomenology effort, as well as to connecting lattice calculations to empirical data. Moreover, for a framework leveraging neural networks, differentiability is also necessary. In this talk, I will discuss an ultra-fast, differentiable implementation of GPD evolution in momentum fraction space, in which the evolution equation itself is (approximately) rendered as a differential matrix equation.
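A minimal sketch of this idea using the jax library (hypothetical grid size and a toy kernel, not the speaker's implementation; the point is that a matrix-exponential evolution step is jit-compilable and differentiable end-to-end):

```python
import jax.numpy as jnp
from jax import jit, grad
from jax.scipy.linalg import expm

# Hypothetical setup: the GPD is sampled on an N-point grid in momentum
# fraction x, and evolution d q / d ln(mu^2) = K @ q is rendered as a
# differential matrix equation; for a scale-independent toy kernel the
# solution is a matrix exponential.
N = 64
K = -0.1 * jnp.eye(N) + (0.01 / N) * jnp.ones((N, N))   # toy evolution kernel

@jit
def evolve(q0, t0, t1):
    """Evolve the x-space GPD vector from t0 = ln mu0^2 to t1 = ln mu1^2."""
    return expm((t1 - t0) * K) @ q0

# Differentiability: gradients flow through the evolution step, as required
# when the input GPD is parametrized by a neural network.
dq0 = grad(lambda q0: evolve(q0, 0.0, 2.0).sum())(jnp.ones(N) / N)
```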
The hadron mass can be obtained through the calculation of the trace of the energy-momentum tensor in the hadron, which includes the trace anomaly and sigma terms. The trace anomaly form factor can provide information about the mass distribution within hadrons and can be accessed through the gravitational form factors (GFFs), which are moments of GPDs. In this talk, I will present the calculation of the glue part of the trace anomaly form factors of the pion up to $Q^2\sim 4.3~\mathrm{GeV}^2$ and the nucleon up to $Q^2\sim 1~\mathrm{GeV}^2$. The calculations are performed on a domain wall fermion ensemble with overlap valence quarks at seven valence pion masses varying from $\sim 250$ to $\sim 540$ MeV, including the unitary point at $\sim 340$ MeV.