• The ICFA Data Lifecycle Panel presented its open science / FAIR data lifecycle recommendations, available via a web application (and an accompanying note), aimed at improving long-term usability of HEP data, not only preservation.

  • A key challenge highlighted is preserving analysis knowledge (software, workflows, and contextual documentation) so data remain reusable on very long timescales; the researcher is central, but success depends on institutions, collaborations, and host laboratories providing enabling conditions.

  • The recommendations were designed to be role-specific, actionable, and concrete, with tailored guidance for different “actor groups,” plus executive summaries and a glossary to support uptake.

  • Main messages emphasized: treating analysis software and workflow descriptions as integral research outputs; capturing the supplementary knowledge needed for reuse; strengthening researchers’ software skills; and ensuring that policies and resources exist to enable these practices.

  • The panel’s next step is an assessment process to determine “how we are doing,” starting with host laboratories and experiment management (and later, via appropriate channels, funders), because many actions require management-level decisions and resourcing.

  • The assessment approach assigns a status per recommendation (e.g., applied/partially applied/planned/considered/not applicable) and requires verifiable evidence such as links to relevant documentation; planned items should include timelines, and “not applicable” requires justification.
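
As a concrete illustration of what such a per-recommendation record could look like, here is a minimal sketch in Python; the field names, status strings, and validation rules are assumptions inferred from this summary, not the panel’s actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    # Status values as listed in the panel's proposal
    APPLIED = "applied"
    PARTIALLY_APPLIED = "partially applied"
    PLANNED = "planned"
    CONSIDERED = "considered"
    NOT_APPLICABLE = "not applicable"


@dataclass
class AssessmentEntry:
    """One assessment record per recommendation (illustrative, not the panel's schema)."""
    recommendation_id: str
    status: Status
    evidence_links: list[str] = field(default_factory=list)  # verifiable evidence, e.g. documentation URLs
    timeline: str | None = None       # expected when status is PLANNED
    justification: str | None = None  # required when status is NOT_APPLICABLE

    def validate(self) -> list[str]:
        """Return a list of problems; empty if the entry is self-consistent."""
        problems = []
        if self.status is Status.PLANNED and not self.timeline:
            problems.append("planned items should include a timeline")
        if self.status is Status.NOT_APPLICABLE and not self.justification:
            problems.append("'not applicable' requires a justification")
        if self.status in (Status.APPLIED, Status.PARTIALLY_APPLIED) and not self.evidence_links:
            problems.append("applied/partially applied entries need evidence links")
        return problems
```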

  • A separate verification step was proposed to ensure cited documentation is genuinely findable and accessible to the intended audience (even when internal), potentially involving community members or broader participation.
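
The sketch below shows one way an automated first pass of this verification could look; it is a hypothetical helper (not a tool described by the panel), and human review would still be needed to judge whether the content is genuinely findable and understandable by the intended audience:

```python
import urllib.error
import urllib.request


def check_link(url: str, timeout: float = 10.0) -> tuple[bool, str]:
    """Return (reachable, detail) for a single evidence URL.

    Note: internal documents may legitimately require credentials, so an
    HTTP 401/403 means 'access-controlled', not necessarily 'broken'.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return True, f"HTTP {resp.status}"
    except urllib.error.HTTPError as err:
        if err.code in (401, 403):
            return True, f"access-controlled (HTTP {err.code})"
        return False, f"HTTP {err.code}"
    except (urllib.error.URLError, TimeoutError) as err:
        return False, str(err)


if __name__ == "__main__":
    # Example usage with a placeholder URL; real input would be the
    # evidence links collected during the assessment.
    for url in ["https://opendata.cern.ch"]:
        ok, detail = check_link(url)
        print(f"{url}: {'OK' if ok else 'FAIL'} ({detail})")
```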

  • A dedicated assessment mode is being integrated into the recommendations web app, supporting team-based work, credentialed access, intermediate saving, and persistent storage; regular reassessments (roughly every 1–2 years) were proposed, with the first round serving as a baseline for future progress comparisons.
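
Purely as an illustration of what intermediate saving could involve (the actual storage layer of the web app is not described in this summary), a minimal sketch: draft entries are serialized to JSON so a team can stop and resume later.

```python
import json
from pathlib import Path


def save_draft(entries: list[dict], path: Path) -> None:
    """Persist in-progress assessment entries so work can resume later."""
    path.write_text(json.dumps(entries, indent=2))


def load_draft(path: Path) -> list[dict]:
    """Reload a previously saved draft; an empty list if none exists."""
    return json.loads(path.read_text()) if path.exists() else []


# Example: save one partially filled entry and read it back.
draft = [{"recommendation_id": "R1", "status": "planned", "timeline": "2025-Q4"}]
save_draft(draft, Path("assessment_draft.json"))
assert load_draft(Path("assessment_draft.json")) == draft
```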

  • Indicative timeline discussed: identify contacts by mid-February, form assessment teams by around April, run assessments April–September, and report towards year-end, contingent on engagement and collaboration.

  • Coordination points were noted with CERN open science and data governance structures, including support to connect with relevant contacts and align with data governance topics (e.g., preservation/retention).

  • Open data policies are public (discoverable via the CERN Open Data Portal), while other working-practice documents may be internal; the assessment aims to collect and reference such evidence internally, without necessarily making it public.

  • Call for testers: via the link provided, anyone can switch on “Assessment Mode” and enter test input to get a feel for the assessment tool (when choosing “local” as the user, no password is required and inputs are not written to the database).