CTA Dev Meeting
15:00 → 15:10  CTA Release Roadmap (10m)
Release 4.9.1-0.rc1
- Release date: 14/07
- Pre-prod deployment date: TBD
- Prod deployment date: -
- Found a bug in catalogue v13
Release 4.10.0-0
- Release date: Next week
- Pre-prod deployment date: Next week
- Prod deployment date: TBD
- Catalogue version v14
- Fixes the UNIQUE INDEX bug on the Physical Library foreign key
Release 4.10.1-0.rc1
- Release date: 14/07
- Pre-prod deployment date: TBD
- Prod deployment date: -
- For testing fixes of 4.9.1-0.rc1 (Repack VO and Physical Library Tools)
Public Release
- Latest versions available on the public repo: v4.8.7-1, v5.8.7-1.
- Versions v4.9.0-1, v5.9.0-1 were removed.
15:10 → 15:20  CTA dev topics (10m)
Rework catalogue release procedure and deployment path
- Issue link: #397
Handing over Lasse's tasks
- Check for any issues/MRs reassigned to you
SonarCloud
- SonarCloud static analysis results
Catalogue v14 fix
- Issue link: #CTA-schema-4
"Needs discussion" topics
"Dev issue needed" topics
15:20 → 15:30  dCache Integration (10m)
AOBs
15:40 → 15:50  AOB (10m)
AOBs
PostgreSchedDB:
- David's code compiles again, so we can safely merge:
https://gitlab.cern.ch/cta/CTA/-/merge_requests/356
- PGSCHED pipelines:
* fails default unit tests: https://gitlab.cern.ch/cta/CTA/-/pipelines/6257712
  (the default CTA unitTests were not adjusted to be run with PGSCHED)
* disabled unit tests: https://gitlab.cern.ch/cta/CTA/-/pipelines/6257951
  /opt/run/bin/init.sh: line 58: cta-objectstore-initialize: command not found
TO-DO LIST:
- Create new / extend existing pipeline scripts + fix the init pod ending up in Error when configuring the objectstore in CI
- Adapt the first unit tests to run using PGSCHED (see the sketch below)
- The code has ~120 methods marked as 'not implemented' [1] → to be implemented if needed (stub sketch after the method list)
- Discover what else might be needed during testing
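A minimal sketch of one way the first unit tests could be gated on the PGSCHED backend (hypothetical helper and environment variable, not the current CTA test code):

#include <cstdlib>
#include <string>
#include <gtest/gtest.h>

// Hypothetical convention: the test run opts into PGSCHED via an environment
// variable, so the same test binary can still run against the objectstore backend.
static bool pgschedSelected() {
  const char* backend = std::getenv("CTA_SCHEDDB_BACKEND");
  return backend != nullptr && std::string(backend) == "pgsched";
}

TEST(PostgresSchedDBTest, SkipsCleanlyWhenPgschedNotSelected) {
  if (!pgschedSelected()) {
    GTEST_SKIP() << "PGSCHED backend not selected for this run";
  }
  // ...construct PostgresSchedDB against the test database and exercise ping() here...
}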
[1] scheduler/PostgresSchedDB/ArchiveJob.cpp
void ArchiveJob::failTransfer(const std::string & failureReason, log::LogContext & lc)
void ArchiveJob::failReport(const std::string & failureReason, log::LogContext & lc)
void ArchiveJob::bumpUpTapeFileCount(uint64_t newFileCount)
scheduler/PostgresSchedDB/ArchiveJobQueueItor.cpp
ArchiveJobQueueItor::ArchiveJobQueueItor()
const std::string &ArchiveJobQueueItor::qid() const
bool ArchiveJobQueueItor::end() const
void ArchiveJobQueueItor::operator++()
const common::dataStructures::ArchiveJob &ArchiveJobQueueItor::operator*() const
scheduler/PostgresSchedDB/ArchiveMount.cpp
const SchedulerDatabase::ArchiveMount::MountInfo &ArchiveMount::getMountInfo()
void ArchiveMount::setDriveStatus(common::dataStructures::DriveStatus status, common::dataStructures::MountType mountType, time_t completionTime, const std::optional<std::string>& reason)
void ArchiveMount::setTapeSessionStats(const castor::tape::tapeserver::daemon::TapeSessionStats &stats)
void ArchiveMount::setJobBatchTransferred(std::list<std::unique_ptr<SchedulerDatabase::ArchiveJob>> & jobsBatch, log::LogContext & lc)
scheduler/PostgresSchedDB/ArchiveRequest.cpp
void ArchiveRequest::update()
std::list<ArchiveRequest::JobDump> ArchiveRequest::dumpJobs()
scheduler/PostgresSchedDB/PostgresSchedDB.cpp
void PostgresSchedDB::waitSubthreadsComplete()
void PostgresSchedDB::ping()
std::map<std::string, std::list<common::dataStructures::ArchiveJob>, std::less<void> > PostgresSchedDB::getArchiveJobs() const
std::list<cta::common::dataStructures::ArchiveJob> PostgresSchedDB::getArchiveJobs(const std::string& tapePoolName) const
std::unique_ptr<SchedulerDatabase::IArchiveJobQueueItor> PostgresSchedDB::getArchiveJobQueueItor(const std::string &tapePoolName,
common::dataStructures::JobQueueType queueType) const
std::list<std::unique_ptr<SchedulerDatabase::ArchiveJob> > PostgresSchedDB::getNextArchiveJobsToReportBatch(uint64_t filesRequested,
log::LogContext & logContext)
SchedulerDatabase::JobsFailedSummary PostgresSchedDB::getArchiveJobsFailedSummary(log::LogContext &logContext)
std::list<std::unique_ptr<SchedulerDatabase::RetrieveJob>> PostgresSchedDB::getNextRetrieveJobsToTransferBatch(const std::string & vid, uint64_t filesRequested, log::LogContext &lc)
void PostgresSchedDB::requeueRetrieveRequestJobs(std::list<cta::SchedulerDatabase::RetrieveJob *> &jobs, log::LogContext &lc)
void PostgresSchedDB::reserveRetrieveQueueForCleanup(const std::string & vid, std::optional<uint64_t> cleanupHeartBeatValue)
void PostgresSchedDB::tickRetrieveQueueCleanupHeartbeat(const std::string & vid)
void PostgresSchedDB::setArchiveJobBatchReported(std::list<SchedulerDatabase::ArchiveJob*> & jobsBatch,
log::TimingList & timingList, utils::Timer & t, log::LogContext & lc)
std::list<SchedulerDatabase::RetrieveQueueStatistics> PostgresSchedDB::getRetrieveQueueStatistics(
const cta::common::dataStructures::RetrieveFileQueueCriteria& criteria, const std::set<std::string>& vidsToConsider)
void PostgresSchedDB::cancelRetrieve(const std::string& instanceName, const cta::common::dataStructures::CancelRetrieveRequest& rqst,
log::LogContext& lc)
std::map<std::string, std::list<RetrieveRequestDump> > PostgresSchedDB::getRetrieveRequests() const
std::list<RetrieveRequestDump> PostgresSchedDB::getRetrieveRequestsByVid(const std::string& vid) const
std::list<RetrieveRequestDump> PostgresSchedDB::getRetrieveRequestsByRequester(const std::string& vid) const
void PostgresSchedDB::deleteRetrieveRequest(const common::dataStructures::SecurityIdentity& requester,
const std::string& remoteFile)
void PostgresSchedDB::cancelArchive(const common::dataStructures::DeleteArchiveRequest& request, log::LogContext & lc)
void PostgresSchedDB::deleteFailed(const std::string &objectId, log::LogContext &lc)
std::map<std::string, std::list<common::dataStructures::RetrieveJob>, std::less<void> > PostgresSchedDB::getRetrieveJobs() const
std::list<cta::common::dataStructures::RetrieveJob> PostgresSchedDB::getRetrieveJobs(const std::string &vid) const
std::unique_ptr<SchedulerDatabase::IRetrieveJobQueueItor> PostgresSchedDB::getRetrieveJobQueueItor(const std::string &vid,
common::dataStructures::JobQueueType queueType) const
bool PostgresSchedDB::repackExists()
std::list<common::dataStructures::RepackInfo> PostgresSchedDB::getRepackInfo()
common::dataStructures::RepackInfo PostgresSchedDB::getRepackInfo(const std::string& vid)
void PostgresSchedDB::cancelRepack(const std::string& vid, log::LogContext & lc)
std::unique_ptr<SchedulerDatabase::RepackRequestStatistics> PostgresSchedDB::getRepackStatistics()
std::unique_ptr<SchedulerDatabase::RepackRequestStatistics> PostgresSchedDB::getRepackStatisticsNoLock()
std::unique_ptr<SchedulerDatabase::RepackRequest> PostgresSchedDB::getNextRepackJobToExpand()
std::list<std::unique_ptr<SchedulerDatabase::RetrieveJob>> PostgresSchedDB::getNextRetrieveJobsToReportBatch(
uint64_t filesRequested, log::LogContext &logContext)
std::list<std::unique_ptr<SchedulerDatabase::RetrieveJob>> PostgresSchedDB::getNextRetrieveJobsFailedBatch(
uint64_t filesRequested, log::LogContext &logContext)
std::unique_ptr<SchedulerDatabase::RepackReportBatch> PostgresSchedDB::getNextRepackReportBatch(log::LogContext& lc)
std::unique_ptr<SchedulerDatabase::RepackReportBatch> PostgresSchedDB::getNextSuccessfulRetrieveRepackReportBatch(log::LogContext& lc)
std::unique_ptr<SchedulerDatabase::RepackReportBatch> PostgresSchedDB::getNextSuccessfulArchiveRepackReportBatch(log::LogContext& lc)
std::unique_ptr<SchedulerDatabase::RepackReportBatch> PostgresSchedDB::getNextFailedRetrieveRepackReportBatch(log::LogContext& lc)
std::unique_ptr<SchedulerDatabase::RepackReportBatch> PostgresSchedDB::getNextFailedArchiveRepackReportBatch(log::LogContext &lc)
std::list<std::unique_ptr<SchedulerDatabase::RepackReportBatch>> PostgresSchedDB::getRepackReportBatches(log::LogContext &lc)
void PostgresSchedDB::setRetrieveJobBatchReportedToUser(std::list<SchedulerDatabase::RetrieveJob*> & jobsBatch,
log::TimingList & timingList, utils::Timer & t, log::LogContext & lc)
SchedulerDatabase::JobsFailedSummary PostgresSchedDB::getRetrieveJobsFailedSummary(log::LogContext &logContext)
void PostgresSchedDB::trimEmptyQueues(log::LogContext& lc)
void PostgresSchedDB::setThreadNumber(uint64_t threadNumber, const std::optional<size_t> &stackSize)
void PostgresSchedDB::setBottomHalfQueueSize(uint64_t tasksNumber)
std::list<SchedulerDatabase::RetrieveQueueCleanupInfo> PostgresSchedDB::getRetrieveQueuesCleanupInfo(log::LogContext& logContext)
void PostgresSchedDB::setRetrieveQueueCleanupFlag(const std::string& vid, bool val, log::LogContext& logContext)
scheduler/PostgresSchedDB/RepackReportBatch.cpp
RepackReportBatch::RepackReportBatch()
void RepackReportBatch::report(log::LogContext & lc)
scheduler/PostgresSchedDB/RepackRequest.cpp
uint64_t RepackRequest::getLastExpandedFSeq()
void RepackRequest::setLastExpandedFSeq(uint64_t fseq)
void RepackRequest::reportRetrieveCreationFailures(std::list<Subrequest> &notCreatedSubrequests)
void RepackRequest::expandDone()
void RepackRequest::fail()
void RepackRequest::requeueInToExpandQueue(log::LogContext &lc)
void RepackRequest::setExpandStartedAndChangeStatus()
void RepackRequest::fillLastExpandedFSeqAndTotalStatsFile(uint64_t &fSeq, TotalStatsFiles &totalStatsFiles)
void RepackRequest::update()
scheduler/PostgresSchedDB/RepackRequestPromotionStatistics.cpp
RepackRequestPromotionStatistics::RepackRequestPromotionStatistics()
SchedulerDatabase::RepackRequestStatistics::PromotionToToExpandResult RepackRequestPromotionStatistics::promotePendingRequestsForExpansion(size_t requestCount,
log::LogContext &lc)
scheduler/PostgresSchedDB/RetrieveJob.cpp
RetrieveJob::RetrieveJob()
void RetrieveJob::asyncSetSuccessful()
void RetrieveJob::failTransfer(const std::string &failureReason, log::LogContext &lc)
void RetrieveJob::failReport(const std::string &failureReason, log::LogContext &lc)
void RetrieveJob::abort(const std::string &abortReason, log::LogContext &lc)
void RetrieveJob::fail()
scheduler/PostgresSchedDB/RetrieveJobQueueItor.cpp
RetrieveJobQueueItor::RetrieveJobQueueItor()
const std::string &RetrieveJobQueueItor::qid() const
bool RetrieveJobQueueItor::end() const
void RetrieveJobQueueItor::operator++()
const common::dataStructures::RetrieveJob &RetrieveJobQueueItor::operator*() const
scheduler/PostgresSchedDB/RetrieveMount.cpp
const SchedulerDatabase::RetrieveMount::MountInfo &RetrieveMount::getMountInfo()
bool RetrieveMount::reserveDiskSpace(const cta::DiskSpaceReservationRequest &request,
const std::string &externalFreeDiskSpaceScript, log::LogContext& logContext)
bool RetrieveMount::testReserveDiskSpace(const cta::DiskSpaceReservationRequest &request,
const std::string &externalFreeDiskSpaceScript, log::LogContext& logContext)
void RetrieveMount::requeueJobBatch(std::list<std::unique_ptr<SchedulerDatabase::RetrieveJob>>& jobBatch,
log::LogContext& logContext)
void RetrieveMount::setDriveStatus(common::dataStructures::DriveStatus status, common::dataStructures::MountType mountType,
time_t completionTime, const std::optional<std::string> & reason)
void RetrieveMount::setTapeSessionStats(const castor::tape::tapeserver::daemon::TapeSessionStats &stats)
void RetrieveMount::flushAsyncSuccessReports(std::list<SchedulerDatabase::RetrieveJob *> & jobsBatch, log::LogContext & lc)
void RetrieveMount::addDiskSystemToSkip(const DiskSystemToSkip &diskSystemToSkip)
void RetrieveMount::putQueueToSleep(const std::string &diskSystemName, const uint64_t sleepTime, log::LogContext &logContext)
scheduler/PostgresSchedDB/RetrieveRequest.cpp
void RetrieveRequest::update()
void RetrieveRequest::setFailureReason(const std::string & reason)
bool RetrieveRequest::addJobFailure(uint32_t copyNumber, uint64_t mountId, const std::string & failureReason, log::LogContext & lc)
void RetrieveRequest::setRepackInfo(const cta::postgresscheddb::RetrieveRequest::RetrieveReqRepackInfo & repackInfo)
void RetrieveRequest::setJobStatus(uint32_t copyNumber, const cta::postgresscheddb::RetrieveJobStatus &status)
void RetrieveRequest::setFirstSelectedTime(const uint64_t firstSelectedTime)
void RetrieveRequest::setCompletedTime(const uint64_t completedTime)
void RetrieveRequest::setReportedTime(const uint64_t reportedTime)
void RetrieveRequest::setFailed()
std::list<RetrieveRequest::RetrieveReqJobDump> RetrieveRequest::dumpJobs()
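Stub sketch referenced above: a self-contained illustration with hypothetical stand-in types (not the actual CTA classes) of the typical 'not implemented' pattern, and of what fleshing out one of the listed iterators could look like.

#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

struct ArchiveJobSummary {          // stand-in for common::dataStructures::ArchiveJob
  std::uint64_t archiveFileId = 0;
  std::string tapePool;
};

class ArchiveJobQueueItor {
public:
  ArchiveJobQueueItor() = default;

  // Typical stub form: fail loudly when an unimplemented method is hit,
  // so unexpected callers surface immediately during PGSCHED testing.
  void notImplementedExample() const {
    throw std::runtime_error("ArchiveJobQueueItor::notImplementedExample(): not implemented");
  }

  // One possible implementation of the iterator methods listed above:
  // iterate over a batch fetched up front (e.g. by a single query against
  // the Postgres-backed queue) instead of paging lazily.
  ArchiveJobQueueItor(std::string queueId, std::vector<ArchiveJobSummary> batch)
      : m_qid(std::move(queueId)), m_jobs(std::move(batch)) {}

  const std::string& qid() const { return m_qid; }
  bool end() const { return m_pos >= m_jobs.size(); }
  void operator++() { ++m_pos; }
  const ArchiveJobSummary& operator*() const { return m_jobs.at(m_pos); }

private:
  std::string m_qid;
  std::vector<ArchiveJobSummary> m_jobs;
  std::size_t m_pos = 0;
};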
15:50 → 16:00  CTA dev board review (10m)
Objective
- Review the active issues on our CTA dev board.
- Decide whether they should be kept, removed, reassigned, prioritised, etc.