IT-ASDF: Data Streams feature in OpenSearch, Postmortem of CERNTS connection issues, Out of working hours support: two mechanisms in ServiceNow

Europe/Zurich
513/1-024 (CERN), capacity 50
Zoom Meeting ID: 63445832154
Description: IT Activities and Services Discussion Forum (ASDF)
Host: Jorge Garcia Cuervo
Alternative hosts: Charles Delort, Karolina Przerwa, Stefan Nicolae Stancu, Enrico Bocchi, Nikos Papakyprianou, Pablo Martin Zamora, Ismael Posada Trobo

Data Streams feature in OpenSearch

Can you confirm that the data is always written to the last index in the data stream, meaning that you cannot figure out which index contains the data for which day and have to query all indices for any time-based search?

Yes, new data is always written to the most recent write index in the data stream. 

(When querying a data stream, OpenSearch uses per-index and per-segment metadata, such as the minimum and maximum @timestamp values tracked by Apache Lucene, to pre-filter the search and touch only the backing indices and segments that can match the time range. This avoids unnecessary scans, so time-based searches remain fast even though they target the whole stream.)
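In practice, a time-based search is addressed to the data stream itself, not to a specific backing index, and OpenSearch pre-filters the backing indices for you. A minimal sketch of such a query against the OpenSearch REST API (the stream name logs-app and the dates are made-up examples):

```
POST /logs-app/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2024-01-01T00:00:00Z",
        "lt":  "2024-01-02T00:00:00Z"
      }
    }
  }
}
```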

Do data streams tolerate adding older (by timestamp) log entries after newer ones are already on disk?

Yes, data streams allow adding older log entries (by timestamp) even after newer ones, because the timestamp is just a field in the document, not an index key. OpenSearch writes all documents to the current write index regardless of their timestamps. Just ensure a consistent timestamp format across logs in the same data stream so that queries behave correctly.
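As an illustration, the sequence below creates a data stream and then indexes an out-of-order document; both documents land in the current write index. This is a sketch against the OpenSearch REST API, and the names logs-template and logs-app are invented for the example:

```
# An index template with a data_stream section must exist before the first write.
PUT /_index_template/logs-template
{
  "index_patterns": ["logs-app*"],
  "data_stream": {}
}

# The first document creates the stream; every document must carry an @timestamp field.
POST /logs-app/_doc
{ "@timestamp": "2024-06-02T10:00:00Z", "message": "newer entry" }

# A document with an older @timestamp is still accepted and is written
# to the same current write index.
POST /logs-app/_doc
{ "@timestamp": "2024-06-01T09:00:00Z", "message": "older entry" }
```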

Postmortem of CERNTS connection issues (OTG0153169)

Is the KVM virtualization somehow related to the issue?

We are not sure; with so many different variables to control, it is hard to know.

We couldn't reproduce the issue on bare metal or on Microsoft Hyper-V virtualization, but we reproduced it every time on OpenStack. That points to KVM being part of the equation, but we cannot confirm it.

Out of working hours support: two mechanisms in ServiceNow

Can we bypass roles.cern.ch to have groups directly mapped to ServiceNow?

No, but 2nd-line support can create that mapping for you on request via this form.

What will happen to tickets created by email to the Service Desk general email address?

They first need to be sorted and escalated by the Service Desk; then the OWH routing rules will apply, provided the corresponding OWH group is not empty.

How will TI operators handle Best Effort / Rota / Piquet rules?

They will call through the list in order: Piquet first, then Rota, then Best Effort.
They also have a hard-copy printout of the calendar, as well as a link on their dashboard to the 'live' calendar, so it is best not to change the calendar during the annual closure.

What happens if the OWH rules are empty?

The standard routing rules are applied, whatever those are for your service.

You said it is up to the service manager to define if they want to declare Best Effort support, on a voluntary basis, but also that TD would be reviewing the list. What sort of review would that be? Will people then be forced to register Best Effort?

The review will be done to catch human mistakes and to ensure there are no obvious gaps in major services. Ultimately, the Best Effort contact can be someone at management level.

What are the best practices to define internal roles for Best Effort / Rota / Piquet?

That is up to each service/section/group to decide.

Is there any API for accessing / modifying roles?

No, this is an old custom CERN development. It will hopefully be replaced at some point next year (to be confirmed).

Is a phone number obligatory for defining roles? Some people are not willing to be called during certain hours or days.

Yes, a phone number is obligatory, as it is the main channel the TI operators use to make contact, but you can define the times at which people accept to be contacted.

Are the existing KBs and procedures valid for the services?

Yes, this is complementary to the existing procedures; the main purpose of this yearly exercise is to make phone numbers visible in case of need.
