

Note: The rows highlighted in yellow are currently excluded from the dashboard in production.

Terminology 

Term 


Description

User site 

The site of the user who has logged in to the dashboard (referred to in this document as the user site).

Service request

The ticket that the user creates in Service One.

Helix ticket 

One Service One ticket can have one or many corresponding tickets in Helix, in categories such as Work Order (WO), Incident (INC), and CASE. An INC ticket in Helix is mapped to only one Service One ticket; however, more than one WO ticket can be associated with one Service One ticket.

Open/Closed tickets 

Tickets with the following status:

CASE→   "Closed", "Pending", "Assigned", "Cancelled", "In progress", "New"

INC→ "Resolved", "Pending", "Closed", "In Progress", "Cancelled", "Assigned"

WO→ "Rejected", "Pending", "Cancelled", "Completed", "Assigned", "Waiting Approval", "Planning", "Closed", "In Progress"

Tickets pending user actions 

Tickets with the status "Pending" AND a Status Reason of:

WO & INC→  "Status_Reason": "Client Action Required" 

CASE→ StatusReason: "Customer Response" OR "Customer Follow Up Required"

Fastlane ticket

WO → tickets with "Escalated Fastlane"= "YES",

INC→ tickets with  "Escalated fastlane"=Yes

CASE → tickets with  "Escalated_Custom"=10

Today

In this document, Today means the date on which the user opens the dashboard and views the KPIs. The dashboard is updated every hour (the figures captured by the KPIs are refreshed hourly).

Crisis/Major incident 

WOs with the  work_order_template_used="Critical/Major Incident Communication" 

User view 

The DT DASH user view aims to provide users with comprehensive visibility into tickets raised  globally, their site's operations, and the details and status of their own tickets.


Important Note: The rows highlighted in yellow are obsolete, as these indicators have not been implemented in production.


Context | Indicators/graphs/Fields | Description
DT Global status | DT weather

This KPI is a dynamic weather icon that reflects the current status of DT services. The icon adjusts and provides details based on the last information received (according to the refresh time) on factors such as service slowdowns or crises, blocked apps, incidents, or user pending tickets. It transitions from sunny (indicating smooth service) to stormy (reflecting significant issues or disruptions), providing an intuitive visual summary of system health and performance for quick monitoring and proactive action. DT-DASH Mappings:

My Requests | My Requests tab

Source: Service One and Helix

Total of ongoing requests or closed / resolved in the last 30 days

My Requests | Requests Needing my Action tab

Source: Service One and Helix

Total of ongoing requests that need user action to proceed.

Service One tickets in which at least one of the related Helix tickets has "Status_Reason": "Client Action Required".


Calculation at Helix ticket level: 

The number of INC and WO where 

status "pending"  

AND  "Status_Reason": "Client Action Required" 

And The number of CASE tickets where 

status "pending"  

AND statusReason: "Customer Response" OR "Customer Follow Up Required"
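The calculation above can be sketched as a filter predicate. This is a minimal sketch: field names such as `ticket_type` and `status_reason` are illustrative, not the actual Helix schema.

```python
def needs_user_action(ticket: dict) -> bool:
    """Return True when a Helix ticket is pending a user action.

    Sketch of the rule above; field names are assumptions.
    """
    if ticket.get("status", "").lower() != "pending":
        return False
    reason = ticket.get("status_reason", "")
    if ticket.get("ticket_type") in ("INC", "WO"):
        return reason == "Client Action Required"
    if ticket.get("ticket_type") == "CASE":
        return reason in ("Customer Response", "Customer Follow Up Required")
    return False

sample = [
    {"ticket_type": "WO", "status": "Pending", "status_reason": "Client Action Required"},
    {"ticket_type": "CASE", "status": "Pending", "status_reason": "Customer Response"},
    {"ticket_type": "INC", "status": "Assigned", "status_reason": "Client Action Required"},
]
print(sum(needs_user_action(t) for t in sample))  # prints 2
```

A service request would then be counted when at least one of its related Helix tickets satisfies this predicate.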

My Requests | My Fastlaned Requests tab

Source: Helix

Fastlane is defined at WO, INC, and Case ticket level. 

The number of service request tickets for which the corresponding:

WO in Helix has "Escalated Fastlane"= "YES",

OR

INC has "Total_Escalation_Level" between 0 and 13 (not applicable for CASE tickets)

OR

CASE has "Escalated_Custom"= 10

If the Service One ticket refers to WOs in Helix, at least one of the associated WOs should have "Escalated Fastlane"= "YES" for the request to be counted as Fastlane.
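The Fastlane rule can be sketched as follows. Field names are assumptions, not the real Helix schema; for INC the terminology section's criterion ("Escalated fastlane" = Yes) is used here.

```python
def is_fastlane(helix_tickets: list[dict]) -> bool:
    """True when at least one related Helix ticket matches the escalation
    criterion of its own ticket type (sketch; field names illustrative)."""
    for t in helix_tickets:
        kind = t.get("ticket_type")
        if kind == "WO" and t.get("escalated_fastlane") == "YES":
            return True
        if kind == "INC" and t.get("escalated_fastlane") == "Yes":
            return True
        if kind == "CASE" and t.get("escalated_custom") == 10:
            return True
    return False
```

A service request linked to several WOs counts as Fastlane as soon as any one of them is escalated, matching the note above.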

My Requests - details table | Service One request reference

Source: Service One

The ID of the service one ticket.

  • An INC ticket in Helix is mapped to only one Service One ticket; however, more than one WO ticket can be associated with one Service One ticket. This section only shows the ticket ID at the Service One level. Service One tickets with status "Waiting for approval" do not yet have a corresponding ticket in Helix; when a Service One ticket is approved, a Helix ticket will be created. 
  • The Service One ticket ID should appear in the dashboard as soon as it is created in Service One (whether or not the corresponding Helix ticket has been created yet).
  • As soon as the corresponding Helix ticket is created, it should be considered in the related calculations in the dashboard. 
  • In the dashboard, the hyperlink should take the user to Service One to see the details. 
  • This includes Helix tickets that are linked to a service request, and service request tickets that have no link to Helix.
My Requests - details table | Fulfillment number

Source: Helix

A specific reference associated with the technical process, for use by technical teams in Helix. One service request can be linked to one or more fulfillment references. No hyperlink is provided to the user because communication is done via the Service One reference. This field is informative in case the user needs to refer to it when asked by technical teams.

My Requests - details table | Priority 

Source: Helix

Calculation: 

 if the Service One ticket is an INC ticket in Helix, "Priority"= the priority of the Helix ticket 

 if the Service One ticket is a WO ticket in Helix, "Priority"= Fastlane or Normal, based on the "Escalated Fastlane" attribute of the WO

 if the Service One ticket is a CASE ticket in Helix, "Priority"= the priority of the Helix ticket. 

Exception: 

Incident and Case tickets might become Fastlane.

In this situation:

INC→ "Priority"= Fastlane or Normal, based on the "Escalated fastlane" attribute of the INC.

Case→ "Priority"= Fastlane or Normal, based on the "Escalated_custom" attribute of the CASE.

My Requests - details table | Status 

Source: Service One and Helix

CASE→   "Closed", "Pending", "Assigned", "Cancelled", "In progress"

INC→ "Resolved", "Pending", "Closed", "In Progress", "Cancelled", "Assigned"

WO→ "Rejected", "Pending", "Cancelled", "Completed", "Assigned", "Waiting Approval", "Planning", "Closed", "In Progress"

If the ticket is still pending approval in Service One, the status shows that Service One status. Once a fulfillment reference is created in Helix, it reflects the Helix status.

My Requests - details table | Description 

Source: Service One

The title of the service one ticket

Anonymisation if related to HR:

IF (line_of_business == human-resources)
then "*** Anonymised info ***"
else source value.
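The anonymisation condition can be expressed directly in code. A minimal sketch; only the `line_of_business` value and the masked string come from the condition above, the function name is illustrative.

```python
def display_description(line_of_business: str, title: str) -> str:
    """Mask the ticket title when the request relates to HR (sketch)."""
    if line_of_business == "human-resources":
        return "*** Anonymised info ***"
    return title
```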

My Requests - details table | Submit date 

Source: Service One

The submission date of the service one ticket

My Requests - details table | Expected resolution date 

Source: Service One and Helix

  • Identify the SR Type of Each Ticket: Extract the SR type from the user list for each ticket.

  • Find Corresponding MTTR in the Reference File: Match the SR type from the user list with the SR type in the reference file to retrieve the MTTR.

https://docs.google.com/spreadsheets/d/14tu7mSb4co4CXGHL-9y-4Y-mqErW6er4ei8dX6BjOuQ/edit?pli=1&gid=0#gid=0

  • Calculate “Expected resolution date”: Add the MTTR to the submit date of each ticket. Then, round the resulting date to the next day to get the “estimated resolution date”.

  • Calculate the days passed since the Submit Date and compare them with the “Expected resolution date”. 

  • Specify states including: 

    • If the status of the ticket is 'closed','cancelled','canceled','completed','rejected','resolved' => we display “completed” (grey background)

    • if the elapsed time since submit < the estimated resolution time => “too early” (grey background)

    • if the elapsed time since submit >= the estimated resolution time and <= 1.15 * the estimated resolution time => “ontime” (green background)

    • if the elapsed time since submit > 1.15 * the estimated resolution time => “overdue” (red background)

    • if not possible to compute => “not available” (grey background)

    In order to determine the estimated resolution date of a ticket (available only for work orders):

    • we compute the average resolution time of the tickets of the same service in the last 3 months

    • and add this average resolution time to the submit date.
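The state rules can be sketched as below. This is a minimal sketch under assumptions: "too early" is taken to mean the elapsed time is still below the estimated resolution time, the 1.15 tolerance factor comes from the text, and the function and parameter names are illustrative.

```python
from datetime import date

# Statuses that map straight to the "completed" state (quoted in the text).
COMPLETED_STATUSES = {"closed", "cancelled", "canceled", "completed", "rejected", "resolved"}

def resolution_state(status: str, submit_date: date, mttr_days, today: date) -> str:
    """Classify a ticket against its expected resolution date.

    Assumption: "too early" applies while the elapsed time is below the
    estimated resolution time; "ontime" up to 1.15x that time; "overdue" beyond.
    """
    if status.lower() in COMPLETED_STATUSES:
        return "completed"
    if mttr_days is None:  # cannot compute
        return "not available"
    elapsed_days = (today - submit_date).days
    if elapsed_days < mttr_days:
        return "too early"
    if elapsed_days <= 1.15 * mttr_days:
        return "ontime"
    return "overdue"

print(resolution_state("In Progress", date(2024, 1, 1), 10, date(2024, 1, 12)))  # prints ontime
```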


My Requests - details table | Resolved date

Source: Service One and Helix


Calculation: 

if the service one ticket is an INC ticket in helix, "Resolved date"= the "last resolved date"  of the INC ticket 

if the Service One ticket refers to one or many WO tickets in Helix, "resolved date"= the max "Completed_date" of all associated WOs, provided all WOs have "status"= "Resolved", "Closed" OR "Canceled"
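The WO branch of this rule can be sketched as follows. Field names are illustrative, and dates are kept as ISO strings for simplicity.

```python
# Terminal WO statuses quoted in the rule above.
TERMINAL_WO_STATUSES = {"Resolved", "Closed", "Canceled"}

def wo_resolved_date(wos: list[dict]):
    """Max Completed_date of the associated WOs, but only once every WO has
    reached a terminal status; None otherwise (sketch)."""
    if wos and all(w["status"] in TERMINAL_WO_STATUSES for w in wos):
        return max(w["completed_date"] for w in wos)
    return None
```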

My Daily DT Services | Crisis & Major Incidents

Source: Helix

There is a Work Order (crisis ticket) managed by the Siam team (part of the Command Center), which is expected to be updated whenever a Crisis or Major Incident is identified. The table is then populated accordingly.

  • When the WO (crisis ticket) is resolved, the status should be mainly “Completed”, “Closed” or “Rejected”. → Green in the dashboard 
  • WOs (crisis tickets) with status “Cancelled” are excluded

  • WOs with status “In progress” should always appear at the top of the list → Red in the dashboard

  • When the WO (crisis ticket) is still ongoing (not yet resolved), the status is mainly “In progress”. If another status such as “Pending”, “Planning”, “Assigned” or “Waiting Approval” appears, the red color-coding rules should be applied.

 

My Daily DT Services | Application status

Source: Kadiska and SolMan

Connectivity per Application
The average connectivity for a specific application. It is used to assess the performance of individual applications globally.

Calculation steps:

  • AVG (CT) per Application in the Last Hour: compute the average connection time for each application globally.

  • AVG (CT) per Application in the Last 3 Months: compute the mean and standard deviation of the connection time for each application, and use these as the reference for evaluating the last hour's connectivity.

  • Display the results in the dashboard using 4 states visual as below:

4 States explanation:

Fast:

  • When the AVG (CT) in the last hour is less than the mean CT of the last 3 months minus 1 standard deviation.

Medium:

  • When the AVG (CT) in the last hour is within 4 standard deviations below the mean CT of the last 3 months but less than the mean CT.

Slow:

  • When the AVG (CT) in the last hour is greater than the mean CT but less than the mean CT plus 4 standard deviations.

Blocked:

  • When the AVG (CT) in the last hour is greater than or equal to the mean CT plus 4 standard deviations, or there is no connectivity at all.

Example Calculation-> NOH

  • Last 3 Months Mean (μ) CT: 37ms

  • Last 3 Months Standard Deviation (σ) CT: 17ms

  • Fast: AVG (CT) in the last hour< (μ - σ) → 37ms - 17ms = 20ms

  • Medium: (μ - σ) <AVG (CT) in the last hour<(μ + σ) →20< AVG (CT) in the last hour<54

  • Slow: (μ + σ) <AVG (CT) in the last hour<(μ + 2*σ) →54< AVG (CT) in the last hour<37+2*17=71

  • Blocked: AVG (CT) in the last hour>(μ + 2*σ)
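The worked example can be turned into a small classifier. This sketch follows the NOH example's thresholds (mu - sigma, mu + sigma, mu + 2*sigma); note the prose above mentions different multiples of sigma, so treat the cut-offs as assumptions.

```python
def connectivity_state(avg_ct_last_hour, mu: float, sigma: float) -> str:
    """Classify the last hour's average connection time against the
    3-month baseline (thresholds taken from the NOH worked example)."""
    if avg_ct_last_hour is None:  # no connectivity at all
        return "Blocked"
    if avg_ct_last_hour < mu - sigma:
        return "Fast"
    if avg_ct_last_hour < mu + sigma:
        return "Medium"
    if avg_ct_last_hour < mu + 2 * sigma:
        return "Slow"
    return "Blocked"

print(connectivity_state(60, mu=37, sigma=17))  # prints Slow
```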

Note: if there is no information related to the application for up to 3 months, it will still be displayed, with the related last update time, due to the 3-month average calculation baseline.

My Site | Name of the user site

Displays the user's site by default, according to their HR record.

My Site | Network speed

Average Connection Time-> AVG (CT)

KPI : Site Connectivity → gauge
The average connectivity time at a specific site. It helps in evaluating the performance of each site individually.

4 states: Blocked, Slower, Average, Faster

Calculation steps:

  • AVG (CT) per Site in the last hour: compute the average connection time at each site.

  • AVG (CT) per Site in the Last 3 Months: compute the mean and standard deviation of the connection time at each site, and use these as the reference for evaluating the last hour's connectivity.

  • Display the results (last 3 months average vs current speed) in the dashboard

A pseudo-Fibonacci sequence of multipliers (1.33 - 1.66 - 2 - 3) is applied (to be confirmed) to fine-tune the sensitivity and capture lower levels of instability.

My Site | Service Request Average Time to Resolve

Average number of days taken to resolve service requests in the user site, during the last 2 months.

Color coding: comparing the previous 2-month period vs the last 2 months.

Green = Decrease versus previous 2 months; Orange = Stable; Red = increase

  1. Retrieve Ticket Data: WO  tickets with

     "Status": "Resolved" AND "Closed" 

    AND 

    Last Resolved Date falls within the last 2 months in the user site.

  2. Calculate the Time to Resolve by subtracting the 'Submit Date' from the "completed date" for each ticket. 
  3. Calculate AVG of the Time to resolve data
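The three steps above can be sketched as follows. Field names are illustrative; the status filter follows the retrieval rule above.

```python
from datetime import date

def avg_time_to_resolve(tickets: list[dict], window_start: date, window_end: date):
    """Average days to resolve, over tickets resolved inside the window
    (sketch; field names are assumptions)."""
    durations = [
        (t["completed_date"] - t["submit_date"]).days
        for t in tickets
        if t["status"] in ("Resolved", "Closed")
        and window_start <= t["completed_date"] <= window_end
    ]
    return sum(durations) / len(durations) if durations else None
```

The same function serves both the 2-month service-request KPI and the 7-day incident KPI by changing the window bounds.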


My Site | Incidents Average Time to Resolve

Average number of days taken to resolve Incidents in the user site, during the last 7 Days. 

Color coding: comparing to previous period of 7 days vs last 7 days

Green = Decrease versus previous 7 days; Orange = Stable; Red = increase

  1. Retrieve Ticket Data: INC tickets with

     "Status": "Resolved" AND "Closed" 

    AND 

    Last Resolved Date falls within the last 7 days in the user site.

  2. Calculate the Time to Resolve by subtracting the 'Submit Date' from the "completed date" for each ticket. 
  3. Calculate AVG of the Time to resolve data


DSDS view 

The Site Manager view aims to provide comprehensive visibility into tickets raised per site, covering various aspects such as the number of tickets raised, resolved, their current status, priority levels, and required actions. 

Scope: 

Tickets assigned  to DSDS→  Tickets with Service Type: "User Service Restoration","Security incident","user service request"

The DSDS view is not available to every user in Solvay; access must be approved by g giusti and ricardo neves. 


Context | KPI name | Indicators/graphs/Fields 

Description 

WO | Resolution times (in days) | KPI
  1. Retrieve Ticket Data:  WO tickets with (unit: days)

     "Status":  "Closed" , "Completed" And "Rejected"

    AND 

    "Completed date" falls within the  last 7 days  in the user site.

    Exclude: tickets where Completed date - Submit date < 5 mins 
  2. Calculate the Time to Resolve by subtracting the 'Submit Date' from the "Completed date" for each ticket. 
  3. Calculate the mean of the Time to Resolve data.
WO | Rejected % of WO | KPI

WO rejected: Tickets created in the user site with Status='Rejected'

Calculation:
Rejected % of WO = (Number of WO rejected with "submit date" falling in the last 7 days / total number of WO tickets with "submit date" falling in the last 7 days) * 100

WO | Overdue % | KPI

WO Overdue: tickets created in the user site whose associated ticket in the measurement site has:

svt_title='Standard Support Target'

AND 

goal_sched_goal_time < up_elapsed_time

Overdue % of WO=(Number of WO Overdue with "submit date" falling in the last 7 days /total number of WO tickets with "submit date" falling in the  last 7 days) * 100

Note: svt_title='Standard Support Target' mostly appears after February 2024. This might have a minimal impact on the calculation of this KPI.
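The overdue percentage can be sketched as below. Column names are the raw names quoted above; the 7-day submit-date filter is assumed to have been applied already.

```python
def overdue_pct(wos: list[dict]) -> float:
    """Percentage of WOs whose SVT goal time was exceeded (sketch)."""
    if not wos:
        return 0.0
    overdue = sum(
        1 for w in wos
        if w.get("svt_title") == "Standard Support Target"
        # goal time exceeded by the elapsed time
        and w.get("goal_sched_goal_time", float("inf")) < w.get("up_elapsed_time", 0)
    )
    return overdue / len(wos) * 100
```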

INC | Resolved Incidents last calendar week | KPI

The number of incidents created in the user site whose "last resolved date" falls within the last 7 days (the timestamp is not considered for the calculation).
INC | MTTR of last week's resolved incidents (in days) | KPI
  1. Retrieve Ticket Data:  INC    tickets with (unit is days)

     "Status": "Resolved" OR "Closed" 

    AND 

    Last Resolved Date falls within the last 7 days  in the user site.

  2. Calculate the Time to Resolve by subtracting the 'Submit Date' from the "last resolved date" for each ticket (timestamp considered for the calculation). Exclude: tickets where resolved date - Submit date < 5 mins 
  3. Calculate the mean of the Time to Resolve data.
INC | Created incidents last calendar week | KPI

The number of INC created in the user site with "submit date" falling within the last 7 days.
INC | % unassigned backlog INC | Pie chart

The scope is open tickets, both assigned and unassigned (i.e., excluding the ones with "Status": "Resolved", "Closed" OR "Canceled").

Unassigned INC= Tickets created in the user site WHERE "Assignee Login ID" is null. Note that the Status can be 'Assigned', but this is not reliable: many records have the status 'Assigned' while the assignee column is null. 

Assigned INC= the same as above with "Assignee Login ID" filled. 

Calculation:

% Unassigned INC: (number of unassigned INC with "submit date" falls within the last 7 days/ total open INC tickets per user site  with "submit date" falls within the last 7 days) *100

INC | Pending sub-state wise backlog volume | Pie chart 

Pending INC= Tickets created in the user site with "Status": "Pending" 

Calculation:

% Pending INC per 'status_reason' = (number of pending INC with "submit date" falling within the last 7 days AND "status reason"= x / total INC tickets per user site with "submit date" falling within the last 7 days)



Global view 

The global view captures insights globally, without any filter on the site.

Context | Indicator name | Indicator/Graphs/Field | Description 
INC | Incident age | Pie chart 
  • Filters:

    •  only open tickets are included.-> ( INC tickets with status: In progress, Pending, Assigned;)

    •  tickets submitted in the last 15 days for INC are captured

  • Categorization:

    • Calculate the age of the ticket in days → Age = difference between the current date and the submit date. This gives the number of days since the ticket was submitted.

    • Tickets are categorized into 5-day ranges (e.g., 0-5 days, 6-10 days, 11-15 days).

  • Aggregation:

    • Groups results by service_type and age category.

  • Ordering in a bar chart:

    • Results are ordered by age_category in a bar chart
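The categorization and aggregation steps above can be sketched as follows. A minimal sketch; field names such as `service_type` and `age` are illustrative.

```python
from collections import Counter

def age_category(age_days: int) -> str:
    """Map ticket age (days since submit) into the 5-day buckets above."""
    if age_days <= 5:
        return "0-5 days"
    if age_days <= 10:
        return "6-10 days"
    return "11-15 days"

# Group open tickets by (service_type, age category); fields illustrative.
open_tickets = [
    {"service_type": "User Service Restoration", "age": 3},
    {"service_type": "User Service Restoration", "age": 7},
    {"service_type": "Security incident", "age": 12},
]
counts = Counter((t["service_type"], age_category(t["age"])) for t in open_tickets)
```

The resulting counts, ordered by age category, drive the bar chart.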

INC | Incident age trend | Bar chart

The differences in the age of INCs per category between the current period and the previous period. For this, the snapshot of age per category specified above is needed over the last 15 days.
INC | Affected service | Pie Chart

 Filters

  • Include only open tickets: INC tickets with statuses: In Progress, Pending, Assigned.

  • Filter tickets submitted in the last 15 days:

 Categorization

  • Affected Service Categorization:

    • Based on whether the ServiceCI field is filled:

      • Filled ServiceCI: Considered for SLA categorization.

      • Not Filled ServiceCI: Excluded from SLA analysis.

Aggregation

  • For Affected Services:

    • Count the number of tickets where the ServiceCI field is filled and those where it is not.

    • Present this as a pie chart to show the proportion of tickets with/without ServiceCI.

  • For SLA Categories:

    • Count the number of tickets in each SLA category (IN SLA, NO SLA, OVER SLA) for each affected service.

    • Aggregate data per service type to show a comparison of SLA performance.

 Visualization

  • Graph:

    • Create a stacked horizontal bar chart to represent SLA categories (IN SLA, NO SLA, OVER SLA) for the top ten affected services.

      • X-axis: Percentage (0%–100%).

      • Y-axis: Affected services.

      • Colors:

        • Blue for IN SLA.

        • Yellow for NO SLA.

        • Red for OVER SLA.

  • Pie Chart:

    • Show the proportion of tickets with ServiceCI filled vs. not filled.


INC | SLA status | Stacked horizontal bar chart

Filters

  • Include only open tickets: INC tickets with statuses: In Progress, Pending, Assigned.

  • Filter tickets submitted in the last 15 days:

 Categorization

  • SLA Categorization (for tickets with ServiceCI filled):

    • Use slm_status for SLA categorization:

      • IN SLA: (slm_status = Green: 1 & 3)

      • NO SLA: (slm_status = Yellow: 4 & 5)

      • OVER SLA: (slm_status = Red: 2)
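The slm_status mapping quoted above can be sketched as a lookup. The codes-to-category pairs come from the text; treating unexpected codes as NO SLA is an assumption.

```python
# slm_status code -> SLA category, per the mapping quoted above.
SLA_CATEGORY = {1: "IN SLA", 3: "IN SLA", 4: "NO SLA", 5: "NO SLA", 2: "OVER SLA"}

def sla_category(slm_status: int) -> str:
    """Map a Helix slm_status code to an SLA bucket (unknown codes are
    treated as NO SLA here, which is an assumption)."""
    return SLA_CATEGORY.get(slm_status, "NO SLA")
```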

For SLA Categories:

    • Count the number of tickets in each SLA category (IN SLA, NO SLA, OVER SLA) for each affected service.

    • Aggregate data per service type to show a comparison of SLA performance.

Visualization

  • Graph:

    • Create a stacked horizontal bar chart to represent SLA categories (IN SLA, NO SLA, OVER SLA) for the top ten affected services.

      • X-axis: Percentage (0%–100%).

      • Y-axis: Affected services.

      • Colors:

        • Blue for IN SLA.

        • Yellow for NO SLA.

        • Red for OVER SLA.



Service Management view 



xxx