At ServiceNow, SQL is used internally by Data Analysts and Data Scientists who analyze customer and sales data to inform the company's product roadmap. Externally, the ServiceNow platform has the GlideRecord API, which enables ServiceNow developers to query and analyze a company's ServiceNow database. Because of how crucial SQL is to ServiceNow, the company frequently asks SQL questions during job interviews for ServiceNow Developers, Data Engineers, and Solutions Architects.
In case you're studying for a ServiceNow SQL Assessment, here are 10 ServiceNow SQL interview questions to practice, which are similar to commonly asked questions at ServiceNow. Can you answer them all?
As a data analyst at ServiceNow, you are asked to analyze the average time taken to resolve tickets in your company. You'll need to work with the "tickets" table, which contains ticket_id, status, created_at, and resolved_at columns. The status column indicates whether a ticket is resolved (value = 1) or not (value = 0), and resolved_at is NULL if the ticket has not been resolved.
Your task is to calculate the average time taken to resolve tickets each day, and provide a running average across days using window functions.
ticket_id | status | created_at | resolved_at |
---|---|---|---|
T101 | 1 | 06/08/2022 09:10:00 | 06/08/2022 10:15:00 |
T102 | 1 | 06/08/2022 09:30:00 | 06/08/2022 12:45:00 |
T103 | 0 | 06/08/2022 10:00:00 | NULL |
T104 | 1 | 06/09/2022 08:00:00 | 06/09/2022 09:30:00 |
T105 | 1 | 06/09/2022 09:20:00 | 06/09/2022 12:00:00 |
date | avg_resolve_time | running_avg |
---|---|---|
2022-06-08 | 105 | 105 |
2022-06-09 | 135 | 120 |
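Here's one way to write it in PostgreSQL (a sketch assuming the table is named tickets, the timestamp columns are of type TIMESTAMP, and resolve time is measured in minutes):

```sql
-- Average resolve time per day, plus a running average across days
WITH resolve_times AS (
  SELECT
    created_at::DATE AS date,
    AVG(EXTRACT(EPOCH FROM (resolved_at - created_at)) / 60) AS avg_resolve_time
  FROM tickets
  WHERE status = 1            -- only resolved tickets have a resolved_at value
  GROUP BY created_at::DATE
)
SELECT
  date,
  avg_resolve_time,
  AVG(avg_resolve_time) OVER (
    ORDER BY date
    ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
  ) AS running_avg
FROM resolve_times
ORDER BY date;
```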
This SQL query first calculates the average resolve time for each date in the resolve_times Common Table Expression (CTE). In the main query, the running average is calculated using a window function which includes all rows from the start of the table (UNBOUNDED PRECEDING) up to and including the current row (CURRENT ROW). The result is then ordered by the date.
ServiceNow handles multiple incident reports every day. These reports come from multiple clients, and each client may submit multiple reports. Each incident report consists of a unique id, client id, date of submission, issue category, and status (open, in-progress, closed).
Your task is to design a SQL query that calculates the number of unresolved (status: open or in-progress) incident reports per client for the last 30 days.
report_id | client_id | submission_date | category | status |
---|---|---|---|---|
201 | 101 | 09/13/2022 | hardware | open |
204 | 102 | 09/16/2022 | software | in-progress |
215 | 101 | 09/20/2022 | network | closed |
223 | 103 | 09/25/2022 | software | open |
231 | 102 | 09/30/2022 | hardware | closed |
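One way to approach it in PostgreSQL (a sketch assuming the table is named incident_reports and submission_date is a DATE column):

```sql
-- Count unresolved reports per client, limited to the last 30 days
SELECT
  client_id,
  COUNT(*) AS unresolved_reports
FROM incident_reports
WHERE status IN ('open', 'in-progress')
  AND submission_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY client_id;
```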
To solve this problem, we first need to filter the records that were submitted in the last 30 days and have the 'open' or 'in-progress' status. Then, we group these records by client_id, getting the count of unresolved reports for each client.
The result of the query will give a clear understanding of how many unresolved reports each client has, which is beneficial for understanding the workload distribution and for prioritizing tasks.
To practice a related problem on DataLemur's free online SQL code editor, attempt this Meta SQL interview question:
A database index is a data structure that improves the speed of data retrieval operations on a database table.
There are a few different types of indexes that can be used in a database, including clustered and non-clustered indexes.
For a concrete example, say you had a payments table of ServiceNow customer payments that includes a payment_date column.
Here's how you might set up a clustered index on the payment_date column:
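The exact statement depends on the database engine; the sketch below assumes a hypothetical payments table. PostgreSQL doesn't maintain a clustered index automatically, but its CLUSTER command physically reorders the table by an index once, while databases like SQL Server support a true clustered index:

```sql
-- PostgreSQL: build an index on payment_date, then physically reorder
-- the table by that index (a one-time operation)
CREATE INDEX idx_payments_payment_date ON payments (payment_date);
CLUSTER payments USING idx_payments_payment_date;

-- SQL Server equivalent: the index itself defines the table's physical order
-- CREATE CLUSTERED INDEX idx_payments_payment_date ON payments (payment_date);
```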
A clustered index on the payment_date column would determine the physical order of the records in the table based on payment_date. This means that the records with the earliest payment_date values would be stored together physically in the table, followed by records with later payment_date values. This speeds up queries that filter or sort the data based on payment_date, as the records are already physically sorted in the table. For example, if you want to retrieve all the payments made in the month of June, the database can use the clustered index to quickly locate and retrieve the desired records without having to search through the entire table.
ServiceNow is a platform for service management. In our company, we store data about customer incidents in the "incidents" table. Each incident has a level of impact and urgency, both scored 1-3, where 1 is high and 3 is low. Each incident also has a status: "New", "In Progress", "Resolved", or "Closed".
Write a SQL query to retrieve all incidents which have high impact and urgency (score 1), and their status is either "In Progress" or "New". The incidents should be sorted by their creation date in ascending order.
incident_id | customer_id | created_date | impact | urgency | status |
---|---|---|---|---|---|
1001 | 50 | 06/01/2022 00:00:00 | 2 | 1 | "In Progress" |
1002 | 120 | 06/05/2022 00:00:00 | 1 | 1 | "New" |
1003 | 90 | 06/06/2022 00:00:00 | 1 | 1 | "Resolved" |
1004 | 200 | 06/10/2022 00:00:00 | 1 | 1 | "New" |
1005 | 60 | 06/20/2022 00:00:00 | 3 | 1 | "In Progress" |
incident_id | customer_id | created_date | impact | urgency | status |
---|---|---|---|---|---|
1002 | 120 | 06/05/2022 00:00:00 | 1 | 1 | "New" |
1004 | 200 | 06/10/2022 00:00:00 | 1 | 1 | "New" |
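Here's one way to write it, assuming the table is named incidents:

```sql
-- High impact AND high urgency, restricted to open-ish statuses
SELECT *
FROM incidents
WHERE impact = 1
  AND urgency = 1
  AND (status = 'In Progress' OR status = 'New')
ORDER BY created_date ASC;
```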
This query uses a WHERE clause to filter incidents based on multiple conditions. The AND operator ensures both impact and urgency are high (equal to 1), while the OR operator (wrapped in parentheses) allows the status to be either "In Progress" or "New". The ORDER BY clause sorts the result by creation date in ascending order.
To practice another question about calculating rates, solve this SQL interview question from TikTok on DataLemur's online SQL coding environment:
A foreign key is a field in a table that references the primary key of another table. It creates a link between the two tables and ensures that the data in the foreign key field is valid.
Say, for example, you had sales analytics data from ServiceNow's CRM (customer relationship management) tool, stored in an accounts table and an opportunities table.
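A minimal sketch of what those tables might look like, with hypothetical column names, where each opportunity references the account it belongs to:

```sql
CREATE TABLE accounts (
  account_id   INTEGER PRIMARY KEY,
  account_name TEXT NOT NULL
);

CREATE TABLE opportunities (
  opportunity_id INTEGER PRIMARY KEY,
  account_id     INTEGER NOT NULL,
  amount         NUMERIC(12, 2),
  -- account_id must match an existing row in accounts
  FOREIGN KEY (account_id) REFERENCES accounts (account_id)
);
```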
The FOREIGN KEY constraint ensures that the data in the foreign key field of the "opportunities" table is valid, and prevents the insertion of rows in the opportunities table that do not have corresponding entries in the accounts table. It also helps to enforce the relationship between the two tables, and can be used to ensure that data is not deleted from the accounts table while there are still references to it in the opportunities table.
In your role as a data analyst at ServiceNow, you've been tasked with examining the company's IT ticket system. Each ticket has an opening and closing timestamp, and you've been asked to calculate the average resolution time for IT tickets created in each month of 2022.
Please provide your query using the following table as your data sample:
ticket_id | start_time | end_time |
---|---|---|
1 | 2022-01-01 09:00:00 | 2022-01-01 14:00:00 |
2 | 2022-01-02 12:00:00 | 2022-01-02 16:00:00 |
3 | 2022-02-01 09:00:00 | 2022-02-01 11:00:00 |
4 | 2022-02-02 18:00:00 | 2022-02-03 18:00:00 |
5 | 2022-03-01 06:00:00 | 2022-03-01 16:00:00 |
The start_time and end_time columns are both of type TIMESTAMP. The ticket_id column is an integer.
We would like an output that looks like this:
month | avg_resolution_time_in_hours |
---|---|
1 | 4.50 |
2 | 15.00 |
3 | 10.00 |
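One possible PostgreSQL query, assuming the table is named tickets and start_time and end_time are TIMESTAMP columns:

```sql
-- Average resolution time in hours, per month, for tickets created in 2022
SELECT
  EXTRACT(MONTH FROM start_time) AS month,
  ROUND(AVG(EXTRACT(EPOCH FROM (end_time - start_time)) / 3600)::NUMERIC, 2)
    AS avg_resolution_time_in_hours
FROM tickets
WHERE EXTRACT(YEAR FROM start_time) = 2022
GROUP BY EXTRACT(MONTH FROM start_time)
ORDER BY month;
```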
This query determines the difference between the start_time and end_time of each ticket, converts that time difference to hours, and then calculates the average of those times for tickets created in each month of 2022. This gives you the average resolution time of IT tickets on a monthly basis for the year 2022.
To practice a similar SQL interview question on DataLemur's free interactive SQL code editor, solve this Facebook SQL Interview question:
Using a join in SQL, you can retrieve data from multiple tables and combine the results into a single result set.
In SQL, there are four distinct types of JOINs. To demonstrate each kind, imagine you had two database tables: a keywords table that contains data on Google Ads keywords and their bid amounts, and a sales table with information on product sales and the Google Ads keywords that drove those sales.
An INNER JOIN retrieves rows from both tables where there is a match in the shared key or keys. For example, an INNER JOIN between the keywords table and the sales table could be performed using the keyword column as the shared key. This would retrieve only the rows where the keyword in the keywords table matches a keyword in the sales table.
A LEFT JOIN retrieves all rows from the left table (in this case, the keywords table) and any matching rows from the right table (the sales table). If there is no match in the right table, NULL values are returned for the right table's columns.
A RIGHT JOIN retrieves all rows from the right table (in this case, the sales table) and any matching rows from the left table (the keywords table). If there is no match in the left table, NULL values are returned for the left table's columns.
A FULL OUTER JOIN retrieves all rows from both tables, regardless of whether there is a match in the shared key or keys. If there is no match, NULL values are returned for the columns of the non-matching table.
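A minimal sketch of the first two join types, using the hypothetical keywords and sales tables described above, which share a keyword column:

```sql
-- INNER JOIN: only keywords that appear in both tables
SELECT k.keyword, k.bid_amount, s.sale_amount
FROM keywords AS k
INNER JOIN sales AS s
  ON k.keyword = s.keyword;

-- LEFT JOIN: every keyword, with NULLs where no matching sale exists
SELECT k.keyword, k.bid_amount, s.sale_amount
FROM keywords AS k
LEFT JOIN sales AS s
  ON k.keyword = s.keyword;
```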
ServiceNow runs tons of enterprise marketing campaigns. Given a table of user interactions with their online ads, and another table of product purchases after clicking through the ads, let's calculate the click-through conversion rates for each ad.
The conversion rate is calculated as the number of purchases after clicking an ad, divided by the number of clicks on the ad. We then multiply the result by 100 to get a percentage.
Here are the sample data tables:
click_id | user_id | click_time | ad_id |
---|---|---|---|
1 | 123 | 06/08/2022 00:00:00 | 5000 |
2 | 165 | 06/08/2022 01:00:00 | 5001 |
3 | 235 | 06/08/2022 01:30:00 | 5001 |
4 | 125 | 06/08/2022 02:00:00 | 5000 |
5 | 189 | 06/08/2022 03:00:00 | 5001 |
purchase_id | user_id | purchase_time | product_id |
---|---|---|---|
1 | 123 | 06/08/2022 00:05:00 | 2000 |
2 | 165 | 06/08/2022 01:10:00 | 2001 |
3 | 235 | 06/08/2022 01:35:00 | 2000 |
4 | 189 | 06/08/2022 03:05:00 | 2001 |
5 | 189 | 06/08/2022 03:05:00 | 2000 |
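Here's one way to structure the query in PostgreSQL. The table names (ad_clicks and purchases) and the CTE names are assumptions for illustration:

```sql
WITH clicks_per_ad AS (
  -- total clicks on each ad
  SELECT ad_id, COUNT(*) AS num_clicks
  FROM ad_clicks
  GROUP BY ad_id
),
purchases_per_ad AS (
  -- purchases made by a user after clicking an ad
  SELECT c.ad_id, COUNT(DISTINCT p.purchase_id) AS num_purchases
  FROM ad_clicks AS c
  JOIN purchases AS p
    ON p.user_id = c.user_id
   AND p.purchase_time > c.click_time
  GROUP BY c.ad_id
)
SELECT
  c.ad_id,
  ROUND(100.0 * COALESCE(p.num_purchases, 0) / c.num_clicks, 2) AS conversion_rate
FROM clicks_per_ad AS c
LEFT JOIN purchases_per_ad AS p
  ON c.ad_id = p.ad_id;
```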
The SQL above starts by defining two CTEs. The first computes the number of clicks for each ad, while the second computes the number of purchases made after each ad click, joining clicks to purchases on the condition that the purchase time is greater than the click time. We then calculate the conversion rate by dividing the number of purchases by the number of clicks for each ad. To also count clicks that didn't lead to any purchase, we use a LEFT JOIN and replace missing values with 0 using the COALESCE function.
This gives us the click-through conversion rate for each ad in the ServiceNow online marketing campaigns.
To solve a related problem on DataLemur's free interactive coding environment, solve this Facebook SQL Interview question:
ServiceNow, as a software company, would likely have a database table storing information about their clients such as their client_id, name, email, phone, and region. Imagine we have a scenario where the marketing team is planning a region-specific campaign and wants to filter out all clients that belong to a specific region that starts with 'Cal'. Create a PostgreSQL query that retrieves all records from the clients table where the region starts with 'Cal'.
client_id | name | email | phone | region |
---|---|---|---|---|
101 | John Doe | john.doe@example.com | 123456789 | California |
102 | Jane Smith | jane.smith@example.com | 234567890 | Texas |
103 | Alice Johnson | alice.johnson@example.com | 345678901 | California |
104 | Bob Johnson | bob.johnson@example.com | 456789012 | Calgary |
105 | Charlie Brown | charlie.brown@example.com | 567890123 | Florida |
client_id | name | email | phone | region |
---|---|---|---|---|
101 | John Doe | john.doe@example.com | 123456789 | California |
103 | Alice Johnson | alice.johnson@example.com | 345678901 | California |
104 | Bob Johnson | bob.johnson@example.com | 456789012 | Calgary |
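Assuming the table is named clients, the query could be as simple as:

```sql
-- Match any region beginning with 'Cal' (e.g., California, Calgary)
SELECT *
FROM clients
WHERE region LIKE 'Cal%';
```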
This query uses the LIKE keyword in SQL to filter for clients whose region starts with 'Cal'. The percentage sign % is a wildcard in SQL that matches any sequence of characters. Since it is placed after 'Cal', it will match any string that starts with 'Cal'.
To solve another question about calculating rates, solve this SQL interview question from TikTok on DataLemur's interactive SQL code editor:
The WHERE clause is used to filter rows from the result set of a SELECT, UPDATE, or DELETE statement. It allows you to specify a condition that must be met for a row to be included in the result set.
The HAVING clause is used to filter groups created by the GROUP BY clause. It is similar to the WHERE clause, but it is used to specify conditions on the groups created by the GROUP BY clause, rather than on the individual rows of the table.
Say you were working on a social media analytics project for ServiceNow.
Here is an example of a SQL query that you might write which uses both the WHERE and HAVING clauses:
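A sketch assuming a hypothetical marketing_campaigns table with platform, impressions, conversions, and campaign_date columns:

```sql
SELECT
  platform,
  SUM(impressions) AS total_impressions,
  AVG(conversions) AS avg_conversions
FROM marketing_campaigns
WHERE campaign_date BETWEEN '2023-01-01' AND '2023-01-31'  -- January 2023 only
GROUP BY platform
HAVING SUM(impressions) > 5000
   AND AVG(conversions) > 0.2;
```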
This query retrieves the total impressions and average conversions for each platform in the marketing campaigns table, where the campaign date falls in January 2023. The rows are grouped by platform, and the HAVING clause filters the groups to include only those with more than 5000 impressions and an average conversion rate above 0.2.
The best way to prepare for a ServiceNow SQL interview is to practice, practice, practice. Besides solving the above ServiceNow SQL interview questions, you should also solve the 200+ DataLemur SQL Interview Questions which come from companies like FAANG and tech startups.
Each problem on DataLemur has hints to guide you, detailed solutions and best of all, there's an online SQL code editor so you can instantly run your query and have it checked.
To prep for the ServiceNow SQL interview, it can also be a great idea to solve interview questions from other tech companies like:
However, if your SQL query skills are weak, forget about diving straight into solving questions – refresh your SQL knowledge with this DataLemur SQL tutorial.
This tutorial covers SQL topics like filtering strings using LIKE and math functions – both of which show up frequently in SQL interviews at ServiceNow.
For the ServiceNow Data Science Interview, in addition to SQL query questions, the other types of questions to prepare for are:
To prepare for ServiceNow Data Science interviews read the book Ace the Data Science Interview because it's got: