11 DoubleVerify SQL Interview Questions (Updated 2024)

Updated on March 2, 2024

At DoubleVerify, SQL does the heavy lifting for extracting and analyzing ad verification data, and for running data-quality checks around ad fraud detection and viewability measurement. So, it shouldn't surprise you that DoubleVerify often tests SQL coding questions during interviews for Data Analyst, Data Science, and BI jobs.

To help you ace the DoubleVerify SQL interview, here are 11 DoubleVerify SQL interview questions – can you answer them all?


SQL Question 1: Calculate Campaign's Daily Impressions

DoubleVerify verifies the quality and effectiveness of digital media for brands. Assume you're given two tables: one describing advertising campaigns and another logging ad impressions.

The campaign table provides information about distinct advertising campaigns, their start and end dates, and the advertiser. The impression table logs each time an advertisement is shown (an impression).

Each row in the campaign table has the following 4 columns:

Example Input:

| campaign_id | start_date | end_date | advertiser |
|---|---|---|---|
| 1 | 01/01/2020 | 05/01/2020 | Brand A |
| 2 | 05/01/2020 | 09/01/2020 | Brand A |
| 3 | 03/01/2020 | 07/01/2020 | Brand B |
| 4 | 04/01/2020 | 08/01/2020 | Brand C |

Each row in the impression table has the following 5 columns:

Example Input:

Write a SQL query that calculates the number of impressions per campaign per day. The output should include the days with zero impressions. Round the number of impressions to the nearest integer without using a ROUND() function.

Example Output:


This query creates a series of dates for each campaign that spans from the start date to the end date. It then calculates the daily impressions for each date within the range. The COALESCE function is used to substitute null values with 0 in case there are days with no impressions.
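One way to write such a query in PostgreSQL – a sketch only, since the table and column names here (ad_campaigns, impressions, impression_id, impression_date) are assumptions for illustration:

```sql
-- Sketch: table/column names are hypothetical.
SELECT
  c.campaign_id,
  d.day::date AS day,
  COALESCE(COUNT(i.impression_id), 0) AS impressions
FROM ad_campaigns c
-- generate one row per day of the campaign's date range
CROSS JOIN LATERAL
  generate_series(c.start_date, c.end_date, INTERVAL '1 day') AS d(day)
LEFT JOIN impressions i
  ON i.campaign_id = c.campaign_id
 AND i.impression_date::date = d.day::date
GROUP BY c.campaign_id, d.day
ORDER BY c.campaign_id, d.day;
```

Because the date series is generated independently of the impressions, days with no matching rows in the LEFT JOIN still appear in the output with a count of 0.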

p.s. Window functions show up pretty frequently during SQL interviews, so practice the 27+ window function questions on DataLemur


SQL Question 2: DoubleVerify Ad Performance Analysis

Given that DoubleVerify is a company focused on digital advertising, an appropriate question might involve analyzing and understanding advertisement engagement.

Let's consider a scenario where DoubleVerify wants to understand the daily engagement of each ad campaign and the total cost per campaign. The analyst is particularly interested in finding the top three campaigns with the highest costs and the average daily engagement for the past month.

Assume we have two database tables: one for campaigns and one for ad engagements.

Example Input:

| campaign_id | campaign_name | cost |
|---|---|---|
| 1 | DV Launch | 5000 |
| 2 | Autumn Sale | 8000 |
| 3 | Black Friday | 10000 |
| 4 | Holiday Deal | 4000 |
| 5 | Spring Clearance | 3000 |

Example Input:


The goal is to write a SQL query that calculates the average daily engagements and total cost for each campaign, then identifies the top three campaigns in descending order of cost.
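One possible solution, assuming tables named campaigns(campaign_id, campaign_name, cost) and engagements(engagement_id, campaign_id, engagement_date) – all names hypothetical:

```sql
SELECT
  c.campaign_id,
  c.campaign_name,
  c.cost,
  -- one interpretation of "average daily engagement":
  -- total engagements divided by the number of active days
  COUNT(e.engagement_id)::NUMERIC
    / COUNT(DISTINCT e.engagement_date) AS avg_daily_engagements
FROM campaigns c
JOIN engagements e
  ON e.campaign_id = c.campaign_id
WHERE e.engagement_date >= CURRENT_DATE - INTERVAL '1 month'
GROUP BY c.campaign_id, c.campaign_name, c.cost
ORDER BY c.cost DESC
LIMIT 3;
```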


This query first joins the campaign and engagement tables on the campaign ID. It then filters to ad engagements that happened within the past month. Using the GROUP BY operation, the query then calculates the average daily engagements and total cost for each campaign over the past month. Finally, the ORDER BY and LIMIT operations are used to find the top three costliest ad campaigns.

SQL Question 3: What sets the 'BETWEEN' and 'IN' operators apart?

The BETWEEN and IN operators are both used to filter data based on certain criteria, but they work in different ways. BETWEEN is used to select values within a range, while IN is used to select values that match a list.

For instance, if you have a table that contains the salary of each employee, along with which country they reside in, you could use the BETWEEN operator to find all employees who make between $130k and $160k:

To find all employees that reside in France or Germany, you could use the IN operator:
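Sketches of both filters, against a hypothetical employees(salary, country) table:

```sql
-- BETWEEN: salary within a range (inclusive on both ends)
SELECT * FROM employees
WHERE salary BETWEEN 130000 AND 160000;

-- IN: country matches any value in the list
SELECT * FROM employees
WHERE country IN ('France', 'Germany');
```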


SQL Question 4: Average Fraud Score by Advertiser

DoubleVerify is an organization that provides a variety of measurements to ensure the quality and effectiveness of digital advertising. Suppose you are asked to calculate the average fraud score per advertiser over the past year, based on a table of fraud-score records that looks like this:

Example Input:

| advertiser_id | log_id | event_timestamp | fraud_score |
|---|---|---|---|
| 101 | 450 | 2022-08-01 14:05:00 | 65 |
| 101 | 451 | 2022-05-02 08:10:00 | 33 |
| 105 | 452 | 2021-12-24 16:15:00 | 10 |
| 105 | 453 | 2022-02-01 12:00:00 | 15 |
| 102 | 454 | 2022-07-01 10:00:00 | 89 |

The task is to find the average fraud score per advertiser in the table.


This SQL query calculates the average fraud scores by grouping all the records by advertiser using the GROUP BY clause. The AVG() function is then used to compute the average fraud score for the records within each group.
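One possible version – the names ad_fraud_scores, advertiser_id, event_timestamp, and fraud_score are assumptions for illustration:

```sql
SELECT
  advertiser_id,
  AVG(fraud_score) AS avg_fraud_score
FROM ad_fraud_scores
WHERE event_timestamp >= CURRENT_DATE - INTERVAL '1 year'  -- past year only
GROUP BY advertiser_id;
```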

To practice a very similar question, try this interactive Facebook Advertiser Status Question, which similarly requires SQL calculations on advertiser data, or this Amazon Average Review Ratings Question, which similarly requires calculating averages (ratings rather than fraud scores).

SQL Question 5: What's a database view?

A database view is a virtual table that is created based on the results of a SELECT statement, and provides you a customized, read-only version of your data that you can query just like a regular table.

Views in SQL can help you enforce data security requirements by hiding sensitive data from certain users. A materialized view can also improve performance by pre-computing and storing the results of an intermediate query, which can be faster than executing that query each time; the trade-off is that a materialized view must be refreshed whenever the underlying data changes, whereas a regular (non-materialized) view stores nothing and is simply re-executed on every query.
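A minimal PostgreSQL sketch of each kind, with all table and column names invented for illustration:

```sql
-- A regular view: stores no data, re-executed on every query.
CREATE VIEW high_risk_impressions AS
SELECT impression_id, advertiser_id, fraud_score
FROM ad_fraud_scores
WHERE fraud_score > 80;

-- Query it just like a table:
SELECT COUNT(*) FROM high_risk_impressions;

-- A materialized view: pre-computes and stores its result...
CREATE MATERIALIZED VIEW high_risk_summary AS
SELECT advertiser_id, COUNT(*) AS n_high_risk
FROM ad_fraud_scores
WHERE fraud_score > 80
GROUP BY advertiser_id;

-- ...but must be refreshed after the underlying data changes.
REFRESH MATERIALIZED VIEW high_risk_summary;
```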

SQL Question 6: Calculate the Average Quality Score Per Campaign

DoubleVerify is a company that ensures the quality and effectiveness of digital advertising. Let's assume they grant a quality score to each ad impression. To provide advertisers with deeper insight, write a query that calculates the average quality score per campaign on a monthly basis for 2022.

Here's our sample 'ad_impressions' table:

Example Input:

| impression_id | campaign_id | impression_date | quality_score |
|---|---|---|---|
| 8341 | 305 | 01/05/2022 00:00:00 | 87 |
| 9802 | 562 | 01/13/2022 00:00:00 | 78 |
| 8333 | 305 | 02/21/2022 00:00:00 | 90 |
| 6962 | 785 | 03/26/2022 00:00:00 | 82 |
| 4517 | 785 | 03/25/2022 00:00:00 | 85 |

We want our output to resemble the following format:

Example Output:


Here is a PostgreSQL query that solves this:
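(One possible formulation; the column names impression_id, campaign_id, impression_date, and quality_score are assumed.)

```sql
SELECT
  DATE_TRUNC('month', impression_date) AS month,
  campaign_id,
  AVG(quality_score) AS avg_quality_score
FROM ad_impressions
WHERE EXTRACT(YEAR FROM impression_date) = 2022  -- 2022 only
GROUP BY DATE_TRUNC('month', impression_date), campaign_id
ORDER BY month, avg_quality_score DESC;
```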

This query first filters out the impressions for the year 2022. Then, it groups impressions by month and campaign, calculating the average quality score for each grouping. Finally, it orders the result by month and the average quality score in descending order, so one could easily spot which campaign performed the best in each month.

SQL Question 7: What are some similarities and differences between unique and non-unique indexes?


Some similarities between unique and non-unique indexes include:

  • Both indexes improve the performance of SQL queries by providing a faster way to look up the desired data.
  • Both indexes are stored as additional data structures, which require extra storage space and add overhead to writes.
  • Both indexes can be created on one or more columns of a table.

Some differences between unique and non-unique indexes include:

  • A unique index enforces the uniqueness of the indexed columns, meaning that no duplicate values are allowed in the indexed columns. A non-unique index allows duplicate values in the indexed columns.
  • A unique index can be used to enforce the primary key of a table, but a non-unique index cannot.
  • How a unique index treats NULLs is database-specific: SQL Server allows at most one NULL in the indexed columns, while PostgreSQL treats NULLs as distinct and therefore allows multiple (unless NULLS NOT DISTINCT is specified). A non-unique index places no restriction on NULLs.
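As a quick illustration of the two kinds of index – the table and column names here are made up:

```sql
-- Unique index: rejects any second row with a duplicate campaign_name.
CREATE UNIQUE INDEX idx_campaigns_name ON campaigns (campaign_name);

-- Non-unique index: speeds up lookups but allows duplicate campaign_ids.
CREATE INDEX idx_impressions_campaign ON impressions (campaign_id);
```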

SQL Question 8: Filter customer records based on campaign name

DoubleVerify is a company that builds software to measure and improve digital ad engagement. You are given a table which contains digital ad campaigns run by DoubleVerify. Each record has a campaign ID, a campaign name, a start date, an end date, and an advertiser. Your task is to find all campaigns that contain the word "Boost" in their name.

Example Input:

| campaign_id | campaign_name | start_date | end_date | advertiser |
|---|---|---|---|---|
| 1 | Digital Boost | 06/08/2022 00:00:00 | 06/18/2022 00:00:00 | Walmart |
| 2 | Summer Sale | 06/10/2022 00:00:00 | 06/25/2022 00:00:00 | Amazon |
| 3 | Boost Performance | 06/15/2022 00:00:00 | 06/30/2022 00:00:00 | Ebay |
| 4 | Holiday Savings | 06/20/2022 00:00:00 | 07/05/2022 00:00:00 | Best Buy |
| 5 | Back to School Boost | 06/25/2022 00:00:00 | 07/10/2022 00:00:00 | Target |


This PostgreSQL query returns all rows in the campaigns table where campaign_name contains the word "Boost". The "%" wildcard character on either side of "Boost", combined with LIKE, matches any string that contains "Boost" anywhere within it.
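In sketch form, assuming the table is named campaigns with a campaign_name column:

```sql
SELECT *
FROM campaigns
WHERE campaign_name LIKE '%Boost%';
```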

Example Output:

| campaign_id | campaign_name | start_date | end_date | advertiser |
|---|---|---|---|---|
| 1 | Digital Boost | 06/08/2022 00:00:00 | 06/18/2022 00:00:00 | Walmart |
| 3 | Boost Performance | 06/15/2022 00:00:00 | 06/30/2022 00:00:00 | Ebay |
| 5 | Back to School Boost | 06/25/2022 00:00:00 | 07/10/2022 00:00:00 | Target |

SQL Question 9: Joining and Analysing Customers and Orders Data

Assume that you are a data analyst for DoubleVerify and you are asked to analyze customer data with respect to their orders. We have two tables: a customers table and an orders table.

The customers table holds each customer's details, including their first name, last name, and email. The orders table records each order, including which customer placed it and the quantity ordered.

Write a SQL query to find the top 5 customers who have placed the most orders, indicating the full name, email, and the total quantity of all orders placed.

Please format your result in the following manner:

Example Input:
Example Input:


This SQL query joins the customers table with the orders table on the customer ID and sums up (using SUM()) the total quantity of orders that each customer has placed. It then orders the result in descending order of total quantity and limits the result to the top 5 customers. It selects the full name of the customer, their email, and the total quantity of orders placed.

The CONCAT function concatenates the first and last names to produce a full name.
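Under assumed schemas customers(customer_id, first_name, last_name, email) and orders(order_id, customer_id, quantity) – names hypothetical – the query might look like:

```sql
SELECT
  CONCAT(c.first_name, ' ', c.last_name) AS full_name,
  c.email,
  SUM(o.quantity) AS total_quantity
FROM customers c
JOIN orders o
  ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.first_name, c.last_name, c.email
ORDER BY total_quantity DESC
LIMIT 5;
```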

Because joins come up so often during SQL interviews, take a stab at this SQL join question from Spotify.

SQL Question 10: What do primary keys do?

The primary key of a table is a column or set of columns that serves as a unique identifier for each row. It ensures that all rows are distinct and does not allow null values.

For example, say you had stored some Facebook ad campaign data that DoubleVerify ran:
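For example, a hypothetical schema might be:

```sql
CREATE TABLE fb_ad_campaigns (
  campaign_id   INTEGER PRIMARY KEY,  -- unique identifier for each campaign
  campaign_name VARCHAR(255),
  start_date    DATE,
  end_date      DATE
);
```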

The primary key column uniquely identifies each row in the table, and the PRIMARY KEY constraint ensures that no two rows share the same value. This helps to maintain the integrity of the data in the table by preventing duplicate rows.

The primary key is also an important part of the table because it allows you to easily identify and reference specific campaigns in your Facebook Ad data. You can use it to join to other tables in the database, such as a table containing data on the results of the campaigns.

SQL Question 11: Calculate Compression Metric for Ad Campaigns

As an Ad Verification Company, DoubleVerify collects information about the impressions/ad views. They have asked you to write a query that calculates a "compression" metric for each campaign_id. The "compression" metric is defined as the absolute difference between the maximum and minimum view_counts, divided by the maximum view_counts for each campaign, rounding to 2 decimal places.

This metric gives an understanding of how evenly an ad campaign's impressions are spread across different advertisers. Provide a SQL query that calculates this "compression" metric.

Example Input:
Example Output:


This SQL query uses the GROUP BY clause to compute statistics for each campaign_id. It then uses MAX(), MIN(), ABS(), ROUND() and basic arithmetic operators to compute the compression metric. The ABS() function ensures that we get a positive difference, MAX(view_counts) is used as the denominator as per the definition of the metric, and the whole expression is rounded to 2 decimal places using ROUND(). The NUMERIC typecast ensures the division is performed in decimal rather than integer arithmetic.
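In sketch form – the table name ad_views is an assumption; view_counts and campaign_id come from the problem statement:

```sql
SELECT
  campaign_id,
  ROUND(
    ABS(MAX(view_counts) - MIN(view_counts))::NUMERIC / MAX(view_counts),
    2
  ) AS compression
FROM ad_views
GROUP BY campaign_id;
```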

To practice a very similar question, try this interactive Alibaba Compressed Mean Question, which also requires performing calculations on aggregated data and rounding the results, or this JP Morgan Chase Cards Issued Difference Question, which also involves computing the difference between maximum and minimum values within a group.

Preparing For The DoubleVerify SQL Interview

The best way to prepare for a SQL interview, besides making sure you have strong SQL fundamentals, is to practice a ton of real SQL questions that were asked in recent job interviews. Beyond just solving the above DoubleVerify SQL interview questions, you should also solve the 200+ SQL questions on DataLemur which come from companies like Netflix, Google, and Amazon.

Each DataLemur SQL question has hints to guide you, full answers and, most importantly, an interactive coding environment so you can run your SQL query right in the browser and have it executed.

To prep for the DoubleVerify SQL interview, it can also be helpful to practice SQL problems from other tech companies.

In case your SQL foundations are weak, don't jump right into solving questions – first strengthen your SQL foundations with this SQL tutorial for Data Analytics.


This tutorial covers topics including CASE/WHEN statements and joining multiple tables – both of these show up routinely during DoubleVerify interviews.

DoubleVerify Data Science Interview Tips

What Do DoubleVerify Data Science Interviews Cover?

Besides SQL interview questions, the other question categories to prepare for the DoubleVerify Data Science Interview are:


How To Prepare for DoubleVerify Data Science Interviews?

To prepare for DoubleVerify Data Science interviews read the book Ace the Data Science Interview because it's got:

  • 201 interview questions taken from companies like Google, Tesla, & Goldman Sachs
  • a crash course on Stats, SQL & ML
  • 900+ reviews on Amazon & a 4.5-star rating

Acing Data Science Interview