At Playtika, SQL is used day-to-day to analyze gaming behavior patterns and to manage the huge volume of gaming data behind real-time decision making. Because of this, Playtika typically asks SQL questions in interviews for Data Science, Analytics, and Data Engineering jobs.
Thus, to help you study for the Playtika SQL interview, here are 10 Playtika SQL interview questions – can you answer each one?
Playtika, a leading company in the digital entertainment industry, wants to gain insight into the user ratings of different games on their platform.
As a data analyst, you are asked to write a SQL query that calculates the average rating per game each month. The rating of a game is based on a 5-star rating system and is stored in a reviews table.
The table has the following columns:
Use the sample data below as the input for your SQL query.
review_id | user_id | submit_date | game_id | stars |
---|---|---|---|---|
1 | 123 | 2022-06-08 | 1001 | 4 |
2 | 265 | 2022-06-10 | 1002 | 4 |
3 | 362 | 2022-06-18 | 1001 | 3 |
4 | 192 | 2022-07-26 | 1002 | 3 |
5 | 981 | 2022-07-05 | 1002 | 2 |
Your result should display, for each game, the month (in numeric format) and its average rating for that month.
month | game | avg_rating |
---|---|---|
6 | 1001 | 3.50 |
6 | 1002 | 4.00 |
7 | 1002 | 2.50 |
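A possible PostgreSQL solution (the table name reviews is an assumption, since it isn't given above):

```sql
SELECT
  EXTRACT(MONTH FROM submit_date) AS month,
  game_id AS game,
  ROUND(AVG(stars), 2) AS avg_rating
FROM reviews
GROUP BY EXTRACT(MONTH FROM submit_date), game_id
ORDER BY month, game;
```

Run against the sample data, this produces the expected output shown above.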
The query uses the AVG() function to calculate the average rating of each game per month. The EXTRACT() function pulls the month out of the submit_date column. Finally, the GROUP BY clause groups the result set by month and game_id, and ORDER BY sorts it by month and game in ascending order. The result gives us the average monthly review rating for each game.
For more window function practice, try this Uber SQL problem on DataLemur's online SQL code editor:
As a data analyst at Playtika, your task is to help the team understand the gaming patterns, particularly the in-game purchases of the players. This helps in gaining insights and making decisions that generate higher revenue and enhance customer experiences. Design a database schema for this and then write a SQL query to find out the most popular game (maximum sales) for each day.
Given two tables, games and transactions:
game_id | game_name |
---|---|
101 | Slotomania |
102 | World Series of Poker |
103 | Bingo Blitz |
104 | House of Fun |
transaction_id | user_id | game_id | transaction_date | amount |
---|---|---|---|---|
201 | 654 | 101 | 2022-06-01 | 10.00 |
202 | 123 | 102 | 2022-06-01 | 15.00 |
203 | 265 | 102 | 2022-06-01 | 10.00 |
204 | 654 | 104 | 2022-06-01 | 20.00 |
205 | 362 | 101 | 2022-06-02 | 15.00 |
206 | 192 | 102 | 2022-06-02 | 10.00 |
207 | 981 | 103 | 2022-06-02 | 30.00 |
208 | 654 | 104 | 2022-06-02 | 10.00 |
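One possible answer, using SUM to total each game's daily sales (the table names games and transactions are assumed from the sample data):

```sql
SELECT
  t.transaction_date,
  g.game_name,
  SUM(t.amount) AS total_sales
FROM transactions t
JOIN games g
  ON g.game_id = t.game_id
GROUP BY t.transaction_date, g.game_name
ORDER BY t.transaction_date, total_sales DESC;
```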
This query will return the date, the name of the game, and the total sales for that game on that date. By including transaction_date and game_name in the GROUP BY clause, the aggregation is performed on rows that have the same values in both columns, thus returning the sales total for each game on each date. The final ORDER BY clause ensures the results are presented in chronological order and, within each date, the game with the highest sales is at the top.
A full outer join returns all rows from both tables, including any unmatched rows, whereas an inner join only returns rows that match the join condition between the two tables.
For an example of each one, say you had sales data exported from Playtika's Salesforce CRM stored in a data warehouse which had two tables: sales and customers.
INNER JOIN: retrieves rows from both tables where there is a match in the shared key or keys.
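A sketch of an inner join, assuming both tables carry a customer_id column:

```sql
SELECT *
FROM sales
INNER JOIN customers
  ON sales.customer_id = customers.customer_id;
```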
This query will return rows from the sales and customers tables that have matching customer_id values. Only rows with matching values will be included in the results.
FULL OUTER JOIN: retrieves all rows from both tables, regardless of whether there is a match in the shared key or keys. If there is no match, NULL values are returned for the columns of the non-matching table.
Here is an example of a SQL full outer join using the sales and customers tables:
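Again assuming a shared customer_id column:

```sql
SELECT *
FROM sales
FULL OUTER JOIN customers
  ON sales.customer_id = customers.customer_id;
```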
Playtika wants to analyze the customer activity for their games. They are building a dashboard showing how much each game is played on a daily basis and need your support. Write a SQL query that extracts this information from the database. The database has a 'users' table and a 'game_sessions' table. The 'users' table stores data about Playtika's users, and the 'game_sessions' table stores information about each game session, including which user played, which game was played, and when it was played.
user_id | signup_date | location |
---|---|---|
12 | 2020/09/12 | USA |
22 | 2021/03/27 | Spain |
33 | 2019/12/06 | Italy |
session_id | user_id | game_id | session_date | session_duration |
---|---|---|---|---|
101 | 12 | 99 | 2023/01/01 00:00:00 | 45 |
102 | 12 | 88 | 2023/01/01 00:00:00 | 60 |
103 | 22 | 99 | 2023/01/01 00:00:00 | 30 |
104 | 33 | 99 | 2023/01/01 00:00:00 | 45 |
Please note: session_duration is measured in minutes.
date | game_id | sessions |
---|---|---|
2023/01/01 | 99 | 3 |
2023/01/01 | 88 | 1 |
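One way to write this in PostgreSQL, truncating the session timestamp down to a date:

```sql
SELECT
  session_date::date AS date,
  game_id,
  COUNT(session_id) AS sessions
FROM game_sessions
GROUP BY session_date::date, game_id
ORDER BY date, sessions DESC;
```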
This query groups game sessions by date and game. With the use of COUNT(), it counts how many sessions were played for each game on a particular date. The output of this query can therefore serve as the basis for the desired customer activity dashboard, showing how much each game is played on a daily basis.
Cross join and natural join are like two sides of a coin in the world of SQL.
A cross join is like the wild and reckless cousin who creates a giant new table by combining every row from table A with every row from table B, no questions asked, no common key needed.
Natural joins are like the more refined and selective cousin who only combines rows from multiple tables if they have something in common (i.e., common columns/keys).
While cross join doesn't discriminate and will create a massive table if given the chance, natural join is more selective and only returns a table with the number of rows equal to the number of matching rows in the input tables. So, choose your JOIN wisely!
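To make the contrast concrete, here is each join in SQL (the table names are made up for illustration):

```sql
-- CROSS JOIN: every row of games paired with every row of promotions
SELECT *
FROM games
CROSS JOIN promotions;

-- NATURAL JOIN: rows combined only where all identically named
-- columns (e.g., game_id) hold equal values
SELECT *
FROM games
NATURAL JOIN game_stats;
```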
A digital marketing team at Playtika has been running several campaigns to promote various online casino games offered by the company. The team wants to analyze click-through conversion rates from a user viewing a game to adding it to their cart and then making a purchase. The data is stored across three tables:
ad_id | user_id | clicked_timestamp | game_id |
---|---|---|---|
1 | 134 | 01/01/2023 14:32:00 | 101 |
2 | 592 | 01/01/2023 14:45:00 | 102 |
3 | 287 | 01/01/2023 16:20:00 | 103 |
4 | 134 | 01/01/2023 17:25:00 | 104 |
5 | 592 | 01/02/2023 09:10:00 | 105 |
action_id | user_id | added_timestamp | game_id |
---|---|---|---|
10 | 134 | 01/01/2023 14:40:00 | 101 |
20 | 592 | 01/01/2023 15:00:00 | 102 |
30 | 287 | 01/01/2023 16:40:00 | 103 |
40 | 134 | 01/02/2023 09:30:00 | 105 |
purchases_id | user_id | purchase_timestamp | game_id |
---|---|---|---|
100 | 134 | 01/02/2023 14:10:00 | 101 |
200 | 287 | 01/02/2023 17:00:00 | 103 |
300 | 134 | 01/03/2023 10:35:00 | 105 |
Calculate the click-through conversion rates at each step of the funnel. Please provide the results by game.
The click-through conversion rate from viewing a game to adding it to the cart is the number of distinct users who added a game to their cart divided by the number of distinct users who viewed (clicked) it; the rate from cart to purchase is defined analogously.
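One possible query – the three tables aren't named above, so ad_clicks, cart_adds, and purchases are assumed:

```sql
SELECT
  a.game_id,
  COUNT(DISTINCT a.user_id) AS viewers,
  COUNT(DISTINCT c.user_id) AS adders,
  COUNT(DISTINCT p.user_id) AS buyers,
  ROUND(COUNT(DISTINCT c.user_id)::numeric
        / NULLIF(COUNT(DISTINCT a.user_id), 0), 2) AS view_to_cart_rate,
  ROUND(COUNT(DISTINCT p.user_id)::numeric
        / NULLIF(COUNT(DISTINCT c.user_id), 0), 2) AS cart_to_purchase_rate
FROM ad_clicks a
LEFT JOIN cart_adds c
  ON a.user_id = c.user_id AND a.game_id = c.game_id
LEFT JOIN purchases p
  ON c.user_id = p.user_id AND c.game_id = p.game_id
GROUP BY a.game_id
ORDER BY a.game_id;
```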
This query calculates the number of unique users who viewed each game, added it to their cart, and made a purchase. It then calculates the click-through conversion rate from viewing to adding to cart, and from adding to cart to purchase.
To practice a similar SQL interview question on DataLemur's free online SQL coding environment, attempt this Facebook SQL Interview question:
Constraints specify rules for the data in a table. They can be applied to one or more columns, either when the table is created or afterwards using the ALTER TABLE command. Common constraints include:
Say you were storing sales analytics data from Playtika's CRM inside a database. Here are some example constraints you could use:
PRIMARY KEY constraint: You might use a PRIMARY KEY constraint to ensure that each record in the database has a unique identifier. For example, you could use the "opportunity_id" field as the primary key in the "opportunities" table.
FOREIGN KEY constraint: You might use a FOREIGN KEY constraint to link the data in one table to the data in another table. For example, you could use a foreign key field in the "opportunities" table to reference the "account_id" field in the "accounts" table.
NOT NULL constraint: You might use a NOT NULL constraint to ensure that a field cannot contain a NULL value. For example, you could use a NOT NULL constraint on the "opportunity_name" field in the "opportunities" table to ensure that each opportunity has a name.
UNIQUE constraint: You might use a UNIQUE constraint to ensure that the data in a field is unique across the entire table. For example, you could use a UNIQUE constraint on the "email" field in the "contacts" table to ensure that each contact has a unique email address.
CHECK constraint: You might use a CHECK constraint to ensure that the data in a field meets certain conditions. For example, you could use a CHECK constraint to ensure that the "deal_probability" field in the "opportunities" table is a value between 0 and 100.
DEFAULT constraint: You might use a DEFAULT constraint to specify a default value for a field. For example, you could use a DEFAULT constraint on the "stage" field in the "opportunities" table to set the default value to "prospecting".
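Several of these constraints together, in a sketch of the tables described above (the column types are assumptions):

```sql
CREATE TABLE opportunities (
  opportunity_id   INT PRIMARY KEY,
  account_id       INT REFERENCES accounts (account_id),       -- FOREIGN KEY
  opportunity_name VARCHAR(255) NOT NULL,
  deal_probability INT CHECK (deal_probability BETWEEN 0 AND 100),
  stage            VARCHAR(50) DEFAULT 'prospecting'
);

CREATE TABLE contacts (
  contact_id INT PRIMARY KEY,
  email      VARCHAR(255) UNIQUE
);
```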
At Playtika, one of our primary metrics for evaluating the performance of a game is the number of daily active users (DAU). Write a SQL query that calculates the average DAU for each game in our portfolio over the past month, then returns the game with the highest average DAU.
Assume we have two tables: games and daily_active_users.
game_id | game_name |
---|---|
1001 | Slotomania |
1002 | World Series of Poker |
1003 | Bingo Blitz |
date | game_id | active_users |
---|---|---|
2022-07-01 | 1001 | 50000 |
2022-07-01 | 1002 | 30000 |
2022-07-01 | 1003 | 40000 |
2022-07-02 | 1001 | 55000 |
2022-07-02 | 1002 | 35000 |
2022-07-02 | 1003 | 42000 |
game_name | avg_DAU |
---|---|
Slotomania | 52500 |
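A possible solution (the second table is assumed to be named daily_active_users):

```sql
SELECT
  g.game_name,
  AVG(d.active_users) AS avg_DAU
FROM daily_active_users d
JOIN games g
  ON g.game_id = d.game_id
WHERE d.date >= CURRENT_DATE - INTERVAL '1 month'
GROUP BY g.game_name
ORDER BY avg_DAU DESC
LIMIT 1;
```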
This query calculates the average daily active users per game over the last month, by first joining daily_active_users with games on game_id. It then restricts the time range to the last month using a WHERE clause. After grouping by game_name, it orders by the average daily active users in descending order and limits the result to the top row – in other words, the game with the highest average DAU in the past month.
As a fresh data analyst at Playtika, you are provided with two tables - Customers and Purchases. The Customers table stores information about each player, and the Purchases table stores information about each player's in-game purchases.
Your task is to write a SQL query to find the most recent purchase for each customer and how much they spent. Assume today's date is '2022-09-08'.
The tables are structured as follows:
customer_id | first_name | last_name | signup_date | country |
---|---|---|---|---|
100 | John | Doe | 2020-01-01 | USA |
101 | Jane | Doe | 2021-02-01 | Canada |
102 | Bob | Smith | 2020-07-10 | USA |
103 | Alice | Johnson | 2019-05-05 | UK |
purchase_id | customer_id | game_id | purchase_date | amount_spent |
---|---|---|---|---|
200 | 100 | 500 | 2022-04-01 | 20 |
201 | 100 | 501 | 2022-09-01 | 30 |
202 | 101 | 500 | 2022-07-02 | 10 |
203 | 102 | 500 | 2022-06-30 | 15 |
204 | 103 | 501 | 2022-05-05 | 25 |
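One approach uses ROW_NUMBER() to rank each customer's purchases by recency:

```sql
SELECT customer_id, first_name, last_name, game_id, purchase_date, amount_spent
FROM (
  SELECT
    c.customer_id,
    c.first_name,
    c.last_name,
    p.game_id,
    p.purchase_date,
    p.amount_spent,
    ROW_NUMBER() OVER (
      PARTITION BY c.customer_id
      ORDER BY p.purchase_date DESC
    ) AS purchase_rank
  FROM Customers c
  JOIN Purchases p
    ON p.customer_id = c.customer_id
) ranked
WHERE purchase_rank = 1;
```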
This query joins Customers and Purchases on the customer_id field. It uses a window function (ROW_NUMBER()) to rank the purchases for each customer by date, in descending order. It then filters for the rows where this rank is 1, i.e., the most recent purchase. The columns selected for output are those relevant to the question: the customer's id, first name, last name, the id of the game purchased, the date of purchase, and the amount spent.
Since joins come up routinely during SQL interviews, try this interactive Snapchat SQL Interview question using JOINS:
Joins in SQL allow you to combine data from different tables based on a shared key or set of keys.
Four JOIN types are available in SQL. For an example of each one, say you had sales data exported from Playtika's Salesforce CRM stored in a PostgreSQL database, and had access to two tables: sales and customers.
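Sketches of all four (assuming the tables share a customer_id column):

```sql
-- INNER JOIN: only rows with a match in both tables
SELECT * FROM sales s INNER JOIN customers c ON s.customer_id = c.customer_id;

-- LEFT JOIN: every row of sales, plus matching customers (or NULLs)
SELECT * FROM sales s LEFT JOIN customers c ON s.customer_id = c.customer_id;

-- RIGHT JOIN: every row of customers, plus matching sales (or NULLs)
SELECT * FROM sales s RIGHT JOIN customers c ON s.customer_id = c.customer_id;

-- FULL OUTER JOIN: all rows from both sides, NULLs where unmatched
SELECT * FROM sales s FULL OUTER JOIN customers c ON s.customer_id = c.customer_id;
```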
Assuming that you've already got basic SQL skills, the next best tip we have to prepare for the Playtika SQL interview is to solve as many practice SQL interview questions as you can! Besides solving the above Playtika SQL interview questions, you should also solve the 200+ SQL questions from real Data Science & Analytics interviews which come from companies like Facebook, Google and unicorn tech startups.
Each interview question has hints to guide you, step-by-step solutions and most importantly, there's an interactive coding environment so you can instantly run your SQL query answer and have it checked.
To prep for the Playtika SQL interview it is also helpful to practice interview questions from other tech companies like:
If your SQL coding skills are weak, don't go right into solving questions – strengthen your SQL foundations first with this free SQL for Data Analytics course.
This tutorial covers topics including Subqueries and joining multiple tables – both of these pop up routinely in SQL job interviews at Playtika.
Besides SQL interview questions, the other types of problems to prepare for the Playtika Data Science Interview are:
The best way to prepare for Playtika Data Science interviews is by reading Ace the Data Science Interview. The book's got: