At FIS Global, SQL is often used for analyzing transactional data patterns and powering multiple data solutions. That's why FIS Global LOVES to ask SQL query questions in interviews for Data Science, Data Engineering, and Data Analytics jobs.
As such, to help you prepare for the FIS Global SQL interview, here are 11 FIS Global SQL interview questions – can you solve them?
FIS Global is a global leader specializing in financial services technology. In this scenario, our focus is on users who make frequent high-value purchases. Your task is to write a SQL query to identify the top 10 customers who have contributed the most total revenue to FIS Global, considering all their transactions.
We have two tables; the 'users' table that has user details and the 'transactions' table that has the details of each transaction.
user_id | user_name | signup_date |
---|---|---|
100 | John | 01/01/2020 |
101 | Jane | 02/02/2019 |
102 | Joe | 03/03/2018 |
transaction_id | user_id | transaction_date | transaction_amount |
---|---|---|---|
5000 | 100 | 06/01/2022 | $500 |
5001 | 101 | 06/02/2022 | $1000 |
5002 | 102 | 06/03/2022 | $1500 |
5003 | 100 | 06/05/2022 | $750 |
5004 | 101 | 06/05/2022 | $800 |
We will join the two tables based on the 'user_id' field and then group the records by 'user_id' & 'user_name'. We can then get the sum of 'transaction_amount' and order them in descending order to get the top 10.
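One way to write this (PostgreSQL syntax, using the `users` and `transactions` tables shown above):

```sql
SELECT u.user_id,
       u.user_name,
       SUM(t.transaction_amount) AS total_revenue
FROM users u
JOIN transactions t
  ON u.user_id = t.user_id
GROUP BY u.user_id, u.user_name
ORDER BY total_revenue DESC
LIMIT 10;
```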
This query will group the transactions by user, sum the amount for each user, sort the result in descending order of the total revenue, and finally limit the result to the top 10 users. For a company like FIS Global, such a query would be used to identify the most valuable customers to the business in terms of revenue. These top users could be targeted for specific marketing campaigns, loyalty programs or to provide superior customer service.
To work on a similar customer analytics SQL question where you can code right in the browser and have your SQL query instantly graded, try this Walmart SQL Interview Question:
For FIS Global, a financial services technology company, we might be interested in analyzing our transaction data. For instance, a relevant question could be as follows:
Write a SQL query to compute the total and average transaction amounts for each month for each unique customer. The table is structured as follows:
transaction_id | customer_id | transaction_date | amount |
---|---|---|---|
1234 | 567 | 2022-06-01 | 100.00 |
1235 | 567 | 2022-06-10 | 200.00 |
1236 | 890 | 2022-07-20 | 50.00 |
1237 | 567 | 2022-07-25 | 500.00 |
1238 | 890 | 2022-07-29 | 60.00 |
The output should return the month (as a 2-digit number), customer ID, total transaction amount and average transaction amount (rounded to two decimal places).
month | customer_id | total_amt | avg_amt |
---|---|---|---|
06 | 567 | 300.00 | 150.00 |
07 | 567 | 500.00 | 500.00 |
07 | 890 | 110.00 | 55.00 |
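A possible solution (PostgreSQL syntax; assuming the table is named `transactions`, as in the sample data):

```sql
SELECT TO_CHAR(transaction_date, 'MM') AS month,
       customer_id,
       SUM(amount) AS total_amt,
       ROUND(AVG(amount), 2) AS avg_amt
FROM transactions
GROUP BY TO_CHAR(transaction_date, 'MM'), customer_id
ORDER BY month, customer_id;
```

`TO_CHAR(transaction_date, 'MM')` returns the month as a 2-digit string (e.g. `06`), matching the expected output.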
To practice a related window function SQL problem on DataLemur's free interactive SQL code editor, try this Amazon SQL Interview Question:
Denormalization is the process of modifying a database schema in a way that deviates from the typical rules of normalization (1NF, 2NF, 3NF, etc.).
Denormalization is often used to improve the performance of a database, particularly when it is being used for reporting and analytical purposes (rather than for Online Transaction Processing (OLTP) workloads).
By duplicating data, denormalization can reduce the number of expensive joins required to retrieve data, which can improve query performance. However, denormalization can also cause problems such as increased data redundancy and the need for more complex update and delete operations.
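As a hypothetical illustration (the table and column names below are made up), a denormalized reporting table might copy the customer's name into every order row so that dashboards can read it without a join:

```sql
-- In a normalized schema, customer_name would live only in a
-- customers table, and every read of order data would need a join.
-- This denormalized reporting table duplicates customer_name into
-- each row, trading redundancy (and harder updates) for faster reads.
CREATE TABLE order_report (
  order_id      INT PRIMARY KEY,
  customer_id   INT,
  customer_name VARCHAR(100),   -- duplicated from the customers table
  order_total   DECIMAL(10, 2)
);
```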
FIS Global provides financial services technology. One of the many services they offer is payment processing. For this task, let's consider a scenario where we need to design a database for FIS Global that records all payment transactions for each of its myriad clients.
Let's consider we have two tables:
Our job is to create a SQL query that calculates the average transaction amount for each client.
client_id | client_name |
---|---|
1 | Client A |
2 | Client B |
3 | Client C |
transaction_id | client_id | amount |
---|---|---|
101 | 1 | 100 |
102 | 1 | 200 |
103 | 2 | 150 |
104 | 3 | 300 |
105 | 3 | 600 |
client_name | avg_transaction_amount |
---|---|
Client A | 150.0 |
Client B | 150.0 |
Client C | 450.0 |
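A sketch of the query (assuming the two tables are named `clients` and `transactions`, matching the sample data above):

```sql
SELECT c.client_name,
       AVG(t.amount) AS avg_transaction_amount
FROM transactions t
JOIN clients c
  ON t.client_id = c.client_id
GROUP BY c.client_name;
```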
In this solution, we use PostgreSQL's AVG() function to calculate the average of the amount column in the transactions table. The JOIN clause connects the transactions table with the clients table using the common client_id column. The GROUP BY clause then groups the result by client_name so we can get the average transaction amount per client.
One way to find duplicate records in a table is to GROUP BY the columns that should be unique, and then use HAVING to see which groups have more than one occurrence:
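For example, checking a hypothetical `customers` table for rows that share the same name and email (the table and column names here are assumptions for illustration):

```sql
SELECT first_name, email, COUNT(*) AS occurrences
FROM customers
GROUP BY first_name, email
HAVING COUNT(*) > 1;  -- only groups with duplicates survive
```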
Another way is to use a window function such as ROW_NUMBER():
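For instance, a sketch using ROW_NUMBER() against the same hypothetical `customers` table (names are assumptions for illustration):

```sql
WITH numbered AS (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY first_name, email
           ORDER BY first_name
         ) AS row_num
  FROM customers
)
SELECT *
FROM numbered
WHERE row_num > 1;  -- every row after the first in a group is a duplicate
```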
As a growing organization, FIS Global wants to target high-potential customers for its new marketing campaign. As part of the Data Analytics team, you have been assigned the task of filtering customers from the given table who meet all of the following conditions: the account is active, the transaction amount is more than 5000 within the past year, and the account type is either Savings or Current.
customer_id | customer_name | account_status | account_type | transaction_amount | transaction_date |
---|---|---|---|---|---|
1234 | John Due | active | Savings | 6000 | 2021-10-08 |
3567 | Tomas Cruise | inactive | Current | 10000 | 2021-12-15 |
8926 | Robert Ford | active | Savings | 2000 | 2021-09-28 |
4521 | Morgen Smith | active | Current | 5200 | 2021-09-29 |
9823 | Kane Davidson | active | Current | 1000 | 2021-11-06 |
customer_id | customer_name | account_status | account_type | transaction_amount | transaction_date |
---|---|---|---|---|---|
1234 | John Due | active | Savings | 6000 | 2021-10-08 |
4521 | Morgen Smith | active | Current | 5200 | 2021-09-29 |
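One way to express these conditions (PostgreSQL syntax; the table name `customers` is an assumption, and the "past year" window is measured from the current date):

```sql
SELECT customer_id, customer_name, account_status,
       account_type, transaction_amount, transaction_date
FROM customers
WHERE account_status = 'active'
  AND transaction_amount > 5000
  AND transaction_date >= CURRENT_DATE - INTERVAL '1 year'
  AND account_type IN ('Savings', 'Current');
```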
This query selects all customers from the table where the account is active, the transaction amount is more than 5000 during the past year, and the account type is either Savings or Current. It thereby surfaces the high-potential customers for the marketing campaign as per the given conditions.
An inner join only includes rows from both tables that satisfy the join condition, whereas a full outer join includes all rows from both tables, regardless of whether they match the join condition or not.
For a concrete example, imagine you had two database tables, an advertising campaigns table which had information on Google Ads keywords and how much was bid for each keyword, and a FIS Global sales table, which has data on how many products were sold and which Google Ads keyword drove that sale.
INNER JOIN: retrieves rows from both tables where there is a match in the shared key or keys. For example, an INNER JOIN between the advertising campaigns table and the sales table could be performed using the keyword column as the shared key. This would retrieve only the rows where the keyword in the campaigns table matches the keyword in the sales table.
FULL OUTER JOIN: retrieves all rows from both tables, regardless of whether there is a match in the shared key or keys. If there is no match, NULL values will be returned for the columns of the non-matching table.
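A sketch of both joins against the keyword example (the table names `ad_campaigns` and `sales` and columns `bid_amount` and `products_sold` are assumptions for illustration):

```sql
-- INNER JOIN: only keywords that appear in both tables
SELECT a.keyword, a.bid_amount, s.products_sold
FROM ad_campaigns a
INNER JOIN sales s ON a.keyword = s.keyword;

-- FULL OUTER JOIN: every keyword from either table;
-- columns from the non-matching side come back as NULL
SELECT a.keyword, a.bid_amount, s.products_sold
FROM ad_campaigns a
FULL OUTER JOIN sales s ON a.keyword = s.keyword;
```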
FIS Global is a multinational financial services company. For this question, let's consider that the company has a database of transactions for various financial products that it offers. The company wants to know what the total revenue generated for each product category is for each month.
Given a transactions table that records a transaction date, a product category, and an amount for each transaction, write a SQL query which sums up the amount for each product category on a per-month basis.
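One way to write it (PostgreSQL syntax; the table name `transactions` and the column names `transaction_date`, `product_category`, and `amount` are assumptions based on the description):

```sql
SELECT TO_CHAR(transaction_date, 'YYYY-MM') AS month,
       product_category,
       SUM(amount) AS total_spent
FROM transactions
GROUP BY TO_CHAR(transaction_date, 'YYYY-MM'), product_category
ORDER BY month, total_spent DESC;
```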
This will group the transactions by month and product category, and sum the amount spent in each category per month. The final output will then be ordered first by month and then by the total spent in each category.
For example, if we have transactions for Loans, Insurance, and Retail Banking across different months, this query will sum up how much was spent on Loans, Insurance, and Retail Banking products in each month and display the categories in each month sorted by the amounts they generated.
FIS Global management is interested in gathering information about internal customers whose email ID ends with '@fisglobal.com'.
Write an SQL query that finds records of customers in the customer database where the email ends with '@fisglobal.com'.
Consider the customer table with sample data as shown below:
customer_id | first_name | last_name | email |
---|---|---|---|
1 | John | Smith | johnsmith@fisglobal.com |
2 | Tom | Hanks | tomhanks@gmail.com |
3 | Sarah | Connor | sarahconnor@fisglobal.com |
4 | James | Bond | jamesbond@example.com |
5 | Harry | Potter | harrypotter@fisglobal.com |
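A query along these lines would do it (assuming the table is named `customers`, as in the sample data):

```sql
SELECT customer_id, first_name, last_name, email
FROM customers
WHERE email LIKE '%@fisglobal.com';
```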
This query uses the LIKE clause in SQL to filter out customer records in the 'customers' table where the 'email' field ends with '@fisglobal.com'. The '%' symbol acts as a wildcard that matches any sequence of characters. Hence '%@fisglobal.com' matches any string that ends with '@fisglobal.com'. The result of this query will be all customer records where the email ID is registered with 'fisglobal.com'.
In SQL, both left and right joins are used to combine rows from different tables based on a shared key or set of keys, resulting in a single merged table of data. However, here's the difference:
LEFT (OUTER) JOIN: Retrieves all the records/rows from the left and the matched records/rows from the right table.
RIGHT (OUTER) JOIN: Retrieves all the records/rows from the right and the matched records/rows from the left table.
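A quick sketch of the two (the `customers` and `transactions` table names here are assumptions for illustration):

```sql
-- LEFT JOIN: every customer, with transaction columns
-- returned as NULL for customers who have no transactions
SELECT c.customer_id, t.transaction_id
FROM customers c
LEFT JOIN transactions t ON c.customer_id = t.customer_id;

-- RIGHT JOIN: every transaction, with customer columns
-- returned as NULL for transactions that match no customer
SELECT c.customer_id, t.transaction_id
FROM customers c
RIGHT JOIN transactions t ON c.customer_id = t.customer_id;
```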
Scenario: FIS Global, a leading provider of technology solutions for merchants, banks and capital markets firms globally, wants to analyze its customers and their transactions.
In the Customer database there are many fields, but for this exercise we will narrow down to only four: customer_id, first_name, last_name, and location. In the Transaction database, we will focus on five fields: transaction_id, customer_id, product_name, transaction_date, and transaction_amount.
The task is to write a SQL query that retrieves all customers along with their transaction details, excluding the customers who haven't made any transactions. Sort the result by location, then last_name and first_name.
customer_id | first_name | last_name | location |
---|---|---|---|
2001 | Johnny | Brown | United States |
2002 | Kelly | Green | United States |
2003 | Richard | Klein | Canada |
2004 | Maria | Gonzalez | Mexico |
2005 | Alexandra | White | United States |
transaction_id | customer_id | product_name | transaction_date | transaction_amount |
---|---|---|---|---|
3001 | 2001 | iPad | 02/10/2022 | $400 |
3002 | 2005 | Laptop | 02/15/2022 | $800 |
3003 | 2003 | iPhone | 02/10/2022 | $1000 |
3004 | 2001 | Smart Watch | 02/18/2022 | $200 |
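A query along these lines (using the `customers` and `transactions` tables shown above):

```sql
SELECT c.first_name, c.last_name, c.location,
       t.product_name, t.transaction_date, t.transaction_amount
FROM customers c
JOIN transactions t
  ON c.customer_id = t.customer_id
ORDER BY c.location, c.last_name, c.first_name;
```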
This query joins the customers and transactions tables on the customer_id column, which is common to both tables. The output will include the first and last names of the customers, their location, the product name, transaction date, and transaction amount. The ORDER BY clause will sort the output first by location, then by last name and first name. Since the join type is not specified (there are several types: inner, left, right, full), the default join used is the inner join, which means only the customers who have transactions (i.e., whose customer IDs appear in both the customers and transactions tables) will be included in the result set.
Because joins come up so often during SQL interviews, try an interactive Spotify JOIN SQL question:
The best way to prepare for a FIS Global SQL interview is to practice, practice, practice. Beyond just solving the earlier FIS Global SQL interview questions, you should also solve the 200+ SQL Interview Questions on DataLemur which come from companies like Facebook, Google, and VC-backed startups.
Each DataLemur SQL question has multiple hints, step-by-step solutions and, most importantly, an interactive coding environment so you can write your SQL query right in the browser and have it instantly graded.
To prep for the FIS Global SQL interview, it would also be wise to practice SQL problems from other fintech companies like:
However, if your SQL skills are weak, forget about going right into solving questions – refresh your SQL knowledge with this SQL tutorial for Data Analytics.
This tutorial covers things like UNION vs. joins and cleaning text data – both of these show up routinely during SQL job interviews at FIS Global.
Beyond writing SQL queries, the other types of questions to prepare for the FIS Global Data Science Interview are:
The best way to prepare for FIS Global Data Science interviews is by reading Ace the Data Science Interview. The book's got: