SQL is typically used for querying and managing financial databases, and for analyzing investment and market trends. So, it shouldn't surprise you that Evercore almost always asks SQL coding questions during interviews for Data Analyst, Data Science, and BI jobs.
So, to help you prep for the Evercore SQL interview, we've curated 10 Evercore SQL interview questions. Can you solve them?
Evercore is a financial advisory firm. Suppose we have a database table that records all financial transactions with their respective clients. Your task is to write a SQL query using a window function to retrieve the most recent transaction for each client. We want to know the transaction ID, the client ID, the transaction amount, and the date of the transaction. Assume all transactions are unique.
transaction_id | client_id | transaction_date | transaction_amount |
---|---|---|---|
1521 | 975 | 08/01/2020 00:00:00 | 55000 |
1032 | 743 | 08/12/2020 00:00:00 | 65000 |
3412 | 975 | 08/18/2020 00:00:00 | 62500 |
5231 | 743 | 08/23/2020 00:00:00 | 60000 |
2643 | 245 | 09/01/2020 00:00:00 | 67500 |
client_id | last_transaction_id | last_transaction_date | last_transaction_amount |
---|---|---|---|
975 | 3412 | 08/18/2020 | 62500 |
743 | 5231 | 08/23/2020 | 60000 |
245 | 2643 | 09/01/2020 | 67500 |
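One way to solve this (a sketch, which assumes the table is named transactions) is to number each client's transactions by date with a window function, then keep only the top-ranked row per client:

```sql
WITH ranked AS (
  SELECT
    transaction_id,
    client_id,
    transaction_date,
    transaction_amount,
    -- Number each client's transactions, newest first
    ROW_NUMBER() OVER (
      PARTITION BY client_id
      ORDER BY transaction_date DESC
    ) AS rn
  FROM transactions
)
SELECT
  client_id,
  transaction_id     AS last_transaction_id,
  transaction_date   AS last_transaction_date,
  transaction_amount AS last_transaction_amount
FROM ranked
WHERE rn = 1;
```

Because the problem guarantees unique transactions, ROW_NUMBER() is safe here; if ties on transaction_date were possible, you'd need a tiebreaker in the ORDER BY.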
This query first ranks transactions for each client by date in descending order. Then, the outer query filters only the most recent transaction for each client (i.e., where rank number is 1). This way, we get the most recent transaction details for each client.
Pro Tip: Window functions are a popular SQL interview topic, so practice all the window function problems on DataLemur
Imagine there was a table of Evercore employee salary data, along with which department they belonged to. Write a query to compare the average salary of employees in each department to the company's average salary. Return the comparison result as 'higher', 'lower', or 'same' for each department. Display the department ID, payment month (in MM-YYYY format), and the comparison.
Code your solution to this question directly within the browser on DataLemur:
The solution is LONG – 30+ lines of SQL. You can find a detailed solution with hints here: Department Salaries.
Check out the Evercore career page and see how they use SQL in their day-to-day operations.
Normalization involves dividing a large table into smaller, more specific ones and establishing connections between them. This helps to reduce redundancy, creating a database that is more adaptable, scalable, and easy to manage. Additionally, normalization helps to maintain the integrity of the data by minimizing the risk of inconsistencies and anomalies.
Evercore has recently expanded its operations and employees are working across multiple departments and roles. To keep track of their workforce, they need a database of employees, departments, and roles at Evercore. The database must be easy to query to retrieve information about an employee’s department, role, and their supervisor.
The structure of the database tables could be as follows:
employees table:
employee_id | first_name | last_name | role_id | supervisor_id |
---|---|---|---|---|
1 | John | Doe | 3 | 2 |
2 | Jane | Smith | 2 | 3 |
3 | Sam | Brown | 1 | null |
4 | Emma | Jones | 4 | 1 |
5 | Mark | Taylor | 5 | 1 |
roles table:
role_id | role_name |
---|---|
1 | CEO |
2 | CTO |
3 | Software Engineer |
4 | Designer |
5 | Product Manager |
departments table:
role_id | department_name |
---|---|
1 | Executive |
2 | Technological |
3 | Technological |
4 | Design |
5 | Product |
In the employees table, supervisor_id refers to the employee_id of that employee's supervisor. There's a foreign key relationship between employees.role_id and roles.role_id, as well as between roles.role_id and departments.role_id.
Question: Write a query to get a list of employees, their roles, their departments, and their supervisor's names.
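A possible solution (assuming the tables are named employees, roles, and departments, matching the schema sketched above):

```sql
SELECT
  e.first_name || ' ' || e.last_name AS employee_name,
  r.role_name,
  d.department_name,
  s.first_name || ' ' || s.last_name AS supervisor_name
FROM employees e
JOIN roles r
  ON e.role_id = r.role_id
JOIN departments d
  ON r.role_id = d.role_id
-- Self-join to look up each employee's supervisor;
-- LEFT JOIN keeps the CEO, whose supervisor_id is NULL
LEFT JOIN employees s
  ON e.supervisor_id = s.employee_id;
```
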
In this query, we join the employees, roles, and departments tables on the role_id column to get the role and department for each employee. We also join the employees table to itself to get the supervisor for each employee. The || operator is used to concatenate the first and last names of the employees and supervisors.
The COALESCE function allows you to select the first non-NULL value from a list of values. It is often used to replace a NULL value with a default value, so that aggregate functions like SUM or AVG can run on a column without being affected by NULL values.
For example, if you conducted a customer satisfaction survey for a company and asked customers to rate their level of agreement with a statement on a scale of 1 to 10, some customers may not have answered all the questions, resulting in NULL values in the survey data:
customer_id | question_id | agree_amount |
---|---|---|
101 | 1 | 9 |
101 | 2 | 10 |
202 | 1 | 6 |
202 | 2 | NULL |
303 | 1 | 4 |
303 | 2 | NULL |
To replace the NULLs in the agree_amount column with the default value of 5, which corresponds to neutral on the survey, you could run the following SQL query:
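Assuming the survey data lives in a table called survey_responses (a hypothetical name), the query might look like:

```sql
SELECT
  customer_id,
  question_id,
  -- Substitute 5 (neutral) wherever agree_amount is NULL
  COALESCE(agree_amount, 5) AS agree_amount
FROM survey_responses;
```
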
You'd get the following output:
customer_id | question_id | agree_amount |
---|---|---|
101 | 1 | 9 |
101 | 2 | 10 |
202 | 1 | 6 |
202 | 2 | 5 |
303 | 1 | 4 |
303 | 2 | 5 |
As an analyst at Evercore, you are required to assess the active customer base in Q3 of 2022. Specifically, Evercore is interested in the customers who have at least $10,000 in their account balance and are based either in New York or San Francisco. Your task is to write a SQL query that filters the customers based on these conditions and calculates the average account balance for the resulting set of customers.
customer_id | name | account_balance | city |
---|---|---|---|
1001 | John | $15,000.00 | Chicago |
1002 | Samantha | $10,500.00 | San Francisco |
1003 | David | $9,000.00 | New York |
1004 | Alexander | $15,500.00 | New York |
1005 | Chloe | $8,000.00 | San Francisco |
city | avg_balance |
---|---|
San Francisco | $10,500.00 |
New York | $15,500.00 |
The query filters the records in the customer table down to customers living in either New York or San Francisco with an account balance of at least $10,000. It then groups the remaining rows by city and calculates the average account balance for customers in each city.
NULLs are NOT the same as zero or blank spaces in SQL. NULLs represent a missing value or the absence of a value, whereas zero and blank space are legitimate values.

It's important to handle NULLs carefully, because they can easily skew your analysis. For example, if you compare a NULL value using the = operator, the result will always be NULL (because, just like Drake, nothing compares to NULL). That's why many data analyses in SQL start by replacing NULLs with the COALESCE() function.
As a SQL consultant at Evercore, a prominent investment banking advisory firm, your task is to provide insights into the company's investment portfolio. Prepare an SQL query that calculates the average investment size for each industry sector. The company classifies its investments into sectors such as Manufacturing, Technology, Healthcare, etc.
For your analysis, you have a database table with the following schema:
investment_id | sector | deal_date | investment_size |
---|---|---|---|
5672 | Manufacturing | 01/19/2020 | 9500000 |
6892 | Technology | 05/11/2020 | 5700000 |
7428 | Healthcare | 06/29/2020 | 3000000 |
8506 | Technology | 08/03/2020 | 5000000 |
9371 | Manufacturing | 10/12/2020 | 7800000 |
10094 | Healthcare | 12/16/2020 | 8600000 |
10650 | Technology | 02/04/2021 | 6300000 |
In PostgreSQL, you can use the AVG function to get the average of a set of numbers. Here's how you would get the average investment size by sector:
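For example, assuming the table is named investments (the problem statement doesn't name it):

```sql
SELECT
  sector,
  AVG(investment_size) AS avg_investment_size
FROM investments
GROUP BY sector;
```
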
This SQL query groups the data by sector and calculates the average investment size for each sector.
To practice a very similar question, try this interactive Amazon Highest-Grossing Items Question, which also involves calculating metrics by category, or this CVS Health Pharmacy Analytics (Part 1) Question, which involves finding the top-performing items in a sector.
Evercore is an investment banking advisory firm. Suppose that for a specific period, you're given a dataset where each row represents a transaction handled by an analyst at the firm. The columns are transaction_id, amount_in_usd, and analyst_id. Write an SQL query to find the average transaction value handled by each analyst for this period.

The amount_in_usd column is the total value (in USD) of each transaction, and analyst_id is the unique identifier for each financial analyst.
transaction_id | amount_in_usd | analyst_id |
---|---|---|
56782 | 5000650 | 892 |
76485 | 2450000 | 234 |
86753 | 7850000 | 892 |
93462 | 6789000 | 378 |
45622 | 1250000 | 892 |
62584 | 4550000 | 378 |
analyst_id | avg_transaction_value |
---|---|
892 | 4700216.67 |
234 | 2450000.00 |
378 | 5669500.00 |
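One way to write this (assuming the table is named transactions):

```sql
SELECT
  analyst_id,
  -- AVG over an integer column yields a numeric; round to 2 decimals
  ROUND(AVG(amount_in_usd), 2) AS avg_transaction_value
FROM transactions
GROUP BY analyst_id;
```
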
Evercore's managers can use this information to evaluate the performance of their analysts by looking at the average value of the transactions they handle. The higher the average, the more significant the transactions that analyst is trusted with.
A cross join is a JOIN operation in SQL that creates a new table by pairing each row from the first table with every row from the second table. It is also referred to as a cartesian join. In contrast, a natural join combines rows from two or more tables based on their common columns, forming a new table. Natural joins are called "natural" because they rely on the natural relationship between the common columns in the joined tables.
Here's an example of a cross join:
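A sketch using two hypothetical tables, analysts and sectors, where every analyst is paired with every sector:

```sql
-- Produces one row for every (analyst, sector) combination
SELECT a.analyst_name, s.sector_name
FROM analysts a
CROSS JOIN sectors s;
```
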
Here's a natural join example using two tables, Evercore employees and Evercore managers:
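A sketch assuming hypothetical tables evercore_employees and evercore_managers that share a department_id column:

```sql
-- NATURAL JOIN matches rows on all columns the two tables share,
-- here the department_id column
SELECT *
FROM evercore_employees
NATURAL JOIN evercore_managers;
```
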
This natural join returns all rows where the employees and managers tables have matching values in their shared column.
One significant difference between cross joins and natural joins is that the former do not require common columns between the tables being joined, while the latter do. Another distinction is that cross joins can generate very large tables if the input tables have a large number of rows, while natural joins only produce a table with the number of rows equal to the number of matching rows in the input tables.
The key to acing an Evercore SQL interview is to practice, practice, and then practice some more! Beyond just solving the above Evercore SQL interview questions, you should also solve the 200+ SQL Interview Questions on DataLemur which come from companies like Facebook, Google, and VC-backed startups.
Each exercise has hints to guide you and step-by-step solutions, and most importantly, there's an interactive coding environment so you can run your SQL query right in the browser and have your answer checked.
To prep for the Evercore SQL interview, it is also a great idea to solve SQL questions from other banking & financial services companies like:
In case your SQL foundations are weak, don't jump straight into solving questions – first learn SQL with this SQL tutorial for Data Analytics.
This tutorial covers topics like RANK() vs. DENSE_RANK() and the LAG() window function – both of which come up frequently in SQL interviews at Evercore.
Besides SQL interview questions, the other types of questions to prepare for the Evercore Data Science Interview include:
The best way to prepare for Evercore Data Science interviews is by reading Ace the Data Science Interview. The book's got: