At nCino, SQL does the heavy lifting: analyzing banking data for insights and managing the database behind its cloud banking operating system. Unsurprisingly, this is why nCino frequently asks SQL coding questions during interviews for Data Science, Data Engineering, and Data Analytics jobs.
So, to help you study for the nCino SQL interview, we'll cover 8 nCino SQL interview questions – able to answer them all?
nCino is a software company specializing in cloud banking. Assume for this question that we have a table called "loan_offers" containing a record for every loan offer made to customers, and another table called "responses" that records every customer response to an offer they received. "Accepted" responses represent loans that went into effect.
The "loan_offers" table might look like this:
offer_id | customer_id | loan_type | offer_date |
---|---|---|---|
125 | 374 | Personal | 06/08/2022 00:00:00 |
198 | 124 | Auto | 06/10/2022 00:00:00 |
374 | 853 | Auto | 06/18/2022 00:00:00 |
947 | 436 | Home | 07/26/2022 00:00:00 |
569 | 985 | Personal | 07/05/2022 00:00:00 |
The "responses" table might look like this:
response_id | offer_id | customer_id | response_type |
---|---|---|---|
1 | 125 | 374 | Accepted |
2 | 198 | 124 | Declined |
3 | 374 | 853 | Declined |
4 | 947 | 436 | Accepted |
5 | 569 | 985 | Accepted |
We need to write a query that estimates the acceptance rates for personal, auto, and home loans for each month using a SQL window function. Specifically, we want to know, for each month and each loan type, what percent of offered loans were accepted.
The output should look like this:
month | loan_type | acceptance_rate |
---|---|---|
6 | Personal | 100.00 |
6 | Auto | 0.00 |
7 | Home | 100.00 |
7 | Personal | 100.00 |
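One way to write this in PostgreSQL — a sketch using the table and column names above, with the per-group totals computed via window functions as the question asks:

```sql
SELECT DISTINCT
    EXTRACT(MONTH FROM lo.offer_date) AS month,
    lo.loan_type,
    ROUND(
        -- accepted offers in this (month, loan_type) partition...
        100.0 * SUM(CASE WHEN r.response_type = 'Accepted' THEN 1 ELSE 0 END)
            OVER (PARTITION BY EXTRACT(MONTH FROM lo.offer_date), lo.loan_type)
        -- ...divided by all offers in the same partition
        / COUNT(*)
            OVER (PARTITION BY EXTRACT(MONTH FROM lo.offer_date), lo.loan_type),
        2
    ) AS acceptance_rate
FROM loan_offers lo
JOIN responses r
    ON lo.offer_id = r.offer_id
ORDER BY month, lo.loan_type;
```

DISTINCT collapses the per-row window results down to one row per month and loan type.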
In this query, we join the "loan_offers" and "responses" tables on the "offer_id" column. We extract the month from the "offer_date" and, partitioning by month and "loan_type", count the number of 'Accepted' responses and divide it by the total number of offers in the same partition, both obtained with window functions.
The result is the acceptance rate for each loan type in each month, rounded to two decimal places for readability. Note that months in which a certain loan type was not offered at all will be missing from the output.
For more window function practice, solve this Uber SQL Interview Question within DataLemur's online SQL coding environment:
Imagine you're working for nCino, a cloud banking services company. Your manager wants to understand the average transaction amount for each customer over the past year. You have access to the table containing all the transaction details. Can you write a SQL query to find out the average transaction amount per customer over the last year?
Here is the table:
transaction_id | customer_id | transaction_date | transaction_amount |
---|---|---|---|
1 | 101 | 01/08/2021 | $2000 |
2 | 102 | 03/15/2021 | $1800 |
3 | 101 | 01/10/2021 | $2200 |
4 | 101 | 07/20/2021 | $1500 |
5 | 102 | 11/25/2021 | $1600 |
We need to find the average transaction amount for each customer, considering only transactions whose dates fall within the last year from the current date.
Here is a PostgreSQL query to achieve this:
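A sketch of such a query — the table name transactions is an assumption, since it isn't given above:

```sql
SELECT
    customer_id,
    AVG(transaction_amount) AS avg_transaction_amount
FROM transactions  -- assumed table name
WHERE transaction_date >= CURRENT_DATE - INTERVAL '1 year'
GROUP BY customer_id;
```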
This query first filters for transactions that happened within the last year by checking that the "transaction_date" is later than or equal to the current date minus one year. Then, it groups the transactions by "customer_id" and calculates the average "transaction_amount" per customer group. This provides the average transaction amount for each customer over the last year.
Stored procedures are like functions in Python – they can accept input params and return values, and are used to encapsulate complex logic.
For example, if you worked as a Data Analyst in support of the Marketing Analytics team at nCino, a common task might be to find the conversion rate for your ads given a specific time-frame. Instead of having to write this query over-and-over again, you could write a stored procedure like the following:
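A sketch of what that might look like in PostgreSQL, where reusable logic that returns a value is typically written as a function. The ad_impressions table, its converted flag, and the impression_date column are all hypothetical names for illustration:

```sql
CREATE OR REPLACE FUNCTION ad_conversion_rate(start_date DATE, end_date DATE)
RETURNS NUMERIC AS $$
    -- conversion rate = converted impressions / total impressions, as a percentage
    SELECT ROUND(
        100.0 * COUNT(*) FILTER (WHERE converted)
              / NULLIF(COUNT(*), 0),   -- avoid division by zero on empty ranges
        2
    )
    FROM ad_impressions                -- hypothetical table
    WHERE impression_date BETWEEN start_date AND end_date;
$$ LANGUAGE sql;
```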
To call this stored procedure, you'd execute the following query:
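Assuming a function named ad_conversion_rate(start_date, end_date) (a hypothetical name used for illustration), the call would be:

```sql
SELECT ad_conversion_rate('2022-01-01', '2022-03-31');
```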
As a data analyst at the financial software company nCino, you are asked to analyze the click-through rates of various product ads. The company has been advertising two of their main products: Loan Origination System and Customer Relationship Management system.
Consider the following two tables, one of ad clicks and one of add-to-cart events:
click_id | user_id | click_time | product_id |
---|---|---|---|
9021 | 423 | 2022-06-06 08:32:00 | 10001 |
7853 | 271 | 2022-07-07 10:10:00 | 10002 |
8392 | 689 | 2022-08-08 12:12:00 | 10001 |
3412 | 356 | 2022-09-09 14:14:00 | 10002 |
8649 | 178 | 2022-10-10 16:16:00 | 10002 |
add_id | user_id | add_time | product_id |
---|---|---|---|
7291 | 423 | 2022-06-06 08:33:00 | 10001 |
6372 | 271 | 2022-07-07 10:14:00 | 10002 |
5093 | 689 | 2022-08-08 12:20:00 | 10001 |
You are tasked to find out the Click-Through Rates (CTR) from viewing a product ad to adding the specific product to the cart. CTR is calculated as ((Number of adds / Number of clicks) * 100).
Write a SQL query to find out these rates for each product for every month. Round these rates to two decimal places.
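A sketch in PostgreSQL — the table names clicks and adds_to_cart are assumptions, since they aren't given above:

```sql
SELECT
    EXTRACT(MONTH FROM c.click_time) AS month,
    c.product_id,
    -- CTR = adds / clicks * 100; unmatched clicks contribute NULL add_ids,
    -- which COUNT ignores
    ROUND(100.0 * COUNT(a.add_id) / COUNT(c.click_id), 2) AS ctr
FROM clicks c                  -- assumed table name
LEFT JOIN adds_to_cart a       -- assumed table name
    ON  c.user_id = a.user_id
    AND c.product_id = a.product_id
    AND a.add_time > c.click_time
GROUP BY month, c.product_id
ORDER BY month, c.product_id;
```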
This query uses a LEFT JOIN between the clicks table and the add-to-cart table on "user_id" and "product_id", and ensures that the "add_time" is after the "click_time". This joins clicks with their corresponding adds to the cart.
The CTR is calculated by dividing the count of adds ("add_id"s) by the count of clicks ("click_id"s) and multiplying by 100. The output is sorted by month and then by "product_id".
Note: This analysis assumes that the "user_id" and "product_id" pair is unique for each click or add event. If this is not the case, the analysis might need adjustment.
To solve another question about calculating rates, solve this TikTok SQL question within DataLemur's online SQL code editor:
The DISTINCT keyword added to a SELECT statement can be used to get records without duplicates.
For example, say you had a table of nCino customers:
name | city |
---|---|
Akash | SF |
Brittany | NYC |
Carlos | NYC |
Diego | Seattle |
Eva | SF |
Faye | Seattle |
Suppose you wanted to figure out which cities the customers lived in, but didn't want duplicate results. You could write a query like this:
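Assuming the table is named customers (the name isn't given above):

```sql
SELECT DISTINCT city
FROM customers;  -- assumed table name
```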
Your result would be:
city |
---|
SF |
NYC |
Seattle |
In nCino, a bank operating system, there is a Loans table with columns "loan_id", "customer_id", "application_date", "loan_type", and "processing_days" (which represents the time taken to process the loan). Your task is to find the average processing days for each loan type in the month of January 2022.
loan_id | customer_id | application_date | loan_type | processing_days |
---|---|---|---|---|
101 | 123 | 01/05/2022 | Personal | 20 |
102 | 456 | 12/25/2021 | Personal | 30 |
103 | 789 | 01/15/2022 | Mortgage | 25 |
104 | 456 | 01/30/2022 | Car | 15 |
105 | 123 | 01/20/2022 | Mortgage | 30 |
month | loan_type | avg_processing_days |
---|---|---|
1 | Personal | 20.00 |
1 | Mortgage | 27.50 |
1 | Car | 15.00 |
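One PostgreSQL query that produces this output, using the Loans table described above:

```sql
SELECT
    EXTRACT(MONTH FROM application_date) AS month,
    loan_type,
    ROUND(AVG(processing_days), 2) AS avg_processing_days
FROM loans
-- keep only applications from January 2022
WHERE EXTRACT(YEAR FROM application_date) = 2022
  AND EXTRACT(MONTH FROM application_date) = 1
GROUP BY month, loan_type;
```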
This query first filters the data to include only loans applied for in January 2022. It then groups the data by "loan_type" and calculates the average processing days for each group. The EXTRACT function allows us to separate the month and year from the "application_date".
To explain the difference between a primary key and foreign key, let's inspect employee data from nCino's HR database:
The "employees" table:
employee_id | first_name | last_name | manager_id |
---|---|---|---|
1 | Aubrey | Graham | 3 |
2 | Marshal | Mathers | 3 |
3 | Dwayne | Carter | 4 |
4 | Shawn | Carter | |
In this table, "employee_id" could be a primary key. It is unique for each row in the table and cannot contain null values.
"manager_id" could be a foreign key. It references the "employee_id" of the manager of each employee in the table, establishing a relationship between the employees and their managers. This foreign key allows you to easily query the table to find out who a specific employee's manager is, or to find out which employees report to a specific manager.
It is also possible for a table to have multiple foreign keys that reference different primary keys in other tables. For example, the "employees" table could have additional foreign keys for the "department_id" of the department where each employee works, and the "location_id" of the location where each employee is based.
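As a sketch, a self-referencing foreign key like this can be declared right in the table definition (DDL simplified for illustration):

```sql
CREATE TABLE employees (
    employee_id INT PRIMARY KEY,   -- primary key: unique and non-null for every row
    first_name  TEXT,
    last_name   TEXT,
    manager_id  INT REFERENCES employees (employee_id)  -- foreign key to the manager's row
);
```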
Your company, nCino, is selling multiple products. You are given a sales table that contains information about each sale, including "sale_id", "product_id", "sale_date", "units_sold", and "sale_price". There's also a products table that contains information about each product, including "product_id", "product_name", and "cost_price" (how much it costs nCino to acquire each unit of the product).
Write a SQL query that calculates the average monthly profit margin for each product as a percentage. The profit margin of a sale is calculated as (sale_price - cost_price) / sale_price * 100. The profit margin of a product in a month is the average of the profit margins of all sales of that product in that month.
sale_id | product_id | sale_date | units_sold | sale_price |
---|---|---|---|---|
121 | 201 | 06/05/2022 00:00:00 | 3 | 50 |
122 | 202 | 06/10/2022 00:00:00 | 5 | 30 |
123 | 201 | 06/15/2022 00:00:00 | 2 | 60 |
124 | 203 | 06/20/2022 00:00:00 | 1 | 100 |
125 | 202 | 07/01/2022 00:00:00 | 2 | 25 |
product_id | product_name | cost_price |
---|---|---|
201 | "Book" | 20 |
202 | "Notebook" | 10 |
203 | "Pen" | 75 |
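A sketch in PostgreSQL, assuming the tables are named sales and products:

```sql
SELECT
    EXTRACT(MONTH FROM s.sale_date) AS month,
    p.product_name,
    -- per-sale margin as a percentage, then averaged over the month
    ROUND(AVG(100.0 * (s.sale_price - p.cost_price) / s.sale_price), 2)
        AS avg_profit_margin
FROM sales s                       -- assumed table name
JOIN products p                    -- assumed table name
    ON s.product_id = p.product_id
GROUP BY month, p.product_name
ORDER BY month, p.product_name;
```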
This query first joins the sales and products tables on "product_id". It then groups the sales by month and product name, calculates the profit margin for each sale, and takes the average profit margin for each group. Finally, it sorts the result by month and product name for easy interpretation. The ROUND function rounds the resulting average profit margin to two decimal places, and the EXTRACT function pulls the month out of the "sale_date".
To practice a very similar question try this interactive Amazon Highest-Grossing Items Question which is similar for dealing with sales and products data or this CVS Health Pharmacy Analytics (Part 1) Question which is similar for calculating profit margin for products.
Assuming that you've already got basic SQL skills, the next best tip we have to prepare for the nCino SQL interview is to solve as many practice SQL interview questions as you can! Beyond just solving the above nCino SQL interview questions, you should also solve the 200+ SQL exercises on DataLemur which come from companies like Amazon, Microsoft, Meta, and smaller tech companies.
Each interview question has multiple hints, step-by-step solutions and, crucially, there's an interactive SQL code editor so you can run your query right in the browser and have it graded.
To prep for the nCino SQL interview, it can also be wise to solve interview questions from other tech companies like:
However, if your SQL query skills are weak, forget about diving straight into solving questions – refresh your SQL knowledge with this SQL tutorial for Data Scientists & Analysts.
This tutorial covers SQL topics like aggregate functions such as SUM()/COUNT()/AVG() and math functions – both of which come up frequently during nCino interviews.
Beyond writing SQL queries, the other topics to practice for the nCino Data Science Interview are:
The best way to prepare for nCino Data Science interviews is by reading Ace the Data Science Interview. The book's got: