At Fortinet, SQL is used quite frequently for analyzing cybersecurity data patterns and managing structured threat intelligence databases. Because of this, Fortinet almost always evaluates jobseekers on SQL problems during interviews for Data Analytics, Data Science, and Data Engineering jobs.
In case you're studying for a SQL Assessment, we've collected 10 Fortinet SQL interview questions to practice, which are similar to recently asked questions at Fortinet – how many can you solve?
Fortinet is a company that sells cybersecurity products, and you are given a table of its product sales with the following columns:
You are tasked with writing a SQL query that calculates the average monthly sales for each product and ranks the products within each month by their average monthly sales in descending order.
sales_id | product_id | sale_date | sale_amount |
---|---|---|---|
9912 | 207 | 06/08/2022 00:00:00 | 85.20 |
7802 | 49 | 06/10/2022 00:00:00 | 121.50 |
5293 | 207 | 06/18/2022 00:00:00 | 90.37 |
6352 | 49 | 07/26/2022 00:00:00 | 94.25 |
4517 | 49 | 07/05/2022 00:00:00 | 98.15 |
month_year | product_id | avg_monthly_sale | rank |
---|---|---|---|
6/2022 | 49 | 121.50 | 1 |
6/2022 | 207 | 87.79 | 2 |
7/2022 | 49 | 96.20 | 1 |
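Here's one possible PostgreSQL query. The source table name isn't stated in the prompt, so `sales` is an assumption here:

```sql
WITH monthly_sales AS (
  -- Average sale amount per product, per month
  SELECT
    TO_CHAR(sale_date, 'FMMM/YYYY') AS month_year,
    product_id,
    ROUND(AVG(sale_amount), 2) AS avg_monthly_sale
  FROM sales            -- table name assumed
  GROUP BY TO_CHAR(sale_date, 'FMMM/YYYY'), product_id
)
SELECT
  month_year,
  product_id,
  avg_monthly_sale,
  RANK() OVER (PARTITION BY month_year ORDER BY avg_monthly_sale DESC) AS rank
FROM monthly_sales
ORDER BY month_year, rank;
```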
This query first calculates the average monthly sale amount for each product in the CTE, grouping by month and product. Then, in the main SELECT statement, it ranks the products within each month by their average monthly sale in descending order using the RANK() window function. Please note that this query assumes PostgreSQL as the SQL environment.
To solve a similar SQL problem on DataLemur's free interactive coding environment, try this Facebook SQL interview question:
As a database manager for Fortinet, you are tasked with understanding product sales patterns across different geographical regions. The company wants to know the top-selling product in each region for the last quarter.
The company has two main tables: 'sales' and 'products'.
The 'sales' table holds information about product sales, with details such as sale_id, sale_date, product_id, region, and units_sold.
sale_id | sale_date | product_id | region | units_sold |
---|---|---|---|---|
1 | 2022-04-01 | 101 | North America | 10 |
2 | 2022-04-15 | 102 | Europe | 7 |
3 | 2022-05-20 | 101 | Asia | 15 |
4 | 2022-06-10 | 103 | Europe | 12 |
5 | 2022-06-22 | 102 | North America | 8 |
The 'products' table holds information about product details like product_id, product_name and product_price.
product_id | product_name | product_price |
---|---|---|
101 | FortiGate60E | 1200 |
102 | FortiGate80E | 2300 |
103 | FortiWiFi45E | 900 |
Your task is to write a SQL query to display the top selling product in each region for the last quarter.
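Here is a sketch of one possible PostgreSQL query. It hard-codes the last quarter as Q2 2022 (April through June) to match the sample data; in practice you'd derive the quarter boundaries from the current date:

```sql
WITH region_sales AS (
  -- Total units sold per product in each region during the last quarter (Q2 2022 assumed)
  SELECT
    s.region,
    p.product_name,
    SUM(s.units_sold) AS total_units
  FROM sales s
  JOIN products p ON p.product_id = s.product_id
  WHERE s.sale_date >= DATE '2022-04-01'
    AND s.sale_date <  DATE '2022-07-01'
  GROUP BY s.region, p.product_name
)
SELECT region, product_name, total_units
FROM (
  SELECT
    region,
    product_name,
    total_units,
    RANK() OVER (PARTITION BY region ORDER BY total_units DESC) AS rnk
  FROM region_sales
) ranked
WHERE rnk = 1;
```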
This query first joins the 'sales' and 'products' tables using product_id, filters to the last quarter with the WHERE clause, groups the data by region and product_name, and sums the total units sold for each group. A RANK() window function partitioned by region then orders products by total units sold in descending order, and the outer query keeps only the top-ranked product in each region.
To practice a related SQL problem on DataLemur's free online SQL coding environment, solve this Meta SQL interview question:
No, in almost all cases, and for all practical purposes, a JOIN and a UNION do NOT produce the same result.
While both combine data from two tables, you can think of a JOIN as increasing the width of the result set (a left/inner/right join adds more columns), whereas a UNION combines rows, increasing the height of the result set while keeping the column count the same.
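As a quick illustration, with made-up table and column names:

```sql
-- JOIN: widens the result, placing columns from both tables side by side
SELECT c.customer_id, c.customer_name, o.order_id
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.customer_id;

-- UNION: stacks rows, so both queries must return the same column list
SELECT email FROM trial_users
UNION
SELECT email FROM paying_users;
```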
You are given a table containing customer data for Fortinet. The table has the following columns:
Your task is to write a SQL query to select all customers from the United States (country = 'United States') who have made a total purchase of more than $1000 (purchase_amount > 1000) during the year 2021.
Please note that 'purchase_date' is stored as text in MM/DD/YYYY format, so you must parse it to filter on the year 2021.
customer_id | first_name | last_name | country | purchase_amount | purchase_date |
---|---|---|---|---|---|
8182 | John | Doe | United States | 1200 | 03/15/2021 |
9393 | Jane | Woods | Canada | 550 | 07/26/2021 |
5612 | Alice | Smith | United States | 800 | 10/18/2021 |
7793 | Robert | Brown | United States | 2000 | 04/08/2021 |
6584 | Charlie | Black | United Kingdom | 900 | 06/10/2021 |
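A PostgreSQL query along these lines would work. The table name isn't given in the prompt, so `customers` is assumed:

```sql
SELECT *
FROM customers            -- table name assumed
WHERE country = 'United States'
  AND purchase_amount > 1000
  -- purchase_date is stored as text, so parse it before extracting the year
  AND EXTRACT(YEAR FROM TO_DATE(purchase_date, 'MM/DD/YYYY')) = 2021;
```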
This query uses the WHERE clause to filter on three conditions: the country of residence, the purchase amount, and the year of purchase. The TO_DATE function converts the purchase date from a string to a date, and EXTRACT pulls the year from that date. Only customers who meet all three conditions are returned.
To practice a related SQL interview question on DataLemur's free online SQL coding environment, solve this SQL interview question asked by Facebook:
In database schema design, a one-to-one relationship between two entities is where each entity is associated with only one instance of the other entity. For example, the relationship between a US citizen and their social-security number (SSN) is one-to-one, because each citizen can only have one SSN, and each SSN belongs to exactly one person.
On the other hand, a one-to-many relationship is where one entity can be associated with multiple instances of the 2nd entity. For example, each person can be associated with multiple email addresses, but each email address only relates back to one person.
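In schema design terms, the difference often comes down to whether the foreign key is constrained to be unique. The DDL below is a made-up illustration of both patterns:

```sql
CREATE TABLE citizens (
  citizen_id INT PRIMARY KEY,
  full_name  TEXT
);

-- One-to-one: the UNIQUE foreign key means each citizen has at most one SSN record
CREATE TABLE ssn_records (
  ssn        CHAR(11) PRIMARY KEY,
  citizen_id INT UNIQUE REFERENCES citizens(citizen_id)
);

-- One-to-many: no UNIQUE constraint, so one citizen can have many email addresses
CREATE TABLE email_addresses (
  email      TEXT PRIMARY KEY,
  citizen_id INT REFERENCES citizens(citizen_id)
);
```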
As an IT security company, Fortinet supplies various Firewall systems to secure a company's network. Every Firewall system sends alert notifications depending on the level of security threats. Your task is to calculate the average number of alert notifications each type of Firewall system sends daily.
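The problem doesn't spell out the alerts table, so the sketch below assumes a hypothetical `alerts` table with `alert_id`, `firewall_type`, and `alert_timestamp` columns:

```sql
SELECT
  firewall_type,
  AVG(daily_alerts) AS avg_daily_alerts
FROM (
  -- Count alerts per firewall type, per day
  SELECT
    firewall_type,
    alert_timestamp::DATE AS alert_day,
    COUNT(alert_id) AS daily_alerts
  FROM alerts            -- hypothetical table
  GROUP BY firewall_type, alert_timestamp::DATE
) daily_counts
GROUP BY firewall_type;
```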
In this PostgreSQL query, we first create a subquery that groups the data by firewall type and day and counts the number of alerts per group. Then, from this subquery, we group by firewall type and calculate the average number of daily alerts.
To practice a similar problem about calculating rates, try this TikTok SQL Interview Question on DataLemur's interactive SQL code editor:
A cross-join, also known as a cartesian join, is a JOIN that produces the cross-product of two tables. In a cross-join, each row from the first table is matched with every row from the second table, resulting in a new table with a row for each possible combination of rows from the two input tables.
Let's say you were building a Machine Learning model that attempts to score the probability of a customer purchasing a Fortinet product. Before working in Pandas and Tensorflow, you might want to do some Exploratory Data Analysis (EDA) in SQL, and generate all pairs of customers and Fortinet products.
Here's a cross-join query you could run:
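The exact table and column names aren't given, so the ones below are assumptions:

```sql
-- Every (customer, product) pair, to be scored later by the model
SELECT
  customers.customer_id,
  products.product_id
FROM customers
CROSS JOIN products;
```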
Cross-joins are useful for generating all possible combinations, but they can also create huge tables if you're not careful. For instance, if you had 10,000 potential customers and Fortinet had 500 different product SKUs, the resulting cross-join would have 5 million rows!
Fortinet is a cybersecurity company that produces and sells products like firewalls, anti-virus software, intrusion prevention systems etc. As an SQL analyst, your task is to determine the monthly average sales of each product.
Suppose we have a table that keeps track of all Fortinet products sold. The table has rows representing individual transactions, with each row containing the transaction_id, product_id, sale_date (in the format YYYY-MM-DD), and units_sold.
transaction_id | product_id | sale_date | units_sold |
---|---|---|---|
6512 | 1001 | 2022-01-12 | 15 |
1730 | 1002 | 2022-01-14 | 20 |
9203 | 1001 | 2022-01-23 | 12 |
5101 | 1003 | 2022-01-30 | 30 |
2549 | 1002 | 2022-02-03 | 19 |
8600 | 1001 | 2022-02-14 | 17 |
4502 | 1003 | 2022-02-18 | 25 |
3626 | 1001 | 2022-02-28 | 20 |
We want to know the average units of each product sold per month.
month | product | average_units_sold |
---|---|---|
1 | 1001 | 13.50 |
1 | 1002 | 20.00 |
1 | 1003 | 30.00 |
2 | 1001 | 18.50 |
2 | 1002 | 19.00 |
2 | 1003 | 25.00 |
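One way to produce that output in PostgreSQL; the transactions table name isn't specified, so `sales` is assumed here:

```sql
SELECT
  EXTRACT(MONTH FROM sale_date) AS month,
  product_id AS product,
  ROUND(AVG(units_sold), 2) AS average_units_sold
FROM sales                -- table name assumed
GROUP BY EXTRACT(MONTH FROM sale_date), product_id
ORDER BY month, product;
```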
This PostgreSQL query first extracts the month from sale_date using the EXTRACT function. Then, for each month and each product, it calculates the average units_sold using the AVG function. The GROUP BY clause groups the data by month and product, which lets the aggregate function (AVG) apply to each group of rows that share the same month and product. The final results are sorted by month and then by product.
To solve a related SQL problem on DataLemur's free interactive coding environment, attempt this SQL interview question asked by Facebook:
In Fortinet, salary details for each employee are tracked in a 'salary' table. Employees can receive bonuses based on their total number of successful projects, which are tracked in a 'projects' table. The bonus is calculated as the square root of the successful-projects count, multiplied by the absolute value of the difference between 7 (all the days of the week they might have worked) and their actual working days in a week. The total salary is the current salary plus the calculated bonus, rounded to 2 decimal places.
Write a query to get the employeeID, employee name, original salary, bonus, and the calculated total salary for each employee.
Tables:
employee_id | name | current_salary (in $) | working_days_in_week |
---|---|---|---|
101 | John | 5500 | 5 |
102 | Lucy | 6200 | 6 |
103 | Tom | 7000 | 4 |
104 | Cindy | 5800 | 5 |
project_id | employee_id | status |
---|---|---|
1 | 101 | Success |
2 | 101 | Success |
3 | 101 | Fail |
4 | 102 | Success |
5 | 103 | Success |
6 | 103 | Success |
7 | 104 | Success |
8 | 104 | Success |
9 | 104 | Fail |
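A possible PostgreSQL query, built from the 'salary' and 'projects' tables described above (column names are taken from the sample tables, so adjust them if your schema differs):

```sql
SELECT
  s.employee_id,
  s.name,
  s.current_salary,
  -- bonus = sqrt(successful projects) * |7 - working days per week|
  ROUND((SQRT(COALESCE(p.success_count, 0))
         * ABS(7 - s.working_days_in_week))::NUMERIC, 2) AS bonus,
  ROUND((s.current_salary
         + SQRT(COALESCE(p.success_count, 0))
         * ABS(7 - s.working_days_in_week))::NUMERIC, 2) AS total_salary
FROM salary s
LEFT JOIN (
  -- Count of successful projects per employee
  SELECT employee_id, COUNT(*) AS success_count
  FROM projects
  WHERE status = 'Success'
  GROUP BY employee_id
) p ON p.employee_id = s.employee_id;
```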
This SQL query first gets the total number of successful projects for each employee in a subquery. It then calculates each employee's bonus using the given formula, based on their count of successful projects and their working days in a week. The total salary is the current salary plus the calculated bonus, rounded to 2 decimal places for clarity.
To practice a related SQL problem on DataLemur's free online SQL code editor, try this SQL interview question asked by Facebook:
There are several normal forms that define the rules for normalizing a database:
A database is in first normal form (1NF) if each column contains only atomic (indivisible) values and there are no repeating groups of columns.
A database is in second normal form (2NF) if it is already in 1NF and every non-key column depends on the table's entire primary key (i.e. there are no partial dependencies).
Said another way, to achieve 2NF, besides following all the rules from 1NF, all the columns in a given table should be dependent only on that table's primary key.
A database is in third normal form (3NF) if it is in 2NF and has no transitive dependencies, meaning non-key columns depend only on the primary key and not on other non-key columns.
A transitive dependency means that a piece of data in one column can be derived from another non-key column. For example, it wouldn't make sense to keep both a "user's age" column and a "user's birthdate" column, because age can be derived from birthdate.
While there are also 4th and 5th normal forms, they're too pedantic and hence out-of-scope to know for the Fortinet SQL interview.
The best way to prepare for a Fortinet SQL interview is to practice, practice, practice. Beyond just solving the above Fortinet SQL interview questions, you should also solve the 200+ SQL Interview Questions on DataLemur which come from companies like Google, Microsoft and Silicon Valley startups.
Each problem on DataLemur has hints to guide you, step-by-step solutions and, crucially, an interactive SQL code editor so you can write your query right in the browser and have it executed.
To prep for the Fortinet SQL interview, it can also be useful to practice SQL problems from other tech companies like:
In case your SQL query skills are weak, don't worry about jumping right into solving questions – go learn SQL with this DataLemur SQL Tutorial.
This tutorial covers things like WHERE vs. HAVING and AND/OR/NOT – both of which pop up frequently during Fortinet SQL interviews.
For the Fortinet Data Science Interview, besides SQL questions, the other types of questions to prepare for are:
To prepare for Fortinet Data Science interviews read the book Ace the Data Science Interview because it's got: