Given a list of salaries, we'll define a metric called inequity, which is the difference between the max and min salary seen in the list: inequity = max(salaries) − min(salaries).
Write a function that takes in a list of salaries and a value `n`, and returns the minimum inequity possible when taking `n` salaries from the full salary list.
If that was hard to understand, you're not alone – let's break it down with some examples.
For instance, given salaries of [$30,000, $60,000, $20,000, $70,000] and n = 2, the minimum inequity is $10,000, because picking $20,000 and $30,000 gives 30,000 − 20,000 = 10,000.

Given salaries of [$50,000, $60,000, $70,000, $110,000] and n = 3, the minimum inequity is $20,000, because picking $50,000, $60,000, and $70,000 gives 70,000 − 50,000 = 20,000.
We can always brute-force a solution by generating all sub-lists of the input salary list of length `n`. For each of these sub-lists, we compute the max minus the min, and keep track of the smallest difference we find.
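A minimal sketch of that brute force, using `itertools.combinations` to enumerate the sub-lists (the function name `min_inequity_brute_force` is just an illustrative choice, not from the original):

```python
from itertools import combinations

def min_inequity_brute_force(salaries, n):
    # Try every combination of n salaries and track the smallest
    # max-minus-min spread seen across all of them.
    return min(max(combo) - min(combo) for combo in combinations(salaries, n))
```

This explores all C(len(salaries), n) combinations, which blows up quickly, but it's a useful correctness check for the faster approach below.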
However, a cleverer solution is to sort the list in ascending order and then check the minimum difference between elements `n − 1` positions apart, almost like a sliding window.
This works because, in a sorted list, any sub-list of length `n` starting at index `i` has its minimum element at index `i` and its maximum element at index `i + n − 1`.
Here's that sorted approach:
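A sketch of the sorted sliding-window approach in Python (the name `min_inequity` is assumed for illustration):

```python
def min_inequity(salaries, n):
    # Sort so that the n closest salaries always sit next to each other.
    salaries = sorted(salaries)
    # In each sorted window of length n, the min is the first element
    # and the max is the last, so the window's inequity is their difference.
    return min(
        salaries[i + n - 1] - salaries[i]
        for i in range(len(salaries) - n + 1)
    )
```

For example, `min_inequity([30000, 60000, 20000, 70000], 2)` returns `10000`, matching the first example above.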
The time complexity of this approach is O(N log N), where N is the length of the salary list, dominated by sorting the array. The linear pass over the sorted list doesn't affect the overall run-time.