python, algorithm, performance, minimum, minimization

Find an efficient algorithm to compute the smallest index k such that $|f(k)-v|$ is minimum


Problem: Given an increasing function f, a float v and a positive integer n, find an integer k in the interval [0, n) such that |f(k)-v| is minimum. Use the fact that f is increasing!

My idea is to find the first k such that f(k) >= v; after that we don't have to check further, since the function is increasing. I assumed that index is where |f(k)-v| is minimized.

However, when I implemented the code, it returned the wrong answer in certain cases. Here is the code:

def f(x):
    '''
    (float) -> float
    f must be increasing
    '''
    return x**2

def find(v, n):
    '''
    (float, int) -> int
    Returns k in [0:n]
    such that |f(k) - v| is minimum
    '''
    
    min_index = 0
    
    for k in range(n):
        if f(k) >= v:
            min_index = k
            return min_index

    return -1  

For the call find(5, 10) it returns the wrong index (it should be 2, but it returns 3). However, when I call find(2, 10), it gives me the correct answer.

What is happening here?

Observation: I cannot use search algorithms such as binary search, etc.

Thank you


Solution

  • Your task is to minimize the absolute deviation from the target. What your code actually does is find the first non-negative deviation, i.e. the first k with f(k) >= v. The negative deviation just before that crossing point can have a smaller magnitude than the first non-negative one, and in that case you return the wrong answer for the task.

    If your target value is 5, then f(2) -> 4 and f(3) -> 9. The index k == 3 yields the first outcome greater than or equal to 5, which meets your program's criterion but ignores the absolute value. |f(2) - 5| -> 1, while |f(3) - 5| -> 4. Since 1 < 4, k == 2 minimizes your objective function and is the correct answer.

    You will get the wrong answer for many target values as long as your program ignores the absolute values of the deviations from the target. Once the scan finds the first k with f(k) >= v, compare the two candidates around that crossing point, as in the sketch below.
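
A minimal sketch of one way to fix find, keeping the linear scan (no binary search) and assuming n >= 1: once the scan reaches the first k with f(k) >= v, the answer must be either k or k - 1, so compare the two and return the closer one, preferring the smaller index on ties.

def find(v, n):
    '''
    (float, int) -> int
    Returns the smallest k in [0, n) such that |f(k) - v| is minimum.
    Assumes f is increasing and n >= 1.
    '''
    for k in range(n):
        if f(k) >= v:
            # f crosses v between k - 1 and k, so one of them is closest.
            if k > 0 and abs(f(k - 1) - v) <= abs(f(k) - v):
                return k - 1
            return k
    # f stays below v on the whole range, so the last index is closest.
    return n - 1

With this version, find(5, 10) returns 2, since |f(2) - 5| = 1 is smaller than |f(3) - 5| = 4.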