I have a collection of non-linear equations (the number of which may vary) with constraints that I'd like to solve using some numerical approach.
I've been able to solve a simple (one-equation) case in Excel using Solver, but I haven't put anything like this together in Python before, so I'd appreciate suggestions on the approach.
Having done a bit of digging, fsolve looks like a popular approach for solving systems like these. For a simple, two-equation case, my problem takes the following form, broken out into parts for clarity:
The second equation, for b, takes the same form.
A is a constant; Z, S, and x are constants for each entity i; the only unknowns are the exponents a and b. Two equations, two unknowns, so there should be a single unique solution.
As I said, I set up the simple one-equation case in Excel and successfully solved it using Solver. Any guidance on setting this up in Python is appreciated.
The problem you're describing is one of root finding: you want to find (a, b) for which f(a, b) = 0.
A simple approach would be fixed-point iteration. Since you have an analytical expression for f(a, b), you could also calculate the derivatives and use Newton's method. To set this up using fsolve, you'll need to define a function:
import numpy as np

def myfunc(x):
    a, b = x    # unpack the two unknowns
    val1 = ...  # evaluate your first expression here using Z and S
    val2 = ...  # evaluate your second expression here
    return np.array([val1, val2])
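For concreteness, a filled-in version might look like the sketch below. The two expressions are made up purely to show the structure (your actual formulas in a and b will differ), and Z and S are taken as extra arguments rather than read from the enclosing scope:

import numpy as np

def myfunc(x, Z, S):
    a, b = x
    # made-up stand-ins for your actual expressions in a and b
    val1 = np.sum(Z * S**a) - 1.0
    val2 = np.sum(Z * S**b) - 2.0
    return np.array([val1, val2])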
You can optionally pass in your values for S and Z using fsolve's args argument; they are then forwarded to myfunc as extra arguments.
Then solve using:
from scipy.optimize import fsolve

result = fsolve(myfunc, x0)
where x0 is an initial guess.
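Putting it together with the args tuple, a complete call might look like this. The values of Z, S, and the initial guess are placeholders; substitute your own data:

from scipy.optimize import fsolve
import numpy as np

Z = np.array([1.0, 2.0, 3.0])   # placeholder per-entity constants
S = np.array([0.5, 1.5, 2.5])
x0 = np.array([1.0, 1.0])       # initial guess for (a, b)

a_sol, b_sol = fsolve(myfunc, x0, args=(Z, S))
print(a_sol, b_sol)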
Note that fsolve may not respect your condition on w. If that isn't satisfied identically for your problem, I'd look into a method that supports constrained optimization, such as fmin_slsqp. In either case, the syntax should be very similar to what I described for fsolve.
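If you do need the constraint enforced, a minimal sketch with fmin_slsqp could look like the following. It recasts the root-finding problem as minimizing the sum of squared residuals, and the inequality constraint shown (a + b <= 1) is purely hypothetical, standing in for whatever your actual condition on w is:

from scipy.optimize import fmin_slsqp
import numpy as np

def objective(x):
    # sum of squared residuals; Z and S come from the enclosing scope
    return np.sum(myfunc(x, Z, S) ** 2)

# hypothetical constraint a + b <= 1, written as g(x) >= 0
sol = fmin_slsqp(objective, x0, ieqcons=[lambda x: 1.0 - (x[0] + x[1])])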