I am trying to solve an equation similar to the simplified example
x / x.sum - b = 0
where x is an n-dimensional vector. Since x can be multiplied by any nonzero constant without changing the equation, the solution is only determined up to a normalization. Because of this, I tried to add an (n+1)-th equation, such that
x.sum() - 1 = 0
My attempts to put this in code have all produced errors. This is the most recent minimal example:
import numpy as np
from scipy.optimize import fsolve
n = 1000
a = np.ones(n)
b = np.random.rand(n)
def f(x):
    return (x / (x.sum() + 1) - b, x.sum() - 1)
x = fsolve(f, a)
print(x)
Is this possible with fsolve? What is the correct code?
Further context: The function is a simplified example. The actual function that I am trying to solve is non-linear and complicated. I can prove that a solution for x exists and that it is unique up to scaling.
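For reference, the reason the fsolve attempt fails is that fsolve requires exactly as many equations as unknowns, so returning n residuals plus a separate scalar gives it n+1 outputs for n inputs. One common workaround is to keep the system square by replacing one of the n equations with the normalization. A sketch of that idea (the seed, the smaller n, and normalizing b so an exact solution exists are assumptions, not part of the question):

```python
import numpy as np
from scipy.optimize import fsolve

np.random.seed(10)
n = 100  # smaller than in the question, just to keep the sketch fast
b = np.random.rand(n)
b /= b.sum()  # normalize b so the combined system has an exact solution (x = b)

def f(x):
    # still n residuals in total: overwrite the last one with the
    # normalization, so the system stays square as fsolve requires
    r = x / x.sum() - b
    r[-1] = x.sum() - 1.0
    return r

x = fsolve(f, np.ones(n) / n)
print(np.sum(x), np.max(np.abs(x - b)))
```

This only works because dropping one of the original equations does not change the solution: the remaining n-1 equations plus the normalization already pin down x.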
I suggest you rephrase your problem as an optimization problem, where constraints are naturally accounted for. Here using scipy.optimize.minimize: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
import numpy as np
from scipy.optimize import minimize
np.random.seed(10)
n = 1000
a = np.ones(n)
b = np.random.rand(n)
a /= a.sum() #added to speed up the optimization
b /= b.sum() #added to sanity check the solution - not needed
def opt(x):
    return ((x/x.sum()-b)**2).sum() #replaced mean by sum for better scaling with n
def norm_constraint(x):
    return x.sum() - 1
x = minimize(opt, x0=a, constraints={'type': 'eq', 'fun': norm_constraint}, tol=1e-10).x # passing tol so the optimization does not terminate too early
print( np.sum(x), np.max(np.abs(x-b)))
# 1.0 1.5460126668448426e-12
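As an aside, the square-system restriction of fsolve can also be sidestepped with scipy.optimize.least_squares, which accepts more residuals than unknowns, so the normalization can simply be appended as an extra residual. A sketch (the seed, the smaller n, and normalizing b so an exact solution exists are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

np.random.seed(10)
n = 100  # smaller than above, purely to keep the sketch fast
b = np.random.rand(n)
b /= b.sum()  # normalize b so an exact solution (x = b) exists

def residuals(x):
    # n residuals from the original equation plus one for the normalization
    return np.append(x / x.sum() - b, x.sum() - 1.0)

x = least_squares(residuals, x0=np.ones(n) / n).x
print(np.sum(x), np.max(np.abs(x - b)))
```

Since the stacked system has an exact root here, the least-squares minimum is zero and both residuals are driven to machine precision.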
Edit: here is how you can force the entries of x to be positive:
import numpy as np
from scipy.optimize import minimize
from functools import partial
np.random.seed(10)
n = 1000
a = np.ones(n)
b = np.random.rand(n)
a /= a.sum() #added to speed up the optimization
def opt(x):
    return ((x/x.sum()-b)**2).sum() #replaced mean by sum for better scaling with n
def norm_constraint(x):
    return x.sum() - 1
constraints = [{'type': 'eq', 'fun': norm_constraint}]
def pos_constraint_maker(x, idx):
    return x[idx]
for idx in range(n):
    constraints.append({'type': 'ineq', 'fun': partial(pos_constraint_maker, idx=idx)})
x = minimize(opt, x0=a, constraints=constraints, tol=1e-10).x # passing tol so the optimization does not terminate too early
print( np.sum(x), np.max(np.abs(x-b)))
# 1.0 0.9536785254041191
print(np.max(np.abs(x[x<0])), np.sum(np.round(x, 10) < 0))
# 8.525672691589617e-14, 0
# x does have some negative entries, but their magnitude is very low (~1e-13), and rounding can alleviate that
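For completeness, minimize also accepts a bounds argument, which is usually much cheaper than attaching one inequality constraint per entry. A sketch of the same setup using bounds instead (the seed is the same as above; n is reduced here only to keep it quick):

```python
import numpy as np
from scipy.optimize import minimize

np.random.seed(10)
n = 100  # smaller than in the answer above, purely for speed
a = np.ones(n)
b = np.random.rand(n)
a /= a.sum()

def opt(x):
    return ((x / x.sum() - b) ** 2).sum()

def norm_constraint(x):
    return x.sum() - 1

# one (0, None) bound per entry replaces the n separate 'ineq' constraints
x = minimize(opt, x0=a, bounds=[(0, None)] * n,
             constraints={'type': 'eq', 'fun': norm_constraint}, tol=1e-10).x
print(np.sum(x), np.min(x))
```

With bounds plus an equality constraint, minimize picks the SLSQP method by default, and the bounds are enforced directly rather than through the general inequality-constraint machinery.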