Using either lmfit.Model or scipy.optimize.curve_fit I have to optimize a function whose output needs to be convolved with some experimental data before being fit to some other experimental data. To sum up, the workflow is something like this:
(1) Function A is defined (for example, a Gaussian function).
(2) The output of function A is convolved with an experimental signal called data B.
(3) The parameters of function A are optimized so that the convolution from step (2) matches some other experimental data, called data C, as closely as possible.
I am convolving the output of function A with data B using Fourier transforms as follows:
from scipy.fftpack import fft, ifft

def convolve(data_B, function_A):
    # FFT-based (circular) convolution of the model output with the measured signal.
    convolved = ifft(fft(data_B) * fft(function_A)).real
    return convolved
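For equal-length 1-D arrays this returns an array of the same length; note that multiplying the FFTs implements a circular (periodic) convolution. A tiny hypothetical check (the arrays below are made up purely for illustration):

import numpy as np

x = np.linspace(0, 5, 256)
irf = np.exp(-x)                    # stand-in for data B
decay = np.exp(-x / 2.0)            # stand-in for the output of function A
print(convolve(irf, decay).shape)   # (256,)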
How can I use lmfit.Model or scipy.optimize.curve_fit to fit "convolved" to data C?
EDIT: In response to the submitted answer, I have incorporated my convolution step into the equation used for the fit in the following manner:
#1 component exponential distribution:
def ExpDecay_1(x, ampl1, tau1, y0, x0, args=(new_y_irf)): # new_y_irf is a list.
    h = np.zeros(x.size)
    lengthVec = len(new_y_decay)
    shift_1 = np.remainder(np.remainder(x - np.floor(x0) - 1, lengthVec) + lengthVec, lengthVec)
    shift_Incr1 = (1 - x0 + np.floor(x0)) * new_y_irf[shift_1.astype(int)]
    shift_2 = np.remainder(np.remainder(x - np.ceil(x0) - 1, lengthVec) + lengthVec, lengthVec)
    shift_Incr2 = (x0 - np.floor(x0)) * new_y_irf[shift_2.astype(int)]
    irf_shifted = (shift_Incr1 + shift_Incr2)
    irf_norm = irf_shifted / sum(irf_shifted)
    h = ampl1 * np.exp(-(x) / tau1)
    conv = ifft(fft(h) * fft(irf_norm)).real # This is the convolution step.
    return conv
However, when I try this:
gmodel = Model(ExpDecay_1)
I get this:
gmodel = Model(ExpDecay_1)
Traceback (most recent call last):
  File "", line 1, in <module>
    gmodel = Model(ExpDecay_1)
  File "C:\Users\lopez\Anaconda3\lib\site-packages\lmfit\model.py", line 273, in __init__
    self._parse_params()
  File "C:\Users\lopez\Anaconda3\lib\site-packages\lmfit\model.py", line 477, in _parse_params
    if fpar.default == fpar.empty:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
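The error seems to come from lmfit inspecting the function signature: it compares each keyword default against the "empty" sentinel, and comparing a NumPy array that way is ambiguous. One hedged sketch of a workaround, assuming lmfit's independent_vars mechanism, so the IRF is passed at fit time rather than as a default (the names here are illustrative, not from my original code):

def exp_decay_conv(x, irf, ampl1=1.0, tau1=3.0):
    # 'irf' is declared as an independent variable below, so lmfit will not
    # try to turn it into a fit parameter (and never inspects an array default).
    decay = ampl1 * np.exp(-x / tau1)
    kernel = irf / np.sum(irf)
    return ifft(fft(decay) * fft(kernel)).real

gmodel = Model(exp_decay_conv, independent_vars=['x', 'irf'])
# result = gmodel.fit(new_y_decay, x=new_x_decay, irf=new_y_irf, ampl1=1.0, tau1=2.0)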
EDIT 2: I managed to make it work as follows:
import pandas as pd
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import numpy as np
from lmfit import Model
from scipy.fftpack import fft, ifft

def Test_fit2(x, arg=new_y_irf, data=new_y_decay, num_decay=1):
    IRF = arg
    DATA = data

    def Exp(x, ampl1=1.0, tau1=3.0): # This generates an exponential model.
        res = ampl1*np.exp(-x/tau1)
        return res

    def Conv(IRF, decay): # This convolves a model with the data (data = Instrument Response Function, IRF).
        conv = ifft(fft(decay) * fft(IRF)).real
        return conv

    if num_decay == 1: # If the user chooses to use a model equation with one exponential term.
        def fitting(x, ampl1=1.0, tau1=3.0):
            exponential = Exp(x, ampl1, tau1)
            convolved = Conv(IRF, exponential)
            return convolved
        modelling = Model(fitting)
        res = modelling.fit(DATA, x=new_x_decay, ampl1=1.0, tau1=2.0)

    if num_decay == 2: # If the user chooses to use a model equation with two exponential terms.
        def fitting(x, ampl1=1.0, tau1=3.0, ampl2=1.0, tau2=1.0):
            exponential = Exp(x, ampl1, tau1) + Exp(x, ampl2, tau2)
            convolved = Conv(IRF, exponential)
            return convolved
        modelling = Model(fitting)
        res = modelling.fit(DATA, x=new_x_decay, ampl1=1.0, tau1=2.0)

    if num_decay == 3: # If the user chooses to use a model equation with three exponential terms.
        def fitting(x, ampl1=1.0, tau1=3.0, ampl2=2.0, tau2=1.0, ampl3=3.0, tau3=5.0):
            exponential = Exp(x, ampl1, tau1) + Exp(x, ampl2, tau2) + Exp(x, ampl3, tau3)
            convolved = Conv(IRF, exponential)
            return convolved
        modelling = Model(fitting)
        res = modelling.fit(DATA, x=new_x_decay, ampl1=1.0, tau1=2.0)

    if num_decay == 4: # If the user chooses to use a model equation with four exponential terms.
        def fitting(x, ampl1=1.0, tau1=0.1, ampl2=2.0, tau2=1.0, ampl3=3.0, tau3=5.0, ampl4=1.0, tau4=10.0):
            exponential = Exp(x, ampl1, tau1) + Exp(x, ampl2, tau2) + Exp(x, ampl3, tau3) + Exp(x, ampl4, tau4)
            convolved = Conv(IRF, exponential)
            return convolved
        modelling = Model(fitting)
        res = modelling.fit(DATA, x=new_x_decay, ampl1=1.0, tau1=2.0)

    return res
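For completeness, a hypothetical call (assuming new_x_decay, new_y_decay and new_y_irf are already defined) looks like:

res = Test_fit2(new_x_decay, arg=new_y_irf, data=new_y_decay, num_decay=2)
print(res.fit_report())  # lmfit's summary of best-fit values and fit statistics
res.plot_fit()           # quick visual check of the fitted convolution against the data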
You could simply do the convolution in your model function that is wrapped by lmfit.Model, passing in the kernel array to use in the convolution. Or you could create a convolution kernel and function, and do the convolution as part of the modeling process, as described for example at https://lmfit.github.io/lmfit-py/examples/documentation/model_composite.html.
I would imagine that the first approach is easier if the kernel is not actually meant to be changed during the fit.
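A minimal sketch of that first approach (names like irf_kernel and data_C are placeholders, not taken from your code):

def make_convolved_model(irf_kernel):
    kernel = irf_kernel / np.sum(irf_kernel)  # normalize the IRF once, outside the fit
    def decay_convolved(x, ampl1=1.0, tau1=3.0):
        # The kernel is captured in the closure, so lmfit only sees x, ampl1 and tau1.
        decay = ampl1 * np.exp(-x / tau1)
        return ifft(fft(decay) * fft(kernel)).real
    return Model(decay_convolved)

gmodel = make_convolved_model(new_y_irf)
result = gmodel.fit(data_C, x=new_x_decay, ampl1=1.0, tau1=2.0)
print(result.fit_report())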