I am trying to solve the following optimization problem to obtain a set of values x_1, x_2, ..., x_k:
argmin Σ_i x_i * a_i
subject to <x_1, x_2, ..., x_k> ~ Lap(m, b)
The terms a_i are constants, and the values x_i are drawn from a Laplace distribution with mean m and scale parameter b, so the resulting outputs follow a Laplace distribution. What is this kind of constraint called?
This looks like a general form of Lasso regularization. Adding L1 regularization can often be seen as placing a Laplace prior on your data (see the Bayesian interpretation of Lasso): the negative log-density of Lap(m, b) is |x - m|/b, up to a constant. You could try solving:
argmin Σ_i x_i * a_i + (1/b) Σ_i |x_i - m|
You could solve this with (sub)gradient methods or proximal minimization.
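If it helps, here is a minimal proximal-gradient sketch in Python/NumPy; the values of a, m, b, the step size t, and the helper name prox_shifted_l1 are all illustrative placeholders. The gradient step handles the linear term Σ_i a_i x_i, and the proximal step applies the prox operator of (t/b)|x_i - m|, which is soft-thresholding centered at m.

```python
import numpy as np

# Illustrative placeholders (not from the question): substitute your own values.
a = np.array([0.4, -0.3, 0.1])  # the constants a_i
m, b = 0.0, 2.0                 # Laplace location and scale
t = 0.1                         # step size
x = np.zeros_like(a)            # starting point

def prox_shifted_l1(v, lam, m):
    # Prox of lam * |x - m|: soft-thresholding centered at m.
    return m + np.sign(v - m) * np.maximum(np.abs(v - m) - lam, 0.0)

for _ in range(500):
    v = x - t * a                     # gradient step on the linear term Σ_i a_i x_i
    x = prox_shifted_l1(v, t / b, m)  # proximal step on the (1/b) Σ_i |x_i - m| penalty
```

One caveat worth noting: since the smooth term is linear, the objective separates per coordinate and is bounded below only when |a_i| ≤ 1/b (in which case the minimizer is simply x_i = m); otherwise it diverges, so in practice you would typically pair this penalty with a data-fit term or additional constraints.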