I'm trying to allocate shelf space to items on a planogram using constraint programming. The full problem is large, so I'm implementing it piece by piece; for now I'm just trying to place items on shelves.
The strategy is to divide the whole planogram into a grid of cells. For example, if my planogram is 10 cm wide with 3 shelves and the chosen granularity is 1, then the total available space is 30 grid cells.
Now, say I have an item of 6 cm; accordingly, it will take 6 grid cells on the planogram. There can be multiple such products, so I have defined 4 constraints to arrange the items properly (they correspond to the constraint blocks in the code below):
1. each item occupies exactly its required number of grid cells,
2. each grid cell holds at most one item,
3. all of an item's cells lie on the same shelf,
4. an item's cells along a shelf are contiguous.
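The cell bookkeeping itself is plain integer arithmetic; a tiny sketch for the small example:

granularity, shelf_count, section_width = 1, 3, 10
total_cells = (section_width // granularity) * shelf_count  # 10 * 3 = 30 grid cells
item_cells = 6 // granularity                               # a 6 cm item -> 6 cells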
This approach works perfectly for smaller configurations like the one below:
Here, tpnb is the item number and linear is the space the item requires (in cm).
A sample planogram configuration:
granularity = 1,
shelf_count = 3,
section_width = 10
The image attached above shows the solution I got after running the code. However, this does not scale well to larger planograms, like:
granularity = 1
shelf_count = 7
section_width = 133
total item count: 57
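A quick capacity sanity check (a sketch reusing the names from the code below; bay_count = 1 is an assumption here) at least shows whether the total demand even fits:

# compare raw capacity against the total number of demanded cells;
# plenty of spare capacity would point at the structural constraints instead
total_cells = shelf_count * int(section_width * bay_count / granularity)  # 7 * 133 = 931
total_demand = int((np.ceil(pog_df.linear) // granularity).sum())
print(f"demand {total_demand} vs capacity {total_cells}")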
What I'm getting is:
Status: ExitStatus.UNSATISFIABLE (59.21191700000001 seconds)
No solution found.
I've tried modifying the constraints many times, but I'm unable to figure out what is causing this, why it fails for the large configuration, or what the root cause of the infeasibility is.
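A way to narrow that down (a sketch; the helper moved between CPMpy releases, so the exact import path is an assumption to check against your installed version) is to extract a minimal unsatisfiable subset (MUS) from the model m built below:

# a MUS is a small subset of constraints that is already unsatisfiable
from cpmpy.tools.explain import mus   # older releases: from cpmpy.tools import mus
conflict = mus(m.constraints)
for c in conflict:
    print(c)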
Sharing my code below:
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
import cpmpy as cp
from concurrent.futures import ProcessPoolExecutor, as_completed

# round each item's linear space up and convert it to a whole number of grid cells
pog_df['linear'] = pog_df.linear.apply(np.ceil)
pog_df['gridlinear'] = pog_df.linear // granularity

# one node per (x, y) grid cell: x runs along the shelf, y is the shelf index
G = nx.grid_2d_graph(int(section_width * bay_count / granularity), int(shelf_count))
# define the positions of the nodes for a planar layout
pos = {(x, y): (x, y) for x, y in G.nodes()}

# draw the grid
plt.figure(figsize=(8, 4))
nx.draw(G, pos, node_size=0.07)
# cross join: one row (and one Boolean later) per (product, grid cell) pair
products = pog_df[['tpnb', 'gridlinear']].astype(str)
locations = pd.Series([str(s) for s in G.nodes()], name='location')
# split the "(x, y)" location strings back into x and y columns
locations = pd.concat([locations,
                       locations.to_frame().location.str.strip("() ")
                                .str.split(',', expand=True)
                                .rename(columns={0: 'x', 1: 'y'})], axis=1)
l_p_idx = pd.merge(products.reset_index(), locations,
                   how='cross')[['tpnb', 'gridlinear', 'location', 'x', 'y']]
n_location = len(locations)
n_products = pog_df.shape[0]
# create decision variables
l_p_idx['Var'] = l_p_idx.apply(lambda x: cp.boolvar(name=x['location']+'-'+x['tpnb']), axis=1)
m = cp.Model()
# constraint 1: each item occupies exactly its 'gridlinear' number of cells
for tpnb, grp in l_p_idx.groupby('tpnb'):
    m += cp.sum(grp['Var'].tolist()) == int(float(grp['gridlinear'].iloc[0]))

# constraint 2: each grid cell holds at most one item
for loc, grp in l_p_idx.groupby('location'):
    m += cp.sum(grp['Var'].tolist()) <= 1
l_p_idx["y"] = l_p_idx["y"].astype("int32")
shelf_var = {tpnb: cp.intvar(0, max(l_p_idx["y"])) for tpnb in l_p_idx["tpnb"].unique()}
l_p_idx.apply(lambda row: m.constraints.append(
(row['Var'] == 1).implies(row['y'] == shelf_var[row['tpnb']])
), axis=1)
# build a {(x, y): Var} lookup per product, in parallel
def process_group(level, data):
    return level, {eval(row['location']): row['Var'] for _, row in data.iterrows()}

def parallel_creator(key, idx_df):
    # note: ProcessPoolExecutor pickles each group, i.e. the CPMpy variables
    # are copied to the workers and back; a plain groupby in the main process
    # avoids that risk and is usually fast enough
    node_dict = {}
    with ProcessPoolExecutor() as executor:
        futures = {executor.submit(process_group, level, data): level
                   for level, data in idx_df.groupby(key)}
        for future in as_completed(futures):
            level, var_dict = future.result()
            node_dict[level] = var_dict
    return node_dict
node_p_var_dict = parallel_creator('tpnb', l_p_idx)
# constraint 4: contiguity -- along each shelf row, an item's 0/1 pattern may
# change value at most once, so its occupied cells form a single block
grid_width = int(section_width * bay_count / granularity)
for p in products.tpnb.values:
    for shelf in range(shelf_count):
        m += cp.sum([node_p_var_dict[str(p)][(x, shelf)] != node_p_var_dict[str(p)][(x + 1, shelf)]
                     for x in range(grid_width - 1)]) <= 1
hassol = m.solve()
print("Status:", m.status())
if not hassol:
    print("No solution found.")
Actually, due to the formulation of the problem, the solver was always placing items only at the shelf edges: with at most one value transition allowed along a row, an item's occupied block must touch an edge, because a block in the interior of a row produces two transitions (0→1 and 1→0). Adding one more constraint, which forbids an item from occupying both the first and the last cell of a shelf, helped to solve the issue:
m += (node_p_var_dict[p][(0, shelf)]
      + node_p_var_dict[p][((section_width * bay_count) - 1, shelf)]) <= 1
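In context, it sits in the same double loop as the transition constraint above:

for p in products.tpnb.values:
    for shelf in range(shelf_count):
        # an item may not claim both the first and the last cell of a shelf
        m += (node_p_var_dict[p][(0, shelf)]
              + node_p_var_dict[p][((section_width * bay_count) - 1, shelf)]) <= 1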