My idea is to write a python program which would take as arguments two finite simple undirected graphs, say G and H, and return the number hom(G,H) of graph homomorphisms from G to H.
Examples: If G=K_1 (one-vertex graph) then hom(G,H) equals the number of vertices of H. If G=K_2 (or equivalently, P_2), then hom(G,H) = 2 times the number of edges of H.
Can anyone please help me out? Thanks.
In general this is hard: even deciding whether a single homomorphism from G to H exists is NP-complete, and counting them is #P-hard. If the graph G has n vertices and the graph H has m vertices, a naive approach is to check all m^n possible assignment functions from the vertices of G to the vertices of H. This is equivalent to doing n nested loops over range(m).
I know two ways to do it in python:
1) You can generate n copies of range(m) and use itertools.product to get the Cartesian product of these ranges; each resulting tuple is a candidate map from the vertices of G to the vertices of H.
2) You can generate a string containing the code for these nested loops and execute it with the exec built-in function (a rough sketch of this idea is given below).
If you use the first solution, it's highly parallelizable, so you can speed it up quite a bit.
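For completeness, here is a rough sketch of the second (exec-based) idea, with a hypothetical count_maps_with_exec helper. It only illustrates generating and running the nested loops; the homomorphism check from the code below would still have to be placed inside the innermost loop:

def count_maps_with_exec(n, m):
    # build n nested loops, one per vertex of G, each over range(m)
    code, indent = "", ""
    for i in range(n):
        code += indent + "for v%d in range(%d):\n" % (i, m)
        indent += "    "
    code += indent + "counter[0] += 1\n"
    counter = [0]
    exec(code, {"counter": counter})
    return counter[0]  # equals m ** n, one count per candidate map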
An implementation of the first idea without parallelization would be something like this:
from itertools import product

def verify(G, H, f):
    # f is a homomorphism iff every edge of G is mapped onto an edge of H
    for edge in G:
        if (f[edge[0]], f[edge[1]]) not in H:
            return False
    return True

def solve(G, H, n, m):
    # n = number of vertices of G, m = number of vertices of H
    # enumerate all m**n maps f: V(G) -> V(H) and count the homomorphisms
    cnt = 0
    for f in product(range(m), repeat=n):
        if verify(G, H, f):
            cnt += 1
    return cnt
Here the graphs G and H are stored as sets of tuples. The tuples represent the edges, with each undirected edge stored in both orientations. This representation is very convenient for testing the homomorphism condition and for applying assignment functions quickly. The parameters n and m are the numbers of vertices of G and H respectively.
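If you start from a plain list of undirected edges, a small helper (hypothetical, not part of the code above) can build this symmetric representation:

def to_edge_set(edges):
    # expand undirected edges into the "both orientations" set of tuples
    return {(u, v) for a, b in edges for (u, v) in ((a, b), (b, a))}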
For example, if you want G = S4 and H = P4, it would be something like this: G = {(0, 1), (1, 0), (0, 2), (2, 0), (0, 3), (3, 0)} and H = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)}. Then you call the function solve(G, H, 4, 4).
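As a quick sanity check, the identities from the question can be reproduced with this representation, assuming the solve function above (K1 has one vertex and no edges, K2 is a single edge stored in both orientations):

H = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)}  # P4 from the example
K1 = set()               # one vertex, no edges
K2 = {(0, 1), (1, 0)}    # a single undirected edge
print(solve(K1, H, 1, 4))  # 4 = number of vertices of H
print(solve(K2, H, 2, 4))  # 6 = 2 * number of edges of H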
I tested it with some examples from Section 2.3 of this paper and it seems to work well.
As I said, the speed can be improved a lot with parallelization. This code is parallelizable almost everywhere; it needs some testing to see which parts are worth executing in parallel.
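For instance, one simple way to split the work (a sketch only, assuming the verify function defined above, that G has at least one vertex, and hypothetical helpers count_with_fixed_first and solve_parallel) is to fix the image of vertex 0 of G in each worker:

from itertools import product
from multiprocessing import Pool

def count_with_fixed_first(args):
    # each task fixes the image of vertex 0 of G and enumerates the rest
    G, H, n, m, first = args
    cnt = 0
    for rest in product(range(m), repeat=n - 1):
        if verify(G, H, (first,) + rest):
            cnt += 1
    return cnt

def solve_parallel(G, H, n, m, workers=4):
    tasks = [(G, H, n, m, first) for first in range(m)]
    with Pool(workers) as pool:   # run the m tasks across processes
        return sum(pool.map(count_with_fixed_first, tasks))

On platforms that spawn new processes (e.g. Windows), the call to solve_parallel should sit under an if __name__ == "__main__": guard.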