I'm fairly new to Python and trying to create a function to multiply a vector by a matrix (of any column size). e.g.:
multiply([1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]])
[1, 1]
Here is my code:
def multiply(v, G):
    result = []
    total = 0
    for i in range(len(G)):
        r = G[i]
        for j in range(len(v)):
            total += r[j] * v[j]
        result.append(total)
    return result
The problem is that when I try to index into each row of the matrix (r[j]), I get a 'list index out of range' error. Is there any other way of completing the multiplication without using NumPy?
The NumPythonic approach (using numpy.dot to get the dot product of the vector and the matrix):
In [1]: import numpy as np
In [3]: np.dot([1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]])
Out[3]: array([1, 1])
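Equivalently, if you convert the inputs to arrays first, the @ matrix-multiplication operator gives the same result (this assumes NumPy 1.10+ and Python 3.5+):
In [4]: np.array([1,0,0,1,0,0]) @ np.array([[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]])
Out[4]: array([1, 1])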
The Pythonic approach:
Your second for loop runs for len(v) iterations, but each row r = G[i] only has len(G[0]) elements, so r[j] eventually goes out of range and you get the IndexError. As a more Pythonic way, you can use the zip function to get the columns of the matrix, then use starmap and mul within a list comprehension:
In [13]: first,second=[1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]]
In [14]: from itertools import starmap
In [15]: from operator import mul
In [16]: [sum(starmap(mul, zip(first, col))) for col in zip(*second)]
Out[16]: [1, 1]
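If you would rather stay close to your original function and avoid imports altogether, here is a minimal sketch of a fix: iterate over the columns of G and reset total for each column (this assumes G is non-empty and rectangular):
def multiply(v, G):
    cols = len(G[0])               # number of columns in G
    result = []
    for j in range(cols):
        total = 0                  # reset the running sum for each column
        for i in range(len(v)):
            total += v[i] * G[i][j]   # walk down column j
        result.append(total)
    return result

multiply([1,0,0,1,0,0], [[0,1],[1,1],[1,0],[1,0],[1,1],[0,1]])  # -> [1, 1]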