[Numpy-discussion] Iterative Matrix Multiplication

Friedrich Romstedt friedrichromstedt@gmail....
Sat Mar 6 16:26:45 CST 2010

I'm a bit unhappy with your code, because honestly it's so hard to read.
You don't like objects?

2010/3/6 Ian Mallett <geometrian@gmail.com>:
> Unfortunately, the pure Python implementation is actually an order of
> magnitude faster.  The fastest solution right now is to use numpy for the
> transformations, then convert it back into a list (.tolist()) and use Python
> for the rest.

:-( But well, if it's faster, then do it that way, right?  I can only
guess that for large datasets the comparison of each and every vector
"in parallel" makes it slow down.  So an iterative approach might be
fine.  In fact, I guess no one would implement such things in C++
without lists and push_back() either.

> Here's the actual Python code.
>
> def glLibInternal_edges(object,lightpos):
>     edge_set = set([])

Where do you use edge_set?  It looks unused.  Btw, a plain set() would
do instead of set([]).

>     edges = {}
>     for sublist in xrange(object.number_of_lists): #There's only one sublist here
>         face_data = object.light_volume_face_data[sublist]
>         for indices in face_data: #v1,v2,v3,n

Here objects would fit in nicely:

for indices in face_data:
    normal = object.transformed_normals[sublist][indices.nindex]
    (v1, v2, v3) = [object.transformed_vertices[sublist][vidx]
                    for vidx in indices.vindices]
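A minimal way to get such an `indices` object is a namedtuple (the field names `nindex` and `vindices` are just my assumption here, standing in for the anonymous [v1, v2, v3, n] list):

```python
from collections import namedtuple

# Hypothetical face-index record: three vertex indices plus a normal
# index, replacing the anonymous [v1, v2, v3, n] list.
FaceIndices = namedtuple('FaceIndices', ['vindices', 'nindex'])

face = FaceIndices(vindices=(0, 1, 2), nindex=7)
print(face.nindex)    # 7
print(face.vindices)  # (0, 1, 2)
```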

>             normal = object.transformed_normals[sublist][indices[3]]
>             v1,v2,v3 = [ object.transformed_vertices[sublist][indices[i]] for i in xrange(3) ]

v1 and lightpos can be stored as numpy ndarrays, no?  If you have a
list of vectors in an ndarray arr, you can convert it to a list of
ndarrays by using list(arr).  This will only iterate over the first
dimension, and not recurse into the vectors in arr.  Provided this,
you can simply write:

if numpy.dot(normal, v1 - lightpos) > 0:
    (...)

(dividing by numpy.sqrt(((v1 - lightpos) ** 2).sum()) doesn't change
the sign, so the normalisation can be dropped for this test).
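For what it's worth, the facing test can also be done for all faces at once, with no Python-level loop (a sketch; the arrays and the names `normals`, `verts`, `lightpos` are made up here):

```python
import numpy as np

# Hypothetical data: per-face normals and one representative vertex
# per face, plus the light position.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, -1.0],
                    [1.0, 0.0, 0.0]])
verts = np.array([[0.0, 0.0, 1.0],
                  [0.0, 0.0, -1.0],
                  [1.0, 0.0, 0.0]])
lightpos = np.array([0.0, 0.0, 5.0])

# Dot each normal with the vector from the light to its face vertex,
# for every face at once.
facing = np.einsum('ij,ij->i', normals, verts - lightpos) > 0
print(facing)  # [False  True  True]
```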

>                 for p1,p2 in [[indices[0],indices[1]],
>                               [indices[1],indices[2]],
>                               [indices[2],indices[0]]]:
>                     edge = [p1,p2]

Why not write:

for edge in numpy.asarray([[indices[0], indices[1]], (...)]):
(...)

>                     index = 0

Where do you use index?  Is it a lonely remnant?

>                     edge2 = list(edge)

Why do you convert the unmodified edge list into a list again?  Btw, I
find your numbering quite annoying.  No one can tell from an appended
2 what purpose that xxx2 serves.

Furthermore, I think a slight speedup could be reached by:

unique = (edge.sum(), abs((edge * [1, -1]).sum()))
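To illustrate why that pair works as a key (edge_key is a made-up helper name): the sum and the absolute difference together identify an undirected edge regardless of the order of its two indices, just like tuple(sorted(edge)):

```python
import numpy as np

def edge_key(edge):
    # (sum, |difference|) is invariant under swapping the two indices.
    edge = np.asarray(edge)
    return (int(edge.sum()), int(abs((edge * [1, -1]).sum())))

print(edge_key([3, 7]))  # (10, 4)
print(edge_key([7, 3]))  # (10, 4) -- same key for the reversed edge
```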

>                     edge2.sort()
>                     edge2 = tuple(edge2)
>                     if edge2 in edges: edges[edge2][1] += 1
>                     else:              edges[edge2] = [edge,1]
>
>     edges2 = []
>     for edge_data in edges.values():
>         if edge_data[1] == 1:
>             p1 = object.transformed_vertices[sublist][edge_data[0][0]]
>             p2 = object.transformed_vertices[sublist][edge_data[0][1]]
>             edges2.append([p1,p2])
>     return edges2

My 2 cents:

class Edge:
    def __init__(self, indices):
        self.indices = indices
        self.unique = tuple(sorted(indices))

    def __hash__(self):
        return hash(self.unique)

    def __eq__(self, other):
        return self.unique == other.unique

edges = {}
(...)
edges.setdefault(edge, 0)
edges[edge] += 1

edges_clean = []
for (edge, count) in edges.items():
    if count == 1:
        edges_clean.append(object.transformed_vertices[sublist][list(edge.indices)])

provided that transformed_vertices[sublist] is an ndarray.  You can
iterate over ndarrays using standard Python iteration, it will provide
an iterator object.
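With a sufficiently recent numpy, the whole edge-counting step can even be vectorized (a sketch, assuming `faces` is an (n, 3) integer array of vertex indices for the light-facing triangles; np.unique's axis keyword needs numpy >= 1.13):

```python
import numpy as np

# Two hypothetical light-facing triangles sharing the edge (0, 2).
faces = np.array([[0, 1, 2],
                  [0, 2, 3]])

# Each triangle contributes its three edges.
edges = np.concatenate([faces[:, [0, 1]],
                        faces[:, [1, 2]],
                        faces[:, [2, 0]]])

# Sort each edge's two indices so (a, b) and (b, a) compare equal.
sorted_edges = np.sort(edges, axis=1)

# Count occurrences of each undirected edge; edges occurring exactly
# once are the silhouette edges.
uniq, counts = np.unique(sorted_edges, axis=0, return_counts=True)
silhouette = uniq[counts == 1]
print(silhouette)  # [[0 1] [0 3] [1 2] [2 3]]
```

The same indexing trick used above for the vertices then turns these index pairs back into coordinate pairs in one step.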