I ended up doing this in order to try and speed up a code I was working on, but I thought the results were interesting and figured a few people on here might think the same.

Basically, I took a matrix (X) - vector (y) product and solved it 3 ways:

Method 1: BLAS (standard matrix multiply)
Method 2: bsxfun
Method 3: Loop + vector-vector products

I ran 51 trials with sparsity ranging from ~3% to ~97%. I also tried having both X and y as dense matrices, both as sparse, and one sparse / one dense. (Note the y-axis is log scale in both subplots of the image I posted.)

BLAS is best in literally every scenario, usually by a huge margin. This is no surprise, but I admit I didn't expect bsxfun to lag behind BLAS quite as much as it did (particularly in the sparse cases). The loop method was the worst in all cases (also not a surprise).

For (dense X * dense y) inputs, it doesn't matter how sparse the matrix actually is, as expected. With those inputs, bsxfun is ~3-4x slower than BLAS, and the loop-based vector-vector product method is 5-8x slower than bsxfun.

For bsxfun and loops, it only matters whether or not X is sparse. In the bsxfun method, having X be sparse is useful for ~70% sparsity and up; y being sparse (or not) has almost no effect. For both bsxfun and the loop method, using a sparse X made things ~3x slower in the low-sparsity cases, probably because of how much Matlab dislikes the out-of-order memory access that comes with sparse arrays. Sparsity is never useful for the loop-based method.

For BLAS, using sparse matrices was unilaterally helpful (in terms of execution time) for sparsity above 20% or so. Making either X or y sparse produced a similar gain, though making y sparse was a little better than making X sparse, which makes sense: you drop the same number of computations, but the "downsides" of a sparse array are smaller for a sparse vector than for a sparse matrix. Making both X and y sparse improved things by another ~1.5x or so (except at very low sparsity). Note that X and y have the same overall sparsity in each test. The situation might not be as bad for y' * X'.

Things seem very consistent once you get above a certain size; I re-ran this for both real and complex arrays at several sizes. Side note: this was run on an Ivy Bridge 4c/8t CPU. I'd expect these results might change (not so much in overall ordering, but in the magnitude of the differences) on other CPU architectures.

I tweaked the script to make it very user friendly: tell it the matrix size to use, the number of trials / unique sparsity levels to use, and whether to use real- or complex-valued arrays, and it does everything for you. The 3 inputs default to the values used to generate the original image I posted if they are left blank, and results get saved in a folder named after those 3 input variables.

This is a guide to Matrix Multiplication in Matlab: here we discuss how to perform matrix multiplication in Matlab along with examples. Matrix multiplication is a difficult and complex operation in mathematics, but if we implement it in Matlab we can easily get the output without error, and both methods used for matrix multiplication are easy and simple to implement. One caveat: given two matrices mat1 and mat2, if the first has 3 rows and 4 columns and the second has 3 rows and 3 columns, then the number of columns of the first matrix is not equal to the number of rows of the second matrix, so the multiplication cannot execute.
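For anyone who wants to poke at the three methods without MATLAB, here is a rough NumPy analogue (my own sketch, not the script from the post): `X @ y` is the BLAS path, the broadcast multiply-and-sum stands in for `bsxfun(@times, ...)` plus `sum(..., 2)`, and the loop mirrors the vector-vector product method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal((n, n))
y = rng.standard_normal(n)

# Method 1: BLAS-backed matrix-vector product (MATLAB's X*y)
r1 = X @ y

# Method 2: broadcast elementwise multiply, then sum along each row
# (analogue of bsxfun(@times, X, y') followed by sum(..., 2))
r2 = (X * y).sum(axis=1)

# Method 3: explicit loop of vector-vector (dot) products
r3 = np.empty(n)
for i in range(n):
    r3[i] = X[i, :] @ y

# All three compute the same thing; they differ only in speed
assert np.allclose(r1, r2) and np.allclose(r1, r3)
```

Timing these (e.g. with `timeit`) typically reproduces the dense-input ordering reported above: the BLAS call is fastest, broadcasting trails it, and the interpreted loop is slowest.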
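On why sparsity helps once enough entries are zero: a sparse format stores only the nonzeros, so a matrix-vector product touches nnz elements instead of all rows x cols. Here is a minimal pure-Python CSR (compressed sparse row) sketch to illustrate the idea — illustration only; MATLAB's sparse type is actually column-compressed and far more optimized than this.

```python
# Convert a dense row-major matrix to CSR arrays: the nonzero values,
# their column indices, and per-row offsets into those two lists.
def dense_to_csr(M):
    data, indices, indptr = [], [], [0]
    for row in M:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

# Matrix-vector product using only the stored nonzeros:
# work scales with nnz, not with the full matrix size.
def csr_matvec(data, indices, indptr, y):
    out = []
    for i in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * y[indices[k]]
        out.append(s)
    return out

X = [[1, 0, 0, 2],
     [0, 0, 3, 0],
     [0, 4, 0, 0]]
y = [1, 2, 3, 4]
print(csr_matvec(*dense_to_csr(X), y))  # [9.0, 9.0, 8.0]
```

The index-chasing in the inner loop is also the "out-of-order memory access" downside: each `y[indices[k]]` lookup jumps around in `y`, which is exactly the cost that dominates at low sparsity.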
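The guide's dimension caveat in runnable form (NumPy here for convenience, using the same 3x4 and 3x3 shapes as its example):

```python
import numpy as np

A = np.ones((3, 4))  # 3 rows, 4 columns
B = np.ones((3, 3))  # 3 rows, 3 columns

# Inner dimensions disagree (A has 4 columns, B has 3 rows),
# so the product A*B is undefined and the multiply fails.
try:
    A @ B
    mismatch_raised = False
except ValueError:
    mismatch_raised = True
print("A @ B raises:", mismatch_raised)

# Transposing A gives a 4x3 matrix; inner dimensions now match (3 == 3).
C = A.T @ B
print(C.shape)  # (4, 3)
```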