This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
The contents of the Sparse array page were merged into Sparse matrix on 2017-04-25. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
Cholesky factorization
While the Cuthill-McKee algorithm is a fine algorithm, A. George first showed in 1971 that the Reverse Cuthill-McKee algorithm is often better. There have been many arguably better algorithms in the last 30 years.
Cholesky factorization usually only shows up (in my experience) for large sparse matrices in solutions of least squares problems via the Normal Equations. There have been many, many alternatives since the 1960s and Householder's work. [Lawson and Hanson, 1974] covered several, for example.
Reference: Lawson, Charles L. and Hanson, Richard J., 1974. Solving Least Squares Problems. Prentice-Hall.
The article seems to me to have a very old-fashioned bias. I added some remarks against that bias.
My first sentence shows how small an area I play in. In Tim Davis' note below under "too much emphasis on band matrices" he corrects this, and in email has sent me a pointer to his http://www.cise.ufl.edu/research/sparse/matrices which has lots and lots of symmetric positive definite matrices that arise in non least squares contexts and for which Cholesky factorization would be appropriate. Nahaj 14:54, 27 September 2007 (UTC)
Sparse polynomials?
In algebraic geometry, sparse (multivariate) polynomials are sometimes mentioned, although I do not know an explicit definition. Assuming somebody here knows a bit about this, it could be a nice topic to add.
guest, 2005-09-14
Well, looking forward to seeing a sparse polynomial article! :) I know nothing of the topic, so I can't help. Oleg Alexandrov 21:10, 14 September 2005 (UTC)
too much emphasis on band matrices
This article has too much emphasis on band matrices. "Band" matrices are often thought of as arising from the discretization of a 2D mesh. These are not truly "banded" matrices, however, since the optimal ordering (nested dissection) results in a matrix that is far from banded. Bandwidth-reducing orderings are not suitable for a matrix arising from the discretization of a 2D or 3D square mesh (for example).
For example, an n-by-n matrix arising from an s-by-s 2D mesh can be factorized in O(s^3) time, where n = s^2, and with only O(n log n) entries in the factor L, or about O(s^2 log s). On the other hand, if the "natural" ordering is used (the one that "looks banded"), the time taken is O(n^2), or O(s^4). That's quite a bit higher. The number of entries in the factor is O(n sqrt(n)), or O(s^3).
Cholesky factorization arises in more problems than the Normal Equations (using Cholesky factorization for the Normal Equations is usually a bad idea; QR factorization is more accurate).
-- Tim Davis, Univ. of Florida
I agree it is a bad idea... but it is still being promoted in current Surveying texts, and still used by the National Geodetic Survey for the national adjustments. So I see it a lot. Nahaj 15:02, 27 September 2007 (UTC)
Row Bandwidth def'n incorrect
I don't have a reference in front of me, but I think the definition of row bandwidth is wrong. Something like n-m, where m is the minimizer, is within 1 of the lower bandwidth. But it doesn't appear to capture upper bandwidth. Jonas August 16:20, 30 April 2007 (UTC)
Yes, the definition was rather strange; thanks for bringing that to our attention. I replaced it by another one. Let me know if the new one doesn't make sense to you either. -- Jitse Niesen (talk) 02:00, 1 May 2007 (UTC)
Banker's algorithm
I removed the link to a "Banker's algorithm" since it points to another algorithm of the same name, not Dr. Snay's algorithm. Nahaj 14:52, 27 September 2007 (UTC)
Orthogonal List?
As I remember, Orthogonal List (or a similar name) is a common data structure to store a sparse matrix. Is that correct? By the way, I can't even find any info about Orthogonal List in Wikipedia. Could anyone more knowledgeable on this issue write some stuff? Took 05:10, 9 November 2007 (UTC)
Bandwidth and invariants
Is this correct?: It seems to me that the bandwidth of a matrix describes the matrix but not the underlying linear operator. That is, by changing the basis in which you write a matrix, you can change the matrix's bandwidth. As such, bandwidth is not an invariant the way trace and determinant are.
Is that right? —Ben FrantzDale (talk) 02:38, 8 April 2008 (UTC)
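One way to see this concretely (my own sketch in plain Python, not from the thread above): rotate a diagonal matrix into a new basis. Off-diagonal entries appear, so the bandwidth grows, while trace and determinant are untouched.

```python
import math

# D is diagonal: bandwidth 0. Q is a 45-degree rotation (orthogonal).
c = s = math.sqrt(0.5)
Q = [[c, -s], [s, c]]
D = [[1.0, 0.0], [0.0, 3.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

# Same linear operator expressed in the rotated basis: B = Q D Q^T.
B = matmul(matmul(Q, D), transpose(Q))
# B = [[2.0, -1.0], [-1.0, 2.0]] (up to rounding): nonzero off-diagonal
# entries appeared, so the bandwidth grew from 0 to 1, while the trace
# (4) and determinant (3) are the same as for D.
```

So yes: bandwidth is a property of a particular matrix representation, not of the underlying operator, unlike trace and determinant.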
Why are academic explanations so hard to comprehend?
"It stores an initial sparse N×N matrix M in row form using three arrays, A, IA, JA"
I admit, I'm not academic. I believe I have a need for a sparse matrix. I understand the basic theory, but wonder how they deal with insertions. I search the web, and find nothing but incomprehensible academic articles. I decide to search Wikipedia, which I find almost always answers my questions, except now. I see this article, try to understand this explanation, and get nowhere... What is wrong with academics that they cannot communicate clearly? Problems with this statement:
- Does N*N mean rows and columns? If so, say "rows and columns"!
- Does reusing the letter N mean that rows and columns are the same size? If so, say "where the number of rows and columns are the same"!
- Why is A reused in A, IA, JA? If there is no reason, then DON'T DO IT! If there is then say so, IN PLAIN ENGLISH! It confused the hell out of me as to what the relationship was given that A is everywhere.
Why do academics make it so hard to understand their writing? It seems like a college social fraternity, with all its social-pressure glory. REBEL AGAINST THE PRESSURE TO WRITE INCOMPREHENSIBLE DOUBLETHINK! Please write so us dumb-dumbs can understand... I believe that is in the spirit of Wikipedia! —Preceding unsigned comment added by 69.107.136.52 (talk) 17:01, 12 April 2008 (UTC)
Thanks for the suggestions. NxN means it's square. mxn means it's m rows and n columns; this is standard matrix notation that could be referenced but is common knowledge for anyone attempting a sparse-matrix implementation. I cleaned it up a bit. Is that clearer? Cheers. —Ben FrantzDale (talk) 23:47, 14 April 2008 (UTC)
Yes, it's clearer, but the IA row values should be explained more than saying they are the "Index of the first nonzero element of row i in A, which is of length m + 1 (i.e., one entry per row, plus one)". [1 3 5 7] do not appear to be indexes (in the sense of an array index) of any nonzeros in rows 1 to 3 (what's the + 1 for?). The first row has a nonzero at position 1. The second row, however, has a nonzero at position 2, as does the third row. Clearly I am not a mathematician and don't understand what is being explained here, but I would like to understand because it has an impact on my practical understanding of EAV structure and database modelling. I suppose the IA and JA arrays have nothing to do with Cartesian coordinates and are more complex for purposes of compression, but at the risk of appearing stupid, I don't get it. —Preceding unsigned comment added by 82.196.56.36 (talk) 09:28, 3 October 2009 (UTC)
Please help! I have now tried for 15 minutes to understand the second array, and asked several colleagues, and no one seems to know what is meant with those indices :/ This is REALLY frustrating :/ Thanks in advance! — Preceding unsigned comment added by 134.60.83.75 (talk) 14:46, 29 May 2013 (UTC)
Please change the example so that the sparse format actually takes up less space than the naive format. (Make the matrix bigger)
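For later readers stuck on the same point, here is a small sketch (my own, in plain Python with zero-based indexing, not taken from the article) that builds the three arrays. The key fact: IA has one entry per row plus one, IA[i] says at which position in A row i begins, and the final entry equals the total number of nonzeros, so row i always occupies the slice A[IA[i]:IA[i+1]].

```python
def dense_to_csr(M):
    """Convert a dense matrix (list of lists) to CSR arrays A, IA, JA."""
    A, IA, JA = [], [0], []
    for row in M:
        for j, v in enumerate(row):
            if v != 0:
                A.append(v)       # nonzero values, scanned row by row
                JA.append(j)      # column index of each stored value
        IA.append(len(A))         # row i+1 starts where row i ended
    return A, IA, JA

M = [
    [5, 0, 0, 0],
    [0, 8, 0, 0],
    [0, 0, 3, 0],
    [0, 6, 0, 0],
]
A, IA, JA = dense_to_csr(M)
# A  = [5, 8, 3, 6]
# IA = [0, 1, 2, 3, 4]   -> row i is the slice A[IA[i]:IA[i+1]]
# JA = [0, 1, 2, 1]
```

So the IA values are not positions of nonzeros within each row; they are offsets into the A array marking where each row's run of stored values starts.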
Sparse Matrix with consecutive zero rows
What I don't see in the Yale Sparse Matrix format is how to encode matrices which have one or more consecutive rows with just zero values.
Lumpidu (talk) 14:42, 6 December 2009 (UTC)
Storing an entirely zero row is not usually useful in linear algebra, since then the matrix is singular (non-invertible). However, if need be, just store a zero entry as if it were a nonzero. That is, store a 0 in the value array. The diagonal is a reasonable choice. —Preceding unsigned comment added by 98.222.132.229 (talk) 18:27, 13 February 2010 (UTC)
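It may also be worth noting that the zero-based CSR layout described in the article can encode an all-zero row with no stored value at all: the row's entry in IA simply repeats the previous one. A small Python sketch (my own illustration, not from the article):

```python
# CSR arrays for M, written out by hand. Rows 1 and 3 are entirely zero.
M = [
    [7, 0],
    [0, 0],
    [0, 2],
    [0, 0],
]
A  = [7, 2]
JA = [0, 1]
IA = [0, 1, 1, 2, 2]   # IA[i+1] == IA[i] whenever row i is empty

# Rebuild the dense matrix to check that the encoding round-trips:
rebuilt = [[0] * 2 for _ in range(4)]
for i in range(4):
    for k in range(IA[i], IA[i + 1]):   # empty range for empty rows
        rebuilt[i][JA[k]] = A[k]
assert rebuilt == M
```

So consecutive zero rows cost one repeated IA entry each and nothing in A or JA.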
More on fill-in
The section on fill-in seems very brief. I'm not an expert on this, but I would guess it should refer to things like minimal degree permutation and other reordering algorithms. See e.g.
Also, a few comments on special cases could be very helpful. E.g. I read in Gockenbach's textbook (p. 228) that the Cholesky factor of a banded matrix retains the same (half-)bandwidth (though zeros within the band can still fill in). It also seems to be the case that block-diagonal matrices do not fill in at all, but I have not seen this stated/proven... Can anyone help with this and other special cases? Thanks --Ged.R (talk) 16:09, 22 July 2010 (UTC)
Linked Scheme
Suppose that an m x n matrix with t non-zero terms is represented. How small must t be so that the linked scheme uses less space than an m x n array uses? —Preceding unsigned comment added by 49.156.83.248 (talk) 15:42, 8 March 2011 (UTC)
Hi, I think the examples in the Yale format are wrong in the IA array. The instruction for the IA array reads "(array of index of first nonzero element of row i)", so the IA array must have the same number of elements as the number of rows of the original matrix. That means IA = [0 2 4] for the first example and IA = [0 2 4 8] for the second. I corrected that; please verify it. Thank you
Confusing statement in the end of Yale Format
Fellows, I found that the following statement is confusing:
(Note that in this format, the first value of IA will always be zero and the last will always be NNZ: these two cells may not be useful.)
There is no explicit reason for the last value, equal to NNZ, to exist in this array. If there is any reason for the last value NNZ that is linked to the sparse data structure, then it must be made explicit in the article. Hugs. — Preceding unsigned comment added by Antromindopofagist (talk • contribs) 23:14, 16 April 2013 (UTC)
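One concrete reason the final entry is needed: in zero-based CSR, the stored values of row i are the slice A[IA[i] : IA[i+1]], so the final entry (equal to NNZ) is what supplies the end bound for the last row. A small Python sketch (my own illustration, not from the article):

```python
# CSR arrays for a 4-row matrix with one nonzero per row.
A  = [5, 8, 3, 6]
IA = [0, 1, 2, 3, 4]   # m + 1 entries for m = 4 rows; IA[4] == len(A) == NNZ
JA = [0, 1, 2, 1]

last_row = 3
# Without the trailing IA[4] there would be no end index for this slice:
values_in_last_row = A[IA[last_row]:IA[last_row + 1]]
# values_in_last_row == [6]
```

The first entry being always zero is indeed redundant in principle, but keeping it lets every row use the same IA[i]:IA[i+1] formula with no special case.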
How to read back from Yale Format to matrix?
Yale Format doesn't contain matrix dimensions so I think there could be problems while reading back.
For example, with the matrix
how do we know from
A = [ 5 8 3 6 ]
IA = [ 0 0 2 3 4 ]
JA = [ 0 1 2 1 ]
that the last column exists? Couldn't this Yale Formatted matrix be read? — Preceding unsigned comment added by Petosorus (talk • contribs) 14:04, 7 May 2015 (UTC)
In all of the formats described by this article, I believe it is implied that the matrix dimensions must be stored separately in addition to what is mentioned. If the Yale format were to be implemented in C, IA would be simply a pointer with no knowledge of its length, and therefore it would not know even the number of rows. Therefore both dimensions must be stored in addition to the three arrays.
In other languages with a proper array data type that "knows" its length, the number of columns is still needed (as you have found out), but the number of rows would be unnecessary. Nonetheless, to get the number of rows you would have to subtract 1 from the length of IA every time, so you may as well just store the number of rows anyway. The memory overhead would be insignificant unless the matrix is extremely small. — Preceding unsigned comment added by Fylwind (talk • contribs) 11:25, 10 May 2015 (UTC)
As it was only implied and not explicit, I preferred to ask to be sure. Thank you for your well-developed answer. — Preceding unsigned comment added by Petosorus (talk • contribs) 07:18, 11 May 2015 (UTC)
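The point can be made concrete with a small decoding sketch (my own Python, zero-based; the function and its ncols parameter are illustrative names, not from the article). The row count is recoverable from IA, but the same three arrays decode to different matrices depending on the column count supplied, which is exactly why the dimensions must be stored alongside them.

```python
def csr_to_dense(A, IA, JA, ncols):
    """Decode CSR arrays into a dense matrix. ncols must be given
    separately, since max(JA) only bounds columns holding a nonzero."""
    nrows = len(IA) - 1                   # recoverable from IA itself
    M = [[0] * ncols for _ in range(nrows)]
    for i in range(nrows):
        for k in range(IA[i], IA[i + 1]):
            M[i][JA[k]] = A[k]
    return M

# The arrays from the example above (first row all zero):
A  = [5, 8, 3, 6]
IA = [0, 0, 2, 3, 4]
JA = [0, 1, 2, 1]

M3 = csr_to_dense(A, IA, JA, 3)   # a 4-by-3 matrix
M4 = csr_to_dense(A, IA, JA, 4)   # a 4-by-4 matrix with a zero last column
# Both decodings are internally consistent, so only a stored column
# count can tell them apart.
```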
Strange claim
The assertion that the CSC (compressed sparse column) format is good for matrix-vector products seems strange. To calculate a matrix-vector product, you take each row of the matrix and multiply the elements by the values in the vector. The compressed sparse column format seems inconvenient for this, in comparison to the compressed sparse row format. Lathamibird (talk) 06:25, 17 March 2017 (UTC)
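The difference can be seen directly in code. A sketch (my own, plain Python, zero-based; the CSC array names IA_col and RA are mine, not the article's): CSR computes each output entry as one contiguous gather over a row, while CSC must scatter each column's contribution into y. Both compute the same product; CSC is the natural layout for the transpose product or for column operations.

```python
def csr_matvec(A, IA, JA, x):
    """y = M @ x with M in CSR: each y[i] is a contiguous
    gather over row i's stored values."""
    y = [0.0] * (len(IA) - 1)
    for i in range(len(y)):
        for k in range(IA[i], IA[i + 1]):
            y[i] += A[k] * x[JA[k]]
    return y

def csc_matvec(A, IA_col, RA, x, nrows):
    """Same product with M in CSC (column pointers IA_col, row
    indices RA): each column j scatters A[k] * x[j] into y."""
    y = [0.0] * nrows
    for j in range(len(IA_col) - 1):
        for k in range(IA_col[j], IA_col[j + 1]):
            y[RA[k]] += A[k] * x[j]
    return y

# M = [[1, 2], [0, 3]] encoded both ways:
x = [1.0, 2.0]
y_row = csr_matvec([1, 2, 3], [0, 2, 3], [0, 1, 1], x)
y_col = csc_matvec([1, 2, 3], [0, 1, 3], [0, 0, 1], x, 2)
# both give [5.0, 6.0]
```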
Examples should be given with indexes starting at 1 instead of zero.
It would be much better to write the article using indexes starting at 1, not at 0. Most people understand it better in that way, as you say my first finger, my second finger... and not my finger number zero.
Only some programming languages start indexes at zero (such as C++); many others start at 1 (such as Matlab, R, SQL, Julia and most scientific languages). And for mathematicians, physicists and engineers the natural way to do it is starting from 1. — Preceding unsigned comment added by 84.123.9.117 (talk) 16:00, 30 December 2019 (UTC)