Matrix multiplication precision

Matrix multiplication precision

Nair, Murlidharan T
Hi!!

I am trying to multiply 5 matrices and then use the inverse of the resulting matrix for further computation. I read about precision problems in the archives, and the suggestion was to use as.numeric while computing the products. I am still having problems with the results. Here is how I am using it:

#Mn.mat<-(T.mat %*% Rz.mat %*% Q.mat %*% Rz.mat %*% T.mat)   # I was doing this in one step; I have broken it down into the steps below.

Mn1.mat<-matrix(as.numeric(T.mat %*% Rz.mat),nrow=4)
Mn2.mat<-matrix(as.numeric(Mn1.mat %*% Q.mat),nrow=4)
Mn3.mat<-matrix(as.numeric(Mn2.mat %*% Rz.mat),nrow=4)
Mn.mat<-matrix(as.numeric( Mn3.mat %*% T.mat),nrow=4)


For getting the inverse I am doing the following

Mn.mat.inv<-qr.solve(Mn.mat)


Will I run into precision problems when I do the above?  

Thanks ../Murli


Re: Matrix multiplication precision

Douglas Bates-2
On Wed, Jul 15, 2009 at 3:42 AM, Nair, Murlidharan T <[hidden email]> wrote:
> Hi!!

> I am trying to multiply 5 matrices and then use the inverse of the resulting matrix for further computation. I read about precision problems in the archives, and the suggestion was to use as.numeric while computing the products.

Hmm.  I'm not sure what the origins of that advice might be but I
don't think it would apply in general.

> I am still having problems with the results. Here is how I am using it

> #Mn.mat<-(T.mat %*% Rz.mat %*% Q.mat %*% Rz.mat %*% T.mat)   # I was doing this in one step; I have broken it down into the steps below.

> Mn1.mat<-matrix(as.numeric(T.mat %*% Rz.mat),nrow=4)
> Mn2.mat<-matrix(as.numeric(Mn1.mat %*% Q.mat),nrow=4)
> Mn3.mat<-matrix(as.numeric(Mn2.mat %*% Rz.mat),nrow=4)
> Mn.mat<-matrix(as.numeric( Mn3.mat %*% T.mat),nrow=4)

I doubt very much that the as.numeric would have any effect on
precision in these cases.  You are simply taking a numeric result in
its matrix form, converting it to a vector, then converting the vector
back to a matrix.

> mm <- matrix(round(rnorm(8), 3), nrow = 4)
> mm
       [,1]  [,2]
[1,] -0.110 2.195
[2,]  0.616 0.353
[3,]  0.589 0.970
[4,]  1.028 0.857
> as.numeric(mm)
[1] -0.110  0.616  0.589  1.028  2.195  0.353  0.970  0.857
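
In fact the round trip gives back exactly the same matrix; a quick check (the result below is deterministic regardless of the random values above):

> identical(mm, matrix(as.numeric(mm), nrow = 4))
[1] TRUE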

The key in situations like this is to analyze the structure of the
computation and decide whether you can group the intermediate results.
You have written your product as

T %*% Rz %*% Q %*% Rz %*% T

What are the characteristics of T, Rz, and Q?  Are they square and
symmetric, square and triangular, diagonal?  Is Q orthogonal (the
factors in an orthogonal-triangular decomposition are often written
as Q and R)?  Did you happen to skip a few transposes in that formula?
Formulas like that often generate symmetric matrices.

And the big lesson, of course, is the first rule of numerical linear
algebra: "Just because a formula is written in terms of the inverse
of a matrix doesn't mean that is a good way to calculate the result;
in fact, it is almost inevitably the worst way of calculating the
result."  You only calculate the inverse of a matrix after you have
investigated all other possible avenues for calculating the result and
found them to be fruitless.
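
A quick illustration of the idea in R (a toy example, not your matrices): pass the right-hand side to solve() so the system is factored and back-solved directly, instead of forming an explicit inverse and multiplying.

set.seed(1)
A <- crossprod(matrix(rnorm(16), 4, 4)) + diag(4)  # a well-conditioned 4 x 4 test matrix
b <- rnorm(4)
x.inv   <- solve(A) %*% b        # explicit inverse, then multiply
x.solve <- solve(A, b)           # factor and back-solve in one step (preferred)
all.equal(drop(x.inv), x.solve)  # they agree here, but the second form is the numerically safer habit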

You haven't told us what you plan to do with the inverse and that is
an important consideration.  If, for example, it represents a
variance-covariance matrix and you want to express the result as
standard deviations and correlations, you would not compute the
variance-covariance matrix but stop instead at the Cholesky factor of
the inverse.

You may want to check the facilities of the Matrix package for
expressing your calculation.  It is a recommended package that is
included with binary versions of R from 2.9.0 and has many ways of
expressing and exploiting structure in matrices.
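
For instance, a small sketch using the Matrix package (Diagonal() and Matrix() are constructors from that package):

library(Matrix)
D <- Diagonal(x = c(1, 2, 3, 4))  # a diagonal matrix stored as such
M <- Matrix(rnorm(16), 4, 4)      # a dense 4 x 4 Matrix object
D %*% M       # the product exploits the diagonal structure
solve(D, M)   # back-solves against D without ever forming an explicit inverse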

> For getting the inverse I am doing the following
>
> Mn.mat.inv<-qr.solve(Mn.mat)
>
>
> Will I run into precision problems when I do the above?
>
> Thanks ../Murli

Re: Matrix multiplication precision

Steve Lianoglou-6
Hi Douglas,

> And the big lesson, of course, is the first rule of numerical linear
> algebra: "Just because a formula is written in terms of the inverse
> of a matrix doesn't mean that is a good way to calculate the result;
> in fact, it is almost inevitably the worst way of calculating the
> result."  You only calculate the inverse of a matrix after you have
> investigated all other possible avenues for calculating the result and
> found them to be fruitless.

As a relative noob to the nitty gritty details of numerical linear  
algebra, could you elaborate on this a bit (even if it's just a link  
to a book/reference that explains these issues in more detail)?

Where/what else should we be looking to do that would be better? Or  
are there really no general rules, and the answer just depends on the  
situation?

Thanks,

-steve

--
Steve Lianoglou
Graduate Student: Physiology, Biophysics and Systems Biology
Weill Medical College of Cornell University

Contact Info: http://cbio.mskcc.org/~lianos/contact


Re: Matrix multiplication precision

RKoenker
A good discussion of this is provided by Gill, Murray and Wright,
Numerical Linear Algebra and Optimization, Section 4.7.2.


url:    www.econ.uiuc.edu/~roger    Roger Koenker
email:  [hidden email]              Department of Economics
vox:    217-333-4558                University of Illinois
fax:    217-244-6678                Urbana, IL 61801



On Jul 15, 2009, at 9:27 AM, Steve Lianoglou wrote:

> Hi Douglas,
>
>> And the big lesson, of course, is the first rule of numerical linear
>> algebra: "Just because a formula is written in terms of the inverse
>> of a matrix doesn't mean that is a good way to calculate the result;
>> in fact, it is almost inevitably the worst way of calculating the
>> result."  You only calculate the inverse of a matrix after you have
>> investigated all other possible avenues for calculating the result
>> and found them to be fruitless.
>
> As a relative noob to the nitty gritty details of numerical linear  
> algebra, could you elaborate on this a bit (even if it's just a link  
> to a book/reference that explains these issues in more detail)?
>
> Where/what else should we be looking to do that would be better? Or  
> are there really no general rules, and the answer just depends on  
> the situation?
>
> Thanks,
>
> -steve
>
> --
> Steve Lianoglou
> Graduate Student: Physiology, Biophysics and Systems Biology
> Weill Medical College of Cornell University
>
> Contact Info: http://cbio.mskcc.org/~lianos/contact

Re: Matrix multiplication precision

Nair, Murlidharan T
In reply to this post by Douglas Bates-2
Thanks for the explanation. I am not too deep into linear algebra.

T.mat, Rz.mat and Q.mat are 4 x 4 square matrices. To be clearer, here is what they look like:

h <- 3.39/2
T.mat <- matrix(c(1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,-h,1), nrow=4)  # translation of the system
alpha <- 36*pi/180
cos.alpha <- cos(alpha)
minus.sin.alpha <- -sin(alpha)
sin.alpha <- sin(alpha)
Rz.mat <- matrix(c(cos.alpha,minus.sin.alpha,0,0, sin.alpha,cos.alpha,0,0, 0,0,1,0, 0,0,0,1), nrow=4)  # rotation about z
Rx.mat <- matrix(c(1,0,0,0, 0,cos.alpha,minus.sin.alpha,0, 0,sin.alpha,cos.alpha,0, 0,0,0,1), nrow=4)  # rotation about x

Q.mat is a product of Rz.mat and Rx.mat with different angles.

The resulting matrix Mn.mat is used to compute the next coordinates of a helix.

center.Of.bases <- matrix(c(0.0, 0.0, 0.0, 1), nrow=4)

Mn.mat <- qr.solve(compute.Mn.mat(T.mat, Rz.mat, Q.mat))  # compute.Mn.mat builds the matrix product for specific angles; its inverse is used to get the new coordinates

New.center.Of.bases <- as.numeric(Mn.mat %*% center.Of.bases)

Do these lines of code tell you anything about the complexity of the problem?  Let me know if I should be doing anything differently; I was really glad to hear from you since you are an expert in the area.
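
Would something along these lines be the better route (just a sketch following your advice, untested with my real angles, reusing my compute.Mn.mat function but skipping the explicit inverse)?

Mn.raw <- compute.Mn.mat(T.mat, Rz.mat, Q.mat)                      # the uninverted product
New.center.Of.bases <- as.numeric(solve(Mn.raw, center.Of.bases))   # solve directly for the new coordinates

(And if I understand correctly, since Rz.mat and Rx.mat are pure rotations they are orthogonal, so their individual inverses are just their transposes, e.g. t(Rz.mat).)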

Cheers../Murli
 




