Dear list members,
I want to perform an MM-regression. This seems an easy task using the function lmrob(); however, this function provides me with NA coefficients. My data generating process is as follows:

rho <- 0.15  # low interdependency
Sigma <- matrix(rho, d, d); diag(Sigma) <- 1
x.clean <- mvrnorm(n, rep(0, d), Sigma)
beta <- c(1.0, 2.0, 3.0, 4.0)
error <- rnorm(n = n, mean = 0, sd = 1)
y <- as.data.frame(beta[1]*rep(1, n) + beta[2]*x.clean[,1] +
                   beta[3]*x.clean[,2] + beta[4]*x.clean[,3] + error)
xy.clean <- cbind(x.clean, y)
colnames(xy.clean) <- c("x1", "x2", "x3", "y")

Then, I pass the following formula to lmrob:

f <- y ~ x1 + x2 + x3

Finally, I run lmrob:

lmrob(f, data = data, cov = ".vcov.w")

and this results in NA coefficients. It would be great if anyone could help me out. Thanks in advance.

Regards,
Christien

______________________________________________
[hidden email] mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
> On Mar 3, 2018, at 3:04 PM, Christien Kerbert <[hidden email]> wrote:
>
> I want to perform an MM-regression. This seems an easy task using the
> function lmrob(), however, this function provides me with NA coefficients.
> My data generating process is as follows:
>
> rho <- 0.15 # low interdependency
> Sigma <- matrix(rho, d, d); diag(Sigma) <- 1
> x.clean <- mvrnorm(n, rep(0,d), Sigma)

Which package are you using for mvrnorm?

> Finally, I run lmrob: lmrob(f, data = data, cov = ".vcov.w")
> and this results in NA coefficients.

It would also be more courteous to specify the package where you are getting lmrob.

> [[alternative HTML version deleted]]

This is a plain text mailing list, although it doesn't seem to have created problems this time.

David Winsemius
Alameda, CA, USA

'Any technology distinguishable from magic is insufficiently advanced.'
-Gehm's Corollary to Clarke's Third Law
Thanks for your reply.
I use mvrnorm from the *MASS* package and lmrob from the *robustbase* package.

To further explain my data generating process, the idea is as follows. The explanatory variables are generated by a multivariate normal distribution whose covariance matrix is defined by Sigma in my code, with ones on the diagonal and rho = 0.15 off the diagonal. Then y is created as y = 1 + 2*x1 + 3*x2 + 4*x3 + error, where the error term is standard normally distributed.

Hope this helps.

Regards,
Christien

In this section, we provide a simulation study to illustrate the performance of four estimators for the SUR model: the GLS, S, MM, and MM ridge estimators. This simulation process is executed to generate data for the following equation. In this simulation, we set the initial value beta = [1, 2, 3] for k = 3. The explanatory variables are generated from a multivariate normal distribution MVN_{k=3}(0, Sigma_x), where diag(Sigma_x) = 1 and off-diag(Sigma_x) = rho_x, with rho_x = 0.15 for low interdependency and rho_x = 0.70 for high interdependency; rho_x is the correlation between the explanatory variables. We chose two sample sizes: 25 for the small sample and 100 for the large sample. The equation-specific errors mu_i, i = 1, 2, ..., n, are generated from MVN_{k=3}(0, Sigma_e), where Sigma_e is the variance-covariance matrix of the errors, with diag(Sigma_e) = 1 and off-diag(Sigma_e) = rho_e = 0.15. To investigate the robustness of the estimators against outliers, we chose different percentages of outliers (20%, 45%). We choose the shrinkage parameter in (12) by minimizing the new robust cross-validation (CVMM) criterion, which avoided
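The pasted simulation design could be sketched in R along these lines. This is an illustrative reconstruction under assumptions (a single-equation version of the SUR system, an arbitrary seed, n = 100), not the paper's actual code:

```r
## Illustrative reconstruction of the described design (assumptions noted above)
library(MASS)  # provides mvrnorm()

set.seed(1)
k <- 3; n <- 100          # k regressors; the paper uses n = 25 or 100
rho_x <- 0.15             # 0.15 = low, 0.70 = high interdependency
Sigma_x <- matrix(rho_x, k, k); diag(Sigma_x) <- 1
X <- mvrnorm(n, mu = rep(0, k), Sigma = Sigma_x)

rho_e <- 0.15
Sigma_e <- matrix(rho_e, k, k); diag(Sigma_e) <- 1
E <- mvrnorm(n, mu = rep(0, k), Sigma = Sigma_e)  # errors for k equations

beta <- c(1, 2, 3)
y1 <- X %*% beta + E[, 1]  # first equation of the system, as an example
```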
What is 'd'? What is 'n'?
On Sun, Mar 4, 2018 at 12:14 PM, Christien Kerbert <[hidden email]> wrote:
> [...]
d is the number of observed variables (d = 3 in this example). n is the number of observations.

2018-03-04 11:30 GMT+01:00 Eric Berger <[hidden email]>:
> What is 'd'? What is 'n'?
> [...]
Hard to help you if you don't provide a reproducible example.
On Sun, Mar 4, 2018 at 1:05 PM, Christien Kerbert <[hidden email]> wrote:
> d is the number of observed variables (d = 3 in this example). n is the
> number of observations.
> [...]
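For what it's worth, a commented, minimal, self-contained version of the code in this thread might look like the following. Note that n and d must be defined before use, and that the original call passed data = data even though the assembled data frame was named xy.clean; both are plausible sources of the problem (this is a sketch, not a confirmed diagnosis):

```r
library(MASS)       # mvrnorm
library(robustbase) # lmrob

set.seed(42)
n <- 100  # number of observations (was undefined in the original post)
d <- 3    # number of explanatory variables (was undefined in the original post)

rho <- 0.15  # low interdependency
Sigma <- matrix(rho, d, d); diag(Sigma) <- 1
x.clean <- mvrnorm(n, rep(0, d), Sigma)

beta <- c(1.0, 2.0, 3.0, 4.0)
error <- rnorm(n, mean = 0, sd = 1)
y <- beta[1] + beta[2]*x.clean[, 1] + beta[3]*x.clean[, 2] +
     beta[4]*x.clean[, 3] + error   # plain numeric vector, not a data frame

xy.clean <- data.frame(x.clean, y)
colnames(xy.clean) <- c("x1", "x2", "x3", "y")

fit <- lmrob(y ~ x1 + x2 + x3, data = xy.clean, cov = ".vcov.w")
coef(fit)  # estimates should be close to c(1, 2, 3, 4), with no NAs
```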