# statistical remedy needed

7 messages

## statistical remedy needed

 Hi,

I am trying to test for differences in means between two return (time) series. However, the Ljung-Box test is significant due to the long-memory structure of the series, i.e. autocorrelation is present. I tried differencing twice, which is standard (I didn't want to over-difference), and as expected I still have serial correlation.

Is there any function in R, or a technique some of you can suggest, to filter out the autocorrelation so that I can apply my test?

```r
t.test(ts1, ts2, alternative = 'greater',
       paired = TRUE, var.equal = FALSE, conf.level = 0.95)
```

I would really appreciate it if any of you quant minds out there who have done work on this could weigh in. Thank you in advance.

Best,
TP
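For reference, the Ljung-Box check described above can be run in R with `Box.test`. A minimal sketch on simulated AR(1) series standing in for the poster's actual `ts1` and `ts2`:

```r
# Minimal sketch (not the poster's data): two simulated AR(1) return series
# standing in for ts1 and ts2, plus the Ljung-Box check described above.
set.seed(42)
ts1 <- as.numeric(arima.sim(model = list(ar = 0.6), n = 250))
ts2 <- as.numeric(arima.sim(model = list(ar = 0.6), n = 250))

# A small p-value indicates serial correlation, which violates the
# independence assumption behind t.test().
Box.test(ts1, lag = 10, type = "Ljung-Box")
```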

## Re: statistical remedy needed

 Hi: the paired t-test is used when you want to reduce variance (by differencing the same observational unit) or to reduce confounding. In this case, I think it is more appropriate to use a regular two-sample t-test on the two populations; that will also get rid of the autocorrelation problem. Of course, if the variances of the two series differ greatly, then you have a new problem (and should use the Welch t-test).

Mark

On Fri, Jul 8, 2011 at 6:23 AM, tonyp <[hidden email]> wrote the message quoted above.

_______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
-- Subscriber-posting only. If you want to post, subscribe first.
-- Also note that this is not the r-help list where general R questions should go.
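Mark's suggestion of an unpaired two-sample test with the Welch correction can be sketched as follows; the series here are simulated placeholders, not the poster's returns:

```r
# Sketch of the unpaired (Welch) two-sample t-test suggested above.
# The series are simulated placeholders, not the poster's data.
set.seed(1)
ts1 <- rnorm(250, mean = 0.001, sd = 0.02)
ts2 <- rnorm(250, mean = 0.000, sd = 0.03)

# var.equal = FALSE requests the Welch correction for unequal variances,
# appropriate when the two series' variances differ greatly.
t.test(ts1, ts2, alternative = "greater", var.equal = FALSE, conf.level = 0.95)
```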

## Re: statistical remedy needed

 In reply to this post by tonyp

One easy way may be to remove the autocorrelation from both series first, then work on the residual series. I am afraid differencing may not be a good way to handle autocorrelation; fitting an AR(p) model could be sufficient if autocorrelation is your only concern. However, if you see a unit root, then differencing may be helpful. If it is appropriate, you may attach the time series.

Andy

From: tonyp <[hidden email]>
Sent: Friday, July 8, 2011 6:23 AM
Subject: [R-SIG-Finance] statistical remedy needed
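A minimal sketch of the AR(p) prewhitening idea above, using `ar()` (which selects the order by AIC) on a simulated series standing in for the real data:

```r
# Sketch of the AR(p) prewhitening idea: fit an AR model and re-test the
# residuals. The series is simulated, not the poster's data.
set.seed(7)
x <- as.numeric(arima.sim(model = list(ar = 0.5), n = 250))

fit <- ar(x)                 # Yule-Walker fit, order selected by AIC
res_x <- na.omit(fit$resid)  # the first 'order' residuals are NA

# The residual series should no longer show significant autocorrelation.
Box.test(res_x, lag = 10, type = "Ljung-Box")
```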

## Re: statistical remedy needed

 In reply to this post by mark leeds

Thanks for the reply, Mark and Andy. The reason I am using the paired t-test is that I am comparing the top-10% performers of a benchmark against the benchmark itself, so the confounder is the benchmark per se. Correct me if I am wrong! If you still think I should use the simple two-sample t-test in this case, note that I am using the Welch version because the variance-homogeneity test is significant. Again, thanks for the quick feedback, guys.

TP

## Re: statistical remedy needed

 Hi TP: I'm not sure of the best approach. To me, if the returns to the benchmark are autocorrelated (which causes any portfolio tracking it to have correlated returns as well), then subtracting them each period should remove the autocorrelation. I would create the paired difference series and run the Ljung-Box test again, because I don't think you should be getting correlated returns on the differences. Good luck.

On Fri, Jul 8, 2011 at 8:38 AM, tonyp <[hidden email]> wrote the message quoted above.
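The difference-then-retest idea above can be sketched as follows, with simulated series sharing a common autocorrelated component as a stand-in for the portfolio-vs-benchmark setup:

```r
# Sketch of the difference-then-retest idea: two simulated series sharing a
# common autocorrelated component (hypothetical stand-ins, not real data).
set.seed(3)
common <- as.numeric(arima.sim(model = list(ar = 0.6), n = 250))
ts1 <- common + rnorm(250, sd = 0.01)  # hypothetical top-decile returns
ts2 <- common + rnorm(250, sd = 0.01)  # hypothetical benchmark returns

d <- ts1 - ts2  # the common autocorrelated part cancels in the differences

# If the differences pass the Ljung-Box test, the paired t-test (equivalently,
# a one-sample t-test on d) is on firmer ground.
Box.test(d, lag = 10, type = "Ljung-Box")
t.test(d, alternative = "greater", conf.level = 0.95)
```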