statistical remedy needed


statistical remedy needed

tonyp
Hi,

I am trying to test for differences in means between two return (time) series. However, the Ljung-Box test is significant because of the long-memory structure of the series, i.e. autocorrelation is present. I tried differencing twice, which is standard (I didn't want to over-difference), and, as expected, I still have serial correlation.

Is there a function in R, or a technique some of you can suggest, to filter out the autocorrelation so that I can apply my test?

t.test(ts1, ts2, alternative = 'greater',
       paired = TRUE, var.equal = FALSE, conf.level = 0.95)

I would greatly appreciate it if any of you quant minds out there who have worked on this could share your experience. Thank you in advance.

Best,
TP
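
A minimal sketch of the Ljung-Box diagnostic mentioned above, assuming ts1 and ts2 are the two numeric return vectors from the t.test call (the lag of 10 is just an illustrative choice):

## Ljung-Box test for serial correlation in each series (stats::Box.test);
## a small p-value indicates autocorrelation up to the chosen lag.
Box.test(ts1, lag = 10, type = "Ljung-Box")
Box.test(ts2, lag = 10, type = "Ljung-Box")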

Re: statistical remedy needed

mark leeds
hi: the paired t-test is used when you want to reduce variance (by
differencing the same observational unit) or reduce the amount of
confounding. In this case, I think it's more appropriate to use a
regular t-test on two populations; that will also get rid of the
autocorrelation problem. Of course, if the variances of the two series
differ greatly, then you have a new problem (and you should use the
Welch t-test).

mark
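
A minimal sketch of the unpaired comparison suggested here, again assuming ts1 and ts2 are the two return vectors; with paired = FALSE, the var.equal = FALSE argument makes t.test perform Welch's test:

## Ordinary two-sample comparison of means; var.equal = FALSE gives
## Welch's t-test, which allows the two variances to differ.
t.test(ts1, ts2, alternative = 'greater',
       paired = FALSE, var.equal = FALSE, conf.level = 0.95)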






On Fri, Jul 8, 2011 at 6:23 AM, tonyp <[hidden email]> wrote:

> Hi,
>
> I am trying to test for differences in means between two return (time)
> series. However, the Ljung-Box test is significant due to the long-memory
> structure of the series ie. autocorrelation is present. I tried to
> difference twice which is standard (didn't want to overdifference) and still
> I have, as expected, series correlation.
>
> Is there any function in R or a technique some of you guys can suggest me to
> filter the autocorrelation in order to apply my test?
>
> t.test(ts1, ts2 ,alternative='greater',
> paired=TRUE,var.equal=FALSE, conf.level=0.95)
>
> I would totally appreciate if any of you quant minds outthere has done work
> on that. Thank you in advance.
>
> Best,
> TP
>
> --
> View this message in context: http://r.789695.n4.nabble.com/statistical-remedy-needed-tp3653641p3653641.html
> Sent from the Rmetrics mailing list archive at Nabble.com.
>
> _______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R questions should go.
>

_______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
-- Subscriber-posting only. If you want to post, subscribe first.
-- Also note that this is not the r-help list where general R questions should go.
Reply | Threaded
Open this post in threaded view
|

Re: statistical remedy needed

Andy Zhu
In reply to this post by tonyp
One easy way may be to remove the autocorrelation from both series first, then work on the residual series. I am afraid differencing may not be a good way to deal with autocorrelation.

An AR(p) model could be sufficient if autocorrelation is your only concern. However, if you see a unit root, then differencing may be helpful.

If it is appropriate, you may attach the time series.

Andy
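
A rough sketch of the AR(p) prewhitening idea above, assuming ts1 is one of the return series; ar() selects the order by AIC, and the unit-root check uses adf.test from the tseries package (assumed to be installed):

## Fit an AR(p) model (order chosen by AIC) and keep the residuals,
## which should be approximately free of autocorrelation.
fit1 <- ar(ts1, aic = TRUE, method = "mle")
res1 <- na.omit(fit1$resid)
Box.test(res1, lag = 10, type = "Ljung-Box")

## If a unit root is suspected, an ADF test helps decide whether
## differencing (rather than AR filtering) is appropriate.
library(tseries)
adf.test(ts1)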




Re: statistical remedy needed

tonyp
In reply to this post by mark leeds
Thanks for the reply, Mark and Andy. The reason I am using the paired t-test is that I am comparing the top 10% performers of a benchmark against the benchmark itself, so the confounder is the benchmark per se. Correct me if I am wrong! If you still think I should use the plain two-sample t-test in this case, then, as you say, I would use the Welch version, since the variance-homogeneity test is highly significant. Again, thanks for the quick feedback, guys.

TP

Re: statistical remedy needed

mark leeds
hi TP: I'm not sure of the best approach. To me, it would seem that if
the returns to the benchmark are autocorrelated (which causes any
portfolio tracking it to have returns that are correlated), then
subtracting them each period should remove the autocorrelation. I would
create the paired difference series and run the Ljung-Box test again,
because I don't think you should be getting correlated returns on the
differences. Good luck.
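
A small sketch of this check, assuming ts1 holds the top-decile portfolio returns and ts2 the benchmark returns, aligned period by period:

## Paired difference series; if the benchmark-driven autocorrelation
## cancels out, the Ljung-Box test on the differences should no longer
## be significant.
d <- ts1 - ts2
Box.test(d, lag = 10, type = "Ljung-Box")

## If so, the paired t-test is simply a one-sample test on the mean of d.
t.test(d, alternative = 'greater', conf.level = 0.95)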






Re: statistical remedy needed

Arun.stat
In reply to this post by tonyp
Hi tonyp, I have been following your thread from the start; however, the
answers given by my friends there did not entirely convince me (probably
I did not understand them properly!). For example, Andy said "One easy
way may be to remove the autocorrelation from both series first, then
work on the residual series." In that case the residual series will
trivially have zero mean either way, so we probably could not construct
any meaningful test (please correct me if I am wrong).

My suggestion, however, is to look at this problem from a bivariate VAR
perspective. Assuming your observations are stationary but
autocorrelated, you can treat them as a realization of some stable VAR
process of the following form:

(y[t] - mu) = A1 (y[t-1] - mu) + A2 (y[t-2] - mu) + ... + error[t]

Here y[t] and mu are both vectors of length 2. You can then construct a
valid test of mu1 = mu2 against suitable alternatives.

Seeking other comments on this approach.

Thanks and regards,
_____________________________________________________

Arun Kumar Saha, FRM
QUANTITATIVE RISK AND HEDGE CONSULTING SPECIALIST
Visit me at: http://in.linkedin.com/in/ArunFRM
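
A rough sketch of this VAR route using the vars package (assumed installed), with ts1 and ts2 as the two stationary return series; it recovers the implied unconditional means mu from the fitted intercepts and lag matrices, while a formal test of mu1 = mu2 would additionally need a delta-method or bootstrap standard error, which is omitted here:

library(vars)

## Fit a stable VAR(p) with a constant; p chosen by AIC.
Y   <- cbind(ts1, ts2)
fit <- VAR(Y, lag.max = 5, ic = "AIC", type = "const")

## Implied unconditional mean: mu = (I - A1 - ... - Ap)^{-1} c,
## where c is the intercept vector and Ai are the lag coefficient matrices.
B    <- Bcoef(fit)      # K x (K*p + 1): lag coefficients, then the constant
K    <- fit$K
p    <- fit$p
cvec <- B[, ncol(B)]
Asum <- matrix(0, K, K)
for (i in seq_len(p)) {
  Asum <- Asum + B[, ((i - 1) * K + 1):(i * K)]
}
mu <- solve(diag(K) - Asum, cvec)
mu                      # compare mu[1] (ts1) with mu[2] (ts2)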


Re: statistical remedy needed

tonyp
Hi Arun,

Thank you for the reply. You are right, statistics can be a very tricky business, especially when applied in finance. As you suggest, I worked the problem through the VAR analytical framework. Another possibility I tried was to take the differences and use GLS with serially correlated residuals, testing whether the intercept is zero or not.

However, thanks for the reply.

Best wishes,
Tony
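
A rough sketch of that GLS variant, assuming ts1 (top performers) and ts2 (benchmark) are aligned return vectors and that the nlme package is available; it fits an intercept-only model to the differences with AR(1) errors, and the t-test on the intercept is the test of zero mean difference:

library(nlme)

## Intercept-only GLS on the paired differences with AR(1) errors;
## the intercept estimate is the mean difference, tested against zero
## (two-sided by default; halve the p-value for the one-sided
## 'greater' alternative when the estimate is positive).
d   <- ts1 - ts2
dat <- data.frame(d = d, t = seq_along(d))
fit <- gls(d ~ 1, data = dat, correlation = corAR1(form = ~ t))
summary(fit)$tTable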