

Thanks to the help from this list, I'm on my way learning how to handle R.
There are two things that bug me:
1. Using yahooImport I can easily get financial data. But how do I
handle the data from there on?
For example, I wanted to calculate the beta of DaimlerChrysler vs. the
DAX. So I imported both datasets. How do I have to manipulate the
objects so I can run a linear regression of the returns of DCX on the DAX?
2. Using different data sources, one usually gets time series of
differing length. What's the most economical way to make sure that one
feeds the appropriate pieces to lm, plot and so on?
Thanks in advance,
Owe

Owe Jessen
Diplom-Volkswirt
Hanssenstraße 17
24106 Kiel
[hidden email]
http://www.econinfo.de
Festnetz 043189096
Mobil 016097780537
FAX 0941599289096
_______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/rsigfinance


Owe Jessen wrote:
>[...]
# DCX.DE
# ^GDAXI
require(fMultivar)
myFinCenter = "GMT"
# Download > Data Slot > As Time Series > Close
# GET Closing Prices: 2005-06-01 until now - see help(yahooImport)
query = "s=^GDAXI&a=5&b=1&c=2005&d=0&q=31&f=2009&z=^GDAXI&x=.csv"
DAX.CLOSE = as.timeSeries(yahooImport(query)@data)[,"Close"]
query = "s=DCX.DE&a=5&b=1&c=2005&d=0&q=31&f=2009&z=DCX.DE&x=.csv"
DCX.CLOSE = as.timeSeries(yahooImport(query)@data)[,"Close"]
# Align to daily dates, so both time series will afterwards have the
# same time stamps; consult help(timeSeries)
DAX.ALIGNED = alignDailySeries(DAX.CLOSE, method = "interp")
DCX.ALIGNED = alignDailySeries(DCX.CLOSE, method = "interp")
# Cut Common Piece from each:  help(timeDate)
START = modify(c(start(DAX.ALIGNED), start(DCX.ALIGNED)), "sort")[2]
END = modify(c(end(DAX.ALIGNED), end(DCX.ALIGNED)), "sort")[1]
DAX.CUTTED = cutSeries(DAX.ALIGNED, from = START, to = END)
DCX.CUTTED = cutSeries(DCX.ALIGNED, from = START, to = END)
# Merge the two Series:
DCXDAX = mergeSeries(DCX.CUTTED, DAX.CUTTED@Data, units = c("DCX", "DAX"))
# Compute Return Series:
DCXDAX.RET = as.data.frame(returnSeries(DCXDAX))
# Compute Beta:  linear Modelling:
c(Beta = lm(formula = DCX ~ DAX, data = DCXDAX.RET)$coef[2])
Beta.DAX
1.320997
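As a cross-check on the lm() slope, beta can also be computed as cov/var on the return columns. A minimal sketch on simulated returns (the variable names and the true beta of 1.3 are illustrative, not the downloaded data):

```r
# Beta as cov(r_y, r_x) / var(r_x) -- this equals the OLS slope from lm()
set.seed(1)
r_dax <- rnorm(250, sd = 0.01)                # simulated index returns
r_dcx <- 1.3 * r_dax + rnorm(250, sd = 0.005) # stock with true beta 1.3
beta_cov <- cov(r_dcx, r_dax) / var(r_dax)
beta_lm  <- unname(coef(lm(r_dcx ~ r_dax))[2])
all.equal(beta_cov, beta_lm)  # both estimators agree
```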
Is that what you wanted? Read my paper about timeDate and timeSeries
objects under Rmetrics!
Regards Diethelm


> [...]
This looks like hard work! It's easier using R + zoo:
http://www.mayin.org/ajayshah/KB/R/html/o6.html
http://www.mayin.org/ajayshah/KB/R/html/o5.html
Ajay Shah http://www.mayin.org/ajayshah
[hidden email] http://ajayshahblog.blogspot.com


Ajay Narottam Shah wrote:
>>[...]
>
>This looks like hard work! It's easier using R + zoo:
>
>
Come On!!!!
If it is really so easy in R+zoo, be fair enough and show us how we
can do it in 10 lines or less!
I'm waiting for it ...
Thanks Diethelm
> http://www.mayin.org/ajayshah/KB/R/html/o6.html
> http://www.mayin.org/ajayshah/KB/R/html/o5.html
>
>


Why you should definitely use Rmetrics to solve Owe Jessen's problem:
# Be sure you have the latest Version of Rmetrics ...65 from the CRAN Server
# Download > Data Slot > As Time Series > Close
# GET Closing Prices: 2005-06-01 until now
require(fCalendar)
myFinCenter = "GMT"
query = "s=^GDAXI&a=5&b=1&c=2005&d=0&q=31&f=2009&z=^GDAXI&x=.csv"
DAX.CLOSE = as.timeSeries(yahooImport(query)@data)[,"Close"]
query = "s=DCX.DE&a=5&b=1&c=2005&d=0&q=31&f=2009&z=DCX.DE&x=.csv"
DCX.CLOSE = as.timeSeries(yahooImport(query)@data)[,"Close"]
# Assume we have the Data, then
DAX = alignDailySeries(DAX.CLOSE)
DCX = alignDailySeries(DCX.CLOSE)
DAX = cutSeries(DAX, start(DCX), end(DCX))
DCX = cutSeries(DCX, start(DCX), end(DCX))
DCA = returnSeries(mergeSeries(DCX, DAX@Data, c("DCX", "DAX")))
c(Beta = lm(DCX ~ DAX, as.data.frame(DCA))$coef[2])
Beta.DAX
1.348041
When you have downloaded the data,
you can do it with default settings in 6 lines !!!!
Note you have several alternatives to align the data, and to
compute the returns. Check the arguments of the functions
alignDailySeries() and returnSeries().
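Since returnSeries() offers a choice of return definitions, the difference between the two common conventions is easy to see on a toy price vector (plain base R, not Rmetrics, purely for illustration):

```r
# Discrete vs. continuously compounded (log) returns on toy prices
p      <- c(100, 102, 101, 105)
r_disc <- diff(p) / head(p, -1)   # simple returns: p[t]/p[t-1] - 1
r_log  <- diff(log(p))            # log returns: log(p[t]/p[t-1])
round(cbind(r_disc, r_log), 4)    # nearly equal for small moves
```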
Diethelm
Ajay Narottam Shah wrote:
>>[...]
>
>This looks like hard work! It's easier using R + zoo:
>
> http://www.mayin.org/ajayshah/KB/R/html/o6.html
> http://www.mayin.org/ajayshah/KB/R/html/o5.html
>
>


On 7 April 2006 at 11:58, Diethelm Wuertz wrote:
> Why you should definitely use Rmetrics to solve Owe Jessen's problem:
> [...]
> When you have downloaded the data,
> you can do it with default settings in 6 lines !!!!
>
> Note you have several alternatives to align the data, and to
> compute the returns. Check the arguments of the functions
> alignDailySeries() and returnSeries().
Ajay has a point, as zoo can really do this compactly:

stopifnot(require(tseries, quietly=TRUE)) # also loads zoo these days
simpleBeta <- function(ysymb="F", xsymb="^GSPC", startdate="2005-01-01") {
    Y <- get.hist.quote(instrument=ysymb, start=startdate, quote="Close")
    X <- get.hist.quote(instrument=xsymb, start=startdate, quote="Close")
    rets <- diff(log(merge(Y, X, all=FALSE)))
    fit <- lm(Close.Y ~ Close.X, data=rets)
    return(coef(fit)[2])
}
One command each for data retrieval [and is there a way to not get the
verbose output from downloading?], one for the return transformation and
merge, then the fit and the coefficient extraction: it really can be that
compact.
Obviously, this even fits in one call without really saving anything:

reallyShortSimpleBeta <- function(ysymb="F", xsymb="^GSPC",
                                  startdate="2005-01-01") {
    return( coef( lm(Close.Y ~ Close.X,
                     data = diff( log ( merge(
                         Y=get.hist.quote(instrument=ysymb,
                                          start=startdate, quote="Close"),
                         X=get.hist.quote(instrument=xsymb,
                                          start=startdate, quote="Close"),
                         all=FALSE)))))[2] )
}
Now, what is the deal with the Frankfurt data at Yahoo?
For regressions of Daimler or Deutsche Bank, i.e.
> simpleBeta("DCX.DE", "^GDAXI")
> simpleBeta("DBK.DE", "^GDAXI")
I get only 200 data points whereas US data (with the same start date
request of Jan 1, 2005) gets me the full 317 points...
Anyway, hope this helps.
Cheers, Dirk

Hell, there are no rules here -- we're trying to accomplish something.
  -- Thomas A. Edison


On Fri, 07 Apr 2006 11:08:07 +0200 Diethelm Wuertz wrote:
> >This looks like hard work! It's easier using R + zoo:
> >
> Come On!!!!
Could we please *not* have discussions like this? This is not a
competition (at least IMHO).
I think we're all aware that Diethelm has a great collection of
financial functionality in the Rmetrics packages, and I hope that most
users will agree that zoo has time series functionality with more
flexibility concerning the choice of date/time class and a more
standard interface. So the question is how to get these to work
together and not against each other.
> If it is really so easy in R+zoo, be fair enough and show us how we
> can do it in 10 or less lines!
> I'm waiting for it ...
Except for the alignment part which I've omitted for simplicity, you can
do:
## get data from yahoo
library("tseries")
dax <- get.hist.quote("^GDAXI", start = "2005-06-01")[, "Close"]
dcx <- get.hist.quote("DCX.DE", start = "2005-06-01")[, "Close"]

## merge -> returns -> lm
dcxdax <- merge(dcx, dax, all = FALSE)
ddret <- diff(log(dcxdax))
lm(dcx ~ dax, data = as.data.frame(ddret))
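The all = FALSE merge is what makes the alignment implicit: only dates present in both series survive. A toy sketch with made-up dates, assuming the zoo package is installed:

```r
library(zoo)

# Two series with only partially overlapping date indexes
x <- zoo(c(1, 2, 3), as.Date(c("2006-01-02", "2006-01-03", "2006-01-04")))
y <- zoo(c(10, 20),  as.Date(c("2006-01-03", "2006-01-04")))

m <- merge(x, y, all = FALSE)  # intersection of the two date indexes
length(index(m))               # only the common dates remain
```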
Best,
Z


On 4/7/06, Dirk Eddelbuettel < [hidden email]> wrote:
> [...]
> and is there a way to not get the
> verbose output from downloading ?
The tseries function get.hist.quote uses download.file and that's where
it's coming from. If you want, you can redefine download.file with quiet = TRUE.
The following creates a wrapper around get.hist.quote that first
defines a download.file wrapper with quiet = TRUE and then
places a copy of get.hist.quote in a proto object whose parent is
the environment within the redefined get.hist.quote, so that the
redefined download.file is found.
library(tseries)
library(proto)

get.hist.quote <- function(...) {
    download.file <- function(url, destfile, method, quiet = TRUE,
                              mode = "w", cacheOK = TRUE)
        utils::download.file(url, destfile = destfile, method = method,
                             quiet = quiet, mode = mode, cacheOK = cacheOK)
    with(proto(get.hist.quote = tseries::get.hist.quote),
         get.hist.quote(...))
}

ibm <- get.hist.quote("ibm")
The above does not display messages on the R console,
although it still has a popup that briefly appears.
I also tried using capture.output, but this seems not to have
any effect on download.file.


Just so that I understand: is this basically a request to add a `quiet =
TRUE' option to get.hist.quote()?
Z
On Fri, 7 Apr 2006 08:47:35 -0400 Gabor Grothendieck wrote:
> [...]


Gabor, Achim,
On 7 April 2006 at 08:47, Gabor Grothendieck wrote:
> The tseries function, get.hist.quote, uses download.file and that's where
> its coming from. If you want you can redefine download.file with quiet =
> TRUE.
Right. I think I did that once.
> The following creates a wrapper around get.hist.quote that first
> defines a download.file wrapper with quiet = TRUE and then
> places a copy of get.hist.quote in a proto object whose parent is
> the environment within the redefined get.hist.quote so that the
> redefined download.file is found.
Very nice. I hope I get a chance to study this, and the proto package, at
some point.
> The above does not display messages on the R console
> although it still does have a popup that briefly appears.
Only on some OSs but not on the one I tend to use :)
On 7 April 2006 at 15:18, Achim Zeileis wrote:
> Just that I understand. Is this basically a request to add a `quiet =
> TRUE' option to get.hist.quote()?
Yes please!
Dirk



Achim Zeileis wrote:
>On Fri, 07 Apr 2006 11:08:07 +0200 Diethelm Wuertz wrote:
>>Come On!!!!
>
>Could we please *not* have discussions like this? This is not a
>competition (at least IMHO).
>
>
Just one remark to this: When I was a student, I hated it when a teacher
said "it is (very) easy to show that", or "it is (much) easier to do",
and it was never shown or done ...
When this is the case, why not show it or do it? The student will
be happy.
So I think *Come on!!!* is neither a rude nor an unfair reaction nor the
signal for a competition; it is just a reminder that it would be fine to
show the easy way to the student. He will be thankful.
Anyway, if somebody felt attacked by the selection of my words, I
apologize, but not for the content behind my words.
Nevertheless, my heart was really happy about the many positive reactions
and code solutions submitted, which wouldn't have happened otherwise.
DW
please also read my other remarks further down ...
>Except for the alignment part which I've omitted for simplicity, you can
>do:
>
>[...]
Thanks for this clear part of code.
DW
Two other comments:
To have great flexibility, yahooImport() requires the full download
query. This may not be very user friendly, but it is flexible.
Compare:
dax1 = as.timeSeries(yahooImport(
    query = "s=^GDAXI&a=0&b=1&c=2003&g=w&x=.csv")@data)[, "Close"]
dax2 = get.hist.quote(instrument = "^GDAXI",
    start = "2003-01-01", quote = "Close", compression = "w")
Like get.hist.quote(), one can make it more user friendly very easily [I
show it]:

getYahoo =
function(symbol = "^GDAXI", start = "2003-01-01", quote = "Close",
    aggregation = "w", class = c("timeSeries", "zoo"))
{
    # Extend it ...
    year = substring(start, 1, 4)
    month = as.character(as.integer(substring(start, 6, 7)) - 1)
    day = substring(start, 9, 10)
    query = paste("s=", symbol, "&a=", month, "&b=", day, "&c=",
        year, "&g=", aggregation, "&x=.csv", sep = "")
    X = yahooImport(query = query)@data
    ans = as.timeSeries(X)[, quote]
    if (class[1] == "zoo") ans = as.zoo(ans)
    ans
}
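The query construction above can be checked in isolation. A sketch (makeQuery is a hypothetical helper extracted from the function body; note that Yahoo's `a` month parameter is zero-based):

```r
# Build a Yahoo-style CSV query from an ISO date (illustrative helper)
makeQuery <- function(symbol, start, aggregation = "w") {
    year  <- substring(start, 1, 4)
    month <- as.integer(substring(start, 6, 7)) - 1  # months are zero-based
    day   <- as.integer(substring(start, 9, 10))
    paste("s=", symbol, "&a=", month, "&b=", day, "&c=", year,
          "&g=", aggregation, "&x=.csv", sep = "")
}
makeQuery("^GDAXI", "2003-01-01")  # "s=^GDAXI&a=0&b=1&c=2003&g=w&x=.csv"
```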
also use:

as.zoo.timeSeries = function(x, ...)
    zoo(seriesData(x), as.character(seriesPositions(x)))
Then you can use any code proposal. The download results either in a
timeSeries object or, optionally, in a zoo object.

Convert zoo to timeSeries:
as.timeSeries.zoo = function(x, ...) {
    # make as.timeSeries() generic in Rmetrics ... (still to improve it)
    timeSeries(as.matrix(x), attr(x, "index"), myFincenter = "GMT")
}
Why Rmetrics uses mergeSeries, diffSeries, logSeries instead of the
generic functions merge, diff, log, etc. is for compatibility reasons
with S-Plus/Finmetrics. With the next version of Rmetrics I will add
the generic functions for the Rmetrics timeDate and timeSeries objects.
regards Diethelm


On Fri, 07 Apr 2006 16:41:35 +0200 Diethelm Wuertz wrote:
> Just one remark to this: When I was a student I hated it when a
> teacher said, "it is (very) easy to show that", or "it is (much)
> easier to do" and it was never shown or done ...
Ajay posted a link which did basically the same thing albeit not with
those two instruments.
> When this is the case, why one don't show it or do it? The student
> will be happy.
> Then I think *Come on!!!* is neither a rude nor an unfair reaction
> nor the signal for a competetion,
My impression was that the combination of implying that the posted link
was not proof enough and that it needs to be done in less than 10 lines
conveyed an atmosphere of two rival approaches. Also, Ajay's first reply
was surely a bit sloppy and could have been interpreted as somewhat
tendentious.
Hence, I tried to stop such discussions from the very start. Sorry if I
was very direct in this...
Thanks also for the other remarks. I think we'll also have time
to discuss all of this at useR! in Vienna.
Best,
Z


On 4/7/06, Achim Zeileis < [hidden email]> wrote:
> Could we please *not* have discussions like this? This is not a
> competition (at least IMHO).
Discussions like this are extraordinarily valuable. The competition of
ideas in the marketplace is the mechanism of progress.
jab

John Bollinger, CFA, CMT
www.BollingerBands.com
If you advance far enough, you arrive at the beginning.


On Fri, 7 Apr 2006 08:59:20 -0700 BBands wrote:
> On 4/7/06, Achim Zeileis < [hidden email]> wrote:
> > Could we please *not* have discussions like this? This is not a
> > competition (at least IMHO).
>
> Discussions like this are extraordinarily valuable. The competition of
> ideas in the market place is the mechanism of progress.
Of course. I've had off-list discussions with almost all people
involved in this thread (sometimes just one-to-one, sometimes in a
larger group) and my understanding is that essentially everyone is
d'accord that we've got to find a way to get the best of n worlds
(where n is at least 2).
And if you want to team up in an effort, it is IMO easier to create an
atmosphere of doing things together rather than against each other.
Furthermore, it will be much more relaxed to discuss this over a beer
in a Viennese pub during useR! :)
Best,
Z

