Hi, I am trying to convert from S-PLUS to R. In my work I store all of my data in SQL and use the S-PLUS JDBC library to import very large panel data sets (200,000 to 3,000,000 rows) into my S-PLUS session. In R, I have noticed a few things with regard to this.

First, RJDBC runs out of memory very quickly and is far slower than JDBC in S-PLUS (this could be a function of me using the Java driver that came with the S-PLUS package, but I doubt it). I next tried RODBC and was able to import 85,000 rows in 44 seconds, which is still slower than S-PLUS but not too big a problem. However, for row counts much larger than that, I am wondering whether I should be operating differently, because I don't think there is any way to create a bigmemory object from a sqlQuery result.

For instance, if my analysis loops through dates and performs a regression on each date with respect to, say, 500 elements per date, would it be better to issue an SQL query for each date inside an lapply, for, or by loop, rather than importing one very large data set into memory and then looping through it?
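Here is a minimal sketch of the per-date approach I have in mind, using RODBC. The DSN name "mydsn", the table "panel_data", and the columns obs_date, y, x1, x2 are all hypothetical placeholders, not my actual schema:

library(RODBC)

ch <- odbcConnect("mydsn")

## Pull just the distinct dates first (a small result set).
dates <- sqlQuery(ch, "SELECT DISTINCT obs_date FROM panel_data")$obs_date

## One query per date: each result is only ~500 rows, so memory
## use stays flat no matter how many dates there are in total.
fits <- lapply(dates, function(d) {
  qry <- sprintf(
    "SELECT y, x1, x2 FROM panel_data WHERE obs_date = '%s'", d)
  df <- sqlQuery(ch, qry)
  lm(y ~ x1 + x2, data = df)
})

odbcClose(ch)

## Collect the per-date coefficients, one row per date.
coefs <- t(sapply(fits, coef))

My worry is the overhead of issuing thousands of small queries versus one bulk import; is the query-per-date pattern above the idiomatic way to handle this in R, or is there a better strategy?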
Let me know your thoughts,
Thank you in advance.