

Dear list,
(Forgive me if this is a question already answered a hundred times here;
 I was unable to find the threads.)
I want to derive option prices (let's say European style for a start)
for a given sample of historical return data (say, the last 50 DAX
returns) using Monte Carlo simulation. It is clear that the return data
has to be transformed into "risk-neutral" data first. I thought I could
find a procedure for this task (data transformation and simulation) in R,
because it should be a standard approach for assessing market option
prices. But I could not find anything.
Can somebody help?
Joachim
_______________________________________________
[hidden email] mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-finance -- Subscriber-posting only. If you want to post, subscribe first.
-- Also note that this is not the r-help list where general R questions should go.


Hi Joachim, I'm not a prof but I recommend you the following book:
Option Pricing and Estimation of Financial Models with R
Stefano M. Iacus
http://www.amazon.com/Option-Pricing-Estimation-Financial-Models/dp/0470745843
It'll explain a lot of things with code samples.
Here is some very basic code; I started with this one and improved it while reading the book.
#define functions
require(package='fBasics', quietly=FALSE)
#This function creates the innovations.
#We keep the innovation function outside of the process function because,
#once computed, the innovations stay stored for the next usage.
innovation.norm <- function(pTrials=5000){
  # remember to set the seed
  #generate standard normal innovations
  z <- rnorm(n=pTrials, mean=0, sd=1)
  #generate antithetic innovations
  ant.z <- -z
  #combine into one vector
  z <- c(z, ant.z)
  #return innovation vector
  return(z)
}
#this function creates the terminal price distribution
st.proc.norm <- function(pInnVect=z, mu=0, pSnow=15, pVol=0.15, pTradDaysToExp=20){
  #set the number of trading days in one year
  yearTradDays <- 252
  #compute the time interval (in years)
  dt <- pTradDaysToExp/yearTradDays
  s.final <- pSnow*exp(pVol*sqrt(dt)*pInnVect)
  return(s.final)
}
#this function builds the option chain
price.chain <- function(pSFinal, pIntRate=0.02, pStrikeMin=10, pStrikeMax=15, pDStrike=0.5, pCalDaysToExp=30){
  #build strike chain
  K <- seq(from=pStrikeMin, to=pStrikeMax, by=pDStrike)
  #compute discount factor
  calDays <- 365
  dt <- pCalDaysToExp/calDays
  #prepare storage for the chain
  prCall <- rep(x=0, length(K))
  prPut  <- rep(x=0, length(K))
  se.call <- rep(x=0, length(K))
  se.put  <- rep(x=0, length(K))
  for (i in 1:length(K)){
    #discounted payoffs
    c <- ifelse(pSFinal > K[i], exp(-pIntRate*dt)*(pSFinal - K[i]), 0)
    p <- ifelse(pSFinal < K[i], exp(-pIntRate*dt)*(K[i] - pSFinal), 0)
    #price as average of the discounted payoffs
    prCall[i] <- mean(c)
    prPut[i]  <- mean(p)
    #standard error
    se.call[i] <- sd(c)/sqrt(length(pSFinal))
    se.put[i]  <- sd(p)/sqrt(length(pSFinal))
    ## confidence interval estimate (computed but not stored here)
    alpha <- 0.05
    t.star <- qt(p=1 - alpha/2, df=length(pSFinal) - 1, lower.tail=TRUE)
    prCall[i] + c(-1, 1)*t.star*se.call[i]
    prPut[i]  + c(-1, 1)*t.star*se.put[i]
  }
  #build the data frame for the option chain
  pr.chain <- data.frame(CALL=round(prCall, digits=3), Strike=K, PUT=round(prPut, digits=3))
  return(pr.chain)
}
##################################################################################
## PRICING WITH NORMAL INNOVATIONS
inn.n <- innovation.norm(pTrials=500)
hist(inn.n)
#you must estimate the volatility from past data
s.final.n <- st.proc.norm(pInnVect=inn.n, pSnow=13.40, pVol=0.25, pTradDaysToExp=20)
hist(s.final.n)
#you must estimate the interest rate
price.chain(pSFinal=s.final.n, pIntRate=0.02, pStrikeMin=10, pStrikeMax=15, pDStrike=0.5, pCalDaysToExp=20)


Hi:
I think this could work. Now, I always find it useful to identify the hard parts up front. Let's see if I can sketch this in a couple of minutes.
1) You need to estimate the variance (vol^2) from the historical prices. Now, 50 days of closing prices is a little low, and close to expiration you'd want some intraday data too. But the model is a little crude anyhow, so perhaps it doesn't matter. So assume the stock price St follows a lognormal process: dlog(St) = (r - vol^2/2) dt + vol dBt, and take first differences of log(St) for the returns process Rt = log(St/S(t-1)). Rt is normally distributed, so vol^2 ~ Sum{(Rt - Rbar)^2}/48 with Rbar = (1/49) Sum{Rt} (49 returns from 50 prices).
2) You need the risk-free rate, so set r = treasury rate for the same maturity as the stock option's time to expiration -- this is in calendar days.
3) Let t = today and T = expiration.
The sample paths of the stock price can be assumed to follow: S_T = S_t exp{r (T-t) + vol N(0,1) sqrt(T-t)} // N(0,1) = standard normal.
Let c_(ns) = exp{-r (T-t)} max{ S_t exp{r (T-t) + vol ns sqrt(T-t)} - X, 0 }, where ns is a random sample from N(0,1).
european_call(X, S, r, vol, T) = { c_(ns1) + c_(ns2) + ... + c_(ns_num_samples) } / num_samples.
My math could be off a little (e.g. the Ito factor -vol^2/2 is missing from the drift), but at least for me, it really helps to get some brief clean calculations up front, and then code. ;)
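A sketch of the above in R (illustrative S, X, r, vol and T values of my own choosing; I've put the Ito term -vol^2/2 back into the drift):

```r
## Illustrative Monte Carlo pricer for a European call; all inputs made up.
## Drift includes the Ito correction (r - vol^2/2).
set.seed(42)
mc.call <- function(S, X, r, vol, T, n = 1e5) {
  ns <- rnorm(n)                                        # ns ~ N(0,1)
  ST <- S * exp((r - vol^2/2) * T + vol * sqrt(T) * ns) # terminal prices
  mean(exp(-r * T) * pmax(ST - X, 0))                   # discounted average payoff
}
mc.call(S = 100, X = 100, r = 0.05, vol = 0.2, T = 1)
```

With these inputs the estimate should land near the Black-Scholes value of about 10.45.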
________________________________________
From: [hidden email] [ [hidden email]] on behalf of msalese [ [hidden email]]
Sent: Tuesday, November 22, 2011 7:30 AM
To: [hidden email]
Subject: Re: [R-SIG-Finance] Option valuation for arbitrary distribution using monte carlo simulation
[snip]


Dear Massimo,
thanks for the hint to Iacus' book. I was aware of this title already
but could not get hold of it so far. Thank you for the code snippet as
well. Unfortunately, it does not seem to help with my main problem: how
to correct the original data such that Monte Carlo sampling from it
gives me samples from a risk-neutral but NON-NORMAL process. Maybe my
description is a bit awkward; I may also lack the math and statistics
skills. Let me try to say it another way:
Let
r : original log return series
( with e.g. mean(r) = 0.0001574849 and sd(r) = 0.01479017 )
x : risk-free rate of, say, 5%.
Snow : stock price now
time to maturity = 1 year = 252 trading days
St : stock price in 1 year
X: call option strike
E[] : expected value
C : the option price
I am looking for a corrected return series rcorr = r + z such that
E[ Snow * exp( sum( sample(rcorr, 252) ) ) ] = 100 * exp(0.05)
because then I can find the call option price with
for (i in 1:bignumber) {
St[i] = Snow * exp( sum( sample(rcorr, 252) ) )
op[i] = max(0, exp(-0.05) * (St[i] - X) )
}
C = mean(op)
I could of course use some search algorithm to find z. In fact, that's
what I did. But I thought this is so common a problem for finance folks
that there must be some elegant R procedure for the task.
Your code seems to assume that we are dealing with normal log returns,
but this is not the case for my sample.
Do you see what I mean? Is this stuff dealt with in the Iacus book?
Greetings
Joachim
> [snip]


You can try to use non-normal innovations (sampled from a stable distribution),
but you have to estimate the alpha, beta, gamma and delta parameters:
innovation.stable <- function(pTrials=5000, pAlpha=1.7, pBeta=0.5, pGamma=1, pDelta=1, pPm=0){
  #you must estimate the parameters from past data using stableFit
  #alpha -> tail index, in (0,2]
  #beta  -> skewness, in [-1,1]
  #gamma -> scale
  #delta -> shift
  #pm    -> parametrization (0, 1 or 2)
  require(package='fBasics', quietly=FALSE)
  z <- rstable(n=pTrials, alpha=pAlpha, beta=pBeta, gamma=pGamma, delta=pDelta, pm=pPm)
  #generate antithetic innovations
  ant.z <- -z
  #combine into one vector
  z <- c(z, ant.z)
  #return innovation vector
  return(z)
}
##############################
Iacus' book starts by teaching you to price with normal innovations, then improves the code and teaches you how to parallelize everything.


Probably my question is quite trivial, but could somebody clarify why I need to use a stable distribution instead of any arbitrary distribution? I know what the stable distribution is, but I could not get the reason in the asset-price-generation context.
Thanks and regards,


I appreciate msalese's help, but: good question!
On 24.11.2011 12:59, Bogaso wrote:
> Probably my question is quite trivial, but could somebody clarify why
> I need to use a stable distribution instead of any arbitrary
> distribution? I know what the stable distribution is, but I could not
> get the reason in the asset-price-generation context.
>
> Thanks and regards,
>


Hi guys, I think that you can use whatever distribution you want.
Stable is one that fits the log returns better (it's my opinion!).
But you can get more info by having a look at http://www.mathestate.com/tools/Financial/map/Overview.html
If you prefer, you can price with GARCH to better reproduce the smile effect.
I'm a buy-side trader (risk taker); I price options only to create future scenarios, and on those scenarios I plan the reaction.


Thank you for the interesting link.
I agree that you can use a (fit of a) stable distribution for sampling
purposes. But please: Why not simply use the raw data and sample from
that? What could be a better starting point? Again, it is clear that you
cannot use the raw return series as it comes off your data feed; there
is a need for adjusting. But there cannot be a better fit to the raw
data than the raw data itself...
On 24.11.2011 14:04, msalese wrote:
> Hi guys, I think that you can use whatever distribution you want.
> Stable is one that fits the log returns better (it's my opinion!).
> But you can get more info by having a look at
> http://www.mathestate.com/tools/Financial/map/Overview.html
> If you prefer, you can price with GARCH to better reproduce the smile effect.
> I'm a buy-side trader (risk taker); I price options only to create future
> scenarios, and on those scenarios I plan the reaction.
>


The stable distribution is among the earliest distributions that have been
proposed for describing the returns of financial assets. It was first
suggested by Benoit Mandelbrot [1], the same person after whom the
Mandelbrot set is named. It's probably fair to say that Mandelbrot's work
launched the study of asset price distributions. Since then, there have
been dozens of distributions proposed that have been argued to have better
properties. From a modeling perspective, one of the disadvantages of the
stable distribution is that the variance is undefined (infinite) if alpha <
2, so it is not clear what it would mean to talk about the volatility of a
stable-distributed process.
[1] Mandelbrot, Benoit, "The Variation of Certain Speculative Prices," The
Journal of Business, Vol. 36, No. 4 (Oct. 1963), pp. 394-419.
On Thu, Nov 24, 2011 at 8:04 AM, msalese < [hidden email]> wrote:

Matthew Clegg
[hidden email]


I think it *is* clear what the volatility
of a stable distribution (other than
Gaussian) is: infinite.
That implies, I believe, that the price
of most options would also be infinite.
I'm not so convinced that doing a Monte
Carlo for options pricing with a stable
distribution is a good idea.
Caveat: I'm basically ignorant about options
pricing.
Pat
On 24/11/2011 15:36, Matthew Clegg wrote:
> [snip]

Patrick Burns
[hidden email]
http://www.burns-stat.com
http://www.portfolioprobe.com/blog
twitter: @portfolioprobe


On Thu, 2011-11-24 at 16:19 +0100, Joachim Breit wrote:
> Thank you for the interesting link.
>
> I agree that you can use a (fit of a) stable distribution for sampling
> purposes. But please: Why not simply use the raw data and sample from
> that? What could be a better starting point? Again, it is clear that
> you cannot use the raw return series as it comes off your data feed;
> there is a need for adjusting. But there cannot be a better fit to the
> raw data than the raw data itself...
Of course you start with sampling from your data, but if all you do is
sample from the data, with no 'noise', then all you are doing is
smoothing out the prior observations. This may lead you to a false
sense of security, and give you an incorrect idea of how likely a fat
tailed event is. So add noise from a stable or other fat-tailed
distribution (a skewed Student's t is also popular); otherwise you are
assuming that your prior data is both stationary and fully representative.
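A minimal sketch of what I mean, with made-up numbers (r.hist stands in for the observed return series, and the noise scale is arbitrary):

```r
## Smoothed resampling sketch: historical draws plus fat-tailed noise.
## r.hist and the noise scale are made up for illustration.
set.seed(7)
r.hist <- rnorm(50, 0, 0.015)              # stand-in for observed returns
n <- 1e4
noise.sd <- 0.25 * sd(r.hist)              # arbitrary noise scale
t.noise <- rt(n, df = 5) / sqrt(5/3)       # unit-variance Student's t draws
r.sim <- sample(r.hist, n, replace = TRUE) + noise.sd * t.noise
```

The resample now puts some mass outside the historical range, so tail events the sample never saw are no longer impossible.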
Regards,
 Brian

Brian G. Peterson
http://braverock.com/brian/
Ph: 773-459-4973
IM: bgpbraverock


Luenberger discusses the task of creating risk-neutral probabilities from
any set of observed returns. I cannot find the specific reference in his
book (Investment Science), but there is a discussion paper by Winston
(google it) that applies his method, using Excel and @Risk, to a sample of
Microsoft returns. I think it would be relatively simple to program it in R.
Charles Ward
On 24 November 2011 19:43, Brian G. Peterson < [hidden email]> wrote:
> [snip]


Hi again guys,
Let me say that (it's what I see looking at the data) if you sample at one day or less,
the log returns of financial instruments are not normal, so I think you should use the distribution that best fits this situation.
Maybe stable is not the best, so you can decide to use some normal mixture to fit the fat-tail behavior.
The point is: how do you look at options?
Are you a hedger or a risk taker (retail trader)?
It's my experience that as a risk taker it's better to have some pdf that lets you build very extreme scenarios, and then plan the option strategy to protect you from these unlikely, but not impossible, situations.
The interesting thing is that R gives you the possibility to use different pdfs. So write the code to price with MC in such a way as to
separate the innovation function from the option-chain pricing function, then change the innovations' pdf and look at what happens. It's the best school: experiment yourself.
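For example, keeping the pricing functions unchanged, you can just swap in a different innovation function. Here is a hypothetical fat-tailed one using a standardized Student's t (the df value is made up; fit it to your data in practice):

```r
## Hypothetical fat-tailed innovation function: standardized Student's t.
## pDf is a made-up default; estimate it from your data.
innovation.t <- function(pTrials = 5000, pDf = 5) {
  z <- rt(n = pTrials, df = pDf) / sqrt(pDf/(pDf - 2))  # scale to unit variance
  ant.z <- -z                                           # antithetic innovations
  c(z, ant.z)
}
```

Then, e.g., s.final.t <- st.proc.norm(pInnVect = innovation.t(500), pSnow = 13.40, pVol = 0.25, pTradDaysToExp = 20), and price the chain exactly as before.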


Hi Klswacha, could you be a little more specific on Winston's work? I tried with Google but could not find anything specific.
Thanks,


This is the paper:
http://www.realoptions.org/papers1999/WINSTONOptions.PDF
On 25.11.2011 09:48, Charles Ward wrote:
> [snip]


I gave it a try. See the (ugly) code below. Comments are welcome. There
are still a few issues to solve:
- How can one do the Monte Carlo simulation faster/more efficiently?
- Can somebody explain Winston's daily risk-free rate?
- Is there a better optimization algorithm for the utility function than genoud?

library("fImport")
library("fOptions")
library("rgenoud")
msft =
yahooImport(query="MSFT&a=00&b=4&c=1999&d=03&e=9&f=1999&ignore=.csv", file =
"msft", source = "http://ichart.finance.yahoo.com/table.csv?", save = TRUE)
cl = msft@data[,"Close"] ### unadjusted close prices
cl[10:67] = cl[10:67]/2 ### a 2:1 split on 1999-03-29
ret = cl[1:66]/cl[2:67] - 1 ### the returns
logret = log(1+ret) ### log returns
vola = sd(logret) * sqrt(252) ### 0.4091751
bsm = GBSOption(TypeFlag = "p", S = 90, X = 80, Time = 63/252, r =
log(1+0.08), b = log(1+0.08), sigma = vola)@price ### 2.577398 (just
to compare)
rf = .000317 ## don't know where this comes from in Winston's paper;
## should be 1.08^(1/252)-1 = 0.0003054476, right?
meanutil <- function (alpha) {
x = log(alpha * (1 + ret) + (1 - alpha) * (1 + rf)) ### portfolio utility
x[which(is.nan(x))] = -100000 ### penalize: ln(x) is undefined for x <= 0
x = mean(x)
return(x)
}
optalpha = genoud(meanutil, nvars = 1, max = TRUE, pop.size = 1000)$par
### the optimal portfolio under utility function ln(x)
portret = optalpha * (1 + ret) + (1 - optalpha) * (1 + rf)
## the optimal portfolio returns (in the sense of: daily end wealth)
num = (1 / length(ret)) * (1 / portret)
rnprob = num/sum(num) ### the risk-neutral probabilities
ov = 0 #### Monte Carlo simulation
for (i in 1:100000) {
ov[i] = max(80 - 90 * prod(1 + sample(ret, size=63, replace = TRUE, prob
= rnprob)), 0 )
}
mov = 1/1.08^(63/252) * mean(ov) ### 2.512722 = option value
On 25.11.2011 09:48, Charles Ward wrote:
> [snip]


Joachim,
I'm guessing that you could get some better speed by doing the 'sample'
call all at once, doing it with 'logret' instead of 'ret', and summing
efficiently. Something like:

sampmat <- array(sample(logret,
size = 63 * 1e5, replace = TRUE,
prob = rnprob), c(63, 1e5))
samplret <- colSums(sampmat)
# then transform to simple returns
# and use 'pmax' instead of 'max'

If the matrix is too large for your memory, you could do it in a few blocks.
On 23/12/2011 14:59, Joachim Breit wrote:
> I gave it a try. See the (ugly) code below. Comments are welcome. There
> are still a few issues to solve:
>
>  How can one do the Monte Carlo simulation faster/more efficient?
>  Can somebody explain Winston's daily risk free rate?
>  Is there a better optimization algorithm for the utility function than
> genoud?
>
> 
> library("fImport")
> library('fOptions')
> library("rgenoud")
> msft =
> yahooImport(query="MSFT&a=00&b=4&c=1999&d=03&e=9&f=1999&ignore=.csv",file =
> "msft", source = " http://ichart.finance.yahoo.com/table.csv?", save = TRUE)
>
> cl = msft@data[,"Close"] ### unadjusted close prices
> cl[10:67] = cl[10:67]/2 ### a 2:1 split on 19990329
>
> ret = cl[1:66] /cl[2:67]  1 ### the returns
> logret = log(1+ret) ### log returns
> vola = sd(logret) * sqrt(252) ### 0.4091751
> bsm = GBSOption(TypeFlag = "p", S = 90, X = 80, Time = 63/252, r =
> log(1+0.08), b = log(1+0.08), sigma = vola)@price #### 2.577398 (just to
> compare)
>
> rf = .000317 ## don't know where this comes from in Winston's paper;
> ## should be 1.08^(1/252)1 = 0.0003054476, right?
>
> meanutil < function (alpha) {
> x = log(alpha * (1 + ret) + (1  alpha) * (1 + rf)) ### portf. utility
> x[which(is.nan(x))] = 100000 ### correct for ln(x<0) undefined
> x = mean(x)
> return(x)
> }
>
> optalpha = genoud(meanutil, nvars = 1, max = TRUE, pop.size = 1000)$par
> ### the optimal portfolio under utility function ln(x)
>
> portret = optalpha * (1 + ret) + (1  optalpha) * (1 + rf)
> ## the optimal portfolio returns (in the sense of: daily end wealth)
>
> num = (1 / length(ret)) * (1 / portret)
> rnprob = num/sum(num) ### the risk neutral probabilities
>
> ov=0 #### monte carlo simulation
> for (i in 1:100000) {
> ov[i] = max(80 - 90 * prod(1 + sample(ret, size=63, replace = TRUE, prob
> = rnprob)), 0 )
> }
>
> mov = 1/1.08^(63/252) * mean(ov) ### 2.512722 = option value
>
>
> Am 25.11.2011 09:48, schrieb Charles Ward:
>> Luenberger discusses the task of creating risk-neutral probabilities
>> from any set of observed returns. I cannot find the specific reference
>> in his book (Investment Science) but there is a discussion paper by
>> Winston (google) that applies his method using Excel and @Risk to a
>> sample of Microsoft returns. I think it would be relatively simple to
>> program it in R.
>>
>> Charles Ward
>
> _______________________________________________
> [hidden email] mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-finance
> -- Subscriber-posting only. If you want to post, subscribe first.
> -- Also note that this is not the r-help list where general R questions
> should go.
>

Patrick Burns
[hidden email]
http://www.burns-stat.com
http://www.portfolioprobe.com/blog
twitter: @portfolioprobe


Thank you, Patrick. That was indeed a huge speedup. I can now easily do
the Monte Carlo simulation with 2e6 trials, and the calculated option
value seems quite stable:
1e5 trials -> option values 2.481841, 2.464703, 2.505218
1e6 trials -> 2.491286, 2.502324, 2.491899, 2.495442
2e6 trials -> 2.493493, 2.495717
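As an aside, the risk-neutral transformation used in the script below
(Luenberger's log-optimal pricing) can be sanity-checked on toy numbers.
Everything here is a stand-in: toy returns, a toy risk-free rate, and
base R's 'optimize' instead of genoud:

```r
# Toy illustration of the risk-neutral transformation (not the MSFT data).
ret <- c(0.02, -0.015, 0.01, -0.005, 0.012, -0.02)  # toy daily returns
rf  <- 0.0003                                       # toy daily risk-free rate

# expected log utility of a stock/cash mix with weight alpha in the stock
meanutil <- function(alpha) mean(log(alpha * (1 + ret) + (1 - alpha) * (1 + rf)))

# one-dimensional problem, so base R's optimize() suffices here
optalpha <- optimize(meanutil, interval = c(-5, 5), maximum = TRUE)$maximum

portret <- optalpha * (1 + ret) + (1 - optalpha) * (1 + rf)
rnprob  <- (1 / portret) / sum(1 / portret)  # the (1/n) factor cancels out

sum(rnprob)              # probabilities sum to 1 by construction
sum(rnprob * (1 + ret))  # pricing condition: ~ 1 + rf at the log-optimum
```

At the interior log-utility optimum the first-order condition forces the
risk-neutral expected gross return of the stock to equal 1 + rf, which is
a useful check that the calibration worked.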

library("fImport")
library('fOptions')
library("rgenoud")
msft =
yahooImport(query="MSFT&a=00&b=4&c=1999&d=03&e=9&f=1999&ignore=.csv",file =
"msft", source = " http://ichart.finance.yahoo.com/table.csv?", save = TRUE)
cl = msft@data[,"Close"] ### unadjusted close prices
cl[10:67] = cl[10:67]/2 ### a 2:1 split on 1999-03-29
ret = cl[1:66] / cl[2:67] - 1 ### the returns
logret = log(1+ret) ### log returns
vola = sd(logret) * sqrt(252) ### 0.4091751
bsm = GBSOption(TypeFlag = "p", S = 90, X = 80, Time = 63/252, r =
log(1+0.08), b = log(1+0.08), sigma = vola)@price #### 2.577398 (just
to compare)
rf = .000317 ## don't know where this comes from in Winston's paper;
## should be 1.08^(1/252)-1 = 0.0003054476, right?
meanutil <- function (alpha) {
x = log(alpha * (1 + ret) + (1 - alpha) * (1 + rf)) ### portf. utility
x[which(is.nan(x))] = -100000 ### correct for ln(x<0) undefined
x = mean(x)
return(x)
}
optalpha = genoud(meanutil, nvars = 1, max = TRUE, pop.size = 1000)$par
### the optimal portfolio under utility function ln(x)
portret = optalpha * (1 + ret) + (1 - optalpha) * (1 + rf)
## the optimal portfolio returns (in the sense of: daily end wealth)
num = (1 / length(ret)) * (1 / portret)
rnprob = num/sum(num) ### the risk neutral probabilities
sampmat <- array(sample(logret, size=63 * 2e6, replace=TRUE,
prob=rnprob), c(63, 2e6)) ### array with 63 rows and 2000000 columns
samplret <- colSums(sampmat) #### calc sum for each column
ov = pmax(0, 80 - 90 * exp(samplret))
mov = 1/1.08^(63/252) * mean(ov) ### checked that this gives the same
result as the loop
rm(sampmat, samplret, ov)
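On the still-open question about rf = .000317: a quick arithmetic check
(my own, not from Winston's paper) suggests the figure is simply the
annual 8% divided by 252, whereas 1.08^(1/252) - 1 is the geometric daily
rate that compounds back to exactly 8%:

```r
# Two candidate daily risk-free rates from an 8% annual rate, 252 trading days
r_annual <- 0.08
days     <- 252
rf_simple    <- r_annual / days               # pro-rata daily rate
rf_geometric <- (1 + r_annual)^(1 / days) - 1 # geometric daily rate
round(rf_simple, 6)     # 0.000317  -- matches Winston's figure
round(rf_geometric, 7)  # 0.0003054 -- matches 1.08^(1/252)-1 above
```

The difference is tiny at a daily horizon, but only the geometric rate
compounds to exactly 1.08 over 252 days.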
Am 23.12.2011 16:35, schrieb Patrick Burns:
> Joachim,
>
> I'm guessing that you could get
> some better speed by doing the
> 'sample' call all at once, doing it
> with 'logret' instead of 'ret',
> and summing them efficiently. Something
> like:
>
> sampmat <- array(sample(logret,
> size=63 * 1e5, replace=TRUE,
> prob=rnprob), c(63, 1e5))
>
> samplret <- colSums(sampmat)
>
> # then transform to simple returns
> # and use 'pmax' instead of 'max'
>
> If the matrix is too large for your
> memory, you could do it in a few
> blocks.
>

