Steven R. Dunbar
Department of Mathematics
203 Avery Hall
Lincoln, NE 68588-0130
http://www.math.unl.edu
Voice: 402-472-3731
Fax: 402-472-8466

Stochastic Processes and

__________________________________________________________________________

Monte Carlo Simulation of Option Prices

_______________________________________________________________________

Note: These pages are prepared with MathJax. MathJax is an open source JavaScript display engine for mathematics that works in all browsers. See http://mathjax.org for details on supported browsers, accessibility, copy-and-paste, and other features.

_______________________________________________________________________________________________

### Rating

Mathematically Mature: may contain mathematics beyond calculus with proofs.

_______________________________________________________________________________________________

### Section Starter Question

What is a 95% confidence interval in a statistical experiment?

_______________________________________________________________________________________________

### Key Concepts

1. Monte Carlo methods (or Monte Carlo experiments) are mathematical solution methods that rely on repeated random sampling to obtain numerical results.
2. Variance reduction techniques create a new Monte Carlo problem with the same answer as the original but with a smaller value for the variance to increase the efficiency of the Monte Carlo estimation. Many methods for variance reduction for Monte Carlo methods exist.
3. The idea behind importance sampling is that some values of the simulation have more effect on the estimation than others. The probability distribution of the sample is carefully changed to give “important” outcomes more weight to achieve some goal such as variance reduction.

__________________________________________________________________________

### Vocabulary

1. Monte Carlo methods (or Monte Carlo experiments) are a large and diverse class of solution methods using random sampling from a specified distribution to generate numerical results for a mathematical model.
2. The 95% level Monte Carlo confidence interval for $𝔼\left[g\left(X\right)\right]$ is
$\left({ḡ}_{n}-{t}_{n-1,0.975}\frac{s}{\sqrt{n}},{ḡ}_{n}+{t}_{n-1,0.975}\frac{s}{\sqrt{n}}\right).$

3. The quantity $s∕\sqrt{n}$ is called the standard error.
4. Variance reduction techniques create a new Monte Carlo problem with the same answer as the original but with a smaller value for the variance, to increase the efficiency of Monte Carlo estimation.
5. An antithetic sample is a sample that uses an opposite value of $g\left(x\right)$, being low when $g\left(x\right)$ is high and vice versa, to reduce the variance.
6. Importance sampling is another variance reduction technique. The idea behind importance sampling is that some values in a simulation have more influence on the estimation than others. The probability distribution is carefully changed to give “important” outcomes more weight.

__________________________________________________________________________

### Mathematical Ideas

#### Confidence Intervals for Option Prices

Monte Carlo methods (or Monte Carlo experiments) are a large and diverse class of solution methods using random sampling from a specified distribution to generate numerical results for a mathematical model. Monte Carlo methods are a statistical solution technique, among the various mathematical techniques that move from a mathematical model to a relation among the quantities of interest.

With Geometric Brownian Motion as a stochastic process model for security prices, we can use that process for simulating option prices with a Monte Carlo method. We assume that a security price is modeled by Geometric Brownian Motion, so we know the lognormal probability distribution at the expiration time. Draw $n$ pseudo-random numbers ${x}_{1},\dots ,{x}_{n}$ from the lognormal distribution modeling the stock price $S$. Then approximate a put option price as the present value of the expected value of the function $g\left(S\right)=max\left(K-S,0\right)$, with the sample mean

$𝔼\left[g\left(S\right)\right]\approx \frac{1}{n}\sum _{i=1}^{n}g\left({x}_{i}\right)={ḡ}_{n}.$
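As a concrete illustration, the sample-mean estimate can be sketched in a few lines of Python (an illustrative sketch, not the R script of the Scripts section; the parameter values $K=1$, $r=1∕2$, $T=1$ are those of the simplified example below):

```python
import math
import random

def mc_put_price(n, K=1.0, r=0.5, T=1.0, seed=2015):
    """Sample-mean estimate of E[g(S)], g(S) = max(K - S, 0), discounted
    to present value; S is lognormal (meanlog 0, sdlog 1), i.e. S = e^W(1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = math.exp(rng.gauss(0.0, 1.0))  # lognormal draw S = e^{W(1)}
        total += max(K - s, 0.0)           # put payoff g(S)
    return math.exp(-r * T) * total / n    # present value of the sample mean
```

For large $n$ the estimate should land near the Black-Scholes value $0.1446101$ quoted below.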

The examples in this section feature put options rather than the call options illustrated previously for a reason that will become clear later when we modify the Monte Carlo simulation with importance sampling.

The Weak and Strong Laws of Large Numbers ensure that this approximation is valid for large $n$ and improves as the number of samples increases. Even more is true: the Central Limit Theorem implies that the sample mean ${ḡ}_{n}$ is approximately normally distributed with mean $𝔼\left[g\left(S\right)\right]$ and variance $Var\left[g\left(S\right)\right]∕n$,

${ḡ}_{n}\sim N\left(𝔼\left[g\left(S\right)\right],Var\left[g\left(S\right)\right]∕n\right).$

This says that the value ${ḡ}_{n}$ that we estimate with the Monte Carlo simulations will have a deviation from the true expected value $𝔼\left[g\left(S\right)\right]$ that is on the order of the standard deviation divided by $\sqrt{n}$. More precisely, recall that for the standard normal distribution $ℙ\left[|Z|<1.96\right]\approx 0.95$. Then by the shifting and rescaling properties of the normal distribution, we can construct a corresponding 95% confidence interval for the estimate ${ḡ}_{n}$ as

$\left(𝔼\left[g\left(S\right)\right]-1.96\sqrt{\frac{Var\left[g\left(S\right)\right]}{n}},𝔼\left[g\left(S\right)\right]+1.96\sqrt{\frac{Var\left[g\left(S\right)\right]}{n}}\right).$

We interpret the interval above as saying that the estimate ${ḡ}_{n}$ will be in this interval in about 95% of samples. That is, if $100$ people perform the sampling experiment with different draws from the pseudo-random number generator, then about $95$ of them will generate an expected value that falls in this interval, but about $5$ of them will be unlucky enough to draw a sample that generates an expected value falling outside the interval.

A problem with obtaining the confidence interval is that the mean $𝔼\left[g\left(S\right)\right]$ and the variance $Var\left[g\left(S\right)\right]$ are both unknown. These are respectively estimated with the sample mean ${ḡ}_{n}$ and the sample variance

${s}^{2}=\frac{1}{n-1}\sum _{i=1}^{n}{\left(g\left({x}_{i}\right)-{ḡ}_{n}\right)}^{2}$

to obtain the 95% level Monte Carlo confidence interval for $𝔼\left[g\left(S\right)\right]$

$\left({ḡ}_{n}-{t}_{n-1,0.975}\frac{s}{\sqrt{n}},{ḡ}_{n}+{t}_{n-1,0.975}\frac{s}{\sqrt{n}}\right).$

The quantity $s∕\sqrt{n}$ is called the standard error. Note that the standard error is dependent on the drawn numbers ${x}_{1},{x}_{2},\dots ,{x}_{n}$ so it is subject to sample variability too. In fact, the sample quantity

$\frac{\left({ḡ}_{n}-𝔼\left[g\left(S\right)\right]\right)}{s∕\sqrt{n}}$

has a probability distribution known as Student’s t-distribution, so the 95% confidence interval limits of $±1.96$ must be modified with the corresponding 95% confidence limits of the appropriate Student’s t-distribution.

Another problem with the Monte Carlo confidence interval is that the width of the interval decreases as $1∕\sqrt{n}$, which is not particularly fast. To decrease the width of the confidence interval by half requires $4$ times as many sample variates.
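The computation of the interval itself is mechanical; here is a minimal Python sketch. The standard library has no Student t quantile, so the normal quantile $z_{0.975}\approx 1.96$ stands in for ${t}_{n-1,0.975}$, a reasonable approximation at the sample sizes used here:

```python
import math
from statistics import NormalDist, mean, stdev

def mc_confint(samples, level=0.95):
    """Monte Carlo confidence interval (g_bar - q*s/sqrt(n), g_bar + q*s/sqrt(n)).
    For large n the Student t quantile t_{n-1,0.975} is close to the normal
    quantile, so the normal quantile is used to stay within the standard library."""
    n = len(samples)
    g_bar = mean(samples)
    s = stdev(samples)           # sample standard deviation, n-1 divisor
    se = s / math.sqrt(n)        # the standard error s / sqrt(n)
    q = NormalDist().inv_cdf(0.5 + level / 2)
    return (g_bar - q * se, g_bar + q * se)
```

For example, `mc_confint([1, 2, 3, 4, 5])` returns an interval centered at the sample mean $3$.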

##### Simplified Example

Now apply confidence interval estimation to the present value of a simple put option price for a simplified security. The simplified security has stochastic differential equation $\phantom{\rule{0.3em}{0ex}}dS=rS\phantom{\rule{0.3em}{0ex}}dt+\sigma S\phantom{\rule{0.3em}{0ex}}dW$ with starting price $S\left(0\right)={z}_{0}=1$ at $t=0$, risk-free interest rate $r={\sigma }^{2}∕2$, and standard deviation $\sigma =1$. Then the stochastic process is $S\left(t\right)={e}^{W\left(t\right)}$ where $W\left(t\right)$ is standard Brownian Motion. (The parameters are financially unrealistic, but lead to the simple exponential.) For the simplified put option the strike price is $K=1$ and the time to expiration is $T-t=T-0=1$. Short programs to create 95% confidence intervals for $100$, $1,000$, and $10,000$ samples are in the Scripts section. The random seed is set to the arbitrary value $2015$ so that replication of the Monte Carlo simulation with the sample script always yields the same results. The true value comes from the Black-Scholes put option value formula in Put-Call Parity Principle. Some typical results from the R script are in Table 1.

| $n$ | ${ḡ}_{n}$ | 95% Monte Carlo confidence interval |
| --- | --- | --- |
| $100$ | $0.1580783$ | $\left(0.1206047,0.1955519\right)$ |
| $1000$ | $0.1539137$ | $\left(0.1425191,0.1653084\right)$ |
| $10000$ | $0.1451896$ | $\left(0.1416714,0.1487078\right)$ |

Table 1: Monte Carlo confidence intervals for the present value of a simple put option. The present value of the put option from the Black-Scholes formula is $0.1446101$.

Although the Monte Carlo simulation yields good results fairly easily, a disadvantage is that it applies for only one set of parameter values. To find values for a variety of parameters requires more computations, and even that does not demonstrate the functional relationship among the parameters and the values. The next chapter will derive a complete mathematical solution for the value of European call and put options. However, a mathematical solution is not always possible for more complex security derivatives, and then Monte Carlo simulation is a good alternative.

#### Variance Reduction Techniques

Applying Monte Carlo estimation to a random variable with a large variance creates a confidence interval that is correspondingly large. The width of the confidence interval can be reduced by increasing the sample size, but the width shrinks only as $1∕\sqrt{n}$: reducing the width of the confidence interval by half requires $4$ times more sample points. Sometimes we can find a way to reduce $s$ instead. To do this, we construct a new Monte Carlo problem with the same answer as our original problem but with a smaller value for $s$. Methods to do this are known as variance reduction techniques. Since Geometric Brownian Motion has a variance that grows exponentially, Monte Carlo estimates of quantities based on Geometric Brownian Motion are excellent candidates for variance reduction techniques.

Variance reduction techniques increase the efficiency of Monte Carlo estimation. Having methods to reduce variability with a given number of sample points, or for efficiency to achieve the same variability with fewer sample points, is desirable. Following are two methods that illustrate typical variance reduction techniques.

##### Antithetic Sampling

An antithetic sample is a sample that gives an opposite value of $g\left(x\right)$, being low when $g\left(x\right)$ is high and vice versa. Ordinarily we get an opposite $g$ by sampling at a point that is somehow opposite to $x$. The idea of antithetic sampling can be applied when it is possible to find transformations of $X$ that leave its probability distribution unchanged. For example, if $X$ is normally distributed $N\left(0,1\right)$, then $\stackrel{̃}{X}=-X$ is normally distributed as well; this is the antithetic sample applied here. More generally, if $X$ has a distribution symmetric about $0$, then $\stackrel{̃}{X}=-X$ has the same distribution. Another antithetic sample occurs when $X$ is uniformly distributed on $\left[0,1\right]$, so that $\stackrel{̃}{X}=1-X$ has the same probability distribution.

The antithetic sampling estimate of a value $\mu =𝔼\left[g\left(X\right)\right]$ with a sample size $n$ is

${\stackrel{̂}{\mu }}_{\text{anti}}=\frac{1}{2n}\sum _{j=1}^{n}\left(g\left({X}_{j}\right)+g\left({\stackrel{̃}{X}}_{j}\right)\right).$

Suppose the variance is ${\sigma }^{2}=𝔼\left[{\left(g\left(X\right)-\mu \right)}^{2}\right]<\infty$. Then the variance in the antithetic sample of size $n$ is

$$\begin{aligned}
Var\left[{\hat{\mu}}_{\text{anti}}\right] &= Var\left[\frac{1}{2n}\sum_{j=1}^{n}\left(g\left(X_{j}\right)+g\left(\tilde{X}_{j}\right)\right)\right] \\
&= \frac{n}{4n^{2}}\,Var\left[g\left(X\right)+g\left(\tilde{X}\right)\right] \\
&= \frac{1}{4n}\left(Var\left[g\left(X\right)\right]+Var\left[g\left(\tilde{X}\right)\right]+2\,Cov\left[g\left(X\right),g\left(\tilde{X}\right)\right]\right) \\
&= \frac{\sigma^{2}}{2n}\left(1+\rho\right).
\end{aligned}$$

Previously, we have often assumed that different samples are independent so that the covariance term is $0$, reducing the number of terms in an expression. However, here is a situation where we deliberately create statistical dependence with negative correlation $-1\le \rho <0$ to reduce the variance of the antithetic sampling estimate.
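A small Python sketch of the antithetic estimator for the simplified put of the previous subsection (an illustrative stand-in for the R script in the Scripts section), using $\stackrel{̃}{X}=-X$ with $X\sim N\left(0,1\right)$:

```python
import math
import random

def g(w, K=1.0):
    # payoff of the simplified put evaluated at the sample point S = e^w
    return max(K - math.exp(w), 0.0)

def antithetic_put_estimate(n, r=0.5, T=1.0, seed=2015):
    """mu_hat_anti = (1/2n) * sum_j [ g(X_j) + g(-X_j) ] with X_j ~ N(0,1);
    -X_j has the same N(0,1) distribution, so the estimator is unbiased.
    The result is discounted by e^{-rT} to give present value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        total += g(x) + g(-x)
    return math.exp(-r * T) * total / (2 * n)
```

Because $g$ is monotone, $g\left(X\right)$ and $g\left(-X\right)$ are negatively correlated, so the estimate is typically well inside the plain Monte Carlo confidence interval.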

Table 2 gives the results of antithetic sampling on the simplified put option example after a simple change of variables to make the sampling distribution normal. Table 2 shows that for the put option Monte Carlo estimation, not only are the option value estimates much better with antithetic sampling, but the confidence intervals are reduced to less than half the width compared with Table 1.

| $n$ | ${ḡ}_{n}$ | 95% Monte Carlo confidence interval |
| --- | --- | --- |
| $100$ | $0.1429556$ | $\left(0.1267189,0.1591923\right)$ |
| $1000$ | $0.1432171$ | $\left(0.1383814,0.1480529\right)$ |
| $10000$ | $0.1452026$ | $\left(0.1437125,0.1466926\right)$ |

Table 2: Monte Carlo confidence intervals with antithetic sampling for the present value of a simple put option. The present value of the put option from the Black-Scholes formula is $0.1446101$.

Note that if $g\left(x\right)$ is a function with odd symmetry, for example $g\left(x\right)=cx$, and $X$ is symmetric about $0$, for example normally distributed with antithetic sample $\stackrel{̃}{X}=-X$, then $\mu =0$. Furthermore, the statistical estimate ${\stackrel{̂}{\mu }}_{\text{anti}}$ would always be $0$, and in fact the estimate could be achieved with just one sample point and no variance. If $g$ is not symmetric, as in option estimation with $g\left(x\right)=max\left(K-x,0\right)$, we can still expect some improvement. The degree of improvement in antithetic sampling depends on the relative contribution of the odd and even parts of the function $g$; see [1] for details.
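The odd-symmetry remark can be checked numerically: for $g\left(x\right)=cx$ every antithetic pair averages to exactly zero, so the estimator has zero variance. A short Python check, with the constant $c=3$ an arbitrary illustrative choice:

```python
import random

def antithetic_pairs(g, n, seed=0):
    """Per-pair antithetic values (g(x) + g(-x)) / 2 for x ~ N(0,1)."""
    rng = random.Random(seed)
    return [(g(x) + g(-x)) / 2 for x in (rng.gauss(0.0, 1.0) for _ in range(n))]

# for the odd function g(x) = 3x, each pair (3x + 3(-x))/2 is exactly 0
pairs = antithetic_pairs(lambda x: 3.0 * x, 1000)
```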

##### Importance Sampling

Importance sampling is another variance reduction technique. The idea behind importance sampling is that some values in a simulation have more influence on the estimation than others. The probability distribution is carefully changed to give “important” outcomes more weight. If “important” values are emphasized by sampling more frequently, then the estimator variance can be reduced. Hence, the key to importance sampling is to choose a new sampling distribution that “encourages” the important values.

Let $f\left(x\right)$ be the density of the random variable $X$, so we are trying to estimate

$𝔼\left[g\left(X\right)\right]={\int }_{ℝ}g\left(x\right)f\left(x\right)\phantom{\rule{0.3em}{0ex}}dx.$

Estimate $𝔼\left[g\left(X\right)\right]$ with respect to another strictly positive density $h\left(x\right)$. Then easily

$𝔼\left[g\left(X\right)\right]={\int }_{ℝ}\frac{g\left(x\right)f\left(x\right)}{h\left(x\right)}h\left(x\right)\phantom{\rule{0.3em}{0ex}}dx,$

or equivalently, estimate

$𝔼\left[\frac{g\left(Y\right)f\left(Y\right)}{h\left(Y\right)}\right]=𝔼\left[\stackrel{̃}{g}\left(Y\right)\right]$

where $Y$ is a new random variable with density $h\left(y\right)$ and $\stackrel{̃}{g}\left(y\right)=g\left(y\right)f\left(y\right)∕h\left(y\right)$. For variance reduction, we wish to determine a new density $h\left(y\right)$ so that $Var\left[\stackrel{̃}{g}\left(Y\right)\right]<Var\left[g\left(X\right)\right]$. Consider

$Var\left[\stackrel{̃}{g}\left(Y\right)\right]=𝔼\left[\stackrel{̃}{g}{\left(Y\right)}^{2}\right]-{\left(𝔼\left[\stackrel{̃}{g}\left(Y\right)\right]\right)}^{2}={\int }_{ℝ}\frac{{g}^{2}\left(x\right){f}^{2}\left(x\right)}{h\left(x\right)}\phantom{\rule{0.3em}{0ex}}dx-𝔼{\left[g\left(X\right)\right]}^{2}.$

By inspection, we can see that we can make $Var\left[\stackrel{̃}{g}\left(Y\right)\right]=0$ by choosing $h\left(x\right)=g\left(x\right)f\left(x\right)∕𝔼\left[g\left(X\right)\right]$. This is the ultimate variance reduction but it relies on knowing $𝔼\left[g\left(X\right)\right]$, the quantity we are trying to estimate. It also requires that $g\left(x\right)$ is strictly positive.

To choose a good importance sampling distribution requires some educated guessing. Importance sampling is equivalent to a change of variables, and like a change of variables often requires insight. Each instance of importance sampling depends on the function and the distribution. A specific example will illustrate importance sampling.
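Before the specific example, the reweighting identity itself can be sketched in Python. Here the new sampling density $h$ is a normal density shifted to mean $-1$, an illustrative choice (not the change of variables used below) that pushes samples toward the region $x<0$ where the simplified put payoff $g\left(x\right)=max\left(1-{e}^{x},0\right)$ is nonzero:

```python
import math
import random

def phi(x, mu=0.0):
    # normal density with unit variance and mean mu
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

def importance_estimate(n, seed=2015):
    """Estimate E[g(X)], X ~ N(0,1), g(x) = max(1 - e^x, 0), by sampling
    Y ~ N(-1,1) and reweighting each term by the likelihood ratio f(y)/h(y),
    so the estimator E[g(Y) f(Y)/h(Y)] stays unbiased for E[g(X)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(-1.0, 1.0)            # draw from the shifted density h
        g = max(1.0 - math.exp(y), 0.0)     # put payoff
        total += g * phi(y) / phi(y, mu=-1.0)
    return total / n
```

This returns the undiscounted expectation $𝔼\left[g\left(X\right)\right]$; multiplying by the discount factor ${e}^{-1∕2}$ gives the present value of the simplified put.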

The example is again to calculate confidence intervals for the present value of a Monte Carlo estimate of a European put option price, where $g\left(x\right)=max\left(K-x,0\right)$ and $S$ is distributed as a geometric Brownian motion derived from $\phantom{\rule{0.3em}{0ex}}dS=rS\phantom{\rule{0.3em}{0ex}}dt+\sigma S\phantom{\rule{0.3em}{0ex}}dW$ with $S\left(0\right)={z}_{0}$. To set some simple parameters, we take the initial value ${z}_{0}=1$, the risk-free interest rate $r={\sigma }^{2}∕2$, the standard deviation $\sigma =1$, the strike price $K=1$ and the time to expiration $T=1$. The security process is then $S\left(t\right)={e}^{W\left(t\right)}$. At $T=1$ the process has a lognormal distribution with parameters $\mu =0$ and $\sigma \sqrt{T}=1$. Thus we start with

${\int }_{0}^{\infty }max\left(K-x,0\right)\frac{1}{\sqrt{2\pi }\sigma x\sqrt{T}}exp\left(-\frac{1}{2}{\left(ln\left(x\right)∕\sigma \sqrt{T}\right)}^{2}\right)\phantom{\rule{0.3em}{0ex}}dx.$

Using $K=1$, $\sigma =1$, and $T=1$, after a first change of variable the integral we are estimating is

$\begin{array}{llll}\hfill 𝔼\left[g\left(S\right)\right]& ={\int }_{-\infty }^{\infty }max\left(1-{e}^{x},0\right)\frac{{e}^{-{x}^{2}∕2}}{\sqrt{2\pi }}\phantom{\rule{0.3em}{0ex}}dx\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & ={\int }_{-\infty }^{0}\left(1-{e}^{x}\right)\frac{{e}^{-{x}^{2}∕2}}{\sqrt{2\pi }}\phantom{\rule{0.3em}{0ex}}dx.\phantom{\rule{2em}{0ex}}& \hfill & \phantom{\rule{2em}{0ex}}\\ \hfill & \phantom{\rule{2em}{0ex}}& \hfill \end{array}$

Introduce the exponential distribution with density $\frac{1}{2}{e}^{-y∕2}$ via a second change of variables $x=-\sqrt{y}$. Then the expectation integral becomes

${\int }_{0}^{\infty }\frac{1-{e}^{-\sqrt{y}}}{\sqrt{2\pi }\sqrt{y}}\frac{{e}^{-y∕2}}{2}\phantom{\rule{0.3em}{0ex}}dy.$

This change of probability distribution has the added advantage that now the sampling from the exponential distribution is concentrated where the function contributes most to the integral. This explains why the examples use put options, not call options.
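The two changes of variables translate directly into a sampler; a minimal Python sketch (the corresponding R code is in the Scripts section): draw $Y$ from the exponential density $\frac{1}{2}{e}^{-y∕2}$, average the new integrand, and discount by ${e}^{-1∕2}$:

```python
import math
import random

def exp_importance_estimate(n, seed=2015):
    """Importance-sampled present value of the simplified put: Y is drawn
    from the exponential density (1/2) e^{-y/2} and the integrand
    (1 - e^{-sqrt(y)}) / (sqrt(2*pi) * sqrt(y)) is averaged, then discounted."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.expovariate(0.5)  # rate 1/2, i.e. density (1/2) e^{-y/2}
        total += (1.0 - math.exp(-math.sqrt(y))) / (math.sqrt(2 * math.pi) * math.sqrt(y))
    return math.exp(-0.5) * total / n
```

The integrand is bounded by $1∕\sqrt{2\pi }$, which is why the resulting confidence intervals are so much narrower than in plain sampling.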

Now apply this importance sampling to the confidence interval estimation for the present value of put option prices. Short programs to create the confidence intervals for $100$, $1,000$, and $10,000$ samples are in the Scripts section. Some typical results from the R script are in Table 3.

| $n$ | ${ḡ}_{n}$ | 95% Monte Carlo confidence interval |
| --- | --- | --- |
| $100$ | $0.1465861$ | $\left(0.1395778,0.1535944\right)$ |
| $1000$ | $0.1449724$ | $\left(0.1427713,0.1471736\right)$ |
| $10000$ | $0.1443417$ | $\left(0.1436435,0.1436435\right)$ |

Table 3: Monte Carlo confidence intervals with importance sampling for the present value of a simple put option. The present value of the put option from the Black-Scholes formula is $0.1446101$.

Table 3 shows that the put option Monte Carlo estimates with importance sampling are closer to the Black-Scholes value than with simple Monte Carlo sampling. The confidence intervals are about one-fifth the width compared with Table 1.

#### Application to Data

Standard and Poor’s 500 Index, with symbol SPX, is a weighted index of 500 stocks. This index is designed to measure performance of the broad domestic economy through changes in the aggregate market value of $500$ stocks representing all major industries. As such, it is a good candidate for a stock index modeled by Geometric Brownian Motion. The index started with a level of $10$ for the 1941-43 base period.

To test Monte Carlo simulation on real data, use historical data on put options on the SPX that provides values for $K$, $T-t$, and ${z}_{0}$. Values for $r$ and $\sigma$ can be inferred from historical data.

The expectation to calculate for the Monte Carlo simulation of the value of a put option is

${\int }_{0}^{\infty }max\left(K-x,0\right)\frac{1}{\sqrt{2\pi }\sigma x\sqrt{t}}exp\left(\frac{-1}{2}{\left[\frac{ln\left(x\right)-ln\left({z}_{0}\right)-\left(r-{\sigma }^{2}∕2\right)t}{\sigma \sqrt{t}}\right]}^{2}\right)\phantom{\rule{0.3em}{0ex}}dx.$

After the same two changes of variables as in the simplified example, the expectation integral is equivalent to

$\begin{array}{c}{\int }_{0}^{\infty }max\left(K-{z}_{0}{e}^{\left(r-{\sigma }^{2}∕2\right)t+\sigma \sqrt{t}\sqrt{u}},0\right)+max\left(K-{z}_{0}{e}^{\left(r-{\sigma }^{2}∕2\right)t-\sigma \sqrt{t}\sqrt{u}},0\right)\\ \frac{1}{\sqrt{2\pi }\phantom{\rule{0.3em}{0ex}}2\sqrt{u}}exp\left(\frac{-u}{2}\right)\phantom{\rule{0.3em}{0ex}}du.\end{array}$

This is now the importance sampling estimate of the value of a put option on the SPX index. The Scripts section has programs to evaluate confidence intervals for the put option value using real data.
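A hedged Python sketch of the same estimator, with the SPX parameter values that appear in the Octave script of the Scripts section (in practice $r$ and $\sigma$ are inferred from market data):

```python
import math
import random

def spx_put_importance(n, z0=1614.96, r=0.008, sigma=0.1827, K=1575.0,
                       t=110 / 365, seed=2015):
    """Two-sided importance sample of the put value: draw U from the
    exponential density (1/2) e^{-u/2}, evaluate the payoff at the pair of
    points (r - sigma^2/2) t -/+ sigma sqrt(t) sqrt(u), weight by
    1/sqrt(2*pi*u), and discount by e^{-rt}."""
    rng = random.Random(seed)
    drift = (r - sigma**2 / 2) * t
    total = 0.0
    for _ in range(n):
        u = rng.expovariate(0.5)  # rate 1/2, matching exprnd(1/0.5, ...)
        lo = max(K - z0 * math.exp(drift - sigma * math.sqrt(t) * math.sqrt(u)), 0.0)
        hi = max(K - z0 * math.exp(drift + sigma * math.sqrt(t) * math.sqrt(u)), 0.0)
        total += (lo + hi) / math.sqrt(2 * math.pi * u)
    return math.exp(-r * t) * total / n
```

The estimate is in index points, on the same scale as the option’s market price.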

#### Sources

This section is adapted from: Simulation and Inference for Stochastic Differential Equations, by Stefano Iacus, Sections 1.3 and 1.4, Springer-Verlag, 2008. The section on antithetic sampling is adapted from Owen, Variance Reduction, and Lehna. Interest rates are from Treasury.gov.

_______________________________________________________________________________________________

### Algorithms, Scripts, Simulations

#### Algorithm

This script calculates the value of a put option on the Standard & Poor’s 500 stock index (SPX) in several ways. First the script uses the Black-Scholes formula for a put option to calculate the theoretical value. Then the script uses Monte Carlo simulation based on Geometric Brownian Motion with the SPX parameters to find an estimate for the put option value. Finally, the script uses Monte Carlo simulation with the antithetic sampling and importance sampling variance reduction methods to refine the estimates. The purpose is to compare the various Monte Carlo methods with the put option value. The output is a table of the SPX put option value from the Black-Scholes formula, Monte Carlo simulation with 95% confidence interval, antithetic sampling with 95% confidence interval, and importance sampling with 95% confidence interval.

1. Set the total number of trials: $n=10000$.
2. Set the values of $S$, $r$, $\sigma$, $K$, $T-t$.
3. Calculate the theoretical value of the put option with the Black-Scholes formula.
4. Fill the vector $x\left[1..n\right]$ with random normal variates and set
   $y1=\max\left(0,K-S\exp\left(\left(r-{\sigma }^{2}∕2\right)\left(T-t\right)+\sigma x\sqrt{T-t}\right)\right)$ and
   $y2=\max\left(0,K-S\exp\left(\left(r-{\sigma }^{2}∕2\right)\left(T-t\right)-\sigma x\sqrt{T-t}\right)\right)$.
   The antithetic sample is $y=\left(y1+y2\right)∕2$.
5. Fill the vector $u\left[1..n\right]$ with random exponential variates with rate $1∕2$. The importance sample is
   $u1=\left(\max\left(0,K-S\exp\left(\left(r-{\sigma }^{2}∕2\right)\left(T-t\right)-\sigma \sqrt{T-t}\sqrt{u}\right)\right)+\max\left(0,K-S\exp\left(\left(r-{\sigma }^{2}∕2\right)\left(T-t\right)+\sigma \sqrt{T-t}\sqrt{u}\right)\right)\right)∕\sqrt{2\pi u}$.
6. Apply the $t$-test to calculate the mean and confidence intervals of the Monte Carlo simulations $y1\left[1:100\right]$, $y1\left[1:1000\right]$ and $y1\left[1:n\right]$.
7. Apply the $t$-test to calculate the mean and confidence intervals of the antithetic samples $y\left[1:100\right]$, $y\left[1:1000\right]$ and $y\left[1:n\right]$.
8. Apply the $t$-test to calculate the mean and confidence intervals of the importance samples $u1\left[1:100\right]$, $u1\left[1:1000\right]$ and $u1\left[1:n\right]$.
9. Finally, collect and display a table of the theoretical value and the means and confidence intervals.

#### Scripts

R

```r
set.seed(2015)
n <- 10000

S <- 1
sigma <- 1
r <- sigma^2/2
K <- 1
Tminust <- 1

## the value of the put option by Black-Scholes
d2 <- (log(S/K) + (r - sigma^2/2) * Tminust)/(sigma * sqrt(Tminust))
d1 <- (log(S/K) + (r + sigma^2/2) * Tminust)/(sigma * sqrt(Tminust))
VP <- K * exp(-r * Tminust) * pnorm(-d2) - S * pnorm(-d1)

x <- rlnorm(n)  # note use of default meanlog=0, sdlog=1
y <- sapply(x, function(z) max(0, K - z))

mc100 <- t.test(exp(-r * Tminust) * y[1:100])    # first 100 simulations
mc1000 <- t.test(exp(-r * Tminust) * y[1:1000])  # first 1000 simulations
mcall <- t.test(exp(-r * Tminust) * y)           # all simulation results

type <- c("BlackScholes", "Monte Carlo 100", "Monte Carlo 1000", "Monte Carlo all")
putestimate <- c(VP, mc100$estimate, mc1000$estimate, mcall$estimate)
putconfintleft <- c(NA, mc100$conf.int[1], mc1000$conf.int[1], mcall$conf.int[1])
putconfintright <- c(NA, mc100$conf.int[2], mc1000$conf.int[2], mcall$conf.int[2])
d <- data.frame(type, putestimate, putconfintleft, putconfintright)

print(d)

x <- rnorm(n)
y1 <- sapply(x, function(z) max(0, K - exp(z)))
y2 <- sapply(-x, function(z) max(0, K - exp(z)))
y <- (y1 + y2)/2

anti100 <- t.test(exp(-r * Tminust) * y[1:100])    # first 100 simulations
anti1000 <- t.test(exp(-r * Tminust) * y[1:1000])  # first 1000 simulations
antiall <- t.test(exp(-r * Tminust) * y)           # all simulation results

type <- c("BlackScholes", "Antithetic 100", "Antithetic 1000", "Antithetic all")
putestimate <- c(VP, anti100$estimate, anti1000$estimate, antiall$estimate)
putconfintleft <- c(NA, anti100$conf.int[1], anti1000$conf.int[1], antiall$conf.int[1])
putconfintright <- c(NA, anti100$conf.int[2], anti1000$conf.int[2], antiall$conf.int[2])
d <- data.frame(type, putestimate, putconfintleft, putconfintright)

print(d)

xN <- rnorm(n)
yN <- sapply(xN, function(z) max(0, K - exp(z)))

mcN100 <- t.test(exp(-r * Tminust) * yN[1:100])    # first 100 simulations
mcN1000 <- t.test(exp(-r * Tminust) * yN[1:1000])  # first 1000 simulations
mcNall <- t.test(exp(-r * Tminust) * yN)           # all simulation results

type <- c("BlackScholes", "Monte Carlo 100", "Monte Carlo 1000", "Monte Carlo all")
putestimate <- c(VP, mcN100$estimate, mcN1000$estimate, mcNall$estimate)
putconfintleft <- c(NA, mcN100$conf.int[1], mcN1000$conf.int[1], mcNall$conf.int[1])
putconfintright <- c(NA, mcN100$conf.int[2], mcN1000$conf.int[2], mcNall$conf.int[2])
d <- data.frame(type, putestimate, putconfintleft, putconfintright)

print(d)

xE <- rexp(n, rate = 1/2)
yE <- sapply(xE, function(z) max(0, K - exp(-sqrt(z)))/(sqrt(2 * pi) * sqrt(z)))

mcE100 <- t.test(exp(-r * Tminust) * yE[1:100])    # first 100 simulations
mcE1000 <- t.test(exp(-r * Tminust) * yE[1:1000])  # first 1000 simulations
mcEall <- t.test(exp(-r * Tminust) * yE)           # all simulation results

type <- c("BlackScholes", "Monte Carlo 100", "Monte Carlo 1000", "Monte Carlo all")
putestimate <- c(VP, mcE100$estimate, mcE1000$estimate, mcEall$estimate)
putconfintleft <- c(NA, mcE100$conf.int[1], mcE1000$conf.int[1], mcEall$conf.int[1])
putconfintright <- c(NA, mcE100$conf.int[2], mcE1000$conf.int[2], mcEall$conf.int[2])
d <- data.frame(type, putestimate, putconfintleft, putconfintright)

print(d)
```
Octave

```octave
n = 10000;

S = 1614.96;        # Standard and Poors 500 Index, on 07/01/2013
r = 0.008;          # implied risk-free interest rate, between 3 year and 5 year T-bill rate
sigma = 0.1827;     # implied volatility
K = 1575;           # strike price
Tminust = 110/365;  # 07/01/2013 to 10/19/2013

## the true value from the Black-Scholes formula for the put option
numerd1 = log(S/K) + (r + sigma^2/2) * Tminust;
numerd2 = log(S/K) + (r - sigma^2/2) * Tminust;
d1 = numerd1/(sigma * sqrt(Tminust));
d2 = numerd2/(sigma * sqrt(Tminust));
part1 = S * (stdnormal_cdf(d1) - 1);
part2 = K * exp(-r * Tminust) * (stdnormal_cdf(d2) - 1);
VP = part1 - part2;

x = stdnormal_rnd(n, 1);
y1 = max(0, K - S * exp((r - sigma^2/2) * Tminust + sigma * x * sqrt(Tminust)));
y2 = max(0, K - S * exp((r - sigma^2/2) * Tminust + sigma * (-x) * sqrt(Tminust)));
y = (y1 + y2)/2;

u = exprnd(1/0.5, [n, 1]);  # note exponential parameter
tildeg = ...
  (...
  max(0, K - S * exp((r - sigma^2/2) * Tminust - sigma * sqrt(Tminust) * sqrt(u))) + ...
  max(0, K - S * exp((r - sigma^2/2) * Tminust + sigma * sqrt(Tminust) * sqrt(u)))...
  ) ./ sqrt(2 * pi * u);  # note use of broadcasting

function [m, confintleft, confintright] = ttest(data, confidence)
  m = mean(data);
  se = std(data)/sqrt(length(data));
  df = length(data) - 1;
  q = tinv((1 + confidence)/2, df);
  confintleft = m - q * se;
  confintright = m + q * se;
endfunction

[mc100mu, mc100cil, mc100cir] = ttest(exp(-r * Tminust) * y1(1:100), 0.95);
[mc1000mu, mc1000cil, mc1000cir] = ttest(exp(-r * Tminust) * y1(1:1000), 0.95);
[mcallmu, mcallcil, mcallcir] = ttest(exp(-r * Tminust) * y1, 0.95);

[anti100mu, anti100cil, anti100cir] = ttest(exp(-r * Tminust) * y(1:100), 0.95);
[anti1000mu, anti1000cil, anti1000cir] = ttest(exp(-r * Tminust) * y(1:1000), 0.95);
[antiallmu, antiallcil, antiallcir] = ttest(exp(-r * Tminust) * y, 0.95);

[imp100mu, imp100cil, imp100cir] = ttest(exp(-r * Tminust) * tildeg(1:100), 0.95);
[imp1000mu, imp1000cil, imp1000cir] = ttest(exp(-r * Tminust) * tildeg(1:1000), 0.95);
[impallmu, impallcil, impallcir] = ttest(exp(-r * Tminust) * tildeg, 0.95);

putestimate = [VP, ...
  mc100mu, mc1000mu, mcallmu, ...
  anti100mu, anti1000mu, antiallmu, ...
  imp100mu, imp1000mu, impallmu];
putconfintleft = [NA, ...
  mc100cil, mc1000cil, mcallcil, ...
  anti100cil, anti1000cil, antiallcil, ...
  imp100cil, imp1000cil, impallcil];
putconfintright = [NA, ...
  mc100cir, mc1000cir, mcallcir, ...
  anti100cir, anti1000cir, antiallcir, ...
  imp100cir, imp1000cir, impallcir];
d = transpose([putestimate; putconfintleft; putconfintright])
```
Perl
use PDL;
use PDL::NiceSlice;
use PDL::GSL::RNG;
$rng = PDL::GSL::RNG->new('taus');
$rng->set_seed( time() );
use PDL::Constants qw(PI E);
use Statistics::Distributions;

$n = 10000;

$S       = 1614.96;      # Standard and Poors 500 Index, on 07/01/2013
$r       = 0.008;        # implied risk free interest rate, between 3 and 5 year T-bill rate
$sigma   = 0.1827;       # implied volatility
$K       = 1575;         # strike price
$Tminust = 110 / 365;    # 07/01/2013 to 10/19/2013

sub pnorm {
    my ( $x, $sigma, $mu ) = @_;
    $sigma = 1 unless defined($sigma);
    $mu    = 0 unless defined($mu);

    return 0.5 * ( 1 + erf( ( $x - $mu ) / ( sqrt(2) * $sigma ) ) );
}

$numerd1 = log( $S / $K ) + ( ( $r + $sigma**2 / 2 ) * ($Tminust) );
$numerd2 = log( $S / $K ) + ( ( $r - $sigma**2 / 2 ) * ($Tminust) );
$d1      = $numerd1 / ( $sigma * sqrt($Tminust) );
$d2      = $numerd2 / ( $sigma * sqrt($Tminust) );
$part1   = $S * ( pnorm($d1) - 1 );
$part2   = $K * exp( -$r * $Tminust ) * ( pnorm($d2) - 1 );
$VP      = $part1 - $part2;

$x  = $rng->ran_gaussian( 1, $n );    # tested to here
$y1 = pdl(
    zeroes($n),
    $K - $S * exp(
        ( $r - $sigma**2 / 2 ) * $Tminust + $sigma * $x * sqrt($Tminust)
    )
)->transpose->maximum;
$y2 = pdl(
    zeroes($n),
    $K - $S * exp(
        ( $r - $sigma**2 / 2 ) * $Tminust + $sigma * ( -$x ) * sqrt($Tminust)
    )
)->transpose->maximum;
$y = ( $y1 + $y2 ) / 2;

$u = $rng->ran_exponential( 1 / 0.5, $n );    # note exponential parameter
$tildeg1 = pdl(
    zeroes($n),
    $K - $S * exp(
        ( $r - $sigma**2 / 2 ) * $Tminust - $sigma * sqrt($Tminust) * sqrt($u)
    )
)->transpose->maximum;
$tildeg2 = pdl(
    zeroes($n),
    $K - $S * exp(
        ( $r - $sigma**2 / 2 ) * $Tminust + $sigma * sqrt($Tminust) * sqrt($u)
    )
)->transpose->maximum;
$tildeg = ( $tildeg1 + $tildeg2 ) / sqrt( 2 * PI * $u );

sub ttest {
    my ( $data, $confidence ) = @_;
    $confidence = 0.95 unless defined($confidence);
    ( $mean, $prms, $median, $min, $max, $adev, $rms ) = stats($data);
    $df = nelem($data) - 1;
    $q  = Statistics::Distributions::tdistr( $df, ( 1 - $confidence ) / 2 );

    # Note use of level (1-$confidence)/2 instead of percentile (1+$confidence)/2
    $widthci      = $q * $rms / sqrt( nelem($data) );
    $confintleft  = $mean - $widthci;
    $confintright = $mean + $widthci;
    return ( $mean, $confintleft, $confintright );
}

( $mc100mu, $mc100cil, $mc100cir ) =
  ttest( exp( -$r * $Tminust ) * $y1 ( 0 : 99 ), 0.95 );
( $mc1000mu, $mc1000cil, $mc1000cir ) =
  ttest( exp( -$r * $Tminust ) * $y1 ( 0 : 999 ), 0.95 );
( $mcallmu, $mcallcil, $mcallcir ) =
  ttest( exp( -$r * $Tminust ) * $y1, 0.95 );

( $anti100mu, $anti100cil, $anti100cir ) =
  ttest( exp( -$r * $Tminust ) * $y ( 0 : 99 ), 0.95 );
( $anti1000mu, $anti1000cil, $anti1000cir ) =
  ttest( exp( -$r * $Tminust ) * $y ( 0 : 999 ), 0.95 );
( $antiallmu, $antiallcil, $antiallcir ) =
  ttest( exp( -$r * $Tminust ) * $y, 0.95 );

( $imp100mu, $imp100cil, $imp100cir ) =
  ttest( exp( -$r * $Tminust ) * $tildeg ( 0 : 99 ), 0.95 );
( $imp1000mu, $imp1000cil, $imp1000cir ) =
  ttest( exp( -$r * $Tminust ) * $tildeg ( 0 : 999 ), 0.95 );
( $impallmu, $impallcil, $impallcir ) =
  ttest( exp( -$r * $Tminust ) * $tildeg, 0.95 );

$d = pdl(
    [ $VP,         "NaN",        "NaN" ],
    [ $mc100mu,    $mc100cil,    $mc100cir ],
    [ $mc1000mu,   $mc1000cil,   $mc1000cir ],
    [ $mcallmu,    $mcallcil,    $mcallcir ],

    [ $anti100mu,  $anti100cil,  $anti100cir ],
    [ $anti1000mu, $anti1000cil, $anti1000cir ],
    [ $antiallmu,  $antiallcil,  $antiallcir ],

    [ $imp100mu,   $imp100cil,   $imp100cir ],
    [ $imp1000mu,  $imp1000cil,  $imp1000cir ],
    [ $impallmu,   $impallcil,   $impallcir ]
);

print $d;
SciPy
import numpy as np
import scipy.stats
from scipy.stats import norm

n = 10000

S = 1614.96  # Standard and Poors 500 Index, on 07/01/2013
r = 0.008  # implied risk free interest rate, between 3 year and 5 year T-bill rate
sigma = 0.1827  # implied volatility
K = 1575  # strike price
Tminust = 110. / 365.  # 07/01/2013 to 10/19/2013

numerd1 = np.log(S / K) + (r + sigma ** 2 / 2) * Tminust
numerd2 = np.log(S / K) + (r - sigma ** 2 / 2) * Tminust
d1 = numerd1 / (sigma * np.sqrt(Tminust))
d2 = numerd2 / (sigma * np.sqrt(Tminust))
part1 = S * (norm.cdf(d1) - 1)
part2 = K * np.exp(-r * Tminust) * (norm.cdf(d2) - 1)
VP = part1 - part2

x = norm.rvs(size=n)
y1 = np.maximum(0, K - S * np.exp((r - sigma ** 2 / 2) * Tminust
                + sigma * x * np.sqrt(Tminust)))
y2 = np.maximum(0, K - S * np.exp((r - sigma ** 2 / 2) * Tminust
                + sigma * -x * np.sqrt(Tminust)))
y = (y1 + y2) / 2

from scipy.stats import expon
u = expon.rvs(size=n, scale=1. / 0.5)
tildeg = (np.maximum(0, K - S * np.exp((r - sigma ** 2 / 2)
          * Tminust - sigma * np.sqrt(Tminust) * np.sqrt(u)))
          + np.maximum(0, K - S * np.exp((r - sigma ** 2 / 2)
          * Tminust + sigma * np.sqrt(Tminust) * np.sqrt(u)))) \
    / np.sqrt(2 * np.pi * u)


def ttest(data, confidence=0.95):
    m = np.mean(data)
    se = scipy.stats.sem(data)
    df = len(data) - 1
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., df)
    return (m, m - h, m + h)


# slices [0:100] and [0:1000] take the first 100 and 1000 samples,
# matching the inclusive Perl ranges 0:99 and 0:999
[mc100mu, mc100cil, mc100cir] = ttest(np.exp(-r * Tminust)
        * y1[0:100], 0.95)
[mc1000mu, mc1000cil, mc1000cir] = ttest(np.exp(-r * Tminust)
        * y1[0:1000], 0.95)
[mcallmu, mcallcil, mcallcir] = ttest(np.exp(-r * Tminust) * y1,
        0.95)

[anti100mu, anti100cil, anti100cir] = ttest(np.exp(-r * Tminust)
        * y[0:100], 0.95)
[anti1000mu, anti1000cil, anti1000cir] = ttest(np.exp(-r * Tminust)
        * y[0:1000], 0.95)
[antiallmu, antiallcil, antiallcir] = ttest(np.exp(-r * Tminust)
        * y, 0.95)

[imp100mu, imp100cil, imp100cir] = ttest(np.exp(-r * Tminust)
        * tildeg[0:100], 0.95)
[imp1000mu, imp1000cil, imp1000cir] = ttest(np.exp(-r * Tminust)
        * tildeg[0:1000], 0.95)
[impallmu, impallcil, impallcir] = ttest(np.exp(-r * Tminust)
        * tildeg, 0.95)

putestimate = [
    VP,
    mc100mu,
    mc1000mu,
    mcallmu,
    anti100mu,
    anti1000mu,
    antiallmu,
    imp100mu,
    imp1000mu,
    impallmu,
    ]
putconfintleft = [
    np.nan,
    mc100cil,
    mc1000cil,
    mcallcil,
    anti100cil,
    anti1000cil,
    antiallcil,
    imp100cil,
    imp1000cil,
    impallcil,
    ]
putconfintright = [
    np.nan,
    mc100cir,
    mc1000cir,
    mcallcir,
    anti100cir,
    anti1000cir,
    antiallcir,
    imp100cir,
    imp1000cir,
    impallcir,
    ]
d = np.transpose(np.array([putestimate, putconfintleft,
                 putconfintright]))

print(d)
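The listings implement antithetic sampling through the average `(y1 + y2) / 2`. As a quick self-contained check of why that average helps, here is a minimal NumPy sketch comparing the sample standard deviation of the plain estimator with that of the antithetic average; the helper `put_payoff` and the fixed seed are illustrative choices, not part of the scripts above:

```python
import numpy as np

# Parameters as in the scripts above
S, r, sigma, K, Tminust = 1614.96, 0.008, 0.1827, 1575.0, 110.0 / 365.0

rng = np.random.default_rng(12345)  # arbitrary fixed seed for reproducibility
n = 10000
x = rng.standard_normal(n)

def put_payoff(z):
    # Discounted simplified put payoff driven by standard normal draws z
    ST = S * np.exp((r - sigma ** 2 / 2) * Tminust + sigma * np.sqrt(Tminust) * z)
    return np.exp(-r * Tminust) * np.maximum(K - ST, 0.0)

plain = put_payoff(x)                        # basic Monte Carlo samples
anti = (put_payoff(x) + put_payoff(-x)) / 2  # antithetic averages

print("plain std:     ", plain.std(ddof=1))
print("antithetic std:", anti.std(ddof=1))
```

Because the payoff is monotone in the normal draw, each pair is negatively correlated and the antithetic standard deviation comes out smaller, which is why the `anti*` confidence intervals in the output tables are narrower than the `mc*` intervals at the same sample size.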

__________________________________________________________________________

### Problems to Work for Understanding

1. Experiment with various values of sample size in the Monte Carlo simulation of the simpliﬁed put option to determine how the width of the conﬁdence interval depends on the sample size.
2. Experiment with various random seeds and values of the parameter $\sigma$ in the simpliﬁed put option model to ﬁnd the relative errors of the basic Monte Carlo method, antithetic sampling and importance sampling.
3. From Options, we know the value of a put option increases with increasing volatility $\sigma$. Additionally, the standard deviation of the simplified security price increases exponentially with increasing $\sigma$. Experiment with various values of $\sigma$ in the Monte Carlo simulation of the simplified put option to determine how the put option price and the width of the confidence intervals vary with $\sigma$.
4. Evaluate the following integrals numerically (that is, with numerical integration software, not Monte Carlo evaluation):
1. $\int_{0}^{\infty} \max(K-x,0)\,\frac{1}{\sqrt{2\pi}\,\sigma x \sqrt{T}}\exp\left(-\frac{1}{2}\left(\ln(x)/\sigma\sqrt{T}\right)^{2}\right)\,dx;$

2. $\int_{-\infty}^{0}\left(1-e^{x}\right)\frac{e^{-x^{2}/2}}{\sqrt{2\pi}}\,dx;$

3. $\int_{0}^{\infty}\frac{1-e^{-\sqrt{y}}}{\sqrt{2\pi}\,\sqrt{y}}\,\frac{e^{-y/2}}{2}\,dy.$

Interpreting each as the expected value of a put option at expiration, find the present value of each if the interest rate is $r = \sigma^{2}/2$ with $\sigma = 1$ and $T-t = T-0 = 1$.

5. Change the scripts to provide descriptive output.
6. Change the scripts to create a Monte Carlo simulation estimate of the value of a put option on the Standard and Poor’s 500 index (SPX) directly using $10,000$ random variates with a lognormal distribution. Use parameters $S=1614.96$, $r=0.008$, $\sigma =0.1827$, $K=1575$, $T-t=110/365$. Also construct the $95\%$ confidence interval for the estimate.
7. Change the scripts to create a Monte Carlo simulation estimate of the value of a call option on the Standard and Poor’s 500 index (SPX) directly using $10,000$ random variates with a lognormal distribution. Use parameters $S=1614.96$, $r=0.008$, $\sigma =0.1827$, $K=1575$, $T-t=110∕365$.
8. Provide a detailed explanation of the changes of variables that lead from
$\int_{0}^{\infty} \max(K-x,0)\,\frac{1}{\sqrt{2\pi}\,\sigma x \sqrt{t}}\exp\left(\frac{-1}{2}\left[\frac{\ln(x)-\ln(z_{0})-(r-\sigma^{2}/2)t}{\sigma\sqrt{t}}\right]^{2}\right)\,dx$

to

$\int_{0}^{\infty}\left[\max\left(K-z_{0}e^{(r-\sigma^{2}/2)t+\sigma\sqrt{t}\sqrt{u}},0\right)+\max\left(K-z_{0}e^{(r-\sigma^{2}/2)t-\sigma\sqrt{t}\sqrt{u}},0\right)\right]\frac{1}{\sqrt{2\pi}\,2\sqrt{u}}\exp\left(\frac{-u}{2}\right)\,du.$
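For the experiment in Problem 1, the expected behavior is that the confidence-interval half-width $t^{*} s/\sqrt{n}$ shrinks roughly like $1/\sqrt{n}$, since the sample standard deviation $s$ stabilizes while $\sqrt{n}$ grows. A minimal sketch of the experiment, using the same simplified put model as the scripts; the helper `ci_halfwidth` and the seed are illustrative choices:

```python
import numpy as np
from scipy import stats

# S&P 500 put parameters from the scripts
S, r, sigma, K, Tminust = 1614.96, 0.008, 0.1827, 1575.0, 110.0 / 365.0
rng = np.random.default_rng(7)  # arbitrary fixed seed

def ci_halfwidth(data, confidence=0.95):
    # Half-width t* s / sqrt(n) of the t confidence interval
    se = stats.sem(data)
    return se * stats.t.ppf((1 + confidence) / 2.0, len(data) - 1)

z = rng.standard_normal(100000)
payoff = np.exp(-r * Tminust) * np.maximum(
    K - S * np.exp((r - sigma ** 2 / 2) * Tminust
                   + sigma * np.sqrt(Tminust) * z), 0.0)

for n in (100, 1000, 10000, 100000):
    # half-width shrinks roughly by sqrt(10) per tenfold increase in n
    print(n, ci_halfwidth(payoff[:n]))
```

Each tenfold increase in the sample size should cut the half-width by about a factor of $\sqrt{10}\approx 3.16$, up to fluctuation in the estimated standard deviation.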
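For the numerical evaluations in Problem 4, `scipy.integrate.quad` is one suitable tool. The sketch below takes $\sigma = 1$ and $T = 1$ as in the problem, and assumes $K = 1$, the value for which the substitution $x = e^{z}$ carries the first integrand into the second and $y = z^{2}$ carries the second into the third, so the three results should agree:

```python
import numpy as np
from scipy.integrate import quad

K, sigma, T = 1.0, 1.0, 1.0  # sigma and T from Problem 4; K = 1 is assumed

# (a) lognormal form; the integrand vanishes for x >= K, so integrate on (0, K)
Ia, _ = quad(lambda x: max(K - x, 0)
             / (np.sqrt(2 * np.pi) * sigma * x * np.sqrt(T))
             * np.exp(-0.5 * (np.log(x) / (sigma * np.sqrt(T))) ** 2), 0, K)

# (b) after the substitution x = e^z
Ib, _ = quad(lambda z: (1 - np.exp(z)) * np.exp(-z ** 2 / 2)
             / np.sqrt(2 * np.pi), -np.inf, 0)

# (c) after the further substitution y = z^2
Ic, _ = quad(lambda y: (1 - np.exp(-np.sqrt(y)))
             / (np.sqrt(2 * np.pi) * np.sqrt(y))
             * np.exp(-y / 2) / 2, 0, np.inf)

print(Ia, Ib, Ic)  # the three values agree to quad's default tolerance
```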

__________________________________________________________________________

### References

[1]   Art Owen. Monte Carlo Theory, Methods and Examples. http://statweb.stanford.edu/~owen/mc/, 2013.

__________________________________________________________________________

__________________________________________________________________________

I check all the information on each page for correctness and typographical errors. Nevertheless, some errors may occur and I would be grateful if you would alert me to such errors. I make every reasonable eﬀort to present current and accurate information for public use, however I do not guarantee the accuracy or timeliness of information on this website. Your use of the information from this website is strictly voluntary and at your risk.

I have checked the links to external sites for usefulness. Links to external websites are provided as a convenience. I do not endorse, control, monitor, or guarantee the information contained in any external website. I don’t guarantee that the links are active at all times. Use the links here with the same caution as you would all information on the Internet. This website reﬂects the thoughts, interests and opinions of its author. They do not explicitly represent oﬃcial positions or policies of my employer.

Information on this website is subject to change without notice.