2 Law of Large Numbers and Central Limit Theorem

In this section we will:

  • Introduce the Law of Large Numbers (LLN)
  • Use simulations to demonstrate the LLN
    • Simulations -> Math
    • Math -> Simulations
  • Create a simulation where the LLN doesn’t work

2.0.1 Conceptual overview

This section illustrates the law of large numbers using some example data.

  • The law of large numbers describes what happens to the empirical average as it is taken over increasing sample sizes.
Code
# read in student names
# each student is a researcher
students = read.csv('../../unshared/students2023.csv')

# read in data
puf = readRDS('../datasets/nsduh/puf.rds')
# ever tried cigarettes indicator
triedCigs = puf$cigflag
# make it a Bernoulli random variable
triedCigs = ifelse(triedCigs=='yes', 1, 0)

ns = c(10, 50, 100, 200, 300, 400, 500)

# the class roster plus 'Kun' and 'Simon'
researchers = c(students$First.Name, 'Kun', 'Simon')

for(n in ns){
  # Each person in the class performs a study of smoking with sample size n
  studies = lapply(researchers, function(student) sample(triedCigs, size=n))
  names(studies) = researchers
  
  # get the mean for each person's study
  studyMeans = sapply(studies, mean)
  # histogram of the study means
  hist(studyMeans, xlim=c(0,1), breaks=10, main=paste0('n=', n))
  # vertical line at the population mean
  abline(v=mean(triedCigs))
}


2.1 Law of Large Numbers

  • The LLN is a tool we can use to say that the average of a sample is close to its expected value (and gets closer as the sample size grows).
  • I wouldn’t consider using an estimator that doesn’t satisfy the LLN.

2.1.1 Simulations -> Math: illustrating the LLN with simulations

2.1.1.1 Initial simulation without convergence definition

  • First, let’s choose a distribution.
  • Then, let’s initialize empty vectors, x and means.
  • For n in 1:maxn,
    • Draw a sample from our distribution and add it to x, x<-c(x,<new random sample>).
    • Compute the mean of x and save it in another vector means[n]<- mean(x).
  • Plot the vector means.
Code
# distribution we're sampling from
p = mean(triedCigs)
RVfunc = function(n) rbinom(n, size=1, prob=p)
# x is the observed sample
x = c()
# this is the mean of the observed sample
means = c()
# maximum sample size to get to
maxn=5000

# for loop through samples
for(n in 1:maxn){
  xn = RVfunc(1)
  x = c(x, xn)
  means = c(means, mean(x))
}



plot(1:maxn, means, type='l', xlab='n', ylab='Running mean', ylim=c(0,1))
# what is this converging to?
abline(h=p, lty=2)

What do you notice about this plot?

Let’s describe mathematically what we did.

  • \(X_i \sim \text{Bernoulli}(p)\), for \(i=1,2, \ldots\)
  • \(\bar X_n = \frac{1}{n} \sum_{i=1}^n X_i\)
  • \(\bar X_n \to\) what? What does this little arrow mean?

2.1.1.2 Convergence of random variables

Definition: \(Y_n\) converges to \(Y\) in probability if for all \(\epsilon>0\), \(\lim_{n\to\infty}\mathbb{P}(\lvert Y_n - Y \rvert \le \epsilon) = 1\).

  • “Eventually, every world will have \(Y_n\) within \(\epsilon\) of \(Y\).”
  • “The probability that \(Y_n\) is within \(\epsilon\) of \(Y\) goes to 1.”
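
As a quick numeric illustration (a minimal sketch, taking \(Y_n = \bar X_n\) for fair-coin flips with \(p=0.5\) as an assumed example), we can compute \(\mathbb{P}(\lvert Y_n - p \rvert \le \epsilon)\) exactly from the binomial CDF and watch it approach 1:

Code
# exact P(|Xbar_n - p| <= eps) for averages of Bernoulli(p) coin flips,
# using that S_n = n * Xbar_n is Binomial(n, p)
p = 0.5; eps = 0.05
for(n in c(10, 100, 1000)){
  prob = pbinom(floor(n*(p + eps)), size=n, prob=p) -
    pbinom(ceiling(n*(p - eps)) - 1, size=n, prob=p)
  cat('n =', n, '\tP =', round(prob, 4), '\n')
}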

Definition: \(Y_n\) converges almost surely to \(Y\) if \(\mathbb{P}(\lim_{n\to \infty}\lvert Y_n - Y \rvert =0) = 1\).

  • “There is not a world where \(Y_n\) does not converge to \(Y\).”
  • “The probability \(Y_n\) does not converge to \(Y\) is zero.”

2.1.2 Math -> Simulations: adding convergence definition

How do we incorporate one of these convergence definitions into our simulations?

  • Someone in the class gives an \(\epsilon>0\).
  • Initialize an empty array means=matrix(NA, nrow=nsim, ncol=maxn).
  • For each sim in 1:nsim,
    • Initialize an empty vector x.
    • For n in 1:maxn,
      • Draw a sample from our distribution and add it to x, x<-c(x,<new random sample>).
      • Compute the mean of x and save it in the array, means[sim,n]<- mean(x).
  • Use means to estimate \(\mathbb{P}(\lvert \bar X_n - \mathbb{E}X_i \rvert < \epsilon)\) at each \(n\) by averaging across simulations.
  • Plot \(\mathbb{P}(\lvert \bar X_n - \mathbb{E}X_i \rvert < \epsilon)\) as a function of \(n\).
Code
epsilon = 0.02
nsim = 500
# distribution we're sampling from
p = mean(triedCigs)
RVfunc = function(n) rbinom(n, size=1, prob=p)
# maximum sample size to get to
maxn=10000
# this is the mean of the observed sample
meansout = matrix(NA, nrow=nsim, ncol=maxn)


for(sim in 1:nsim){
  # draw the whole sample at once; cumsum gives the running means
  x = RVfunc(maxn)
  meansout[sim,] = cumsum(x)/1:maxn
  # equivalent, but much slower, version that grows the sample one draw at a time:
  # x = c(); means = c()
  # for(n in 1:maxn){
  #   xn = RVfunc(1)
  #   x = c(x, xn)
  #   means = c(means, mean(x))
  # }
  # meansout[sim,] = means
}
Code
# estimate P(|Xbar_n - p| < epsilon) at each n by averaging over simulations
P_XbarMinusp = colMeans(abs(meansout - p) < epsilon)

plot(1:maxn, P_XbarMinusp, type='l', xlab='n', ylab='P(|Xbar - p| < eps)', ylim=c(0,1))


2.1.3 Formal definition of LLN

2.1.3.1 IID random variables

IID stands for independent and identically distributed.

For a sample \(X_1, \ldots, X_n\):

  • Independent means their joint probability distribution factors into a product of the marginal distributions.
  • Identically distributed means \(X_i \sim F_X\) for every \(i\) (they all have the same distribution function).
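
Concretely, combining the two conditions, the joint distribution function of an IID sample factors into \(n\) identical marginal pieces: \[ F_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = \prod_{i=1}^n F_X(x_i). \]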

2.1.3.2 Law of large numbers (Weak law)

  • Weak LLN is about convergence in probability

Theorem (Durrett, pg. 61): Let \(X_1, X_2, \ldots\) be iid random variables with \(\mathbb{E}X_i = \mu\) and \(\mathbb{E}\lvert X_i \rvert <\infty\).

If \(S_n = X_1 + \ldots + X_n\), then as \(n\to \infty\), \[ S_n/n \to_p \mu. \]

Better weak laws allow some dependence among the observations and usually just require that \(\mathbb{E}X_i^2 < \infty\) (a slightly stronger assumption than \(\mathbb{E}\lvert X_i \rvert <\infty\)). Check out Google if you’re interested.

The strong law is about almost sure convergence and doesn’t require any further assumptions.

Intuition about the LLN: What is the mean (expected value) of \(S_n/n\)? What is the variance of \(S_n/n\)?

  • Mean: \(\mathbb{E}(S_n/n) = n^{-1}\sum_{i=1}^n \mathbb{E}X_i = \mu\).
  • Variance: \(\text{Var}(S_n/n) = \sigma^2/n \to 0\) (when \(\sigma^2 = \text{Var}(X_i) < \infty\)).
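
This intuition can be made rigorous in one line with Chebyshev’s inequality (under the stronger assumption \(\sigma^2 < \infty\)): \[ \mathbb{P}\left(\lvert S_n/n - \mu \rvert > \epsilon\right) \le \frac{\text{Var}(S_n/n)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2} \to 0. \]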

2.1.3.3 How do we use theorems in research?

  1. Demonstrate that the assumptions of the theorem are satisfied in our case.
  2. Draw a conclusion about our estimator.

What if assumptions are violated?

  • The average may or may not converge
  • There are more precise statements that are necessary and sufficient.

2.1.4 Breaking the LLN: Cauchy distribution

  • What is a Cauchy distribution? (It’s the distribution of the ratio of two independent standard normal random variables; equivalently, a t random variable on 1 degree of freedom.)
  • The LLN can break if \(\mathbb{E}\lvert X_i \rvert\) doesn’t exist (is infinite). What does this mean?

The Cauchy does not have any moments: \(\mathbb{E}\lvert X_i \rvert = \infty\), so \(\mathbb{E}X_i\) is undefined.
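
To see this, integrate against the Cauchy density \(f(x) = 1/\{\pi(1+x^2)\}\); the tail integral diverges logarithmically: \[ \mathbb{E}\lvert X_i \rvert = \int_{-\infty}^{\infty} \frac{\lvert x \rvert}{\pi(1+x^2)}\,dx = \frac{2}{\pi}\int_0^\infty \frac{x}{1+x^2}\,dx = \frac{1}{\pi}\left[\log(1+x^2)\right]_0^\infty = \infty. \]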

Let’s use the simulations we wrote above to show that the mean of Cauchy random variables does not converge.

Code
epsilon = 0.01
nsim = 500
# Changed this to a Cauchy random variable!
RVfunc = function(n) rcauchy(n)
# maximum sample size to get to
maxn=5000
# this is the mean of the observed sample
meansout = matrix(NA, nrow=nsim, ncol=maxn)


for(sim in 1:nsim){
  # x is the observed sample; cumsum gives the running means
  x = RVfunc(maxn)
  means = cumsum(x)/1:maxn
  meansout[sim,] = means
}
Code
# the Cauchy is symmetric about 0, so compare |Xbar_n - 0| to epsilon
P_XbarMinusp = colMeans(abs(meansout) < epsilon)

plot(1:maxn, P_XbarMinusp, type='l', xlab='n', ylab='P(|Xbar| < eps)', ylim=c(0,1))

Code
# running mean from the last simulation: it never settles down
plot(1:maxn, means, type='l', xlab='n', ylab='Xbar')

Note: we’ve already taken advantage of the LLN in the class. Does anyone know how?

2.2 Central Limit Theorems (CLTs)

Today we’ll consider another theorem about a different type of convergence for random variables.

The goals for today are:

  1. Illustrate the CLT using the Simulations -> Math approach
  2. Illustrate the CLT using the Math -> Simulations approach with the Bernoulli distribution

2.2.1 Preliminary: Normalizing sums of random variables

The CLT is about things that look like this:

  • \(X_i \sim F\) (IID arbitrary distribution) with \(\mathbb{E}X_i = \mu\) and \(\text{Var}(X_i) = \sigma^2\).
  • \(\bar X = n^{-1} \sum_{i=1}^n X_i\)
  • \(Z = \sqrt{n} (\bar X - \mu)/\sigma\)
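
Why this particular scaling? A quick mean-and-variance check (using independence) shows that \(Z\) is exactly standardized for every \(n\): \[ \mathbb{E}\bar X = \mu, \qquad \text{Var}(\bar X) = \frac{\sigma^2}{n}, \qquad \text{so } \mathbb{E}Z = 0 \text{ and } \text{Var}(Z) = \frac{n\,\text{Var}(\bar X)}{\sigma^2} = 1. \]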

2.2.2 Preliminary: The normal distribution

Things about the normal distribution:

  • The standard normal distribution is \(N(0,1)\); a standard normal random variable is often denoted \(Z\).
  • PDF often denoted by \(\phi(z)\).
  • CDF often denoted by \(\Phi(z)\).
  • For \(Y \sim N(\mu, \sigma^2)\), \((Y-\mu)/\sigma \sim N(0, 1)\) (often called Z-normalization).
  • \(\mathbb{P}(\lvert Z \rvert\le 1.96) = \Phi(1.96) - \Phi(-1.96) \approx 0.95\).
  • \(\mathbb{P}( Z \le 1.64) = \Phi(1.64) \approx 0.95\).
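
These facts are easy to verify numerically with R’s normal CDF and quantile functions:

Code
# verify the standard normal tail facts
pnorm(1.96) - pnorm(-1.96)  # ~ 0.95 (two-sided)
pnorm(1.64)                 # ~ 0.95 (one-sided)
qnorm(0.975)                # ~ 1.96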

2.2.3 Conceptual overview

This section illustrates the CLT using some example data.

Code
# read in student names
# each student is a researcher
students = read.csv('../../unshared/students2023.csv')

# read in data
puf = readRDS('../datasets/nsduh/puf.rds')
# ever tried cigarettes indicator
triedCigs = puf$cigflag
# make it a Bernoulli random variable
triedCigs = ifelse(triedCigs=='yes', 1, 0)

mu = mean(triedCigs)
sigma = sqrt(var(triedCigs))

ns = c(5, 10, 50, 100, 200, 300, 400, 500)

# repeat the class roster 100 times so each histogram is based on many studies
students = rep(c(students$First.Name, 'Kun', 'Simon'), 100)

layout(matrix(1:8, nrow=2, byrow=TRUE))
for(n in ns){
  # Each person in the class is performing a study of smoking
  studies = lapply(students, function(student) sample(triedCigs, size=n))
  names(studies) = students
  
  # get the mean for each person's study
  studyMeans = sapply(studies, mean)
  stdMeans = sqrt(n)*(studyMeans - mu)/sigma
  # histogram of the study means
  hist(stdMeans, xlim = c(-3,3), breaks=10, main=paste0('n=', n))
}

2.2.4 Preliminary: Convergence in distribution

Definition: A random variable \(Y_n\) with distribution function \(F_n(y)\) is said to converge in distribution to a limit \(Y\) with distribution function \(F(y)\) if, for all \(y\) that are continuity points of \(F\), \(F_n(y) \to F(y)\).

I’ll denote it \(Y_n\to_D Y\).

  • Let’s unpack the notation \(F_n(y) \to F(y)\). This means for any given \(y\) and \(\epsilon>0\) there exists \(N\) such that for all \(n\ge N\), \[ \lvert F_n(y) - F(y)\rvert < \epsilon. \]

In other words, the distance between the distributions goes to zero as \(n\) gets larger.

2.2.5 The Central Limit Theorem

The Central Limit Theorem (Durrett, pg. 124): Let \(X_1, X_2, \ldots\) be iid with \(\mathbb{E}X_i = \mu\) and \(\text{Var}(X_i) = \sigma^2 \in (0, \infty)\).

If \(\bar X_n = n^{-1} \sum_{i=1}^n X_i\), then \[ n^{1/2}(\bar X_n - \mu)/\sigma \to_D Z, \] where \(Z \sim N(0,1)\).

Comments:

  • We need the variance to be finite (a stronger assumption than the LLN requires).

2.2.6 The Lindeberg-Feller Theorem

The Lindeberg-Feller Theorem (Wikipedia): Let \(X_i\) be independent random variables with \(\mathbb{E}X_i = \mu_i\) and variances \(\text{Var}(X_i) = \sigma^2_i \in (0, \infty)\). Let \(s^2_n = \sum_{i=1}^n \sigma^2_i\).

If this sequence of random variables satisfies Lindeberg’s condition \[ \lim_{n\to\infty} s^{-2}_n \sum_{i=1}^n \mathbb{E}\left\{(X_i-\mu_i)^2 I(\lvert X_i - \mu_i\rvert>\epsilon s_n) \right\} = 0 \] for all \(\epsilon>0\), then \[ Z_n = \sum_{i=1}^n (X_i-\mu_i)/s_n \to_D Z, \] where \(Z\sim N(0,1)\).
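
For example (a standard special case not stated in these notes): if the \(X_i\) are uniformly bounded, \(\lvert X_i - \mu_i \rvert \le M\), and \(s_n \to \infty\), then once \(\epsilon s_n > M\) every indicator in the sum is zero, so Lindeberg’s condition holds automatically.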

Comments:

  • Relaxes the iid assumption to just independence!

  • CLTs are well studied; there are other versions for somewhat dependent variables.

  • We will review the CLT statement.

  • We will look at what it means for the Bernoulli example using simulations.

2.2.7 Example of CLT with Bernoulli distribution

  • The Bernoulli is appealing because we can assess the CLT mathematically.
  • A sum of IID Bernoulli random variables is Binomial.
  • Using the CLT, for samples \(X_1, \ldots, X_n\) from this distribution, the standardized mean is \[ \sqrt{n}\frac{(\bar X_n -p)}{\sqrt{p(1-p)}} \sim N(0,1) \text{ (approximately)} \]
  • \(\mathbb{P}(\sqrt{n} (\bar X - p)/\sqrt{p(1-p)} \le z) = \mathbb{P}\left(n\bar X \le \left(np(1-p)\right)^{1/2}\times z + np \right) \to \mathbb{P}(Z\le z)\), where \(Z\sim N(0,1)\)

So how do we evaluate this?

  • Pick a value for \(p\).
  • Choose a vector of values for \(n\) and a grid of \(z\) values.
  • Compare the binomial and normal CDFs across the sample sizes.

Note: We don’t even need to run simulations because we can do it using the CDFs in R.

Code
ns = 1:5000
p = 0.01
z = seq(-2, 2, length.out=1000)

probDiffs = rep(NA, length(ns))
for(n in ns){
  # largest CDF difference over the grid of z values (a sup distance)
  probDiffs[n] = max(abs( pbinom(sqrt(n*p*(1-p))*z + n*p, size = n, prob=p) - pnorm(z) ))
}
plot(ns, probDiffs, xlab='Sample size', ylab='Distribution error', main='CLT for Bernoulli', type='l')
abline(h=0, lty=2)

Code
ns = 1:5000
p = 0.01
z = 1.64

probDiffs = rep(NA, length(ns))
for(n in ns){
  # CDF difference at the single point z = 1.64
  probDiffs[n] = abs( pbinom(sqrt(n*p*(1-p))*z + n*p, size = n, prob=p) - pnorm(z) )
}
plot(ns, probDiffs, xlab='Sample size', ylab='Distribution error', main='CLT for Bernoulli', type='l')
abline(h=0, lty=2)

2.2.8 QQ-plot!

2.2.8.1 Binomial proportion: Another look using simulations

Code
ns = seq(5, 100, by=10)
layout(matrix(1:10, ncol=5, byrow=TRUE))
p = 0.75
for(n in ns){
  means = replicate(10000,
                 # compute the standardized mean, 10000 times
                 sqrt(n) * (mean(rbinom(n, 1, p))-p) / sqrt(p*(1-p))
                 )
  hist(means,main=paste('n =', n), probability = TRUE, ylab='', xlab='' )
  # draw standard normal density
  x = qnorm(ppoints(1000))
  points(x, dnorm(x), type='l')
}

Code
layout(matrix(1:10, ncol=5, byrow=TRUE))
for(n in ns){
  means = replicate(10000,
                 # compute the standardized mean, 10000 times
                 sqrt(n)*(mean(rbinom(n, 1, p))-p)/sqrt(p*(1-p))
                 )
  qqnorm(scale(means), main=paste('n =', n), ylab='Means quantiles', xlab='Normal quantiles' )
  abline(a=0, b=1)
}

2.2.9 Comment about dependence in the CLT

For a bunch of RVs with zero mean and nonzero covariance, consider the variance of the scaled sum \(n^{-1/2}S_n = \sqrt{n}\,\bar X_n\): \[ \begin{aligned} \text{Var}(n^{-1/2}S_n) & = n^{-1}\text{Cov}(S_n, S_n) \\ & = n^{-1}\sum_{i,j=1}^n \text{Cov}(X_i, X_j) \\ & = n^{-1}\sum_{i=1}^n \text{Var}(X_i) + n^{-1}\sum_{j\ne k} \text{Cov}(X_j, X_k)\\ & = \text{Var}(X_1) + (n-1)\text{Cov}(X_1, X_2)\\ &\text{(if the variances and covariances are the same for all RVs)} \end{aligned} \]

Comments:

  • The first term converges under CLT assumptions.
  • The second term has \(n(n-1)\) terms.
    • We need the covariance terms to get “small enough” for the CLT to work.
    • For example, by things that are “far away” having small covariance, as in the sketch below.
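
A minimal sketch of this (the AR(1) model is an illustrative choice, not part of the notes): with AR(1) data the covariance between \(X_i\) and \(X_j\) decays geometrically in \(\lvert i - j \rvert\), so a CLT still holds, but the limiting variance is larger than the IID formula suggests.

Code
# standardized means of AR(1) data: Cov(X_i, X_j) decays like phi^|i-j|
set.seed(1)
phi = 0.5; n = 1000
sums = replicate(2000, sqrt(n) * mean(arima.sim(list(ar=phi), n=n)))
# the marginal variance is 1/(1-phi^2) ~ 1.33, but Var(sqrt(n)*Xbar)
# approaches the long-run variance (1/(1-phi^2)) * (1+phi)/(1-phi) = 4
var(sums)
# the distribution is still approximately normal
qqnorm(scale(sums)); abline(a=0, b=1)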

2.2.10 Conclusions of CLT

  • The CLT allows us to make approximate probability statements for things that can be expressed as sums.
  • This is handy when we don’t know the exact distribution of things (see the sketch below).
  • Some form of independence is necessary.
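
A minimal sketch of such an approximate probability statement (the values \(p=0.3\), \(n=100\), and cutoff 0.35 are hypothetical): approximate a tail probability of the sample mean with the normal CDF, then compare it to the exact binomial answer.

Code
# approximate P(Xbar > 0.35) for n = 100 Bernoulli(0.3) observations
p = 0.3; n = 100; cutoff = 0.35
z = sqrt(n) * (cutoff - p) / sqrt(p*(1-p))
1 - pnorm(z)                            # CLT approximation
1 - pbinom(n*cutoff, size=n, prob=p)    # exact binomial probability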

2.2.11 Take home points of LLN and CLT

  • The LLN is about the convergence of the mean estimator to a constant.
  • The CLT is about the convergence of \(\sqrt{n}\) times the centered mean estimator to a normal distribution.
  • The CLT is used a lot in statistics to make approximate probability statements.

2.2.12 Example: HCP data set

(Figure: brain images.)
  • Human Connectome Project data
  • Study designed to understand how regions of the brain are interconnected
Code
hcp = read.csv('../datasets/hcp/hcp.csv')
corr = cor(hcp$FS_Total_GM_Vol, hcp$FS_TotCort_GM_Vol, use = 'pairwise.complete.obs')
plot(hcp$FS_Total_GM_Vol, hcp$FS_TotCort_GM_Vol, xlab='Gray Matter Vol', ylab='Cortical GM Vol', main='Cortical GM Volume vs total GM vol')
legend('topleft', legend=paste('Cor =',  round(corr, 2) ))
abline(lm(FS_TotCort_GM_Vol ~ FS_Total_GM_Vol, data=hcp))

Code
corr = cor(hcp$FS_SubCort_GM_Vol, hcp$FS_TotCort_GM_Vol, use = 'pairwise.complete.obs')
plot(hcp$FS_SubCort_GM_Vol, hcp$FS_TotCort_GM_Vol, xlab='Subcortical Vol', ylab='Cortical GM Vol', main='Cortical GM Volume vs Subcortical GM vol')
legend('topleft', legend=paste('Cor =',  round(corr, 2) ))

Code
hist(hcp$PSQI_Score)

Code
corr= cor(hcp$PSQI_Score, hcp$FS_TotCort_GM_Vol, use = 'pairwise.complete.obs')
# compute correlation manually
PSQI_Score = hcp$PSQI_Score[!is.na(hcp$PSQI_Score) & !is.na(hcp$FS_TotCort_GM_Vol)]
TotCorGMVol = hcp$FS_TotCort_GM_Vol[!is.na(hcp$FS_TotCort_GM_Vol) & !is.na(hcp$PSQI_Score)]
corr
## [1] -0.09299307
Code
cov(PSQI_Score, TotCorGMVol)/sqrt(var(PSQI_Score)*var(TotCorGMVol))
## [1] -0.09299307
Code
sum( (PSQI_Score - mean(PSQI_Score)) *(TotCorGMVol - mean(TotCorGMVol)) ) /sqrt(sum( (PSQI_Score - mean(PSQI_Score))^2) * sum( (TotCorGMVol - mean(TotCorGMVol))^2))
## [1] -0.09299307
Code
plot(hcp$PSQI_Score, hcp$FS_TotCort_GM_Vol, xlab='PSQI score', ylab='Cortical GM Vol', main='Cortical GM Volume vs PSQI score')
legend('topleft', legend=paste('Cor =',  round(corr, 2) ))
abline(lm(FS_TotCort_GM_Vol ~ PSQI_Score, data=hcp))

2.2.13 Simulated data

Code
set.seed(100)
# number of simulations per n and rho
nsim = 2
# sample sizes
ns = c(100)
# correlation
rhos = c(0.1)

resultsTab = expand.grid(n=ns, rho=rhos)
resultsTab$rhoBias = NA

# each column is a draw of x and y
for(rhoind in 1:length(rhos)){
  rho = rhos[rhoind]
  Sigma = matrix(c(1, rho, rho, 1), nrow=2)
  sqrtSigma = svd(Sigma)
  sqrtSigma = sqrtSigma$u %*% diag(sqrt(sqrtSigma$d))
  for(nind in 1:length(ns)){
    n = ns[nind]
    # vector to store the correlation estimate from each simulation
    simresults = rep(NA, nsim)
    for(sim in 1:nsim){
      # draw n bivariate normal samples with correlation rho
      xy = tcrossprod(matrix(rnorm(n*2), ncol=2), sqrtSigma)
      simresults[sim] = cor(xy[,1], xy[,2])
    }
    
    # estimate the bias of the correlation estimator and store it
    resultsTab$rhoBias[resultsTab$n == n & resultsTab$rho == rho] = mean(simresults) - rho
    
  }
  
}