4 Hypothesis testing
4.1 Recap of confidence intervals
4.1.1 What is the interpretation of confidence intervals?
\[ \mathbb{P}(\hat \mu - z_{\alpha/2} \times \hat\sigma/\sqrt{n} < \mu < \hat \mu + z_{\alpha/2} \times \hat\sigma/\sqrt{n}) \approx 1-\alpha \] …this dataset-to-dataset interpretation of the probability statement is the heart of Frequentist statistics.
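As a quick refresher, here is a minimal R sketch of this normal-approximation interval; muHat, sigmaHat, and n below are hypothetical placeholders, not values from a dataset in these notes.
# hypothetical sample summaries (placeholders for illustration only)
muHat = 76.4; sigmaHat = 9; n = 91
alpha = 0.05
# (1 - alpha) normal-approximation confidence interval for the mean
muHat + c(-1, 1) * qnorm(1 - alpha/2) * sigmaHat / sqrt(n)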
4.1.2 What do I tell Seth?
##
## One Sample t-test
##
## data: mf$`C3 CSA`
## t = 81.738, df = 90, p-value < 2.2e-16
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 74.56922 78.28441
## sample estimates:
## mean of x
## 76.42682
The mean C3 cross-sectional area was 76.43 with a 95% confidence interval (74.57, 78.28).
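For reference, output like the block above is what a one-sample call such as the following produces (a sketch; the chunk that generated it is collapsed in the rendered notes):
t.test(mf$`C3 CSA`)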
4.2 Single mean example using the Alzheimer’s Disease Neuroimaging Initiative (ADNI)
4.2.1 The ADNI data
- Cognitive impairment in the ADNI data set is classified into 3 broad categories
- Healthy controls (HC)
- Mild cognitive impairment (MCI)
- Alzheimer’s disease (AD)
- Cognitive decline (in general and in AD) is characterized most strongly by changes to the hippocampus
- Hippocampus is the part of the brain associated with memory formation (and other memory processes).
- Memory decline in AD is related to deterioration of the hippocampus
We’re going to use the ADNI data to study some features of the hippocampus in healthy people and also compare to people with mild cognitive impairment.
These are the research questions
- “In healthy subjects, is there a difference in the size of the left and right hippocampi?”
- “Is there a difference in the size of the left/right hippocampus between HC and those with MCI?”
What is the difference between these hypotheses?
4.2.2 Single mean/Paired t-test example
- We’ve already constructed some CIs for a single continuous mean.
- A single mean t-test is the testing analog of a single mean CI.
- A “paired” t-test is an example of a single mean hypothesis test.
- Hypothesis testing is convenient for research questions like “Are \(A\) and \(B\) different?” and less suited to questions like “What is the difference between \(A\) and \(B\)?”
4.3 Hypothesis testing philosophy
- State the hypothesis.
- Set probability threshold for a decision.
- Compute the test statistic and the p-value.
- Retain or reject the null.
4.3.1 Null hypothesis
- Hypothesis testing compares two hypotheses, quantified by restricting the range of a parameter.
- The null hypothesis is the uninteresting hypothesis; usually it is what people are trying to show is likely to be wrong.
- It can be described generally in terms of a parameter \[ H_0: \delta \in \Delta_0 \subset \mathbb{R}, \] where \(\Delta_0\) is a “closed” subset of \(\mathbb{R}\).
- In most cases \(\Delta_0\) is chosen as a single point (zero); this is often called a two-tailed test. For the difference in left and right hippocampal volume \[ H_0: \delta := \mu_0 - \mu_1 = 0. \]
- Another option for the null in our research question is that the left hippocampus is at least as large as the right (called a one-tailed test) \[ H_0: \delta \ge 0. \]
4.3.2 Alternative hypotheses
- The alternative hypothesis can be defined as a particular point, called a simple hypothesis \[ H_a: \delta = \delta_a \notin \Delta_0 \]
- Most often though, the alternative hypothesis is defined as the complement of \(\Delta_0\), called a composite hypothesis.
- For the null \(H_0: \delta=0\) \[ H_a: \delta \ne 0. \]
- For the null \(H_0: \delta \ge 0\) \[ H_a: \delta < 0. \]
4.3.3 Hypothesis test probabilities
- The Frequentist framework uses dataset-to-dataset probabilities to make a decision about the hypothesis
- The interpretation is similar to how we constructed confidence intervals.
4.3.3.1 Test statistic version of hypothesis testing
- We rely on the fact that
\(T = \sqrt{n}(\hat\delta - \delta_0)/\hat \sigma \sim t(n-1)\) if
\(\delta_0\) is the true value of the parameter.
- Where \(\hat \delta = \bar X\) and \(\hat \sigma^2 = (n-1)^{-1} \sum_{i=1}^n (X_i - \bar X)^2\).
- \(T\) is called the test statistic.
- In this Frequentist approach, an \(\alpha\) level test of \(H_0\) chooses a threshold \(t_{1-\alpha/2,(n-1)}\) such that \(\mathbb{P}(\lvert T \rvert > t_{1-\alpha/2,(n-1)} | H_0) \le \alpha\).
- In words, \(\alpha\) is the probability of observing a statistic larger in absolute value than \(t_{1-\alpha/2,(n-1)}\) under the null.
- We “reject” the null hypothesis if our observed test statistic \(\lvert T_{obs} \rvert>t_{1-\alpha/2,(n-1)}\).
- I.e. if the standardized difference between left and right hippocampus is far away from zero in our observed data, that implies that it is probably far away from zero in the population.
4.3.4 Example: performing the test in the ADNI hippocampus data
- To do this in practice, we specify the null hypothesis, choose the alpha level, and compute the test statistic
Code
X = hip$LEFTHIPPO - hip$RIGHTHIPPO
delta0 = 0
deltaHat = mean(X)
sigmaHat = sd(X)
n = nrow(hip)
# Rejection threshold for a t-distributed statistic such that the probability it is beyond this value equals alpha=0.05
alpha = 0.05
c(lowerTquantile=qt(alpha/2, df=n-1),
upperTquantile=qt(1-alpha/2, df=n-1))
## lowerTquantile upperTquantile
## -1.971957 1.971957
Code
## deltaHat sigmaHat Tobs
## -52.176545 204.881850 -3.601528
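The chunk that produced this output is collapsed; a minimal sketch of the computation, continuing from the code above:
# observed test statistic: standardized difference between deltaHat and the null value
Tobs = (deltaHat - delta0) / (sigmaHat / sqrt(n))
c(deltaHat = deltaHat, sigmaHat = sigmaHat, Tobs = Tobs)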
4.3.5 p-value version of testing
- To do hypothesis testing with a p-value, we plug the observed test statistic into our probability statement
\[ p = \mathbb{P}(\lvert T \rvert \ge \lvert T_{obs} \rvert \mid H_0). \] Whenever \(\lvert T_{obs} \rvert \ge t_{1-\alpha/2,(n-1)}\), we have \(p \le \mathbb{P}(\lvert T \rvert > t_{1-\alpha/2,(n-1)} \mid H_0) \le \alpha\), so rejecting when \(p \le \alpha\) gives the same decision as the test statistic version.
- In words, the p-value is the probability of getting a statistic at least as large (in absolute value) as the one we observed if the null hypothesis is true.
Code
## deltaHat sigmaHat Tobs
## -52.176545 204.881850 -3.601528
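The collapsed chunk presumably also computed the p-value; a sketch, continuing with Tobs and n from above:
# two-sided p-value: probability of a statistic at least as large as |Tobs| under H_0
2 * pt(abs(Tobs), df = n - 1, lower.tail = FALSE)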
4.4 Retaining or rejecting the null
- The decision to retain or reject is based on the probability of observing a result as or more extreme than we do in our observed data if the null hypothesis were true.
- The value of \(\alpha\) determines how strict we want to be about rejecting the null.
- What happens if we decrease \(\alpha\)?
4.4.1 Type 1 and type 2 errors
- After making a decision, there are 4 possible outcomes that can occur
|  | Retain \(H_0\) | Reject \(H_0\) |
|---|---|---|
| \(H_0\) True | True negative \(\mathbb{P}(\lvert T \rvert< t_{1-\alpha/2,(n-1)} \mid H_0)\) | Type 1 error \(\mathbb{P}(\lvert T \rvert \ge t_{1-\alpha/2,(n-1)} \mid H_0)\) |
| \(H_0\) False | Type 2 error \(\mathbb{P}(\lvert T \rvert< t_{1-\alpha/2,(n-1)} \mid H_a)\) | Power \(\mathbb{P}(\lvert T \rvert\ge t_{1-\alpha/2,(n-1)} \mid H_a)\) |
4.5 Using the t.test function in ADNI
Methods In order to test whether left and right hippocampal volumes are equal in healthy older adults, we performed a t-test on the difference in volume for each subject (left - right).
##
## One Sample t-test
##
## data: hip$LEFTHIPPO - hip$RIGHTHIPPO
## t = -3.6015, df = 199, p-value = 0.0003995
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## -80.74494 -23.60815
## sample estimates:
## mean of x
## -52.17654
##
## Paired t-test
##
## data: hip$LEFTHIPPO and hip$RIGHTHIPPO
## t = -3.6015, df = 199, p-value = 0.0003995
## alternative hypothesis: true mean difference is not equal to 0
## 95 percent confidence interval:
## -80.74494 -23.60815
## sample estimates:
## mean difference
## -52.17654
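The two outputs above can be reproduced with the one-sample and paired forms of t.test (a sketch of the collapsed chunk):
# one-sample t-test on the within-subject left - right difference
t.test(hip$LEFTHIPPO - hip$RIGHTHIPPO)
# equivalent paired t-test
t.test(hip$LEFTHIPPO, hip$RIGHTHIPPO, paired = TRUE)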
- For results sections, if the analysis is complicated and will take a paragraph to describe, I might remind the reader what we’re doing in the first sentence of the results.
Results We used a one-sample t-test to test whether the difference in left and right hippocampal volume was equal to zero. There was evidence for a difference between left and right hippocampal volumes (\(T = \text{-3.6}\), \(\mathrm{df}= \text{199}\), \(p<0.001\) ) with the left hippocampus being smaller on average (\(\text{left-right}=\text{-52.18mm}^3\), 95% confidence interval (CI)= [-80.74, -23.61]).
- More realistically, for a t-test I would just present the result.
Results We found evidence for a difference between left and right hippocampal volumes (\(T = \text{-3.6}\), \(\mathrm{df}= \text{199}\), \(p<0.001\) ) with the left hippocampus being smaller on average (\(\text{left-right}=\text{-52.18mm}^3\), 95% confidence interval (CI)= [-80.74, -23.61]).
4.6 Parallel between CIs and hypothesis testing
- The hypothesis test for a given \(\alpha\) is related to the confidence interval.
- In words, if a \((1-\alpha)\times 100\%\) confidence interval contains a given value, then your data do not have enough evidence to reject that value with an \(\alpha\) level test.
- If you reject a value \(\delta_0\) with an \(\alpha\) level test, then the \((1-\alpha)\times 100\%\) confidence interval does not contain that value.
\[ \begin{align*} 1-\alpha & \le 1-\mathbb{P}(\lvert T \rvert\ge t_{1-\alpha/2,(n-1)} \mid H_0: \delta=\delta_0)\\ & = \mathbb{P}(\vert T\rvert < t_{1-\alpha/2,(n-1)} \mid H_0: \delta=\delta_0) \\ & = \mathbb{P}(-t_{1-\alpha/2,(n-1)} < T < t_{1-\alpha/2,(n-1)} \mid H_0: \delta=\delta_0) \\ & = \mathbb{P}(\hat\delta- t_{1-\alpha/2,(n-1)} \times \hat\sigma/\sqrt{n} < \delta_0 <\hat\delta+ t_{1-\alpha/2,(n-1)}\times \hat\sigma/\sqrt{n} \mid H_0: \delta=\delta_0) \end{align*} \]
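A quick numerical check of this duality in the ADNI example (a sketch reusing the t.test call shown earlier): the 95% CI excludes 0 exactly when the \(\alpha = 0.05\) test rejects \(\delta_0 = 0\).
tt = t.test(hip$LEFTHIPPO - hip$RIGHTHIPPO)
# CI endpoints and p-value: the interval excludes 0 and the p-value is below 0.05
c(tt$conf.int, p.value = tt$p.value)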
4.7 Two-tailed hypotheses
- We were using the absolute value because our hypothesis was just that the left and right hippocampal volumes were different.
- Most commonly, people perform a two-tailed test
- We can also do a one-tailed hypothesis test
4.8 One-tailed test in the ADNI data
Code
## [1] -1.652547
## [1] -52.17654
## [1] 204.8819
## [1] -3.601528
## [1] 0.0001997614
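The chunk behind this output is collapsed; a sketch of the one-tailed computation, continuing with X, delta0, deltaHat, sigmaHat, n, and alpha from the earlier chunk:
# one-tailed (lower) rejection threshold
qt(alpha, df = n - 1)
deltaHat
sigmaHat
Tobs = (deltaHat - delta0) / (sigmaHat / sqrt(n))
Tobs
# one-tailed p-value: P(T < Tobs) under H_0
pt(Tobs, df = n - 1)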
Code
##
## Paired t-test
##
## data: hip$LEFTHIPPO and hip$RIGHTHIPPO
## t = -3.6015, df = 199, p-value = 0.0001998
## alternative hypothesis: true mean difference is less than 0
## 95 percent confidence interval:
## -Inf -28.23555
## sample estimates:
## mean difference
## -52.17654
##
## Paired t-test
##
## data: hip$LEFTHIPPO and hip$RIGHTHIPPO
## t = -3.6015, df = 199, p-value = 0.0003995
## alternative hypothesis: true mean difference is not equal to 0
## 95 percent confidence interval:
## -80.74494 -23.60815
## sample estimates:
## mean difference
## -52.17654
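A sketch of the t.test calls behind the two outputs above (one-tailed, then two-tailed for comparison):
# one-tailed paired t-test: alternative is that the left - right difference is less than 0
t.test(hip$LEFTHIPPO, hip$RIGHTHIPPO, paired = TRUE, alternative = 'less')
# two-tailed paired t-test
t.test(hip$LEFTHIPPO, hip$RIGHTHIPPO, paired = TRUE)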
4.9 Practice questions
4.9.1 Types of errors
Review with questions:
|  | Retain \(H_0\) | Reject \(H_0\) |
|---|---|---|
| \(H_0\) True | True negative \(\mathbb{P}(\lvert T \rvert< t_{1-\alpha/2,(n-1)} \mid H_0)\) | 1. _____ \(\mathbb{P}(\lvert T \rvert \ge t_{1-\alpha/2,(n-1)} \mid H_0)\) |
| \(H_0\) False | 2. _____ \(\mathbb{P}(\lvert T \rvert< t_{1-\alpha/2,(n-1)} \mid H_a)\) | 3. _____ \(\mathbb{P}(\lvert T \rvert\ge t_{1-\alpha/2,(n-1)} \mid H_a)\) |
How to compute a test statistic for \(H_0: \mu = 2100\)? \[ \frac{(4. \underline{\hspace{2em}} - 5. \underline{\hspace{2em}})}{6.\underline{\hspace{2em}} / 7.\underline{\hspace{2em}}} \]
- How to compute a two-tailed p-value?
4.9.2 Performing a two-tailed test in the HCP sleep data
- Use a t-test to test whether the population mean of sleep is equal to 8. Hint: You will have to set the null value equal to 8.
- Write code to reproduce all the output from the t-test command you ran (t, df, p-value, 95 percent confidence interval, mean of x).
4.9.3 Performing a one-tailed test in the HCP sleep data
- Arguably, sleeping more than 8 hours is less harmful than not sleeping enough.
- Use a t-test to test whether the population mean of sleep is greater than or equal to 8. Hint: You will have to set the null value equal to 8.
- Write code to reproduce all the output from the t-test command you ran (t, df, p-value, 95 percent confidence interval, mean of x).
Code
#hcp = read.csv('../datasets/hcp/hcp.csv')
sleep = c(5,6,6.5,8,7,7,4,7,5.5,5.5,8,6.5,8.5,8,6.5,8,5,7.5,6,4,5.5,6.5,5,4,6.5,8,7,7,7,7,8,7.5,8,8.5,8,6,4.5,7,5.5,8.5,6.5,7,5,8,7,6,7,6,7,8,5.5,4.5,7,6.5,5.5,7,8.5,7.5,6.5,6,5,7,7.5,6,7,7,8,5,7,5,5,5.5,7,7,6,9,8,4,5,7,7.5,6,6,8.5,7.25,6,7.5,5,5,4,6,5.5,7.5,5,7,5,6.5,6.5,8,5,7,4,7,7,7,8.5,5,7.5,6,7,7,7,7,8,7,5,7,7,6.5,8,7,7.5,7,7,7,6,8,2.5,7.5,7,6,6,5,5,8.5,3.5,6,8,7,7,7,6.5,8,6,7,7.5,6.5,6,7.5,8,7,8,6,8.5,7,7,7,6.5,8,7,8.5,7,6,7,6,6,7,7,5.5,7.5,6,7,7,7.5,6.5,9,7,5,8,8,8,7,7.5,7,6.5,7,6,7,7,8,7.5,5,6.5,6,7,7,6,7,7.5,7,7,8,8,8,7.5,8,8,8.5,6,7,7,8,6.5,7.5,5.5,7,6.5,7,5,6,8.5,7.5,6.5,8,9,9.5,5,5,6,7.5,6,7.5,2,5,7,7,7,7,6,6,8,4.5,6.5,5,6,7,4.5,6.5,6,9,7,7,11,6,7,7,4.5,8,6.5,6.5,8,6,6.5,6.5,7.5,6.5,7.5,7.5,7.5,7,8,7,6.5,8,6,6.5,7.5,6,5,7.5,7,4.5,7,7,5,8.5,6,6,6,6,7,5.5,6.5,7,6,9,7,6.75,8.5,9.5,4.75,8.5,6.5,8,8,7,7.5,7,7,5,8,7,7.5,6.5,7,7,7,6.5,7.5,5,7,7,7.5,10.5,7,8,8,5.5,8,6.5,7,12,7,5,7,7,6,4,6,9,6,7,7.5,5,6.5,8,7,8,8,6,7,6,8,7,7,6.5,7,8,5.5,5,7,5.5,5,8,8,9,8,6.5,7,5.5,3,8,8,8,7.5,8,7.5,8.5,7.5,8,7.5,7,6,6,8,5,6,8,9,4.5,7,7,5,8,6.5,7,7,7.5,7,8,7,5,7,8.5,7,6,7,7.5,7,8,6,6,6.5,6.5,7.5,6,8.5,7.5,6,5,8,8,6,7,5.5,7.5,5,6,8,6.5,6,12,6,7,6,7,3.5,6.5,7,7.25,7,7,7,4,9,6,7.5,7,7.5,4,8,7,9,6.5,5,6.5,8.5,8,7.5,6,7,7,7,5,7,7,5.5,6,6,6,5,8.5,7.5,7,7,7,8,7.5,6.5,7,6,8,7,5.5,5.5,5,6.5,7,7.5,9,7.5,7,8,6,7,8,4,6,7,7,6.5,8,7,8,7,7,7,7,4.5,6,8.5,7.5,7,7,6,7,7,9,7,8,6,7.5,6.5,6.5,4,5.5,8,6,8,7,8,5,7,5,6.75,8,7,8,8,6.5,7,6,7,5,6.5,8.5,7.5,6,8,8,9,5.5,7,6.5,7,8.5,8,7.5,8,5.5,7.5,8,8,5,7,7,7.5,4,9,7,7.5,7,8,10.5,6,7.5,8,7,9,6,5.5,6.5,5.5,6,7.5,6,6,7,6,6,8,9,6,7,6,7.25,5.5,7.5,6.5,6,8,6,6,7.5,5,6,6,6,5.5,7.5,5,6,7.5,6.5,6.5,7,6,9,4,6.5,9,8,4,3.5,3,7,6,7,7,6,7,7,7,5.5,6,8,6.5,6,6.5,6.5,7,8,7,9,8,7.5,6.5,6,8,3,6.5,7.5,5,5,7,5,8,7,5,6,8.5,7,7,7,8,8,7,7.5,5,8,6,8,7,8,7,6,6,7,6,6,6,6.5,7.5,8,7,7,7,9,3.5,7.5,6,5,4.5,7,7,6.5,7,7,8,7.5,7,8,8,7,7.5,6.5,7,7.5,7.5,7.75,7,6,6.2,7,8,8,6.5,7,7.5,7.5,8,7,7.5,9,5,7.5,6.5,7.5,8.5,8,7.5,7,7,6,8,6,7.5,8,5,5,5,4,7.5,7,7,7,8,7,8,7,6.5,7.5,6.5,6,8,8,5,6.5,6,7.5,6,6,6,8,7,6,7,8,6,6,6,6.5,6,7,7,6,7,6.5,6,6.5,8,6,8,6,6.5,4.67,8,7,4,6.5,8,7.5,6,6,9,7,6.3,7.5,7,5.5,8,6.5,8,7.5,7.5,7,6,8,9.5,6.5,7,7,7,7,6,6,6,8,6,7,6,7,8,7,7.25,8,7.5,7,7,6,6.5,7,5.5,6.5,6,6,7,6.5,6,10,8,6,7,6,6,6.5,8,5,6.5,7,9,5.5,6,7,8.5,7,7.5,7,6.5,7,8,6,6.5,6.5,8,7,5,6,8,7.5,5,7,7,5,8,7.5,5,6.5,5,5,4,6,8,4,8,7,9,10.5,7.5,8,6.5,5,7,7,4,7,6,7.5,8,8,6,5.5,6,6,8,7,9,6,7,7,6,8,6,8,6,6,8,6,4,8,8,6.5,7,6.5,7,8.5,7.5,7,8,7,5,8,6,8,6,5.5,7.5,8,8,8,8,7,7,8,7,7.5,8,6.5,7.5,8,7,7,7,7.5,7,8,7.5,8,8,6.5,8,6,6,6,8,8,7,7.5,7.5,7,6.5,7,6,7,6,6,7,6.5,7,6.5,9,7,6,6,7,4.5,8,5,8,6,9,7,7,8,8,8,7.5,5)
# amount of sleep variable
# hcp$PSQI_AmtSleep
4.10 Example HCP dataset
- In the Human Connectome Project, participants perform an emotion recognition task
- We might be interested in whether people are as accurate at identifying fearful faces as they are at identifying angry faces
##
## Paired t-test
##
## data: hcp$ER40ANG and hcp$ER40FEAR
## t = -3.0013, df = 1197, p-value = 0.002744
## alternative hypothesis: true mean difference is not equal to 0
## 95 percent confidence interval:
## -0.2098178 -0.0439385
## sample estimates:
## mean difference
## -0.1268781
##
## Paired t-test
##
## data: hcp$ER40ANG and hcp$ER40FEAR
## t = -3.0013, df = 1197, p-value = 0.001372
## alternative hypothesis: true mean difference is less than 0
## 95 percent confidence interval:
## -Inf -0.05728953
## sample estimates:
## mean difference
## -0.1268781
4.10.1 One-tailed hypotheses
- We can define one tailed hypotheses if we believe the result should be in a particular direction.
- E.g., my null hypothesis might be that anger emotion identification is at least as accurate as fearful \[ H_0: \mathbb{E}A_i - \mathbb{E}F_i = \delta \ge 0 \]
- Because there is not a single value for \(\delta_0\), we choose a value that is most conservative
- This usually occurs at the boundary of the null set, in this case \(\delta_0 = 0\).
- So the test statistic is (same as before). \[ T = \frac{\hat \delta}{ \hat \sigma/\sqrt{n}} \]
- For our test, only smaller values of the test statistic are evidence against the null (test statistics larger than zero imply \(\hat\delta>0\), which is consistent with the null set).
- So we choose the rejection threshold \(t_{\alpha,(n-1)}\) so that \[ \mathbb{P}(T<t_{\alpha,(n-1)} \mid H_0) = \alpha \]
- When our observed test statistic \(T_{obs}<t_{\alpha,(n-1)}\), we reject the null
- Alternatively, we can compute the p-value \[ p = \mathbb{P}(T<T_{obs} \mid H_0) \] and reject when \(p<\alpha\)
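Using the statistic and degrees of freedom reported in the HCP output above, here is a small sketch of the one-tailed threshold and p-value (values copied from the printed t.test output):
# values taken from the printed output above
Tobs = -3.0013; df = 1197
# one-tailed rejection threshold t_{alpha,(n-1)} at alpha = 0.05
qt(0.05, df = df)
# one-tailed p-value P(T < Tobs); close to the reported 0.001372
pt(Tobs, df = df)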
4.11 Alternatives to T-test
4.11.1 Z-test
- If the underlying data are not normally distributed, then we use the CLT approximation \[ \frac{\bar X_n - \delta_0}{\hat\sigma/\sqrt{n}} \approx_D Z \sim N(0,1) \]
- Using the Lindeberg-Feller theorem also allows us to relax the assumption that the observations are identically distributed.
- We can then do all the same stuff we did with hypothesis testing for the single mean.
- A natural question is, which of the two tests performs better? We can use simulations to answer this question.
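A minimal sketch of a Z-test for a single mean (the sample x and null value delta0 below are hypothetical, not from the notes):
# hypothetical, non-normal sample for illustration
set.seed(1)
x = rgamma(50, shape = 2, rate = 1)
delta0 = 2
# Z statistic and two-sided p-value from the standard normal reference distribution
z = (mean(x) - delta0) / (sd(x) / sqrt(length(x)))
2 * pnorm(abs(z), lower.tail = FALSE)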
4.11.2 One-sample permutation test
- Permutation testing is a very flexible and popular nonparametric way to test hypotheses that makes minimal assumptions.
- Conceptually, it assumes that if the null hypothesis \(H_0: \delta=0\) is true and the data distribution is symmetric, then randomly flipping the sign of each data point produces a dataset that is as likely to occur as the original data.
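A minimal sketch of such a one-sample sign-flipping permutation test (this helper is not from the notes; it assumes a null value of zero and a symmetric distribution, as described above):
# one-sample permutation test by randomly flipping the sign of each observation
signFlipTest = function(x, nperm = 1000){
  Tobs = mean(x) / (sd(x) / sqrt(length(x)))
  Tperms = replicate(nperm, {
    xflip = x * sample(c(-1, 1), length(x), replace = TRUE)
    mean(xflip) / (sd(xflip) / sqrt(length(xflip)))
  })
  # two-sided p-value: proportion of sign-flipped statistics at least as extreme as the observed one
  mean(abs(c(Tperms, Tobs)) >= abs(Tobs))
}
# e.g. signFlipTest(hip$LEFTHIPPO - hip$RIGHTHIPPO)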
4.12 Simulations to evaluate hypothesis testing
What are the metrics we care about, here?
- Type 1 error rate \(\mathbb{P}(\vert T \rvert > t_{1-\alpha/2,(n-1)} )\). By design it should equal the chosen \(\alpha\).
- Another way to check the type 1 error rate is using the p-value. \[\begin{align*} \alpha = \mathbb{P}\{\vert T \rvert > t_{1-\alpha/2,(n-1)} \} & = \mathbb{P}\{2\,\mathbb{P}(t(n-1) > \vert T \rvert) < 2\,\mathbb{P}(t(n-1) > t_{1-\alpha/2,(n-1)})\}\\ & = \mathbb{P}(p < \alpha), \end{align*}\] because the two-sided p-value \(p = 2\,\mathbb{P}(t(n-1) > \vert T \rvert)\) is a decreasing function of \(\vert T \rvert\) and \(2\,\mathbb{P}(t(n-1) > t_{1-\alpha/2,(n-1)}) = \alpha\).
- What does this probability mean? It is about dataset-to-dataset variability.
- A Z-test is the same thing with the Z-distribution as the reference distribution.
- Power is another metric we care about – we have to pick the alternative.
- How do we simulate non-normality? By synthetically sampling data from a gamma distribution.
- What are the parameters? n, alpha, nsim, and the density function I will use for sampling (the gamma distribution)
|  | Retain \(H_0\) | Reject \(H_0\) |
|---|---|---|
| \(H_0\) True | True negative \(\mathbb{P}(\lvert T \rvert< t_{1-\alpha/2,(n-1)} \mid H_0)\) | Type 1 error \(\mathbb{P}(\lvert T \rvert \ge t_{1-\alpha/2,(n-1)} \mid H_0)\) |
| \(H_0\) False | Type 2 error \(\mathbb{P}(\lvert T \rvert< t_{1-\alpha/2,(n-1)} \mid H_a)\) | Power \(\mathbb{P}(\lvert T \rvert\ge t_{1-\alpha/2,(n-1)} \mid H_a)\) |
4.12.1 Research question for the simulations: Are t-test and z-test affected by data that have a nonnormal distribution?
- \(X_i \sim Gamma(\alpha,\beta)\) (iid).
- T-test assumes \(X_i\sim N(\mu, \sigma^2)\) (violated).
- Z-test is approximation, no assumption on distribution of \(X_i\).
- Which performs better?
Code
set.seed(12345)
# creating blank output table
mu = 0
ns = seq(10, 100, length.out=10)
alphas = c(0.01, 0.05, 0.1)
nsim = 1000
results = expand.grid( n=ns, alpha=alphas)
colNames = c('t-test type 1 error', 'z-test type 1 error')
results[, colNames ] = NA
### Loop through parameter settings and run simulations
# (Potential states of the world)
# loop through "n" only instead
for(n in ns){
# draw one sample of size n from our random variable nsim times
settingResults = matrix(NA, ncol=2, nrow=nsim)
for(sim in 1:nsim){
# generate the sample
simulatedData = mu + (rgamma(n, shape = 1.2, rate = 5) - 1.2/5)/sqrt(1.2)*5
tResult = t.test(simulatedData)
# p-value for the t-test
settingResults[sim, 1] = tResult$p.value
# p-value for the Z-test
settingResults[sim, 2] = 2*pnorm(abs(tResult$statistic), lower.tail = FALSE)
}
# compute type 1 error rate across the simulations
results[results$n==n, c('t-test type 1 error', 'z-test type 1 error')] = t(sapply(results[ results$n==n,'alpha'], function(a) colMeans(settingResults<a, na.rm=TRUE)))
}
Code
# plotting simulation results
cols = gray.colors(length(ns))
# T-test
plot(results$alpha, results$`z-test type 1 error`, type='n', xlab='Target type 1 error', ylab='Observed type 1 error', main='T-test')
trash = by(results, results$n, function(df) points(df$alpha, df$`t-test type 1 error`, type='b', col=cols[which(ns %in% df$n)]))
abline(a=0,b=1)
legend('topleft', fill=cols, legend=ns, bty='n')
Code
# Z-test
plot(results$alpha, results$`z-test type 1 error`, type='n', xlab='Target type 1 error', ylab='Observed type 1 error', main='Z-test')
trash = by(results, results$n, function(df) points(df$alpha, df$`z-test type 1 error`, type='b', col=cols[which(ns %in% df$n)]))
abline(a=0,b=1)
legend('topleft', fill=cols, legend=ns, bty='n')
Code
# Both
results = results[results$n %in% c(10, 50, 100),]
plot(results$alpha, results$`z-test type 1 error`, type='n', xlab='Target type 1 error', ylab='Observed type 1 error', main='Both')
trash = by(results, results$n, function(df) points(df$alpha, df$`t-test type 1 error`, type='b', col=cols[which(ns %in% df$n)]))
trash = by(results, results$n, function(df) points(df$alpha, df$`z-test type 1 error`, type='b', col=cols[which(ns %in% df$n)], lty=2))
abline(a=0,b=1)
legend('topleft', fill=cols, legend=unique(results$n), bty='n')
4.13 (Frequentist) Confidence intervals for two means
- We will study mean differences between two groups.
- We’ll construct some confidence intervals for a difference in means
- The concepts are the same, with a few more complexities
4.13.1 Mean differences in the ADNI hippocampus data
Code
histinfo = hist(hip$RIGHTHIPPO, plot = FALSE)
hist(hip$RIGHTHIPPO[ hip$DX=='HC'], breaks = histinfo$breaks, main="Histogram of R hippocampus",
xlab="Volume", col=rgb(1,0,0,.5), border=NA)
hist(hip$RIGHTHIPPO[ hip$DX=='MCI'], breaks=histinfo$breaks, col=rgb(0,0,1,.5), add=TRUE, border=NA)
legend('topright', fill=c(rgb(1,0,0,.5), rgb(0,0,1,.5)), legend=c('HC', 'MCI'), bty='n')
- “Is there a difference in the size of the left/right hippocampus between HC and those with MCI?”
- Let \(\mu_h\) denote the mean for healthy participants and \(\mu_p\) denote the mean for patients.
4.13.2 Mean differences
Here is the context: \[ \begin{align*} Y_i & \sim N(\mu_h, \sigma^2_h) \text{ for $i=1,\ldots,n_h$}\\ Y_i & \sim N(\mu_p, \sigma^2_p) \text{ for $i=n_h+1, \ldots, n_h+n_p$} \end{align*} \] where
\(n_h\) – number in group 0 (controls).
\(n_p\) – number in group 1 (patients).
\(n := n_h + n_p\).
\(\delta := \mu_p - \mu_h\).
We’ll start by assuming equal variances \(\sigma^2_h = \sigma^2_p\).
We will start off simple and relax some assumptions.
4.14 Wald statistic for two means
- The Wald statistic with known variance is defined as \((\hat \delta - \delta)/\sqrt{\text{Var}(\hat\delta)}\).
- We need to find \(\hat \delta\) and \(\widehat{\text{Var}}(\hat\delta)\), just like we did for the single mean example.
- Derive the estimators.
4.14.1 What is the distribution of the Wald statistic?
What is the variance of each mean estimator? \[ \text{Var}(\hat \mu_h) = \frac{\sigma^2_h}{n_h} \]
What is the variance of \(\hat\delta\)? \[ \text{Var}(\hat \delta) = n_h^{-1} \sigma^2_h + n_p^{-1} \sigma^2_p = (n_h^{-1} + n_p^{-1})\sigma^2 \] (last equality assumes equal variance)
Under normality with known variance then a Z-test statistic is \[ \frac{(\hat \delta - \delta)}{\sqrt{(n_h^{-1} + n_p^{-1})\sigma^2}} \sim N(0, 1) \]
As is usual, the variance is not known.
What is an estimator of variance?
The variance estimator is \(\widehat{\text{Var}}(\hat\mu_h) = \hat\sigma^2_h/n_h\), where \[ \hat\sigma^2_h= (n_h-1)^{-1}\sum_{i=1}^{n_h}(X_i - \hat\mu_h)^2 \]
The variance estimator for \(\hat \mu_p\) is the same under the equal variance assumption.
Equal variance pooled variance estimator \[ \hat\sigma^2 = \frac{ \sum_{i=1}^{n_h}(X_i - \hat\mu_h)^2 + \sum_{i=1}^{n_p}(X_i - \hat\mu_p)^2}{ (n_h-1) + (n_p-1)} \]
This gives the usual form of the test statistic that we can use for confidence intervals and testing, except now with flavors of two means. \[ \frac{\{ \hat\delta-\delta \}}{\sqrt{(n_h^{-1} + n_p^{-1})\hat\sigma^2}} = \frac{\{ (\hat\mu_h - \hat\mu_p)-(\mu_h - \mu_p)\}}{\sqrt{(n_h^{-1} + n_p^{-1})\hat\sigma^2}} \]
Code
nh = sum(hip$DX=='HC')
np = sum(hip$DX=='MCI')
hc = hip$RIGHTHIPPO[hip$DX=='HC']
mci = hip$RIGHTHIPPO[hip$DX=='MCI']
muhHat = mean(hc)
mupHat = mean(mci)
# variance of the X_i's
pooledVar = (var(hc)*(nh-1) + var(mci)*(np-1))/(nh+np-2)
Tstat = (mupHat-muhHat)/sqrt( (1/nh + 1/np)*pooledVar )
TtestResults = t.test(mci, hc, var.equal = TRUE)
# assuming the test statistic under H_0 is t(nh+np-2)
alpha = 0.05
qt(1-alpha/2, df = nh+np-2)
## [1] 1.975799
## [1] 2.145791e-07
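The printed p-value above comes from a collapsed line; either of these gives the same two-sided p-value for the pooled test (a sketch continuing the chunk above):
# two-sided p-value for the pooled (equal variance) t-test
2 * pt(abs(Tstat), df = nh + np - 2, lower.tail = FALSE)
TtestResults$p.value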
Code
# Under H_0, T (our test statistic) ~ t(nh+np-2); plug abs(Tstat) in for the observed statistic
# all of these compute the same two-sided p-value
#2*(1-pt(abs(Tstat), df = nh+np-2))
#2*(pt(abs(Tstat), df = nh+np-2, lower.tail = FALSE))
#pt(-abs(Tstat), df = nh+np-2, lower.tail = TRUE) + pt(abs(Tstat), df = nh+np-2, lower.tail = FALSE)
# assess normality
qqnorm(c(scale(hc), scale(mci)))
abline(a=0, b=1)
4.15 Unequal variances: Welch’s t-test
4.15.1 Unequal variance formula
- When the variances are not equal, the variance of \(\hat\delta\) is \[ \sigma_{\hat\delta}^2 = n_0^{-1} \sigma^2_0 + n_1^{-1} \sigma^2_1 \]
\[ \begin{align*} \hat \sigma^2_0 & = (n_0-1)^{-1} \sum_{i=1}^{n_0}(X_i - \hat\mu_0)^2 \end{align*} \] and similarly for \(\hat\sigma^2_1\).
Then, plugging-in to get a variance estimator for \(\hat\delta\) \[ \hat\sigma^2 = n_0^{-1} \hat\sigma^2_0 + n_1^{-1} \hat\sigma^2_1 \]
4.15.2 Unequal variances test statistic
- What is the distribution of \[ \frac{(\hat \delta - \delta)}{\sqrt{n_0^{-1} \hat\sigma^2_0 + n_1^{-1} \hat\sigma^2_1}} \]
- Nobody knows the exact distribution of this statistic.
4.15.3 Satterthwaite approximation
- Satterthwaite came up with a way to approximate the distribution of \(\hat \sigma^2 =n_0^{-1} \hat\sigma^2_0 + n_1^{-1} \hat\sigma^2_1\) with a (scaled) chi-square distribution
- Note that \(\mathbb{E}\hat\sigma^2 = \sigma^2\)
- This is the general premise:
- \((n_0-1)\hat\sigma^2_0/\sigma_0^2 \sim \chi^2(n_0-1)\), and same for \(\hat\sigma^2_1\)
- Imagine if we could approximate the distribution of \(\nu \frac{\hat\sigma^2}{\sigma^2}\) with a chi-squared distribution on \(\nu\) degrees of freedom; then \[ \frac{(\hat \delta - \delta)/\sigma}{ \sqrt{(\nu \hat\sigma^2/\sigma^2)/\nu}} = \frac{\hat\delta - \delta}{\hat\sigma} \sim t(\nu) \text{ (approximately) }, \] because \(\nu \hat\sigma^2/\sigma^2 \sim \chi^2(\nu)\) (approximately)
- How do we find \(\nu\)?
- Satterthwaite proposed finding \(\nu\) by matching the mean and variance of \(\nu \hat\sigma^2/\sigma^2\) to those of a chi-squared distribution and solving for \(\nu\).
- That yields the formula \[ \nu = \frac{ \left(n_0^{-1}\hat\sigma^2_0 + n_1^{-1}\hat\sigma^2_1 \right)^2}{\frac{n_0^{-2}\hat\sigma^4_0}{n_0-1} + \frac{n_1^{-2}\hat\sigma^4_1}{n_1-1}} \]
- Welch was the one who did it for the t-test.
- It’s a very ugly formula, but the approximation works so well that it is the default t-test in R.
4.15.4 Unequal variance t-test in the ADNI hippocampus data
Code
nh = sum(hip$DX=='HC')
np = sum(hip$DX=='MCI')
hc = hip$RIGHTHIPPO[hip$DX=='HC']
mci = hip$RIGHTHIPPO[hip$DX=='MCI']
muhHat = mean(hc)
mupHat = mean(mci)
# This is no longer the correct variance estimator
pooledVar = (var(hc)*(nh-1) + var(mci)*(np-1))/(nh+np-2)
Tstat = (muhHat-mupHat)/sqrt((1/nh + 1/np)*pooledVar )
satterTstat = (muhHat-mupHat) / sqrt(var(hc)/nh + var(mci)/np)
TtestResults = t.test(hc, mci)
# rejection threshold if we assumed the equal-variance df of nh+np-2 (for comparison; Welch's test uses the Satterthwaite df)
alpha = 0.05
qt(1-alpha/2, df = nh+np-2)
## [1] 1.975799
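A sketch of the Satterthwaite degrees of freedom from the formula above, compared with what t.test reports (the Welch df is stored in the $parameter element of the t.test result):
# Welch-Satterthwaite degrees of freedom, continuing from the chunk above
v0 = var(hc)/nh; v1 = var(mci)/np
nu = (v0 + v1)^2 / (v0^2/(nh - 1) + v1^2/(np - 1))
c(nu = nu, tTestDf = unname(TtestResults$parameter))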
4.17 Permutation test for two means
\[ \begin{align*} Y_i & \sim N(\mu_h, \sigma^2_h) \text{ for $i=1,\ldots,n_h$}\\ Y_i & \sim N(\mu_p, \sigma^2_p) \text{ for $i=n_h+1, \ldots, n_h+n_p$} \end{align*} \] where
\(n_h\) – number in group 0 (healthy).
\(n_p\) – number in group 1 (patients).
\(n := n_h + n_p\).
\(\delta := \mu_p - \mu_h\).
We’ll start by assuming equal variances \(\sigma^2_h = \sigma^2_p\).
Our test statistic is \[ T = (\hat\mu_p - \hat\mu_h)/\sqrt{\widehat{\text{Var}}(\hat\mu_p - \hat\mu_h)} = T(Y) \]
4.17.1 Intuition
- If the null is true, \(\mu_p = \mu_h\).
- If the variances are equal, then all observations are exchangeable, meaning each observation is equally likely to have come from either group.
- We can permute the values \(Y_i\) across groups; the resulting dataset is equally likely under the null, and we can compute a test statistic \[ T^{p} = T(Y^p) \] where \(Y^p\) is the permuted hippocampal data.
- In this case, there are \({n \choose n_p}\) possible group assignments. That’s a lot.
- In practice, because there are so many possible permutations, we just randomly choose a large number of permutations
- Permutation tests compute a p-value this way, for permutations \(p=1, \ldots, P\) \[ \frac{1}{P} \sum_{p=1}^P I\{ \lvert T(Y^p) \rvert \ge \lvert T \rvert\} \]
- This is the proportion of permuted data sets where the test statistic is at least as large in absolute value as the observed value.
Code
permutationTest = function(X, Y, nperm=1000){
Tobs = t.test(Y[X==0], Y[X==1])$statistic
# randomly permutes X
Tperms = replicate(nperm, {Xperm = sample(X, replace=FALSE); t.test(Y[Xperm==0], Y[Xperm==1])$statistic} )
list(Tobs=Tobs, pvalue=mean(abs(c(Tperms, Tobs))>=abs(Tobs)), permutations = Tperms)
}
# In the subset of data
y = hip$RIGHTHIPPO[ hip$DX !='AD' ]
x = ifelse(hip$DX[hip$DX !='AD' ]=='MCI', 1, 0)
permtest = permutationTest(Y=y, X = x, nperm = 5000)
wtest = wilcox.test(y[x==1], y[x==0], conf.int = TRUE)
ttest = t.test(y[x==1], y[x==0])
permtest
## $Tobs
## t
## 5.759984
##
## $pvalue
## [1] 0.00019996
##
## $permutations
##   [ 5000 permuted t statistics omitted from the printed output ]
##
## Wilcoxon rank sum test with continuity correction
##
## data: y[x == 1] and y[x == 0]
## W = 1362, p-value = 7.944e-07
## alternative hypothesis: true location shift is not equal to 0
## 95 percent confidence interval:
## -434.9001 -208.6899
## sample estimates:
## difference in location
## -322.6506
##
## Welch Two Sample t-test
##
## data: y[x == 1] and y[x == 0]
## t = -5.76, df = 124.49, p-value = 6.21e-08
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -424.0599 -207.1636
## sample estimates:
## mean of x mean of y
## 1814.981 2130.593
Code
## $Tobs
## t
## -37.05836
##
## $pvalue
## [1] 0.000999001
##
## $permutations
##   [ permuted t statistics omitted from the printed output ]
##
## Wilcoxon rank sum test with continuity correction
##
## data: puf$iralcfm[puf$mrjmon == "yes"] and puf$iralcfm[puf$mrjmon == "no"]
## W = 226081314, p-value < 2.2e-16
## alternative hypothesis: true location shift is not equal to 0
## 95 percent confidence interval:
## 2.000021 2.000031
## sample estimates:
## difference in location
## 2.000047
##
## Welch Two Sample t-test
##
## data: puf$iralcfm[puf$mrjmon == "yes"] and puf$iralcfm[puf$mrjmon == "no"]
## t = 37.058, df = 7352.6, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 3.691486 4.103838
## sample estimates:
## mean of x mean of y
## 6.926761 3.029099
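The chunk behind this second example is collapsed; judging from the printed output, the calls were along these lines (variable names are taken from the output, and nperm is guessed from the p-value of 1/1001; a sketch, not the original code):
# second example: puf$iralcfm compared between levels of puf$mrjmon
y2 = puf$iralcfm
x2 = ifelse(puf$mrjmon == 'yes', 1, 0)
permutationTest(Y = y2, X = x2, nperm = 1000)
wilcox.test(puf$iralcfm[puf$mrjmon == 'yes'], puf$iralcfm[puf$mrjmon == 'no'], conf.int = TRUE)
t.test(puf$iralcfm[puf$mrjmon == 'yes'], puf$iralcfm[puf$mrjmon == 'no'])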