This guide is intended to support the data analysis work that is an integral part of graduate coursework. It is essential to acquire a firm grasp of both descriptive and inferential statistics since they will be used for a wide array of analytical purposes.

Following a presentation of ways to modify data, information specific to various descriptive and inferential statistics is provided. Each section provides both context (when to use a procedure) and the menu paths within SPSS to follow to execute the analysis.


 


Modifying Data

Selecting a subset of a group prior to analysis

Frequently due to the nature of the group that measures have been obtained from, analyses on a subset of the entire group are of interest. When this is the case you first identify the subset (select cases) then proceed with the analysis.

Selecting a subset of a group - Step Summary (be certain you are in the data file, rather than the output file, when you begin).
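
For reference, the syntax below is roughly what the Select Cases dialog pastes when you keep only one subgroup; the variable name gender and its coding (1 = male, 2 = female) are assumed for illustration.

    * Keep only the female cases; the other cases are filtered out, not deleted.
    USE ALL.
    COMPUTE filter_$ = (gender = 2).
    FILTER BY filter_$.
    EXECUTE.

    * When the subgroup analyses are finished, turn the filter off to reselect all cases.
    FILTER OFF.
    USE ALL.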

Splitting a File

Some of the analyses to be conducted may need to be repeated for all groups that make up a variable (e.g., gender: males/females). For example, you may want to look at the correlation between exercise frequency and cholesterol level for men and then for women. You could, of course, use the procedure above first for the males and then repeat it for the females. However, the split file feature lets you do the two analyses at the same time.

Splitting a File - Step Summary
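
A sketch of the equivalent syntax, assuming the grouping variable is named gender and the variables being correlated are named exercise and chol:

    * The file must be sorted by the grouping variable before it is split.
    SORT CASES BY gender.
    SPLIT FILE LAYERED BY gender.

    * Any analysis run now (here, a correlation) is repeated for each gender group.
    CORRELATIONS /VARIABLES=exercise chol /PRINT=TWOTAIL.

    * Turn split-file processing off when finished.
    SPLIT FILE OFF.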

Data Transformations

Regardless of the nature of the variable, it is often useful to condense information before reporting it. For example, assume you collected information on years of education in 5 categories (< High School, High School, some college, Bachelor's degree, > Master's degree) but only wanted to report the proportion of people with no college work and those with at least some college work. You would not want to manipulate the original variable, so you would first create a new variable and then recode the new variable.

Recode Step Summary
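
One way to carry out the example above in syntax, assuming the original variable is named educ and is coded 1 through 5 in the order listed:

    * Copy the five education categories into a new two-category variable.
    RECODE educ (1, 2 = 1) (3 thru 5 = 2) INTO educ2.
    VALUE LABELS educ2 1 'No college work' 2 'At least some college work'.
    EXECUTE.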

 

Combining information to create a new variable

In situations where you have component information and need, for example, a total for each individual, a new variable needs to be created. This is easily done within the Transform menu.

Step summary for combining information into a new variable
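
As a sketch, assuming three hypothetical component scores named item1 through item3:

    * Create a new variable holding each person's total across the three components.
    COMPUTE total = item1 + item2 + item3.
    EXECUTE.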

 

Displaying Data Specifications

To obtain a listing of all variable information (e.g. labels, names) contained in the variable view:
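
The same listing can be produced with a single syntax command:

    * Print variable names, labels, value labels, and other dictionary information to the output file.
    DISPLAY DICTIONARY.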

Notice that the information produced in the output file is essentially the same as that in the variable view. The information will be displayed in two parts: the Variable Information and the Variable Values.

 


Descriptive Statistics

Summarizing group information is typically the first step in the search for patterns, highlights, and meaning in a data set. Summary information can be presented both visually with the use of graphs and in the form of summary statistics. This section will focus on:


Review of connection between measurement scales and analytical processes

Column 2 of this table lists the statistics that can be used when the data are at the level of measurement shown on the left. The table does NOT convey the level of measurement of the statistics themselves; for example, percentiles are NOT interval scaled data.

Measurement Scale Statistics/SPSS procedures
Categorical

Percentages: Frequencies (FDT)
Crossed Percentages: Crosstabs
Bar Charts: Frequencies
Correlation (dichotomous variables): Crosstabs-stats (Phi)
Inferential Stats: Chi Squared

Ordinal

Percentages: Frequencies (FDT)
Crossed Percentages: Crosstabs
Bar Charts: Frequencies
Correlation: Correlate (Kendall)
Inferential Stats: Mann Whitney, Kruskal Wallis, Wilcoxon, Friedman

Interval

Central Tendency: Frequencies-stats; compare means (for sub-groups)
Variability: Frequencies-stats; compare means (for sub-groups)
Percentiles: Frequencies-stats
Histogram: Frequencies
Correlation: Correlate (PPMC)
Scatterplot
Inferential Stats: t-tests, ANOVAs

Ratio

Central Tendency: Frequencies-stats; compare means (for sub-groups)
Variability: Frequencies-stats; compare means (for sub-groups)
Percentile & Percentile Ranks: Frequencies-stats
Histogram: Frequencies
Correlation: Correlate (PPMC)
Scatterplot
Inferential Stats: t-tests, ANOVAs

 


Frequency Distribution Tables for Summarizing Group Information

For categorical and ordinal data the construction of frequency distribution tables is an excellent way to summarize group information.

If you were to make a frequency distribution table by hand you would simply list each category/value observed followed by a count (also called absolute frequency) of the number of individuals in that category. An additional column called the relative frequency is often useful since it notes the percentage of the group in a particular category. For example:

Gender   f   rf
Male   28   48%
Female   30   52%

f: absolute frequency - count

rf: relative frequency - (count/N) × 100 - record as %

 

Frequency Distribution Tables Step Summary

To get a frequency distribution table for all cases in the data file:
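
A sketch of the equivalent syntax, using a hypothetical variable named gender:

    * Frequency distribution table (counts and percentages) for gender.
    FREQUENCIES VARIABLES=gender /ORDER=ANALYSIS.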

 

To get a frequency distribution table for a subset of cases in the data file:

With subgroup now selected:

Remember to go back through data menu to reselect all cases before starting analyses where all cases are needed.

 

Note: You would not construct frequency distribution tables for continuous data when the intent is to summarize information. The reason is that such data can take on a great number of values, and since each value is listed in a frequency distribution table, little summary may be accomplished. Measures of Central Tendency and Variability are much more useful in summarizing group information for continuous data.

 


Frequency Distribution Tables for Error Checking

Following entry of data into the SPSS spreadsheet it is important to check for errors. For example, consider the variable GENDER with value labels of 1 for male and 2 for female. It is reasonable to assume that a typing error could result in entries of other than a 1 or 2. One way to detect this error is to have SPSS produce a frequency distribution table for this variable. It might look like this:

Gender   frequency
Male   35
Female   41
3   6
6   2

 

This table makes it clear that 8 of the entries are erroneous. For six subjects the value 3 was entered for gender and for another two subjects the value 6 was entered. With the errors detected, you would use the search feature in SPSS to find these data entry errors and correct them.

Error Checking with Frequency Distribution Tables Step Summary

To get a frequency distribution table for all variables and all cases in the data file:
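
In syntax, one command requests a table for every variable at once:

    * Request a frequency table for every variable; scan the tables for values that should not occur.
    FREQUENCIES VARIABLES=ALL.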

When data entry errors are located but cannot be corrected, identify that number as a missing value in the variable view of the data so SPSS does not use it in any analyses. If you identify values that appear incorrect only for select cases, enter a blank in place of the value you deem inappropriate in the spreadsheet view of the data. For example, consider a situation where you have obtained two heart rates: one resting and the other one minute after jogging in place. If for one case the two values were both 128, that is likely to be an error, since a resting heart rate of 128 is quite high and the exercise heart rate is unlikely to equal the resting heart rate. If you cannot go back to the original data to re-enter the correct values, you need to make these values missing. But since 128 may be a legitimate value for other cases, you cannot simply assign it as a missing value for the variable. Instead, go into the spreadsheet, find this case, and delete each 128, leaving blank cells for these two variables for this particular case.
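
For instance, if the stray codes 3 and 6 in the gender example above cannot be corrected, they can be declared missing with one command (the variable name is assumed):

    * Treat the erroneous codes as missing so SPSS excludes them from analyses.
    MISSING VALUES gender (3, 6).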

Note: Constructing frequency distribution tables for every variable for the purpose of error checking is important to complete prior to initiating any analytical work.

 

 


Crosstabulation Tables for Summarizing Group Information

For categorical and ordinal data the construction of crosstabulation tables is an excellent way to cross-reference summary information for two or more variables.

If you were to make a crosstabulation table by hand, you would list each category/value of one variable in the rows and each category/value of a second variable in the columns. The table would then contain a count of the number of individuals in the cells representing the various combinations of values for the two variables. For example, you might want to combine gender (categorical) and age group (ordinal) in one table.

                     Age Group
            20-25   26-30   31-35
Gender
   Male        28      20      15
   Female      30      18      20

 

From this table you can see that 28 of the subjects were male and in the youngest age group, and 18 of the subjects were female and in the middle age group.

Crosstabulation Tables for Summarizing Group Information Step Summary

Step Summary to break down by a 3rd variable.
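
A sketch of the corresponding syntax, with hypothetical variables gender, agegroup, and (for the three-way breakdown) site:

    * Two-way table of gender by age group.
    CROSSTABS /TABLES=gender BY agegroup /CELLS=COUNT.

    * The same table broken down by a third (layer) variable.
    CROSSTABS /TABLES=gender BY agegroup BY site /CELLS=COUNT.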

Note: You would not construct crosstabulation tables for continuous data when the intent is to summarize information. The reason is that such data can take on a great number of values and each value would be listed in a crosstabulation table. Therefore little summary may be accomplished. Measures of Central Tendency and Variability are much more useful in summarizing group information for continuous variables.

 

Risk Odds Ratio

Crosstabulation of two dichotomous variables, where one represents the presence/absence of a disease or outcome and the other represents the presence/absence of a risk factor, enables you to obtain the risk and odds ratio statistics.
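
A sketch of the syntax, assuming two 0/1 variables named riskfactor and disease:

    * Crosstab of risk factor by outcome, with the risk estimate (odds ratio and relative risk) requested.
    CROSSTABS /TABLES=riskfactor BY disease /STATISTICS=RISK /CELLS=COUNT.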

 


Central Tendency & Variability

Measures of central tendency summarize data by identifying where the center of a distribution of scores is. Measures of variability summarize data by quantifying the spread or dispersion of scores around the center.

For categorical and ordinal data with few categories, the Mode (though not an optimal measure) is an acceptable measure of central tendency and the range is an appropriate measure of variability. Frequently however, such data is best summarized with a frequency distribution table.

For data at least interval scaled, the Median and Mean are appropriate measures of central tendency. If the distribution of scores is skewed, the Median is the best measure of central tendency. The most common measure of variability is the standard deviation, which is appropriate for data at least interval scaled.

In addition to being used to summarize a data set, measures of central tendency and variability are critical components of other statistical procedures.

 

Central Tendency & Variability Step Summary

Using the frequencies option in SPSS:

If working with interval or ratio data and the data is normally distributed, you can obtain the mean and standard deviation from the Descriptives option in SPSS:
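
A sketch of both routes in syntax, using a hypothetical continuous variable named chol:

    * Frequencies route: suppress the table and request summary statistics plus a histogram.
    FREQUENCIES VARIABLES=chol /FORMAT=NOTABLE
      /STATISTICS=MEAN MEDIAN MODE STDDEV RANGE MINIMUM MAXIMUM
      /HISTOGRAM.

    * Descriptives route: appropriate when the distribution is approximately normal.
    DESCRIPTIVES VARIABLES=chol /STATISTICS=MEAN STDDEV MIN MAX.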

REMEMBER, you must check the shape (obtain histogram under graphs) of the distribution of scores to decide what measure of central tendency is appropriate. If the shape is skewed then you need to obtain a median.

To get measures of central tendency and variability for continuous measures on subgroups of your sample:

To break the analysis down by a 2nd categorical variable:
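
Both breakdowns correspond to the Means procedure; the variable names chol, gender, and agegroup below are placeholders:

    * Central tendency and variability of cholesterol for each gender.
    MEANS TABLES=chol BY gender /CELLS=MEAN MEDIAN STDDEV COUNT.

    * Broken down further by a second categorical variable.
    MEANS TABLES=chol BY gender BY agegroup /CELLS=MEAN MEDIAN STDDEV COUNT.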

REMEMBER, you must check the shape (obtain histograms under explore option) of the distribution of scores for each group to decide what measure of central tendency is appropriate. If the shape is skewed for either group then you need to obtain medians.

 


 

 


Correlation

There are several types of correlation coefficients to choose from. The choice is based on the nature of the data being correlated.

Pearson Product Moment Correlation: use when both variables have continuous data
Phi: use when both variables have dichotomous data
Kendall's Tau: use when both variables have ordinal data
Point Biserial Correlation: use when one variable has continuous data and the other a true dichotomy

 

Pearson Product Moment Correlation (PPMC)

The PPMC can be used to describe the strength and direction of the linear relationship between two continuous variables. When two variables are not linearly related, the PPMC is likely to underestimate the true strength of the relationship. A graph of the x and y values can show whether or not the relationship is linear.

Correlation Step Summary for PPMC
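
As a sketch, with hypothetical continuous variables exercise and chol:

    * Pearson correlation between the two continuous variables.
    CORRELATIONS /VARIABLES=exercise chol /PRINT=TWOTAIL.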


Kendall's Tau

Kendall's Tau can be used to describe the strength and direction of the relationship between two ordinal variables. It is a rank-order correlation coefficient and conveys the extent to which pairs of values (x, y) are in the same rank order.

Correlation Step Summary for Kendall's Tau

  • Under the analyze menu choose correlate then choose bivariate.
  • Select the two ordinal variables and then move them to the variables box.
  • Check the box labeled Kendall's Tau.
  • Then click OK button.
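
Following the steps above pastes syntax along these lines (the variable names are placeholders):

    * Kendall's tau for two ordinal variables.
    NONPAR CORR /VARIABLES=rank1 rank2 /PRINT=KENDALL TWOTAIL.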

Phi

Phi can be used to describe the strength of the relationship between two dichotomous variables. It can convey the direction of the pattern in the two-by-two crosstab table of the two variables.

Correlation Step Summary for Phi
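
A sketch of the syntax, assuming two dichotomous variables named smoker and disease:

    * Phi is requested as a Crosstabs statistic.
    CROSSTABS /TABLES=smoker BY disease /STATISTICS=PHI /CELLS=COUNT.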


Point Biserial Correlation

The Point Biserial Correlation can be used to describe the strength of the relationship between one continuous variable and one dichotomous variable. The point biserial correlation coefficient is useful in detecting a pattern in group measures (e.g., one group's scores tending to be higher than another group's).

The computational formula for the point biserial coefficient is

    r_pb = ((X1 - X0) / Sx) × √(P0 × P1)

Where:

X0 = mean of x values for those in category 0
X1 = mean of the x values for those in category 1
Sx = standard deviation of all x values
P0 = proportion of people in category 0
P1 = proportion of people in category 1

 

Steps to obtain summary information in order to do point biserial by hand:
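
A sketch of syntax that produces the needed summary values, assuming a continuous variable score and a 0/1 variable group:

    * Mean of the scores within each category of the dichotomy.
    MEANS TABLES=score BY group /CELLS=MEAN COUNT.

    * Standard deviation of all the scores combined.
    DESCRIPTIVES VARIABLES=score /STATISTICS=STDDEV.

    * Proportion of the group in each category.
    FREQUENCIES VARIABLES=group.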

 


Using Graphs to Summarize Data

 

Graphs are the visual counterparts to descriptive statistics and are very powerful mechanisms for revealing patterns in a data set. In addition, when used appropriately in a report they can highlight trends and summarize pertinent information in a way no amount of text could.

When summarizing categorical data, pie or bar charts are the most efficient and easiest to interpret, though line graphs may be more helpful when trying to draw attention to trends in the data. For continuous data, histograms are a good choice: easily constructed and simple to interpret. When attempting to represent visually the relationship between two continuous variables, a scattergram can be used.

 

Bar Charts

To create a simple bar chart for categorical and ordinal (with few categories) data:
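
A sketch of the equivalent syntax, with gender as a placeholder variable:

    * Simple bar chart of counts for a categorical variable.
    GRAPH /BAR(SIMPLE)=COUNT BY gender.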

Scattergrams

To create a scattergram (two continuous variables):
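
A sketch of the equivalent syntax, with exercise and chol as placeholder variables:

    * Scatterplot of two continuous variables.
    GRAPH /SCATTERPLOT(BIVAR)=exercise WITH chol.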

Histograms

To create a histogram (continuous variable) you can work from the frequencies option

To create histograms for subsets of a group:

To break down by a 2nd categorical variable:
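
A sketch covering both situations, with chol as the continuous variable and gender and agegroup as the categorical variables (all names assumed):

    * Histogram from the Frequencies procedure.
    FREQUENCIES VARIABLES=chol /FORMAT=NOTABLE /HISTOGRAM.

    * Histograms for subgroups via the Explore procedure; each factor listed produces its own breakdown.
    EXAMINE VARIABLES=chol BY gender agegroup /PLOT=HISTOGRAM /STATISTICS=DESCRIPTIVES.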


Validity

Validity of Scores

Depending on the type and purpose of a test, criterion-related validity can be examined from one or more of several perspectives. The two situations covered in this class are:

Concurrent validity of scores

This is examined when you are interested in the extent to which a particular measure is as good as an already established criterion known to provide valid and reliable data. You determine this by correlating your scores (x, continuous) with scores or classifications from a criterion measure (y).

The process would entail:

Steps for concurrent validity of scores

point biserial

 


Reliability

 

The primary concern here is the accuracy of measures. Reducing sources of measurement error is the key to enhancing the reliability of the data.

Reliability is typically assessed in one of two ways:

To estimate reliability you need 2 or more scores (or classifications) per person.

Note: When interpreting coefficient alpha or the intraclass R, a value > .70 reflects good reliability.

 

 

Internal Consistency of Scores - Continuous data

If multiple cognitive, motor skill, or physiological measures are collected on one day, the estimate of reliability is referred to as internal consistency. The intraclass coefficients you can use are Cronbach's Alpha and the Intraclass R.

Steps for coefficient alpha

 

Steps for Intraclass R
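
Both coefficients come from SPSS's Reliability procedure; a sketch of the syntax, assuming three trials named trial1 through trial3:

    * Cronbach's (coefficient) alpha across the trials.
    RELIABILITY /VARIABLES=trial1 trial2 trial3
      /SCALE('trials') ALL
      /MODEL=ALPHA.

    * The same run with the intraclass correlation (consistency definition) added.
    RELIABILITY /VARIABLES=trial1 trial2 trial3
      /SCALE('trials') ALL
      /MODEL=ALPHA
      /ICC=MODEL(MIXED) TYPE(CONSISTENCY) CIN=95.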

 


Stability of scores - Continuous Data

If every individual can be measured twice on the variable you're interested in then you readily have data from which reliability can be examined.

Once you have 2 scores per person the question is how consistent overall were the scores.

In many situations reliability has been estimated incorrectly using the Pearson correlation coefficient. This is not appropriate since (1) the PPMC is meant to show the relationship between two different variables - not two measures of the same variable, and (2) the PPMC is not sensitive to fluctuations in test scores. The PPMC is an interclass coefficient; what is needed is an intraclass coefficient. The most commonly used reliability coefficients are the intraclass R calculated from values in an analysis of variance table and coefficient alpha.

Steps for coefficient alpha

Steps for Intraclass R


 


Objectivity

Objectivity of scores - Continuous Data

In motor skill performance settings it is often necessary to collect measures through observation. To examine the objectivity of these measures you look at the consistency of measures across observers (inter-rater consistency). Note: you may also videotape a group and have one person record measures on two occasions (intra-rater consistency).

Since the measures come from observations, your task is to examine the objectivity of the measures produced by observers using a rating scale. To do this, have two people observe one group of examinees and evaluate their performance using the rating scale. The measures from the two observers (you could also videotape the group and have one person evaluate the group twice) give you two scores per person to use in the coefficient alpha or intraclass R formulas. The Spearman-Brown formula is not needed in this situation since test length is not manipulated.

Note: When interpreting coefficient alpha or the intraclass R, a value > .70 reflects good objectivity.

 

 

Steps for coefficient alpha

 


 


Inferential Statistics

The branch of statistics concerned with using sample data to make an inference about a population is called inferential statistics. This is generally done through random sampling, followed by inferences made about central tendency, or any of a number of other aspects of a distribution. This section will focus on:

Parametric Tests for Differences

Parametric Tests for Relationships

Non-Parametric Tests for Differences

Non-Parametric Tests for Relationships

 

 

Parametric tests for differences - Dependent t-test

The dependent t-test is a statistical procedure for testing H0: mean1 = mean2 when the two measures of the dependent variable are related. For example, when one group of subjects is tested twice, the two scores are related.

Assumptions of the dependent t-test procedure:

If the assumptions are met, you can proceed and conduct a dependent t-test. If the distributional assumptions are not met, you should conduct a non-parametric test (Wilcoxon).

 

Dependent t-test Step Summary

Conducting dependent t-test
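
A sketch of the syntax, assuming the two related measures are named pre and post:

    * Paired (dependent) t-test comparing the pre and post measures.
    T-TEST PAIRS=pre WITH post (PAIRED) /CRITERIA=CI(.95).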


Parametric tests for differences - Independent t-test

To examine whether or not there is a statistically significant difference in means on some dependent variable (continuous) as a function of some independent variable (categorical) you can use the t-test when you have just two levels (unrelated) of the independent variable (ex: gender).

An Independent t-test is a statistical procedure for testing H0: mean1 = mean2 when the two levels of the independent variable are not related.

 

Assumptions of the independent t-test procedure:

If the assumptions are met, you can proceed and conduct an independent t-test. If the distributional assumptions are not met, you should conduct a non-parametric test (Mann-Whitney).

 

Independent t-test Step Summary

Checking homogeneity of variance assumption

Checking normality assumption

To conduct an independent t-test
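
A sketch of the syntax, assuming the dependent variable is chol and the grouping variable gender is coded 1 and 2:

    * Independent t-test; Levene's test for homogeneity of variance is part of the default output.
    T-TEST GROUPS=gender(1 2) /VARIABLES=chol /CRITERIA=CI(.95).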

 

Parametric tests for differences - Repeated Measures Analysis of Variance

The repeated measures ANOVA is an extension of the dependent t-test. It is a statistical procedure for testing H0: mean1 = mean2 = mean3 = ... when the two or more measures of the dependent variable are related. For example, when one group of subjects is tested three times, the three scores are related.

Assumptions of the repeated measures ANOVA procedure:

If the assumptions are met, you can proceed and conduct a repeated measures ANOVA. If the distributional assumptions are not met, you should conduct a non-parametric test (Friedman).

Repeated Measures Analysis of Variance Step Summary

Checking Sphericity assumption

Conducting Repeated Measures Analysis of Variance
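
A sketch of the syntax, assuming the three related measures are named time1, time2, and time3:

    * One within-subjects factor (time) with three levels; Mauchly's test of sphericity appears in the output.
    GLM time1 time2 time3
      /WSFACTOR=time 3
      /WSDESIGN=time
      /PRINT=DESCRIPTIVE.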


Parametric tests for differences - One way analysis of variance

To examine whether or not there is a statistically significant difference in means on some dependent variable (continuous) as a function of some independent variable (categorical) you can use the One way analysis of variance procedure when you have two or more levels (unrelated) of the independent variable.

A One way analysis of variance is a statistical procedure for testing H0: mean1 = mean2 = mean3 .... when the two or more levels of the independent variable are not related.

 

Assumptions of the one way ANOVA procedure:

If the assumptions are met, you can proceed and conduct a one way analysis of variance. If the distributional assumptions are not met, you should conduct a non-parametric test (Kruskal-Wallis).

 

One way analysis of variance Step Summary

Checking homogeneity of variance assumption

Checking normality assumption

To conduct a one way analysis of variance
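
A sketch of the syntax, assuming the dependent variable is chol and the grouping variable is group:

    * One-way ANOVA with descriptive statistics, Levene's test, and Tukey follow-up comparisons.
    ONEWAY chol BY group
      /STATISTICS=DESCRIPTIVES HOMOGENEITY
      /POSTHOC=TUKEY ALPHA(0.05).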


Parametric tests for differences - Two way analysis of variance

To examine whether or not there is a statistically significant difference in means on some dependent variable (continuous) due to the influence of two independent variables (categorical) you can use the two way analysis of variance procedure when you have two or more levels (unrelated) of each independent variable.

A two way analysis of variance can be used to answer three questions: a) is there a difference in means on the dependent variable due to the 1st independent variable, b) is there a difference in means on the dependent variable due to the 2nd independent variable, and c) do the two independent variables interact to affect the dependent variable.

 

Assumptions of the two way ANOVA procedure:

If the assumptions are met, you can proceed and conduct a two way analysis of variance. If the distributional assumptions are not met, you could conduct two non-parametric tests (Kruskal-Wallis) to examine the main effects, but there is no comparable non-parametric test to examine the interaction.

Two way analysis of variance Step Summary

Checking constant variance assumption

Checking normality assumption

 

To conduct a Two way (fixed) analysis of variance
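
A sketch of the syntax, assuming the dependent variable is chol and the two factors are gender and agegroup:

    * Two-way (fixed-effects) ANOVA: both main effects plus their interaction.
    UNIANOVA chol BY gender agegroup
      /PRINT=DESCRIPTIVE HOMOGENEITY
      /DESIGN=gender agegroup gender*agegroup.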


Non-Parametric tests for differences

When the dependent variable is an ordinal variable a non-parametric test should be used to examine group differences. The reason for this is that one of the assumptions associated with parametric tests is that the data is continuous (at least interval scaled).

When parametric distributional assumptions (eg normality, homogeneity of variance) have been violated, even though the dependent variable may be continuous, a non-parametric test should be used to examine group differences.

This excerpt from the SPSS guide to data analysis explains well the application of parametric and non-parametric tests:

"The disadvantage to nonparametric tests is that they are usually not as good at finding differences when there are differences in the population. Another way of saying this is that nonparametric tests are not as powerful as tests that assume an underlying normal distribution, the so-called parametric tests. That’s because nonparametric tests ignore some of the available information. For example, data values are replaced by ranks when using the Wilcoxon test. In general, if the assumptions of a parametric test are plausible, you should use the more powerful parametric test. Nonparametric procedures are most useful for small samples when there are serious departures from the required assumptions. They are also useful when outliers are present, since the outlying cases won’t influence the results as much as they would if you used a test based on an easily influenced statistic like the mean."

 

Non-Parametric tests for differences - Wilcoxon

The Wilcoxon test is the non-parametric counterpart to the dependent t-test. It is a statistical procedure for testing the null hypothesis that two medians are equivalent when the two measures of the dependent variable are related. For example, when one group of subjects is tested twice, the two scores are related.

 

Assumptions of the Wilcoxon procedure:

 

Wilcoxon Step Summary

Checking symmetry

Conducting Wilcoxon test
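
A sketch of the syntax, assuming the two related measures are named pre and post:

    * Wilcoxon signed-ranks test on the paired measures.
    NPAR TESTS /WILCOXON=pre WITH post (PAIRED).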


Non-Parametric Tests for Differences - Friedman

The Friedman test is the nonparametric counterpart to the Repeated Measures ANOVA. To examine whether or not there is a statistically significant difference in medians from repeated measures of a dependent variable (continuous), you can use the Friedman test when you have two or more measures of the dependent variable.

Assumptions of the Friedman procedure:

 

Friedman Step Summary

Checking symmetry

 

To conduct a Friedman test
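
A sketch of the syntax, assuming the related measures are named time1, time2, and time3:

    * Friedman test across the three related measures.
    NPAR TESTS /FRIEDMAN=time1 time2 time3.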


Non-parametric Tests for Differences - Mann-Whitney U

The Mann-Whitney U test is the nonparametric counterpart to the independent t-test. To examine whether or not there is a statistically significant difference in medians on some dependent variable (at least ordinally scaled) as a function of some independent variable (categorical) you can use the Mann-Whitney U test when you have just two levels (unrelated) of the independent variable (ex: gender).

 

Assumptions of the Mann-Whitney U procedure:

 

To conduct a Mann-Whitney U test
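
A sketch of the syntax, assuming the dependent variable is score and gender is coded 1 and 2:

    * Mann-Whitney U test comparing the two unrelated groups.
    NPAR TESTS /M-W= score BY gender(1 2).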


Non-Parametric Tests for Differences - Kruskal-Wallis

The Kruskal-Wallis test is the nonparametric counterpart to the one-way ANOVA. To examine whether or not there is a statistically significant difference in medians on some dependent variable (continuous) as a function of some independent variable (categorical) you can use the Kruskal-Wallis test when you have two or more levels (unrelated) of the independent variable.

Assumptions for the Kruskal-Wallis procedure:

 

To conduct a Kruskal-Wallis test
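
A sketch of the syntax, assuming the dependent variable is score and the group codes run from 1 to 3:

    * Kruskal-Wallis test across the levels of the grouping variable.
    NPAR TESTS /K-W=score BY group(1 3).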


Parametric Tests for Relationships - Correlation

When testing for the presence of a statistically significant relationship, the null hypothesis under examination is that the correlation between your independent and dependent variable is zero.

 

Assumptions when testing for a significant relationship

If the assumptions are met, continue and test for a significant relationship. If the assumptions are not met, recode the continuous variables to categorical/ordinal data and use the chi square statistic.

Correlation Step Summary

Checking Linearity & Homoscedasticity Assumptions

Conducting correlation test

 


 


Non-parametric tests for relationships - Chi Square

The Chi Square test of independence is used to examine the statistical significance of the relationship between two categorical/ordinal variables.

 

Assumptions associated with the Chi Square test

If the assumptions are met, continue and test for a significant relationship. If the assumptions are not met, no other statistical test is available, so report a measure of practical significance such as Phi or Cramer's V.

Chi Square Step Summary
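
A sketch of the syntax, assuming two categorical variables named gender and smoker:

    * Chi-square test of independence with expected counts; Phi and Cramer's V accompany it as measures of practical significance.
    CROSSTABS /TABLES=gender BY smoker
      /STATISTICS=CHISQ PHI
      /CELLS=COUNT EXPECTED.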

 


Determining Power & Sample Size

While SPSS does have some capacity with respect to power estimation, software specifically designed to estimate power and determine a-priori sample size is recommended. The free software used in this course is G*Power. The directions here apply to the G*Power software.

Determining Power for a Differences study after a study/analysis complete: Post-Hoc Power Analysis

Post-hoc power analyses are done after you or someone else conducted an experiment.

You have:

* alpha,
* N (the total sample size),
* and the effect size.

Effect size can be conceived of as a measure of the "distance" between H0 and H1.

Hence, effect size refers to the underlying population rather than a specific sample. In specifying an effect size, researchers define the degree of deviation from H0 that they consider important enough to warrant attention. In other words, effects that are smaller than the specified effect size are considered negligible.

You want to know

* the power of a test to detect this effect.

For instance, you tried to replicate a finding that involves a difference between two treatments administered to two different groups of subjects, but failed to find the effect with your sample of 36 subjects (14 in Group 1, and 22 in Group 2).

Suppose you expect a "medium" effect according to Cohen's effect size conventions between the two groups (d = .50) and you want to have alpha = .05 for a two-tailed test. You enter these values and click the "Calculate" button to find out that your test's power to detect the specified effect is ridiculously low: 1-beta = .2954.

However, you might want to draw a graph using the Draw graph option to see how the power changes as a function of the effect size you expect, or as a function of the alpha-level you want to risk.

 

Determining Sample Size for a Differences study at the outset: A-priori Power Analysis

A priori power analyses are done before you conduct an experiment.

You have:

alpha,
the desired power (1-beta),
and the effect size of the effect you want to detect.

You want to know how many subjects you need:

the total sample size.

For instance, if you want to compare the effects of two treatments administered to two different groups of subjects, you choose

Suppose you expect a "large" effect according to Cohen's effect size conventions between the two groups (d = .80), and you want to have alpha = beta = .05 (i.e., power = .95), you

enter these values and click the "Calculate" button to find out that you need N = 84 subjects.

If you think this is too much, you might want to have G*Power draw a graph for you to see how the sample size changes as a function of the power of your test, or as a function of the effect size you expect. Simply click on the Draw Graph button.

 

For an ANOVA you need the same information plus you need to specify the number of groups.

For a Correlational analysis, the effect size is the value of the correlation coefficient. G*Power will need the same information for a correlational analysis as it did for differences: effect size, alpha, and power (to determine sample size), or effect size, alpha, and sample size (to determine power).

 


Factor Analysis

This technique can be used for

 

Steps for factor analysis
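
A sketch of the syntax, assuming five hypothetical items named item1 through item5:

    * Principal components extraction with varimax rotation.
    FACTOR /VARIABLES=item1 item2 item3 item4 item5
      /PRINT=INITIAL EXTRACTION ROTATION
      /CRITERIA=MINEIGEN(1)
      /EXTRACTION=PC
      /ROTATION=VARIMAX.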