SPSS Guide

This guide is intended to support the data analysis work that is an integral part of the Measurement and Evaluation Course. It is essential to acquire a firm grasp of the basics (descriptive statistics) since they will be used throughout the course for a wide array of analytical purposes.

Quick Links to sections of this guide:

Creating a Data File

Checking for Errors in a Data File

Data Transformations, Recode

Descriptive Statistics

Graphs

Item Analysis

Validity

Reliability

Objectivity

The information presented in each section provides both context (when to use) and menu paths within SPSS to follow to execute various analyses.

 


"Through and through the world is infested with quantity: To talk sense is to talk quantities. It is no use saying the nation is large - how large?
It is no use saying that radium is scarce - how scarce? You cannot evade quantity.
You may fly to poetry and music, and quantity and number will face you in your rhythms and your octaves."

Alfred North Whitehead


Creating a Data File

When you first open SPSS you will notice two tabs at the bottom of the screen: one is the Data View, the other is the Variable View.

Data View: From the Data View you enter your data. Each row holds the data for one individual (case); each column holds the data for one variable.

Variable View: From the variable view, you provide information pertaining to each variable in your data set. This will include providing:

Variable name: Should be a short descriptive name - no spaces permitted

Variable label: A longer descriptive phrase to describe the variable.

Value labels: For categorical data (e.g., gender) where the numbers represent categories, the values column is where you specify which category each number represents. For example, Male = 0; Female = 1.

Level of Measurement: Nominal/Categorical (data just represent categories); Ordinal (data represent categories in a meaningful order); Scale (refers to interval and ratio scaled data) - generally meaning you are working with data that represent actual scores rather than just categories.

Adding Case/ID Numbers

When you create a new data file it should include a variable named ID. IDs can be numbers or names (e.g., cereals, pet name in hula file). To have SPSS create the ID numbers (one common approach, using the built-in $CASENUM system variable; menu wording may vary slightly by SPSS version):

  • From the Transform menu choose Compute Variable.
  • Type ID as the Target Variable.
  • Enter $CASENUM as the Numeric Expression ($CASENUM numbers the cases in order).
  • Click OK.

You'll be able to see your new ID variable in SPSS's Data View.

Displaying Data File Information

To see a summary of the information in a data file displayed in the output area of SPSS:

  • From the File menu choose Display Data File Information, then Working File.

Notice that the information produced in the output file is essentially the same as that in the variable view. The information will be displayed in two parts: the Variable Information and the Variable Values.

Checking for Errors in a Data File

Following entry of data into the SPSS spreadsheet it is important to check for errors. For example, consider the variable GENDER with value labels of 1 for male and 2 for female. It is reasonable to assume that a typing error could result in entries of other than a 1 or 2. One way to detect this error is to have SPSS produce a frequency distribution table for this variable. It might look like this:

Gender    Frequency
Male         35
Female       41
3             6
6             2

 

This table makes it clear that 8 of the entries are erroneous. For six subjects the value 3 was entered for gender and for another two subjects the value 6 was entered. With the errors detected, you have two options: if you can consult the original records, locate the erroneous entries in the Data View and type in the correct values; if you cannot, declare the out-of-range values as missing values so they are excluded from analyses.

Error Checking (and correcting with 'missing values') Step Summary

1st, get a frequency distribution table for all variables and all cases in the data file:

  • From the Analyze menu choose Descriptive Statistics, then Frequencies.
  • Move all variables into the Variable(s) box and click OK.

2nd, if errors are detected that are clearly values outside what is acceptable for a variable:

  • In the Variable View, click in the Missing cell for that variable.
  • Choose Discrete missing values, enter the out-of-range value(s) (e.g., the 3 and 6 above), and click OK. SPSS will now exclude those values from analyses.

Note: If you encounter a situation where a value is inappropriate only for a particular person in the data set, you will not be able to use the 'missing values' feature in the Variable View. Instead you will need to find the incorrect value(s) in the Data View and delete them manually from the data file. For example, consider the situation where you have obtained two heart rates: one resting and the other one minute after jogging in place. If for one of the cases the two values were 128 and 128, that is likely an error since 128 is quite high for a resting heart rate and the exercise heart rate is unlikely to equal the resting heart rate. If you don't have access to the original data so you can re-enter the correct values, then you need to delete these values from the data file. But since 128 may be a legitimate value for other cases, you can't just assign it as a missing value from the Variable View. You need to go into the Data View, find this case, and delete each 128, leaving blank cells for this particular person/case.

Note: Constructing frequency distribution tables for every variable to check for errors is an important step to complete before initiating any analytical work.

 


Modifying Data

Data Transformations

Regardless of the nature of the variable, it is often useful to condense information before reporting it. For example: assume you collected information on years of education in 5 categories (< High School, High School, Some College, Bachelor's degree, Master's degree or higher) but only wanted to report the proportion of people with no college work and those with at least some college work. You would not want to alter the original variable, so you would recode its values into a new variable.

Recode Step Summary

  • From the Transform menu choose Recode into Different Variables.
  • Move the variable to be recoded into the box, give the Output Variable a name and label, and click Change.
  • Click Old and New Values and define each mapping (e.g., values 1-2 become 0; values 3-5 become 1), clicking Add after each.
  • Click Continue, then OK. The new, recoded variable appears in the Data View.
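To make the recode logic concrete, here is a minimal sketch in Python/pandas (the data, variable names, and codes are hypothetical illustrations, not part of the SPSS procedure):

```python
import pandas as pd

# Hypothetical education codes: 1 = < HS, 2 = HS, 3 = Some College,
# 4 = Bachelor's, 5 = Master's or higher
df = pd.DataFrame({"educ": [1, 3, 2, 5, 4, 2, 3]})

# Recode into a NEW variable, leaving the original intact:
# 1-2 -> 0 (no college work), 3-5 -> 1 (at least some college work)
df["educ2"] = df["educ"].map({1: 0, 2: 0, 3: 1, 4: 1, 5: 1})

print(df["educ2"].value_counts(normalize=True))  # proportion in each group
```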

 

Combining information to create a new variable

In situations where you have component information and need, for example, a total for each individual, a new variable must be created. This is easily done using the Compute feature in SPSS.

Step summary for combining information into a new variable

  • From the Transform menu choose Compute Variable.
  • Give the Target Variable a name (e.g., TOTAL).
  • Build the Numeric Expression from the component variables (e.g., q1 + q2 + q3).
  • Click OK. The new variable appears in the Data View.
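The same computation sketched in Python/pandas (hypothetical component variables q1-q3):

```python
import pandas as pd

df = pd.DataFrame({"q1": [3, 4, 2], "q2": [5, 3, 4], "q3": [4, 4, 5]})
df["total"] = df[["q1", "q2", "q3"]].sum(axis=1)  # one total per individual
print(df)
```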

 

Standardized Scores

There are times when it's useful to transform raw scores to standardized scores with a fixed mean and standard deviation. SPSS can do the transformation from raw scores to Z scores (which have a mean of 0 and standard deviation of 1).

Z scores

Z scores are a type of standardized score. Their particular feature is that they have a mean of zero and standard deviation of one. Standard scores tell you how many standard deviation units above or below the mean a value falls.

Z score Step Summary (a 2-step process)

  • From the Analyze menu choose Descriptive Statistics, then Descriptives; move the variable(s) of interest over, check Save standardized values as variables, and click OK.
  • The Z scores appear as new variables (named with a leading Z) in the Data View.
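The underlying calculation is straightforward; a minimal Python sketch (hypothetical scores; uses the sample standard deviation, which is what SPSS uses when saving standardized values):

```python
from statistics import mean, stdev

scores = [72, 85, 90, 64, 78]
m, s = mean(scores), stdev(scores)   # stdev = sample SD (n - 1 in denominator)
z = [(x - m) / s for x in scores]    # the z values have mean 0 and SD 1
print([round(v, 2) for v in z])
```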

 

 

Selection of Cases for Analysis (rather than whole data set)

There are times when you need to conduct an analysis on a portion of a data set (e.g., just the women) rather than the whole group. When this is the case, you first select the group you want to conduct the analysis on and then proceed to do the analysis (e.g., central tendency) you need for just that group. REMEMBER: when done, undo the selection so all cases are available for subsequent analyses.

Selecting a subset of a group prior to analysis

Frequently, because of the nature of the group from which measures have been obtained, analyses on a subset of the entire group are of interest. When this is the case you first identify the subset (select cases), then proceed with the analysis.

Selecting a subset of a group Step Summary

  • From the Data menu choose Select Cases.
  • Choose If condition is satisfied, click If, and enter the condition that defines the subset (e.g., gender = 1).
  • Click Continue, then OK; unselected cases are filtered out of subsequent analyses.
  • When done, return to Data, Select Cases and choose All cases.

In situations where you would like to conduct the same analysis (e.g., correlation, reliability) on each subset of a group (e.g., males and females), the split file feature in SPSS is ideal. For example, you may want to look at the correlation between exercise frequency and cholesterol level for men and then for women. You could of course use the 'select cases' procedure above, first for the males and then again for the females; however, the split file feature lets you do the two analyses at the same time.

Splitting a File

Splitting a File - Step Summary

  • From the Data menu choose Split File.
  • Choose Compare groups (or Organize output by groups) and move the grouping variable (e.g., gender) into the box.
  • Click OK, then run the analysis; it is repeated for each group.
  • When done, return to Data, Split File and choose Analyze all cases, do not create groups.
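For intuition, the analogue of Split File in Python/pandas is groupby, which repeats the same analysis once per group (hypothetical data and variable names):

```python
import pandas as pd

df = pd.DataFrame({
    "gender":   ["M", "M", "M", "F", "F", "F"],
    "exercise": [1, 3, 5, 2, 4, 5],
    "chol":     [220, 200, 180, 210, 190, 170],
})

# One exercise-cholesterol correlation matrix per gender, in a single pass
print(df.groupby("gender")[["exercise", "chol"]].corr())
```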

 

 


Descriptive Statistics

Summarizing group information is typically the first step in the search for patterns, highlights, and meaning in a data set. Summary information can be presented both visually with the use of graphs and in the form of summary statistics. This section will focus on:

  • Selection of descriptive statistics to summarize group data
  • Frequency distribution tables
  • Crosstabulation tables
  • Central tendency & variability
  • Percentiles
  • Correlation

 

Selection of Descriptive Statistics to Summarize Group Data

The connection between the level of measurement for data and the selection of appropriate statistics to summarize that data is an important one. The table below provides some guidance on which statistics are appropriate for each level of measurement.

Level of Measurement   Applicable Statistics
Nominal/Categorical    Percentages, Mode
Ordinal                Percentages, Mode, Median*
Interval               Mean, Median, Mode, Standard Deviation, Range
Ratio                  Mean, Median, Mode, Standard Deviation, Range

*Note: Use of the median for ordinal data should be applied only in situations where the underlying variable can be considered continuous or when you have a wide range of scores and the numbers do not simply represent a few discrete categories.

 


Frequency Distribution Tables for Summarizing Discrete Group Information

For categorical and ordinal data the construction of frequency distribution tables is an excellent way to summarize group information.

If you were to make a frequency distribution table by hand you would simply list each category/value observed followed by a count (also called absolute frequency) of the number of individuals in that category. An additional column called the relative frequency is often useful since it notes the percentage of the group in a particular category. For example:

Gender   f   rf
Male   28   48%
Female   30   52%

f: absolute frequency - count

rf: relative frequency = (count/N) × 100 - recorded as a %
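The same f and rf calculations sketched in Python/pandas (hypothetical data):

```python
import pandas as pd

gender = pd.Series(["Male"] * 28 + ["Female"] * 30)
f = gender.value_counts()              # absolute frequency (count)
rf = (f / len(gender) * 100).round(1)  # relative frequency, as a %
print(pd.DataFrame({"f": f, "rf": rf}))
```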

 

Frequency Distribution Tables Step Summary

To get a frequency distribution table for all cases in the data file:

  • From the Analyze menu choose Descriptive Statistics, then Frequencies.
  • Move the variable(s) of interest into the Variable(s) box and click OK.

To get a frequency distribution table for a subset of cases in the data file:

  • First select the subset: from the Data menu choose Select Cases, choose If condition is satisfied, and enter the condition (e.g., gender = 1).

With subgroup now selected:

  • Run the Frequencies procedure exactly as above; only the selected cases are tabled.

Remember to go back through data menu to reselect all cases before starting analyses where all cases are needed.

 

Note: You would not construct frequency distribution tables for continuous data when the intent is to summarize information. The reason is that such data can take on a great number of values and since each value is listed in a frequency distribution table little summary may be accomplished. Measures of Central Tendency and Variability are much more useful in summarizing group information for interval and ratio scaled data.

 


 

 


Crosstabulation Tables for Summarizing Discrete Group Information

For categorical and ordinal data the construction of crosstabulation tables is an excellent way to cross-reference summary information for two or more variables.

If you were to make a crosstabulation table by hand you would in rows list each category/value of one variable and in columns list each category/value of a second variable. The table then would contain a count of the number of individuals in cells representing the various combinations of values for the two variables. For example, you might want to combine in one table gender (categorical) and age group (ordinal).

                  Age Group
          20-25   26-30   31-35
Gender
  Male      28      20      15
  Female    30      18      20

 

From this table you can see that 28 of the subjects were male and in the youngest age group, and 18 of the subjects were female and in the middle age group.
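The same kind of table can be built in Python/pandas with crosstab (hypothetical raw data):

```python
import pandas as pd

df = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Female", "Male"],
    "agegrp": ["20-25", "20-25", "26-30", "31-35", "20-25"],
})
print(pd.crosstab(df["gender"], df["agegrp"]))  # count of people per cell
```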

Crosstabulation Tables for Summarizing Group Information Step Summary

  • From the Analyze menu choose Descriptive Statistics, then Crosstabs.
  • Move one variable into the Row(s) box and the other into the Column(s) box.
  • Click OK.

Step Summary to break down by a 3rd variable (layered crosstabulation)

  • Follow the steps above, but also move the 3rd variable into the Layer box before clicking OK; a separate subtable is produced for each of its categories.

 

Note: You would not construct crosstabulation tables for continuous data when the intent is to summarize information. The reason is that such data can take on a great number of values and each value would be listed in a crosstabulation table. Therefore little summary may be accomplished. Measures of Central Tendency and Variability are much more useful in summarizing group information for continuous variables.

 


Central Tendency & Variability

Measures of central tendency summarize data by identifying where the center of a distribution of scores is. Measures of variability summarize data by quantifying the spread or dispersion of scores around the center.

For categorical and ordinal data with few categories, the Mode (though not an optimal measure) is an acceptable measure of central tendency; however, discrete data are best summarized with a frequency distribution table.

For data at least interval scaled, the Median and Mean are appropriate measures of central tendency. If the distribution of scores is skewed the Median is the best measure of central tendency. The most common measure of variability is the standard deviation and is appropriate for use with data at least interval scaled.

In addition to being used to summarize a data set, measures of central tendency and variability are critical components of other statistical procedures.

Central Tendency & Variability Step Summary

REMEMBER, you must check the shape (obtain histogram under graphs option) of the distribution of scores to decide what measure of central tendency is appropriate. If the shape is clearly skewed then you need to obtain a median.

Central Tendency & Variability for data from one variable:

  • From the Analyze menu choose Descriptive Statistics, then Frequencies.
  • Click Statistics and check the measures you need (mean, median, mode, standard deviation, range), then Continue and OK.

To get measures of central tendency and variability for data from one interval/ratio scaled variable broken down by one discrete variable:

  • From the Analyze menu choose Compare Means, then Means.
  • Move the continuous variable into the Dependent List and the discrete variable into the Independent List, then click OK.

To break the analysis down by a 2nd categorical variable (layered compare means):

  • In the Means dialog, click Next in the Layer box and add the 2nd categorical variable before clicking OK.
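In Python/pandas terms (hypothetical data), the whole-group and broken-down analyses look like this:

```python
import pandas as pd

df = pd.DataFrame({"gender": ["M", "M", "F", "F"],
                   "hr":     [62, 70, 58, 66]})

print(df["hr"].agg(["mean", "median", "std"]))                    # whole group
print(df.groupby("gender")["hr"].agg(["mean", "median", "std"]))  # by group
```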

 


 

 

 


Percentiles

Percentiles (the raw score with a specified percentage of scores below it) are useful for conveying information about an individual's standing relative to the group. Specific percentiles can be requested under the Statistics option of the frequency distribution (Frequencies) procedure.

Percentiles Step Summary

  • From the Analyze menu choose Descriptive Statistics, then Frequencies.
  • Move the variable over, click Statistics, check Percentile(s), and add each percentile you want (e.g., 25, 50, 75).
  • Click Continue, then OK.


Correlation

There are several types of correlation coefficients to choose from. The choice is based on the nature of the data being correlated.

Pearson Product Moment Correlation   Use when both variables have interval or ratio scaled data
Phi                                  Use when both variables are discrete and the data are dichotomous
Cramer's V                           Use when both variables are discrete and at least one has more than two categories
Kendall's Tau                        Use when both variables have ordinal data
Point Biserial Correlation           Use when one variable has interval or ratio scaled data and the other is a true dichotomy

 

Pearson Product Moment Correlation (PPMC)

The PPMC can be used to describe the strength and direction of the linear relationship between two continuous variables. When two variables are not linearly related, the PPMC is likely to underestimate the true strength of the relationship. A graph of the x and y values can show whether or not the relationship is linear.

Correlation Step Summary for PPMC

  • Under the analyze menu choose correlate then choose bivariate.
  • Select the two continuous variables and move them to the variables box.
  • Leave the Pearson box checked (it is checked by default).
  • Then click OK button.
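The coefficient itself can be checked in plain Python (requires Python 3.10+; hypothetical data):

```python
from statistics import correlation  # Pearson r by default

x = [2, 4, 5, 7, 9]
y = [1, 3, 6, 6, 10]
print(round(correlation(x, y), 3))
```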


Kendall's Tau

Kendall's Tau can be used to describe the strength and direction of the relationship between two ordinal variables. It is a rank-order correlation coefficient (as is Spearman's rho) and conveys the extent to which pairs of values (x,y) are in the same rank order.

Correlation Step Summary for Kendall's Tau

  • Under the analyze menu choose correlate then choose bivariate.
  • Select the two ordinal variables and then move them to the variables box.
  • Check the box labeled Kendall's Tau.
  • Then click OK button.

Phi (and Cramer's V)

Phi can be used to describe the strength of the relationship between two variables each with data that is dichotomous.

Cramer's V can be used to describe the strength of the relationship between two discrete variables.

Phi is a signed number between -1 and 1; Cramer's V ranges from 0 to 1. In both cases zero represents no relationship.

Correlation Step Summary for Phi and Cramer's V

  • Under the analyze menu choose descriptive statistics then crosstabs.
  • Move one variable into the rows box and the other into the columns box.
  • Click the Statistics button and check Phi and Cramer's V.
  • Click Continue, then OK.


Point Biserial Correlation

The Point Biserial Correlation can be used to describe the strength of the relationship between one continuous (interval or ratio scaled) variable and one dichotomous variable. The point biserial correlation coefficient is useful in detecting a pattern in group measures (e.g., one group's scores tending to be higher than another group). The sign carries little meaning. It only indicates which group tended to have higher scores. The point biserial coefficient is a signed number between -1 and 1 where zero represents no relationship.

The computational formula for the point biserial coefficient is

$r_{pb} = \dfrac{\bar{X}_1 - \bar{X}_0}{S_x} \sqrt{P_0 P_1}$

Where:

$\bar{X}_0$ = mean of the x values for those in category 0
$\bar{X}_1$ = mean of the x values for those in category 1
$S_x$ = standard deviation of all x values
$P_0$ = proportion of people in category 0
$P_1$ = proportion of people in category 1

To obtain the components you need from SPSS so you can do the Point Biserial by hand, use the Compare Means feature:

  • From the Analyze menu choose Compare Means, then Means.
  • Move the continuous variable into the Dependent List and the dichotomous variable into the Independent List, then click OK.
  • The output gives the mean for each category and the overall standard deviation; the proportion in each category can be obtained from a frequency distribution table.
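A minimal Python sketch of the hand calculation (hypothetical data; note that the formula uses the population standard deviation, which makes r_pb equal the Pearson r between the scores and the 0/1 codes):

```python
from statistics import mean, pstdev

x = [12, 15, 11, 18, 20, 14]   # continuous scores
g = [0, 0, 0, 1, 1, 1]         # dichotomous group membership (0/1)

x0 = [xi for xi, gi in zip(x, g) if gi == 0]
x1 = [xi for xi, gi in zip(x, g) if gi == 1]
p0, p1 = len(x0) / len(x), len(x1) / len(x)

r_pb = (mean(x1) - mean(x0)) / pstdev(x) * (p0 * p1) ** 0.5
print(round(r_pb, 3))
```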

 

 

 


Using Graphs to Summarize Data

 

Graphs are the visual counterparts to descriptive statistics and are very powerful mechanisms for revealing patterns in a data set. In addition, when used appropriately in a report they can highlight trends and summarize pertinent information in a way no amount of text could.

When summarizing categorical data, pie or bar charts are the most efficient and easiest to interpret, though line graphs may be more helpful when trying to draw attention to trends in the data. For continuous (interval or ratio scaled) data, histograms are a good choice: easily constructed and simple to interpret. To represent visually the relationship between two continuous variables, a scattergram can be used.

Bar Charts

To create a simple bar chart for categorical and ordinal (with few categories) data:

  • From the Graphs menu choose Legacy Dialogs, then Bar (menu location may vary slightly by SPSS version).
  • Choose Simple, click Define, move the variable into the Category Axis box, and click OK.

Clustered Bar Charts - From Graphs Menu

  • From the Graphs menu choose Legacy Dialogs, then Bar; choose Clustered, click Define, assign the category-axis variable and the variable that defines the clusters, then click OK.

Clustered Bar Charts - From Crosstabs Menu

  • In the Crosstabs dialog (Analyze, Descriptive Statistics, Crosstabs), check Display clustered bar charts before clicking OK.

 

Scattergrams

To create a scattergram (two continuous variables):

  • From the Graphs menu choose Legacy Dialogs, then Scatter/Dot.
  • Choose Simple Scatter and click Define.
  • Assign one variable to the Y axis and the other to the X axis, then click OK.

 

Histograms

To create a histogram (interval or ratio scaled data):

  • From the Graphs menu choose Legacy Dialogs, then Histogram; move the variable into the Variable box and click OK.

To create histograms (interval or ratio scaled data) for separate groups from a discrete variable (one straightforward approach):

  • First split the file by the discrete variable (Data menu, Split File, Organize output by groups), then run the histogram as above; one histogram is produced per group.

To break down by a 2nd discrete variable:

  • Add the 2nd discrete variable to the Groups Based on box in the Split File dialog; a histogram is produced for each combination of groups.

 


Item Analysis

Following administration of an exam composed of multiple-choice items, you can statistically examine the quality of the items with respect to their difficulty and their ability to distinguish among ability levels.

Item Difficulty

Of interest is what proportion of the group got the item correct. While SPSS does not provide this information directly, provided you have coded correct answers as one and incorrect answers as zero, the proportion is easily obtained: it is simply the mean of the item's 0/1 scores.

Item Difficulty Step Summary

  • From the Analyze menu choose Descriptive Statistics, then Descriptives.
  • Move the item variables over and click OK; with 0/1 coding, each item's mean is the proportion of the group answering it correctly.

Item Discrimination Step Summary

  • One common index is the corrected item-total correlation: from the Analyze menu choose Scale, then Reliability Analysis.
  • Move all of the items into the Items box, click Statistics, and check Scale if item deleted.
  • Click Continue, then OK; the Corrected Item-Total Correlation column shows how well each item distinguishes high scorers from low scorers.
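Both indices are easy to verify by hand; a minimal Python sketch (hypothetical 0/1 responses; requires Python 3.10+ for statistics.correlation; the item-total correlation here is uncorrected, i.e., each item is included in its own total):

```python
from statistics import correlation

# Rows = examinees, columns = items, 1 = correct, 0 = incorrect
resp = [
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
]
n = len(resp)
totals = [sum(row) for row in resp]

for j, col in enumerate(zip(*resp)):
    difficulty = sum(col) / n                        # proportion correct
    discrimination = correlation(list(col), totals)  # item-total (point biserial)
    print(f"item {j + 1}: p = {difficulty:.2f}, r = {discrimination:.2f}")
```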


Validity of Scores

Depending on the type and purpose of a test, evidence of criterion-related validity (e.g., concurrent, predictive) can be obtained using a correlation coefficient.

Concurrent validity of scores

This is examined when you are interested in the extent to which a particular measure is as good as an established criterion measure already known to provide valid and reliable data. You determine this by correlating your scores (x, continuous) with scores or classifications from the criterion measure (y).

The process would entail administering your measure and the criterion measure to the same group at about the same time, then correlating the two sets of scores.

Steps for concurrent validity of scores

  • Under the analyze menu choose correlate then choose bivariate.
  • Move your scores (x) and the criterion scores (y) into the variables box; with two continuous variables the Pearson coefficient is appropriate.
  • Click OK; the resulting correlation is the concurrent validity coefficient.


Predictive validity of scores

This is examined when you are interested in the extent to which a particular measure is a good predictor of another variable. You determine this by correlating your scores (x, continuous) with scores or classifications from the measure you are trying to predict (y).

Steps for predictive validity

  • Collect the criterion (predicted) measure at the later point in time it represents.
  • Under the analyze menu choose correlate then choose bivariate; move your scores (x) and the criterion scores (y) into the variables box and click OK.


 

Validity of classifications

Depending on the type and purpose of a test, evidence of criterion-related validity of classifications (e.g., master-nonmaster) can be obtained from a correlation coefficient.

Concurrent validity of classifications

The concurrent validity of classifications is examined when you are interested in the extent to which classifications (master/non-master) are correct. You determine this by correlating your classifications (x) with classifications or scores from a criterion measure (y).

Steps for concurrent validity of classifications

  • With two dichotomous classification variables, the Phi coefficient is appropriate (see the correlation chooser table above).
  • Under the analyze menu choose descriptive statistics then crosstabs; move one classification variable into rows and the other into columns.
  • Click Statistics, check Phi and Cramer's V, then Continue and OK.
  • If the criterion is a continuous score rather than a classification, the point biserial correlation applies instead.


Predictive validity of classifications

This is examined when you are interested in the extent to which classifications are good predictors of another set of classifications or scores. You determine this by correlating your classifications (x) with classifications or scores from a variable you are trying to predict (y).

Steps for predictive validity of classifications

  • Follow the same steps as for concurrent validity of classifications above, using the later-collected classifications (or scores) as the criterion (y).


Reliability of Scores

 

The primary concern here is the accuracy of measures. Reducing sources of measurement error is the key to enhancing the reliability of the data.

Reliability is typically assessed in one of two ways:

  • Internal consistency: the consistency of scores across items or trials within a single test administration.
  • Stability: the consistency of scores across two administrations of the test separated in time (test-retest).

To estimate reliability you need 2 or more scores (or classifications) per person.

Note: When interpreting Cronbach's alpha or the intraclass R, a value > .70 reflects good reliability.

 

 

Internal Consistency and Stability of Scores - Continuous data

If multiple cognitive, motor skill, or physiological measures are collected at one time or over time, you can use an intraclass coefficient to estimate reliability.

Once you have 2 scores per person, the question is: how consistent, overall, were the scores?

NOTE: In many situations reliability has been estimated incorrectly using the Pearson correlation coefficient. This is not appropriate since (1) the PPMC is meant to show the relationship between two different variables - not two measures of the same variable, and (2) the PPMC is not sensitive to fluctuations in test scores. The PPMC is an interclass coefficient; what is needed is an intraclass coefficient.

The most commonly used and appropriate reliability coefficients are the intraclass R calculated from values in an analysis of variance table and Cronbach's alpha.

Steps for Cronbach's alpha

  • From the Analyze menu choose Scale, then Reliability Analysis.
  • Move the variables holding the repeated scores (e.g., trial1, trial2) into the Items box.
  • Make sure the Model is set to Alpha, then click OK.
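For reference, the alpha computation itself is short; a sketch in plain Python (hypothetical trial scores):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item/trial, all the same length."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]  # total per examinee
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

# e.g., two trials of a motor skill test for five examinees:
trial1 = [10, 12, 9, 15, 11]
trial2 = [11, 13, 9, 14, 12]
print(round(cronbach_alpha([trial1, trial2]), 3))
```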

Spearman Brown Prophecy Formula - when test length is manipulated

There are situations where you might want to understand how changes in test length may affect reliability. When this is the case, you 1st obtain Cronbach's alpha for the 'original' length test, then apply the Spearman-Brown prophecy formula:

$R_{new} = \dfrac{m \, R}{1 + (m - 1) \, R}$

m = the factor by which test length is changed (e.g., m = 2 to double it, m = 0.5 to halve it)
R = reliability coefficient of the original-length test (e.g., Cronbach's alpha)

Note: this can be particularly useful when you administer a test only once and multiple measures are not available. In this case, for example with a cognitive test, the most common way of getting 2 scores per person is to split the measures in half - usually by odd/even items, or by first half/second half of time or trials for motor skill tests.

Since test length directly influences reliability, the coefficient must be adjusted back up to the original length: in this situation you have estimated the reliability of a test half as long as the one you gave, yet you set out to establish the reliability of the full-length test. The statistic to use is the Spearman-Brown prophecy formula. It can be employed any time you manipulate test length, or when you want to hypothesize what would happen to reliability if you shortened or lengthened a test. Unfortunately, SPSS does not provide an option for the Spearman-Brown statistic, but the calculation is easily managed by hand.

Steps for Cronbach's alpha (split-half situation)

  • Use Compute (Transform menu) to create two half-test scores per person (e.g., an odd-item total and an even-item total).
  • From the Analyze menu choose Scale, then Reliability Analysis; move the two half-test scores into the Items box and click OK.

You now have the reliability of scores for the half-length test. To get the reliability for the full-length test, use the Spearman-Brown prophecy formula with m = 2:

$R_{full} = \dfrac{2 \, R_{half}}{1 + R_{half}}$
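The hand calculation is a one-liner; a Python sketch:

```python
def spearman_brown(r, m):
    """Projected reliability when test length is multiplied by m."""
    return (m * r) / (1 + (m - 1) * r)

# A half-test alpha of .60 projected to full length (m = 2):
print(spearman_brown(0.60, 2))  # 0.75
```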



Reliability of Classifications

Stability of Classifications - Dichotomous Data (Mastery Test Classifications)

In this instance you are interested in the consistency of classifications from a mastery test. The two statistics of interest are the proportion of agreement (compute by hand from values in a crosstabulation table) and Kappa.

Steps for Proportion of Agreement

  • From the Analyze menu choose Descriptive Statistics, then Crosstabs; move one set of classifications into rows and the other into columns, then click OK.
  • By hand, add the counts in the two agreement cells (master/master and non-master/non-master) and divide by the total number of people.

Steps for Kappa

  • Run the same Crosstabs, but click Statistics and check Kappa before clicking OK.
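Both statistics computed from a hypothetical 2 x 2 agreement table in Python:

```python
# Rows: Day 1 classification; columns: Day 2 classification
a, b = 40, 5    # master/master, master/non-master
c, d = 7, 48    # non-master/master, non-master/non-master
n = a + b + c + d

p_a = (a + d) / n                                       # proportion of agreement
p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # agreement expected by chance
kappa = (p_a - p_e) / (1 - p_e)
print(round(p_a, 3), round(kappa, 3))
```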

 

 


Objectivity

Objectivity of scores - Continuous Data

In motor skill performance settings it is often necessary to collect measures through observation. To examine the objectivity of these measures you look at the consistency of measures across observers (inter-rater consistency). Note: you may also videotape a group and have one person record measures on two occasions (intra-rater consistency).

Since the measures come from observations, your task is to examine the consistency of the measures produced by the observers (likely using a rating scale). Have two people observe one group of examinees and evaluate their performance (or videotape the group and have one person evaluate it twice). The measures from the two observers give you two scores per person to use in the Cronbach's alpha or intraclass R procedures. The Spearman-Brown formula is not needed in this situation since test length is not manipulated.

Note: When interpreting Cronbach's alpha or the intraclass R, a value > .70 reflects good objectivity.

 

 

Steps for Cronbach's alpha

  • From the Analyze menu choose Scale, then Reliability Analysis.
  • Move the two observers' scores into the Items box, make sure the Model is Alpha, and click OK.

 


 

Objectivity of Classifications - Dichotomous Data (Mastery Test Classifications)

In this instance you are interested in the consistency of classifications from two observers (or one observer scoring video twice). The two statistics of interest are the proportion of agreement (compute by hand from values in a crosstabulation table) and Kappa.

The data you work with can be either scores that are converted to classifications based on a cut score or direct classifications from the observers. Since the classifications come from observations, your task is to examine the consistency of the classifications produced by the observers using a rating scale or checklist. Have two people observe one group of examinees and classify each performance (or videotape the group and have one person evaluate it twice). The classifications from the two observers give you two classifications per person to use in the proportion of agreement and Kappa statistics.

Steps for Proportion of Agreement

  • Follow the same steps as under Reliability of Classifications above: crosstab the two observers' classifications and, by hand, divide the sum of the two agreement cells by the total number of people.

Steps for Kappa

  • In the same Crosstabs dialog, click Statistics and check Kappa before clicking OK.