TEDS Data Dictionary

Processing the 7 Year Data

Introduction

This page describes how the 7 Year analysis dataset is created. The starting point is the raw data, in cleaned and aggregated form. (Prior processes of data collection, data entry, data cleaning and aggregation are taken for granted here.) There are two main sources of data for the 7 Year analysis dataset:

  1. Data collected in the study, including admin data. These are stored in tables in the Access database file called 7yr.accdb.
  2. Background data: twin sexes, zygosity variables, twin birth dates, medical exclusion and overall exclusion variables. Some of these variables are from the 1st Contact dataset, but the source of most of the background variables is the TEDS admin database, where they may occasionally be updated. Rather than exporting the source variables from the admin database and re-importing them every time a dataset is created, this is done once, in a reference dataset containing all the background variables. That reference dataset is used here to add the background variables, ready-made.

Converting raw data from these sources into the dataset involves two main processes: firstly, where appropriate, raw data must be "exported" into files that can be used by SPSS; secondly, the data files are combined and restructured, using SPSS, into a form suitable for analysis. The latter involves a lengthy series of steps, which are saved and stored in SPSS scripts (syntax files).

General issues involved in creating TEDS datasets are described in the data processing summary page. The raw 7 year data files are described in more detail on another page.

Exporting raw data

Exporting involves copying the cleaned and aggregated raw data from the Access database where they are stored, into csv files that can be read into SPSS. The process of exporting raw data is described in general terms in the data processing summary page.

The study data, including admin data, stored in the Access 7yr.accdb database file, have been subject to occasional changes, even after the end of data collection. In earlier years, changes were caused by late returns of parent and teacher booklets; more recently, they have occasionally been caused by data cleaning or data restructuring. If any such changes have been made, the data should be re-exported before a new version of the dataset is created using the SPSS scripts.

The data stored in the database tables are exported indirectly, by means of saved "queries" (or views), rather than directly from the tables themselves. Each query selects appropriate columns from the relevant tables, excluding inappropriate data such as verbatim text fields. The queries also modify the format of the data values in some columns, so that they are saved in a format that can easily be read by SPSS; examples are date columns (changed to dd.mm.yyyy format) and true/false columns (changed to 1/0 values). The queries used to export the data are as follows:

Query name                                      Source of data             Database table(s) involved   Exported file name
Export Parent1, Export Parent2, Export Parent3  parent booklets            Parent1, Parent2, Parent3    Parent1.csv, Parent2.csv, Parent3.csv
Export Child                                    twin telephone interviews  Child                        Child.csv
Export Teacher                                  teacher questionnaires     Teacher                      Teacher.csv
Export 7yr admin                                7 year admin data          yr7Progress                  7yrAdmin.csv

A convenient way of exporting these data files is to run a macro that has been saved for this purpose in the Access database. See the data files summary page and the 7 Year data files page for further information about the storage of these files.
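When the scripts later read these exported files, the import step looks roughly like the GET DATA command below. This is a minimal sketch only: the file path and the variable list (FamilyID, par101, pardate) are hypothetical placeholders, though the EDATE10 format does match the dd.mm.yyyy date format produced by the export queries.

```spss
* Sketch only: reading an exported csv file into SPSS.
* The path and the variable list are hypothetical placeholders.
GET DATA
  /TYPE=TXT
  /FILE='C:\exports\Parent1.csv'
  /DELIMITERS=","
  /FIRSTCASE=2
  /VARIABLES=
    FamilyID F7.0
    par101   F2.0
    pardate  EDATE10.   /* reads dates in dd.mm.yyyy format */
EXECUTE.
SORT CASES BY FamilyID (A).
```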

Processing by scripts

Having exported the raw data as above, a new version of the dataset is made by running the scripts described below. The scripts must be run strictly in sequence. To run each script, simply open it in SPSS, select all the text, and click on the Run icon.

Script 1: Merging raw data sources

The main purposes of this script (filename G1_merge.sps) are to import the raw data files into SPSS; to carry out basic item variable formatting such as naming, setting variable levels and recoding; and to merge the various data files together, so as to create a basic dataset with one row of data per twin. This script also double-enters the twin-specific items from the parent booklet. The script carries out these tasks in order:

  1. There are 4 files of family-based raw data: the 3 files of parent booklet data, and the file of 7 year admin data such as return dates. These raw data files all start in csv format. For each of these 4 files in turn, carry out the following actions:
    1. Import into SPSS
    2. Sort in ascending order of family identifier FamilyID
    3. Recode default values of -99 (missing) and -77 (not applicable) to SPSS "system missing" values
    4. For each variable, change the name, set the displayed width and number of decimal places, and set the SPSS variable level (nominal/ordinal/scale)
    5. Carry out basic recoding of categorical variables where necessary
    6. Drop raw data variables that are not to be retained in the datasets.
    7. Save as an SPSS data file.
  2. In the 3 parent booklet data files, in addition to the steps mentioned above, transform and derive further variables as follows:
    1. Add reversed versions of behaviour items, where needed.
    2. Convert raw twin-pair neither/elder/younger/both items to twin-specific yes/no items
    3. In some cases (in the twin health section of the booklet), the conversion of twin-pair items to twin-specific items also involves combining the coding with the follow-up item, hence reducing the number of item variables. For example, the raw items about hearing problems (twin-pair neither/elder/younger/both item, followed by items asking about categories of hearing problems for each twin) are combined into a single pair of twin-specific items (incorporating both yes/no coding and the category coding if yes).
    4. Derive standardised twin-specific items from the raw elder twin responses and younger twin differences (parental feelings and discipline)
  3. Merge the 4 files of family-based data together using FamilyID as the key variable.
  4. Double enter the family-based data as follows:
    1. Compute twin identifier gtempid2 for the elder twin by appending 1 to the FamilyID. Save as the elder twin part of the family data.
    2. Re-compute gtempid2 for the younger twin by appending 2 to the FamilyID. Reverse the values of the Random variable. Swap over elder and younger twin values in any twin-specific variables in the family data (do this by renaming variables). Save as the younger twin part of the family data.
    3. Combine the elder and younger twin parts together by adding cases. Sort in ascending order of gtempid2 and save as the double entered family data file.
  5. There are 3 files of twin-based raw data: the file of child phone interview test data, the file of teacher questionnaire data, plus the admin data file containing twin IDs and birth orders. These raw data files all start in csv format. For each of these files in turn, carry out the following actions:
    1. Import into SPSS
    2. Sort in ascending order of twin identifier TwinID
    3. Recode default values of -99 (missing) and -77 (not applicable) to SPSS "system missing" values
    4. For each variable, change the name, set the displayed width and number of decimal places, and set the SPSS variable level (nominal/ordinal/scale)
    5. Carry out basic recoding of categorical variables where necessary (and add reversed versions of behaviour variables where needed)
    6. In the file of twin birth orders and sexes, compute the alternative twin identifier gtempid2 as the FamilyID followed by the twin order (1 or 2).
    7. Drop raw data variables that are not to be retained in the datasets.
    8. Save as an SPSS data file.
  6. Merge the 3 twin data files together using TwinID as the key variable.
  7. Double enter the twin and teacher data flags, as follows:
    1. Sort in ascending order of gtempid2 and save as the twin 1 part. (Note that by this stage the twin variables already have names ending in 1.)
    2. Change the flag variable names by changing the ending from 1 to 2. Change the values of gtempid2 to match the co-twin (change the final digit from 1 to 2 or vice versa). Re-sort in ascending order of gtempid2 and save with just the double entered variables as the twin 2 part.
    3. Merge the twin 1 and twin 2 parts using gtempid2 as the key variable. The double entered data flags can now be used to select twin pairs having data.
  8. Merge this twin data file with the double entered parent data file, using gtempid2 as the key variable. This dataset now contains all the raw data.
  9. Use the parent data flag, and the double entered twin and teacher data flags, to filter the dataset and delete any cases without any 7 Year data. Add the overall 7 Year data flag variable gsevenyr.
  10. Anonymise the family and twin IDs; the algorithm for scrambling IDs is described on another page.
  11. Sort in ascending order of scrambled twin ID id_twin.
  12. Save and drop the raw ID variables.
  13. Merge in essential background variables, from a separate reference dataset, using id_twin as the key variable. These include twin birth dates (for deriving ages), 1st Contact reference variables, twin sexes and zygosities, autism, medical exclusions and overall exclusion variables, all of which are already double entered where appropriate.
  14. Use variable gsevenyr to filter the dataset and delete cases added from the reference dataset that do not have 7 Year data.
  15. Save a working SPSS data file ready for the next script (filename g1merge in the \working files\ subdirectory).
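The family-data double-entry step (step 4 above) can be sketched in SPSS syntax roughly as follows. The file names and the pair of twin-specific variables (ptwin1, ptwin2) are hypothetical placeholders, and the sketch assumes the Random variable is coded 0/1; the real script handles every twin-specific variable in the file.

```spss
* Elder twin part: gtempid2 = FamilyID with 1 appended.
GET FILE='working files\famdata.sav'.
COMPUTE gtempid2 = FamilyID * 10 + 1.
SAVE OUTFILE='working files\elder.sav'.

* Younger twin part: append 2, reverse Random, swap twin-specific variables.
GET FILE='working files\famdata.sav'.
COMPUTE gtempid2 = FamilyID * 10 + 2.
RECODE Random (0=1) (1=0).   /* assumes 0/1 coding */
RENAME VARIABLES (ptwin1 ptwin2 = ptwin2 ptwin1).   /* simultaneous swap */
SAVE OUTFILE='working files\younger.sav'.

* Stack the two parts (add cases) and sort by the new twin identifier.
ADD FILES /FILE='working files\elder.sav' /FILE='working files\younger.sav'.
SORT CASES BY gtempid2 (A).
SAVE OUTFILE='working files\famdouble.sav'.
```

The RENAME VARIABLES swap works because SPSS applies all renames within one parenthesised list simultaneously, so no temporary variable is needed.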

Script 2: Deriving new variables

The purpose of this script (filename G2_derive.sps) is to derive scale and composite variables and twin ages from the item data. See derived 7 Year variables for full details of derivation of individual variables. The script carries out these tasks in order:

  1. Open the data file saved at the end of the previous script.
  2. Derive best estimates of the dates when the parent, twin and teacher data were completed or returned, using the full range of available dates in the raw data. Use these dates to derive the ages of the twins when the parent, twin and teacher data were completed.
  3. Derive a total score variable for each of the twin telephone tests. Where appropriate, "don't know" and "timed out" responses are first (temporarily) recoded so that they make zero contribution to the total score.
  4. Derive a flag variable identifying valid and invalid TOWRE test results.
  5. Create age-standardised versions of the two TOWRE test scores (word and non-word), using the following steps:
    1. Import the raw data files (from csv into SPSS) containing lookup tables of age-corrected TOWRE scores. There are two such tables: one for the 'word' test, and one for the 'non-word' test. Sort each of these files by its key variable, which is a number derived from the relevant test score and the age. Save the two files.
    2. Re-open the working version of the 7 Year dataset. Derive a new variable for the twin age, appropriately grouped to match the values in the TOWRE lookup tables. From this, and from the TOWRE 'word' and 'non-word' scores, derive key variables to match those used in the TOWRE lookup tables.
    3. Merge with each of the two TOWRE lookup table data files, in each case using the relevant key variable derived from the TOWRE score and the age. This has now added the age-standardised TOWRE scores to the dataset. Re-sort back into id_twin order.
  6. Derive temporary variables to identify the biological mother and father of the twins from the respondent and respondent's partner in the parent booklet. Use these to derive new variables for educational qualifications and employment categories for the twins' mother and father.
  7. Derive standardised TOWRE, cognitive, academic achievement and SES composites as follows:
    1. Apply a filter (exclude1=0 & exclude2=0) to remove exclusions
    2. Standardise the necessary component items and scores
    3. Compute the mean of the appropriate standardised items/scores
    4. Standardise the mean, to make the final version of each composite
    5. Remove the filter
  8. Compute various scales from the behaviour items, from both the parent booklet and the teacher questionnaire. The behaviour scales include SDQ, PSD and ASD scales.
  9. Derive standardised parental feelings and discipline composites, using means of the standardised twin-specific versions of the items that were derived in the previous script.
  10. Drop any temporary variables that have been used to derive the new variables. Date variables are dropped at this point, having been used to derive ages.
  11. Save a working SPSS data file ready for the next script (filename G2derive), dropping all temporary variables that had been used during derivation of scales.
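The composite-derivation pattern in step 7 can be sketched as follows. The component names (score1, score2) and the composite name (gcomp) are hypothetical; the exclusion filter condition comes from the text. Note that DESCRIPTIVES with /SAVE is a standard SPSS way to create z-scored copies of variables, named by prefixing Z.

```spss
* Exclude flagged cases while the composite is standardised.
COMPUTE okcase = (exclude1 = 0 & exclude2 = 0).
FILTER BY okcase.

* Standardise the components (/SAVE creates Zscore1 and Zscore2).
DESCRIPTIVES VARIABLES=score1 score2 /SAVE.

* Mean of the standardised components, then standardise the mean.
COMPUTE gcomp = MEAN.2(Zscore1, Zscore2).
DESCRIPTIVES VARIABLES=gcomp /SAVE.   /* final composite saved as Zgcomp */

FILTER OFF.
EXECUTE.
```

With the filter active, excluded cases take no part in the standardisation, which is the point of applying it before computing the composites.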

Script 3: Label variables

The purpose of this script (filename G3_label.sps) is simply to label all the variables added to the dataset so far, and to add value labels for categorical variables. The script carries out these tasks in order:

  1. Open the data file saved at the end of the previous script.
  2. Label all the variables.
  3. Add value labels for every integer-valued categorical variable (whether nominal or ordinal) having 3 or more different categories.
  4. Save a working SPSS data file ready for the next script (filename G3label).
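A fragment of the kind of labelling done in this script might look like the sketch below. The gsevenyr flag is from the dataset description, but its label text, the second variable (ghear1) and its value labels are purely hypothetical illustrations.

```spss
* Variable labels for variables added so far (label text hypothetical).
VARIABLE LABELS
  gsevenyr '7 Year: has any 7 Year data'
  ghear1   '7 Year parent: hearing problem, elder twin'.

* Value labels for an integer categorical variable with 3+ categories.
VALUE LABELS
  ghear1
    0 'no hearing problem'
    1 'glue ear'
    2 'other hearing problem'.
```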

Script 4: Double enter the twin data

The purpose of this script (filename G4_double.sps) is to double-enter all the twin-specific data in the dataset. Note that twin-specific item variables from the parent booklet are already correctly double-entered at this stage (this was achieved in script 1). The variables to be double entered in the current script are all items and scales from the twin interviews and teacher questionnaires. The script carries out these tasks in order:

  1. Open the data file saved at the end of the previous script.
  2. Create the twin 2 part (for the co-twin) as follows:
    1. Rename the variables (from the twin interviews and teacher questionnaire) by changing the suffix from 1 to 2.
    2. Modify the id_twin values so they will match the co-twin (change the final digit from 1 to 2 or vice versa).
    3. Re-sort in ascending order of id_twin and save as the twin 2 part, keeping only the renamed variables.
  3. Re-open the data file saved at the end of the previous script: this already serves as the twin 1 part of the dataset.
  4. Merge in the twin 2 part, using id_twin as the key variable. The dataset is now double entered.
  5. Place the dataset variables into a logical and systematic order (do this using a KEEP statement when saving the dataset).
  6. Save an SPSS data file (filename G4double in the \working files\ subdirectory).
  7. Save another copy as the main 7 Year dataset, with filename gdb9456.
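The steps above can be sketched in SPSS syntax roughly as follows. The twin-specific variable (gitem1/gitem2) is a hypothetical placeholder for the many interview and teacher variables handled by the real script; the id_twin computation implements the rule in step 2.2, flipping a final digit of 1 to 2 and vice versa.

```spss
* Build the twin 2 (co-twin) part from the script 3 output.
GET FILE='working files\G3label.sav'.
RENAME VARIABLES (gitem1 = gitem2).
* Flip the final digit of id_twin: 1 <-> 2.
COMPUTE id_twin = TRUNC(id_twin / 10) * 10 + (3 - MOD(id_twin, 10)).
SORT CASES BY id_twin (A).
SAVE OUTFILE='working files\twin2part.sav' /KEEP=id_twin gitem2.

* Merge the co-twin variables back onto the twin 1 part.
GET FILE='working files\G3label.sav'.
MATCH FILES /FILE=* /FILE='working files\twin2part.sav' /BY id_twin.
SAVE OUTFILE='working files\G4double.sav'.
```

Because both parts are sorted by id_twin, MATCH FILES performs a straightforward one-to-one merge, attaching each twin's co-twin variables to the same row.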