Decide what data to preserve

The process of science generates a variety of products that are worthy of preservation. Researchers should consider all elements of the scientific process in deciding what to preserve:

  • Raw data
  • Tables and databases of raw or cleaned observation records and measurements
  • Intermediate products, such as partly summarized or coded data that are the input to the next step in an analysis
  • Documentation of the protocols used
  • Software or algorithms developed to prepare data (cleaning scripts) or perform analyses
  • Results of an analysis, which can themselves be starting points or ingredients in future analyses, e.g. distribution maps, population trends, mean measurements
  • Any data sets obtained from others that were used in data processing
  • Multimedia, either as documentation of procedures or as standalone data
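
To illustrate why cleaning scripts belong on this list, here is a minimal, hypothetical sketch (the field names are invented): preserving a script like this documents exactly how raw records were turned into the cleaned dataset, and lets the cleaned data be regenerated from the raw data.

```python
# Hypothetical cleaning step: drop records with missing measurements
# and strip stray whitespace. The script itself is a preservable product.

def clean_rows(rows):
    """Return cleaned records, dropping rows with an empty measurement."""
    cleaned = []
    for row in rows:
        value = str(row.get("measurement", "")).strip()
        if value:  # keep only rows that actually have a measurement
            cleaned.append({"site": row["site"].strip(),
                            "measurement": float(value)})
    return cleaned

# Small in-memory sample standing in for a raw data file:
raw = [
    {"site": " A1 ", "measurement": "12.4"},
    {"site": "B2", "measurement": ""},       # missing value: dropped
    {"site": "C3", "measurement": " 7.9 "},
]
cleaned = clean_rows(raw)
```

In practice such a script would read from and write to files (e.g. CSV), but the preservable content is the same: the transformation rules applied to the raw data.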

When deciding on what data products to preserve, researchers should consider the costs of preserving data:

  • Raw data are usually worth preserving
  • Consider space requirements when deciding whether to preserve data
  • If data can be easily or automatically re-created from raw data, consider not preserving them. Conversely, data that have undergone quality-control processing and analysis may be worth preserving, since reproducing them could be costly
  • Algorithms and software source code cost very little to preserve
  • Results of analyses may be particularly valuable for future discovery and cost very little to preserve

Researchers should consider the following goals and benefits of preservation:

  • Enabling re-analysis of the same products to determine whether the same conclusions are reached
  • Enabling re-use of the products for new analysis and discovery
  • Enabling restoration of original products in the case that working datasets are lost
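
The last goal, restoration, is easier to support if each preserved file is stored alongside a fixity checksum, so a restored copy can be verified against the original. A minimal sketch of this idea (file names are illustrative; SHA-256 is one common choice, not the only one):

```python
# Record a SHA-256 checksum for each preserved file so that a restored
# copy can later be verified bit-for-bit against the original.
import hashlib
import tempfile

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo: write a small file standing in for a preserved dataset,
# then record its checksum in a manifest.
with tempfile.NamedTemporaryFile("wb", delete=False, suffix=".csv") as fh:
    fh.write(b"site,measurement\nA1,12.40\n")
    demo_path = fh.name

manifest = {demo_path: sha256_of(demo_path)}
```

Verifying a restored file then amounts to recomputing its digest and comparing it with the manifest entry.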

Identify data with long-term value

As part of the data life cycle, research data will be contributed to a repository to support preservation and discovery. A research project may generate many different iterations of the same dataset - for example, the raw data from the instruments, as well as datasets which already include computational transformations of the data.

To focus resources and attention on the most valuable datasets, the project team should define these core data assets as early in the process as possible, preferably at the conceptual stage and in the data management plan. It may help to speak with your local data archivist or librarian to determine which datasets (or iterations of datasets) should be considered core and which should be discarded. These core datasets will be the basis for publications and require thorough documentation and description.

  • Only datasets with significant long-term value should be contributed to a repository, which requires deciding which datasets need to be kept.
  • If data cannot be recreated, or are costly to reproduce, they should be saved.
  • Four different categories of potential data to save are observational, experimental, simulation, and derived (or compiled).
  • Your funder or institution may have requirements and policies governing contribution to repositories.

Given the amount of data produced by scientific research, keeping everything is neither practical nor economically feasible.

Store data with appropriate precision

Data should not be recorded with higher precision than that at which they were collected (e.g., if a device measures to 2 decimal places, an Excel file should not present values to 5 decimal places). If the system stores data at higher precision, care is needed when exporting to ASCII: calculations in Excel, for example, are performed at the highest precision the system supports, which is unrelated to the precision of the original data.
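
As a sketch of one simple safeguard (here in Python, assuming a hypothetical instrument that records to 2 decimal places), values can be formatted explicitly on export instead of relying on the software's default precision:

```python
# Round exported values back to the precision at which they were collected,
# rather than the higher precision used internally for computation.
COLLECTED_DP = 2  # decimal places the instrument actually recorded (assumed)

def export_value(x, dp=COLLECTED_DP):
    """Format a number with exactly the collected precision for ASCII export."""
    return f"{x:.{dp}f}"

# Internal arithmetic is done at full floating-point precision...
mean = (12.34 + 12.35 + 12.37) / 3   # about 12.3533..., more digits than measured
# ...but the exported text should carry only the collected precision:
print(export_value(mean))            # prints 12.35
```

The same idea applies when exporting from Excel: format the output columns to the collected precision rather than accepting the spreadsheet's full internal precision.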