Information submitted through the support site is private but is not hosted within your secure CDD Vault. Please do not include sensitive intellectual property in your support requests.

Protocol Data [POST] - i.e. Bulk Import

Bulk import allows you to programmatically import data into CDD Vault. Unlike importing data through the CDD Vault web application, there is no way to interactively map individual columns via the API, so the use of an existing mapping template is required. Mapping templates are created through the CDD Vault web interface. Once a file has been uploaded through the API, data from the import is committed immediately unless there are errors or warnings. Any import errors or warnings must be resolved using the Data Import tab within the CDD Vault web interface.


Step 1: Upload a Data File and Assign Mapping Parameters

POST /api/v1/vaults/<vault_id>/slurps

This call initiates the import. To import data, both a data file and a JSON object are required; provide them under the 'file' and 'json' parameters.



project (required): The name or id of a single project.

mapping_template (optional): The name or id of a mapping template that matches the attached file. If not provided, a mapping template that matches the file will be used. If there is more than one matching template, an error will be raised.

runs (optional): Either a single run detail object, which will be applied to all new runs, or an array of run detail objects that will be applied to new runs in the same order as specified by the mapping template.

  Run details:
  run_date (optional): Use YYYY-MM-DD. Default is today's date.
  place (optional): This field is called "Lab" within the CDD Vault web interface. No default value provided.
  person (optional): Default is the user's full name.
  conditions (optional): No default value provided.
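As an illustration, the JSON object described above can be assembled in Python and serialized into the 'json' form field. The project and mapping template ids here are the hypothetical values used in the curl example below:

```python
import json

# Hypothetical ids for illustration; substitute values from your
# own CDD Vault projects and saved mapping templates.
slurp_params = {
    "project": "34888",               # required: project name or id
    "mapping_template": "36708",      # optional: must match the file
    "runs": [                         # optional: one object or an array
        {"run_date": "2017-05-26", "place": "basement lab"},
        {"run_date": "2017-05-28", "person": "Lab assistant"},
    ],
}

# The API expects this object serialized as the 'json' form field.
json_field = json.dumps(slurp_params)
```

Because two run detail objects are supplied, they are applied to the first two new runs in the order specified by the mapping template.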


Curl Example

curl -H X-CDD-Token:$TOKEN --form-string 'json={"project": "34888", "mapping_template": "36708", "runs": [{"run_date": "2017-05-26", "place": "basement lab"}, {"run_date": "2017-05-28", "person": "Lab assistant"}]}' -F 'file=@path/to/file.csv' ''

Caveat when using curl:

Curl appears to have a bug with large files: if the JSON form field comes after the file, the JSON can be cut off and repeated, leading to an "unexpected token" error response.
We recommend placing the JSON form field before the file to avoid this.

Ruby REST Client Example

response = RestClient.post '', {:file => File.new("path/to/file.csv"), :json => '{"project": "34888", "mapping_template": "36708", "runs": {"run_date": "2010-10-26"}}'}, {:"X-CDD-Token" => '<API_TOKEN>'}

Python Requests Example

import json
import requests

url = ''
headers = {"X-CDD-Token": api_token}
data = {
    'project': 34888,
    'runs': {'run_date': '2001-01-01', 'conditions': '33C'}
}
files = {'file': open('file.csv', 'rb')}
response = requests.post(url, headers=headers, data={'json': json.dumps(data)}, files=files)
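The upload response contains the 'slurp' id needed to check the import status in Step 2. As a small sketch (the helper function name is our own, not part of the API), the id can be pulled out of the parsed response body:

```python
def extract_slurp_id(response_body):
    """Return the slurp id from a parsed upload response.

    response_body: dict parsed from the POST response's JSON,
    e.g. response.json() when using the requests library.
    """
    # A successful upload returns a "slurp" object; anything else
    # (such as an error payload) should be surfaced to the caller.
    if response_body.get("class") != "slurp":
        raise ValueError("Unexpected response: %r" % (response_body,))
    return response_body["id"]
```

For example, applied to the sample response shown below, this returns 845005082.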

R Example

Please visit the R scripting language page for a Bulk Import example script.


"id": 845005082,
"class": "slurp",
"state": "queued_for_processing",
"api_url": ""



Step 2: Check Import Status

GET /api/v1/vaults/<vault_id>/slurps/<slurp_id>

 It is likely that Step #1 above will suffice for most bulk imports since the data is committed automatically if there are no errors or warnings. If Step #1 is not successful, the user will receive an email notification alerting them of any errors or warnings. However, programmatically checking the status of a bulk import is an option.

 Once a file has been uploaded for import, you can check the import status using the ‘slurp’ id. A JSON representation of the slurp will be returned, including the slurp’s current state, number of records processed, and number of errors and warnings. If there are any errors or warnings, the response will also include a link to the import summary and a message letting you know to go and resolve them.

The “state” of a bulk import typically progresses from “mapping” to “committed” in the order listed below. The “mapping” state is completed by the time a non-error response is returned from creating a slurp.

All possible slurp states:

  • mapping
  • queued_for_processing
  • processing
  • processed
  • queued_for_committing
  • committing
  • committed
  • canceled
  • rejected
  • invalid

API bulk imports will automatically commit if there are no Errors or Suspicious Events. If an API import generates Errors or Suspicious Events, these must be resolved within the CDD Vault web interface.
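As a hedged sketch of Step 2 (the function names and polling interval are our own, not part of the API, and the standard library is used in place of the requests package), the status check can be automated by polling the slurp until it reaches one of the terminal states listed above, stopping early if errors or warnings appear:

```python
import json
import time
import urllib.request

# Terminal slurp states, taken from the list of all possible states above.
TERMINAL_STATES = {"committed", "canceled", "rejected", "invalid"}

def is_done(slurp):
    """True once a slurp has finished or needs manual resolution."""
    if slurp["state"] in TERMINAL_STATES:
        return True
    # Errors or warnings block the automatic commit; they must be
    # resolved via the Data Import tab in the web interface.
    return bool(slurp.get("import_errors") or slurp.get("import_warnings"))

def wait_for_slurp(url, api_token, interval=5):
    """Poll GET <url> (the slurp's api_url) until it reaches a final state."""
    while True:
        req = urllib.request.Request(url, headers={"X-CDD-Token": api_token})
        with urllib.request.urlopen(req) as resp:
            slurp = json.load(resp)
        if is_done(slurp):
            return slurp
        time.sleep(interval)
```

The returned dict matches the JSON responses shown below, so the caller can inspect import_errors and import_warnings to decide whether Step 3 is needed.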


Using curl

curl -H X-CDD-Token:$TOKEN ''


"id": 845005082,
"class": "slurp",
"state": "committed",
"api_url": "",
"total_records": 1,
"records_processed": 1,
"records_committed": 1,
"import_warnings": 0,
"import_errors": 0

Response with Import Errors

  "id": 845005095,
  "class": "slurp",
  "state": "processed",
  "api_url": "",
  "total_records": 1,
  "records_processed": 1,
  "records_committed": 0,
  "import_warnings": 0,
  "import_errors": 1,
  "message": "This slurp has generated import errors or warnings. Please use web application to resolve them.",
  "web_url": ""


Step 3: Resolve Any Errors via the CDD Vault Data Import Tab

Log into CDD Vault, click the Data Import tab and locate the Report link for your API import file (or simply use the web_url from the Import Status JSON). Review the details and either Commit or Reject the import. For information on handling Errors and Suspicious Events, please see the Import Validation Knowledge Base article.