Set-up and synchronization
The qDrive package can be used to synchronize live data into the cloud. This document describes how to set up the synchronization process.
Setting Up the Synchronization
The qDrive package manages data synchronization via a separate process that starts automatically when the package is imported in Python, i.e., when you run import qdrive.
Note
This means that after a system startup, the synchronization process will not start until qDrive is imported in Python. We aim to automate this in future releases.
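For example, a minimal script that can be run after a system startup to get the synchronization going (the file name is just an illustration):
# start_qdrive_sync.py: importing qdrive starts the background synchronization process
import qdrive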
Tip
When working on a server with no graphical environment, you can log in using an API-Token. Instructions can be found in the Using API-Tokens for Authentication section below.
Launching the Synchronization GUI
The simplest way to manage synchronization sources is through the GUI. To launch the GUI, run the following command:
python -c "import qdrive; qdrive.launch_GUI()"
This will open the qDrive user interface.
From the GUI, click the Add source button to add new synchronization sources. The available source types are: FileBase, QCoDeS, Quantify (QMI), and Core-Tools.
Setting Up Synchronization for a FileBase Source
This synchronization agent works well for synchronizing (arbitrary) file structures. For example:
main_folder
├── 20240101
│ ├── 20240101-211245-165-731d85-experiment_1 <-- This is a dataset
│ │ ├── my_metadata.json
│ │ ├── my_data.hdf5
├── 20240102
│ ├── 20240102-220655-268-455d85-experiment_2 <-- This is a dataset
│ │ ├── my_metadata.json
│ │ ├── my_data.hdf5
│ │ ├── analysis
│ │ │ ├── analysis_metadata.json
│ │ │ ├── analysis_data.hdf5
├── some_other_folder <-- This is a dataset
│ ├── my_data.json
Here we see that datasets can be found at different levels in the folder structure.
To synchronize this data, place a file called _QH_dataset_info.yaml in every folder from which you want to create a dataset. In this file you can also specify additional metadata and methods to convert files (if needed). More information on how to create these files can be found here.
You can set up this synchronization in the GUI by:
1. Selecting the scope to which the data should be synchronized.
2. Selecting the folder to synchronize (e.g., main_folder in this example).
3. Choosing whether the location is on a local or network drive. Note that performance may suffer on a network drive, so you might want to try both options to see which works best.
Once these settings are configured, the synchronization agent will start looking for _QH_dataset_info.yaml files in the folders.
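A FileBase source can presumably also be added programmatically, analogous to the QCoDeS example in the next section. The sketch below mirrors that API; the module path filebase_sync_class and the FileBaseConfigData field name are assumptions, so check the package for the exact names:
import pathlib
# Assumed module path, mirroring the QCoDeS backend layout
from etiket_client.sync.backends.filebase.filebase_sync_class import FileBaseSync, FileBaseConfigData
from etiket_client.sync.backends.sources import add_sync_source
from etiket_client.python_api.scopes import get_scope_by_name

# Folder containing the _QH_dataset_info.yaml-marked datasets
data_path = pathlib.Path('/path/to/main_folder')
scope = get_scope_by_name('scope_name')

# 'root_directory' is an assumed field name; adjust to the actual config class
config_data = FileBaseConfigData(root_directory=data_path)
add_sync_source('my_filebase_source', FileBaseSync, config_data, scope)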
Setting Up QCoDeS Synchronization
To add a QCoDeS database for synchronization:
1. Open the Add Source menu.
2. Define a name that indicates which database is being synchronized and the set-up on which your measurements were taken.
3. Select your QCoDeS database, e.g., mydatabase.db.
The synchronization should begin immediately once the database is selected.
Note
It is also possible to add the QCoDeS database programmatically by running the following code:
import pathlib

from etiket_client.sync.backends.qcodes.qcodes_sync_class import QCoDeSSync, QCoDeSConfigData
from etiket_client.sync.backends.sources import add_sync_source
from etiket_client.python_api.scopes import get_scope_by_name

# Path to the QCoDeS database file and the scope to synchronize into
data_path = pathlib.Path('/path/to/my/database.db')
scope = get_scope_by_name('scope_name')

# Optional: add extra attributes
extra_attributes = {'attribute_name': 'attribute_value'}

config_data = QCoDeSConfigData(database_directory=data_path, set_up="my_setup",
                               extra_attributes=extra_attributes)
add_sync_source('my_sync_source_name', QCoDeSSync, config_data, scope)
Setting Up Quantify (QMI) Synchronization
For Quantify data, the folder structure is expected to resemble the following format:
main_folder
├── 20240101
│ ├── 20240101-211245-165-731d85-experiment_1
│ │ ├── 01-01-2024_01-01-01.json
│ │ ├── 01-01-2024_01-01-01.hdf5
├── 20240102
│ ├── 20240102-220655-268-455d85-experiment_2
│ │ ├── 02-01-2024_02-02-02.json
│ │ ├── 02-01-2024_02-02-02.hdf5
To set up synchronization for Quantify data:
1. Open the Add Source menu.
2. Define a name that indicates which data is being synchronized and the set-up on which your measurements were taken.
3. Select the folder containing your Quantify data, e.g., main_folder in this example.
The synchronization should start automatically after the folder is selected.
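As with QCoDeS, a Quantify source can presumably be added programmatically. The sketch below mirrors the QCoDeS example above; the module path quantify_sync_class and the QuantifyConfigData field names are assumptions:
import pathlib
# Assumed module path, mirroring the QCoDeS backend layout
from etiket_client.sync.backends.quantify.quantify_sync_class import QuantifySync, QuantifyConfigData
from etiket_client.sync.backends.sources import add_sync_source
from etiket_client.python_api.scopes import get_scope_by_name

# Root folder of the Quantify data (main_folder in the example above)
data_path = pathlib.Path('/path/to/main_folder')
scope = get_scope_by_name('scope_name')

# 'quantify_directory' and 'set_up' are assumed field names
config_data = QuantifyConfigData(quantify_directory=data_path, set_up="my_setup")
add_sync_source('my_quantify_source', QuantifySync, config_data, scope)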
Setting Up Core-Tools Synchronization
To configure synchronization with Core-Tools, you'll need the credentials for the Core-Tools database. These credentials are usually stored in the ct_config.yaml file or initialized within the core-tools setup, for example:
from core_tools.data.SQL.connect import SQL_conn_info_local

# Credentials for the Core-Tools PostgreSQL database
SQL_conn_info_local(dbname='dbname', user='user_name', passwd='password',
                    host='localhost', port=5432)
Warning
Please avoid syncing data from the host vanvliet.qutech.tudelft.nl to the cloud.
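Once the credentials are configured, the Core-Tools source can presumably be added programmatically as well, analogous to the QCoDeS example above. The module path core_tools_sync_class and the CoreToolsConfigData field names in this sketch are assumptions:
# Assumed module path and config class, mirroring the QCoDeS backend layout
from etiket_client.sync.backends.core_tools.core_tools_sync_class import CoreToolsSync, CoreToolsConfigData
from etiket_client.sync.backends.sources import add_sync_source
from etiket_client.python_api.scopes import get_scope_by_name

scope = get_scope_by_name('scope_name')

# Field names are assumptions; match them to the credentials used above
config_data = CoreToolsConfigData(dbname='dbname', user='user_name', password='password',
                                  host='localhost', port=5432)
add_sync_source('my_core_tools_source', CoreToolsSync, config_data, scope)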
Managing Datasets and Files in the DataQruiser App
In addition to browsing and visualizing your data, you can also create datasets and upload files directly within the DataQruiser app.
Using API-Tokens for Authentication
When working on a server with no graphical environment, you can log in using an API-Token.
An API-Token can be created in the DataQruiser app:
1. Open the DataQruiser app.
2. Click on the account icon (👤) in the top right corner.
3. Navigate to the “API-Tokens” section.
4. Click the “+” button to generate a new token.
5. Enter a descriptive name for your token (e.g., “Synchronization server token”).
6. Copy the generated token immediately; it will only be shown once.
You can now authenticate on the server using your API-Token:
from qdrive import login_with_api_token
from qdrive.scopes import get_scopes

login_with_api_token('your_api_token@qharborserver.nl')

# Verify the authentication was successful
print(f"Successfully authenticated. You have access to {len(get_scopes())} scopes.")
Tip
The API-Token is a secret key that should be kept confidential. We do not recommend storing it in any files. If you suspect your API-Token has been compromised, immediately revoke it in the DataQruiser app.
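To follow this advice in practice, one option is to provide the token through an environment variable rather than hard-coding it in a file; the variable name QDRIVE_API_TOKEN below is just an illustration:
import os

from qdrive import login_with_api_token

# Read the token from an environment variable instead of storing it in a file
# (QDRIVE_API_TOKEN is a hypothetical variable name)
token = os.environ['QDRIVE_API_TOKEN']
login_with_api_token(token)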