# GreptimeDB Export & Import Tools
This guide describes how to use GreptimeDB's Export and Import tools for database backup and restoration.

The Export and Import tools handle both schema and data, allowing for complete or selective backup and restoration operations.
## Export Tool

### Command Syntax

```bash
greptime cli export [OPTIONS]
```

### Options
| Option | Required | Default | Description |
|---|---|---|---|
| `--addr` | Yes | - | Server address to connect to |
| `--output-dir` | Yes | - | Directory to store exported data |
| `--database` | No | all databases | Name of the database to export |
| `--export-jobs, -j` | No | 1 | Number of parallel export jobs (multiple databases can be exported in parallel) |
| `--max-retry` | No | 3 | Maximum retry attempts per job |
| `--target, -t` | No | all | Export target (schema/data/all) |
| `--start-time` | No | - | Start of time range for data export |
| `--end-time` | No | - | End of time range for data export |
| `--auth-basic` | No | - | Use the `<username>:<password>` format |
| `--timeout` | No | 0 | Timeout for a single call to the DB; 0 means no timeout (e.g., `30s`, `10min 20s`) |
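These options can be combined as needed. For example, the following command exports all databases with higher parallelism, basic authentication, and a per-call timeout; the credentials and paths are placeholders:

```bash
# Export all databases with 4 parallel jobs, basic auth, and a
# 10-minute per-call timeout (admin:password is a placeholder)
greptime cli export \
    --addr localhost:4000 \
    --output-dir /tmp/backup/greptimedb \
    --export-jobs 4 \
    --timeout 10min \
    --auth-basic "admin:password"
```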
### Export Targets

- `schema`: Exports table schemas only (`SHOW CREATE TABLE`)
- `data`: Exports table data only (`COPY DATABASE TO`)
- `all`: Exports both schemas and data (default)
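For instance, a data-only export, which skips the `CREATE TABLE` statements, might look like this (output path is illustrative):

```bash
# Export table data only, skipping schemas
greptime cli export --addr localhost:4000 --output-dir /tmp/backup/data --target data
```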
### Output Directory Structure

```
<output-dir>/
└── greptime/
    └── <database>/
        ├── create_database.sql
        ├── create_tables.sql
        ├── copy_from.sql
        └── <data files>
```
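After an export completes, you can sanity-check that the expected files were written before attempting an import, for example:

```bash
# List the exported SQL scripts and data files (path is illustrative)
ls -R /tmp/backup/greptimedb/greptime/
```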
## Import Tool

### Command Syntax

```bash
greptime cli import [OPTIONS]
```

### Options
| Option | Required | Default | Description |
|---|---|---|---|
| `--addr` | Yes | - | Server address to connect to |
| `--input-dir` | Yes | - | Directory containing backup data |
| `--database` | No | all databases | Name of the database to import |
| `--import-jobs, -j` | No | 1 | Number of parallel import jobs (multiple databases can be imported in parallel) |
| `--max-retry` | No | 3 | Maximum retry attempts per job |
| `--target, -t` | No | all | Import target (schema/data/all) |
| `--auth-basic` | No | - | Use the `<username>:<password>` format |
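As with export, these options can be combined; for example (placeholder credentials and paths):

```bash
# Import with 4 parallel jobs and basic auth (admin:password is a placeholder)
greptime cli import \
    --addr localhost:4000 \
    --input-dir /tmp/backup/greptimedb \
    --import-jobs 4 \
    --auth-basic "admin:password"
```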
### Import Targets

- `schema`: Imports table schemas only
- `data`: Imports table data only
- `all`: Imports both schemas and data (default)
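If you want explicit control over each phase of a restore, you can run the two targets separately: schemas first so the tables exist, then data:

```bash
# Step 1: create the tables
greptime cli import --addr localhost:4000 --input-dir /tmp/backup/greptimedb --target schema

# Step 2: load the data into the freshly created tables
greptime cli import --addr localhost:4000 --input-dir /tmp/backup/greptimedb --target data
```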
## Common Usage Scenarios

### Full Database Backup

```bash
# Export all databases
greptime cli export --addr localhost:4000 --output-dir /tmp/backup/greptimedb

# Import all databases
greptime cli import --addr localhost:4000 --input-dir /tmp/backup/greptimedb
```
### Schema-Only Operations

```bash
# Export only schemas
greptime cli export --addr localhost:4000 --output-dir /tmp/backup/schemas --target schema

# Import only schemas
greptime cli import --addr localhost:4000 --input-dir /tmp/backup/schemas --target schema
```
### Time-Range Based Backup

```bash
# Export data within a specific time range
greptime cli export --addr localhost:4000 \
    --output-dir /tmp/backup/timerange \
    --start-time "2024-01-01 00:00:00" \
    --end-time "2024-01-31 23:59:59"
```
### Specific Database Backup

```bash
# Export a specific database
greptime cli export --addr localhost:4000 --output-dir /tmp/backup/greptimedb --database '{my_database_name}'

# The same applies to the import tool
greptime cli import --addr localhost:4000 --input-dir /tmp/backup/greptimedb --database '{my_database_name}'
```
## Best Practices

- **Parallelism Configuration**
  - Adjust `--export-jobs`/`--import-jobs` based on available system resources
  - Start with a lower value and increase gradually
  - Monitor system performance during operations
- **Backup Strategy**
  - Use time ranges for incremental data backups (see the sketch after this list)
  - Schedule periodic backups for disaster recovery
- **Error Handling**
  - Use `--max-retry` to handle transient failures
  - Keep logs for troubleshooting
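A minimal sketch of an incremental daily export using the time-range flags, assuming GNU `date` and illustrative paths; adapt the schedule (e.g., via cron) and directories to your environment:

```bash
#!/usr/bin/env bash
# Hypothetical daily incremental backup: exports only yesterday's data.
# Requires GNU date; all paths are illustrative.
set -euo pipefail

START="$(date -d 'yesterday 00:00:00' '+%Y-%m-%d %H:%M:%S')"
END="$(date -d 'today 00:00:00' '+%Y-%m-%d %H:%M:%S')"

greptime cli export \
    --addr localhost:4000 \
    --output-dir "/backups/greptimedb/$(date -d yesterday '+%F')" \
    --target data \
    --start-time "$START" \
    --end-time "$END"
```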
## Troubleshooting

### Common Issues

- **Connection Errors**
  - Verify the server address and port (a quick reachability check is sketched below)
  - Check network connectivity
  - Ensure authentication credentials are correct
- **Permission Issues**
  - Verify read/write permissions on output/input directories
- **Resource Constraints**
  - Reduce parallel jobs
  - Ensure sufficient disk space
  - Monitor system resources during operations
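For connection errors, it can help to confirm the server is reachable before retrying. GreptimeDB's HTTP server exposes a `/health` endpoint; verify the host and port for your deployment:

```bash
# Quick reachability check against the HTTP port (adjust host/port as needed)
curl -i http://localhost:4000/health
```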
### Performance Tips

- **Export Performance**
  - Use time ranges for large datasets
  - Adjust parallel jobs based on CPU/memory
  - Consider network bandwidth limitations
- **Import Performance**
  - Monitor database resources