In the ever-evolving landscape of managed database services (DBaaS), automation has become a key player in streamlining operations and ensuring the security of critical data. The GrapheneDB API emerges as a powerful tool, providing our users with the ability to automate deployment-related tasks and enhance overall database management processes.
Automating Disaster Recovery
In this blog post, we delve into the capabilities of the GrapheneDB API, focusing on how it can transform your Disaster Recovery plan. The GrapheneDB API allows for the automation of essential operations such as creating snapshots, failovers, and data restoration, providing a seamless and efficient process even in the face of the worst-case scenarios.
This level of automation brings peace of mind, knowing that your data is secure and recoverable, reducing the RTO (Recovery Time Objective) needed to restore normal operations after an outage or data loss, and allowing you to plan for an acceptable RPO (Recovery Point Objective) for your Organization.
If your aim is to minimize RPO, you should have a good snapshot policy in place, for example the Extra Frequency Snapshot policy. That particular policy creates a snapshot every 3 hours with a retention of 2 days, so the worst-case data loss window is just under 3 hours. You can even combine policies for better retention, and all relevant details around this topic can be found in this article.
Security Measures
One of the standout features of the GrapheneDB API is its commitment to data security. Robust authentication measures are in place to safeguard your sensitive information. Access to the GrapheneDB API is exclusively available through HTTPS, ensuring that data encryption is maintained during transit, adding an extra layer of protection to your valuable data.
Disaster Recovery plan including database cloning
This example Disaster Recovery plan includes cloning the database, which creates a snapshot and a database clone, and then switching the connection URL from the origin database to the cloned database. It is important to note that in the case of a disaster such as a zonal outage, a newly created database will land in a healthy availability zone.
Let’s break down this example Disaster Recovery flow facilitated by the GrapheneDB API.
- Get Access Token

curl https://api.db.graphenedb.com/organizations/oauth/token \
  -d "client_secret=SECRET" \
  -d "client_id=ID" \
  -d "grant_type=client_credentials"
- Get Environments (to get environmentId)

curl https://api.db.graphenedb.com/deployments/environments \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "content-type: application/json"
- Get Database ID

curl https://api.db.graphenedb.com/deployments/environments/{environmentId}/databases \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json"
- Clone Database

environmentId: the ID of the Environment you want the cloned database to be deployed into.

curl -X POST https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/clone \
  -H "Content-Type: application/json" \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -d '{"name": "string", "environmentId": "{environmentId}"}'
- Switch Connection URL

databaseId: the ID of the origin database. Its connection URLs will be switched to the target database.
targetDatabase (databaseId): the ID of the target database.

curl -X POST https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/connections/switch \
  -H "Content-Type: application/json" \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -d '{"targetDatabase": "{databaseId}"}'
Finally, you can delete the origin database. However, this should be handled with caution: decide carefully when the deletion can be done, to ensure that you don't lose the snapshots related to the origin database. You can do it manually like this, or with the following API request.
curl -X DELETE https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId} \
-H "authorization: Bearer ACCESS_TOKEN" \
-H "Content-Type: application/json"
What if cloning doesn’t work? What would my Disaster Recovery plan be then?
Although unlikely, an outage might also affect the ability to create new snapshots, and therefore the cloning operation, so there is another approach you can prepare as a Disaster Recovery plan. This example plan includes creating a new database from a Snapshot, and switching the connection URL from the origin database to the new database. Keep reading below to find out the steps.
It is important for this Disaster Recovery plan that you have a good snapshot policy enabled, and even combine policies to get good frequency and retention. For example, a good snapshot policy for this situation would be the Extra Frequency one, which creates a snapshot every 3 hours with a retention of 2 days.
- Get Access Token

curl https://api.db.graphenedb.com/organizations/oauth/token \
  -d "client_secret=SECRET" \
  -d "client_id=ID" \
  -d "grant_type=client_credentials"
- Get Environments (to get environmentId)

curl https://api.db.graphenedb.com/deployments/environments \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "content-type: application/json"
- Get database ID

curl https://api.db.graphenedb.com/deployments/environments/{environmentId}/databases \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json"
- Get Snapshot ID

curl https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/snapshots/scheduled \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json"
- Create database from Snapshot

snapshotEnvironmentId: the ID of the Environment where the database from which the snapshot was taken is deployed.
environmentId: the ID of the Environment where the new database will get deployed.

All other configuration for the database, like plan, version, etc., will be retrieved from the Snapshot itself and cannot be changed.

curl -X POST https://api.db.graphenedb.com/deployments/databases/graphneo/restore \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "content-type: application/json" \
  -d '{"snapshotId": "string", "snapshotEnvironmentId": "{environmentId}", "environmentId": "{environmentId}", "name": "string"}'
- Switch Connection URL

databaseId: the ID of the origin database. Its connection URLs will be switched to the target database.
targetDatabase (databaseId): the ID of the target database.

curl -X POST https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/connections/switch \
  -H "Content-Type: application/json" \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -d '{"targetDatabase": "{databaseId}"}'
Finally, you can delete the origin database. However, this should be handled with caution: decide carefully when the deletion can be done, to ensure that you don't lose the snapshots related to the origin database. You can do it manually like this, or with the following API request.
curl -X DELETE https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId} \
-H "authorization: Bearer ACCESS_TOKEN" \
-H "Content-Type: application/json"
What if someone from my Organization deletes all the data? What can I do?
If we think of some of the worst-case scenarios, one of them could be that someone from your Organization deletes all the data. There could be two possible disasters here: someone deleting the database itself, or someone deleting data with a request.
There are ways to prepare a Disaster Recovery plan to minimize data loss and reduce the time needed to get back to an operational state. Let’s talk about the actual Disaster Recovery plans for these scenarios.
Someone deleting the database
The first thing is to ensure that you assign roles to your Organization members, which lets you provide controlled access to different areas of the Organization. Roles can greatly help by granting only the minimum access needed to manage production databases. Assignment to an Environment is done via the Environment Type.
If possible, a good practice here would be to use clones of production databases in different Environments to provide data access to different users, while limiting access to the production Environment to only a few Admins, for example.
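For instance, the same clone endpoint used in the Disaster Recovery plan above could be used to refresh such a copy; the name staging-copy and the {stagingEnvironmentId} placeholder below are hypothetical values for a non-production Environment.

# Clone the production database into a separate (e.g. staging) Environment,
# so non-admin users work against the copy instead of production.
curl -X POST https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/clone \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "staging-copy", "environmentId": "{stagingEnvironmentId}"}'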
Someone deleting data with a request
For this scenario, where someone deletes data with a request, you should have a good snapshot policy enabled and in place, and even combine policies to get good frequency and retention. For example, a good snapshot policy for this situation would be the Extra Frequency one, which creates a snapshot every 3 hours, combined with the Extra Retention policy, which has a retention of 3 years.
Scheduled Snapshots cannot be deleted and only disappear from the system once they expire, following the expiration date defined by the Policy. On-demand Snapshots can be taken before executing dangerous operations to reduce the RPO as much as possible in case of a problem.
This Disaster Recovery plan would include restoring the snapshot into the same database.
- Get Access Token

curl https://api.db.graphenedb.com/organizations/oauth/token \
  -d "client_secret=SECRET" \
  -d "client_id=ID" \
  -d "grant_type=client_credentials"
- Get Environments (to get environmentId)

curl https://api.db.graphenedb.com/deployments/environments \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "content-type: application/json"
- Get database ID

curl https://api.db.graphenedb.com/deployments/environments/{environmentId}/databases \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json"
- Get Snapshot ID (example for scheduled snapshots)

curl https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/snapshots/scheduled \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -H "Content-Type: application/json"
- Restore database from Snapshot

databaseId: the ID of the target database.
snapshotEnvironmentId: the ID of the Environment where the database from which the snapshot was taken is deployed.

curl -X POST https://api.db.graphenedb.com/deployments/databases/graphneo/{databaseId}/restore \
  -H "Content-Type: application/json" \
  -H "authorization: Bearer ACCESS_TOKEN" \
  -d '{"snapshotId": "string", "snapshotEnvironmentId": "{environmentId}"}'
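Since restoring into the same database overwrites its current data, it may be worth adding a small guard when you script this last step. Here is a minimal sketch in plain bash, assuming DATABASE_ID, SNAPSHOT_ID, SNAPSHOT_ENV_ID and ACCESS_TOKEN have already been set by the previous steps:

# Restoring in place replaces the database's current contents, so ask for
# explicit confirmation before issuing the call.
read -r -p "Restore snapshot $SNAPSHOT_ID into database $DATABASE_ID? (yes/no) " answer
if [ "$answer" = "yes" ]; then
  curl -X POST "https://api.db.graphenedb.com/deployments/databases/graphneo/$DATABASE_ID/restore" \
    -H "Content-Type: application/json" \
    -H "authorization: Bearer $ACCESS_TOKEN" \
    -d "{\"snapshotId\": \"$SNAPSHOT_ID\", \"snapshotEnvironmentId\": \"$SNAPSHOT_ENV_ID\"}"
fi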
Achieving More with GrapheneDB API
Beyond Disaster Recovery, the GrapheneDB API opens up a world of possibilities. Our users can optimize costs, maintain consistent environments across different application stages, and capitalize on the flexibility and efficiency that automated operations offer. With its secure authentication and protective measures, the GrapheneDB API is a trusted ally in the dynamic realm of database management.