
Migrating from other providers or from local hosting

Overview

Whether you’re moving from local hosting or from a different provider, our Import tool enables you to transfer your data without hassle, as long as you have access to the files. This guide provides the necessary steps to ensure a successful migration, minimizing downtime and preserving your data’s integrity.

The general approach is to stop your origin database, create and download a backup, and import it into your new GrapheneDB database. The backup has to meet three requirements: it must correspond to the same Neo4j version, it must fit within a database plan large enough for the imported dataset, and it must contain the expected folder structure.

Ensure that you have the expected folder structure

Some platforms provide a compressed export file whose folder structure is incompatible with GrapheneDB’s requirements. This is not a problem that can’t be fixed; however, you’ll need to manually create the missing folders and/or move certain files into the folders where they are expected, so that the expected folder structure is honored.

For context, our expected folder structure is:

data/
├── databases
│   ├── graph.db
│   └── system
└── transactions
    ├── graph.db
    └── system

If your folder structure already matches the requirement above, that’s great, and you can proceed directly to this section of our article to understand the process of migrating to GrapheneDB. Keep in mind that you still need to ensure that your Neo4j version is compatible with your GrapheneDB database, and that the database plan is large enough to fit your dataset.
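If you’re not sure which case applies to you, you can list the contents of the compressed backup without extracting it. This is a minimal sketch, assuming the archive is named backup.tar.bz2 (or backup.zip); adjust the file name and flags to match your own file:

~$ tar -tjf backup.tar.bz2 | head -n 20    # list the first entries of a .tar.bz2 archive
~$ unzip -l backup.zip | head -n 20        # the equivalent for a .zip archive

If the listed entries start with data/databases and data/transactions, your archive likely already follows the expected layout.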

How can I create the expected folder structure?

Below we give one example of an incompatible folder structure and guide you through the steps needed to make it compatible. Please keep in mind that your case may differ from our example. If you run into any trouble here, feel free to Open a Support Case and we’ll gladly assist.

Let’s say your folder structure looks like this:

graph.db
├── profiles
│   ├── schema
│   │   └── index
│   │       ├── native-3.0
│   │       │   ├── 1
│   │       │   │   └── btree-1.0
│   │       │   │       └── index-1.cache
│   │       │   ├── 11
│   │       │   │   └── btree-1.0
│   │       │   │       └── index-11.cache
│   │       │   ├── 13
│   │       │   │   └── btree-1.0
│   │       │   │       └── index-13.cache
│   │       │   └── (other index folders)
│   │       └── btree-1.0
│   │           ├── 1051
│   │           │   └── index-1051.cache
│   │           └── 27
│   │               └── index-27.cache
│   ├── neostore.cache
│   ├── neostore.counts.db.cache
│   ├── neostore.indexstats.db.cache
│   ├── neostore.labelscanstore.db.cache
│   ├── (other neostore files with various suffixes)
├── schema
│   └── index
│       ├── native-3.0
│       │   ├── 1
│       │   │   ├── lucene-2.0
│       │   │   │   └── 1
│       │   │   │       ├── file1.cfe
│       │   │   │       ├── file1.cfs
│       │   │   │       ├── file1.si
│       │   │   │       ├── segments_file
│       │   │   │       └── write.lock
│       │   │   └── btree-1.0
│       │   │       └── index-1
│       │   ├── 11
│       │   │   ├── lucene-2.0
│       │   │   │   └── 1
│       │   │   │       ├── file2.cfe
│       │   │   │       ├── file2.cfs
│       │   │   │       ├── file2.si
│       │   │   │       ├── segments_file
│       │   │   │       └── write.lock
│       │   │   └── btree-1.0
│       │   │       └── index-11
│       │   ├── (other index folders)
│       └── btree-1.0
│           ├── 1051
│           │   └── index-1051
│           └── 27
│               └── index-27
├── tools
│   └── database.id
├── checkpoint.43
├── neostore
├── neostore.counts.db
├── neostore.indexstats.db
├── neostore.labelscanstore.db
├── neostore.labeltokenstore.db
│   ├── id
│   ├── names
│   │   └── id
├── neostore.nodestore.db
│   ├── id
│   ├── labels
│   │   └── id
├── neostore.propertystore.db
│   ├── arrays
│   │   └── id
│   ├── id
│   ├── index
│   │   ├── id
│   │   └── keys
│   │       └── id
│   ├── strings
│   │   └── id
├── neostore.relationshipgroupstore.db
│   ├── id
│   ├── degrees.db
├── neostore.relationshipstore.db
│   ├── id
├── neostore.relationshiptypestore.db
│   ├── id
│   ├── names
│   │   └── id
├── neostore.schemastore.db
│   ├── id
├── neostore.transaction.db.195
└── neostore.transaction.db.196

If you look at it closely and compare it to the expected folder structure, you will see that only the graph.db store folder was compressed (the folder that should sit under databases), so the parent data/ folder is missing. Additionally, this example does not include the system database, which is the database that holds the credentials, meaning that users will need to be recreated eventually. Finally, there are transaction and checkpoint files, but they are not placed in the expected folders.

Now you simply need to recreate the missing folders and move the existing files into them. Here’s an example of how to do it with the mkdir, cp, and mv commands:

~$ mkdir -p data/databases                  # create the data/ and databases/ folders
~$ cp -R graph.db data/databases/neo4j      # copy the store folder in as the neo4j database
~$ mkdir -p data/transactions/neo4j         # create the transactions folder for the neo4j database
~$ mv data/databases/neo4j/neostore.transaction.db.* data/transactions/neo4j/
~$ mv data/databases/neo4j/checkpoint.* data/transactions/neo4j/
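To double-check the result before compressing, you can list the folders and files you just created. A quick sketch using standard commands (tree works just as well if it is installed):

~$ find data -maxdepth 2 -type d      # should list data/databases/neo4j and data/transactions/neo4j
~$ ls data/transactions/neo4j/        # should show the neostore.transaction.db.* and checkpoint.* files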

Ensure that the GDB database is the same version as your dataset

We offer support for Neo4j versions 4.4 and 5. It’s important to ensure that your dataset is compatible with the major and minor version of Neo4j that you intend to deploy in our Console.

If your dataset corresponds to any other version within the 4.x series, apart from 4.4, please Open a Support Case. Our Support Team can then assess your specific situation and determine if further assistance or accommodations can be provided.
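If you’re not sure which Neo4j version your dataset was created with, you can check it on the origin deployment before exporting. This is a minimal sketch; the password is a placeholder, and the exact form of the first command may vary slightly between Neo4j releases (neo4j version vs. neo4j --version):

~$ neo4j version                      # prints the Neo4j version of the local installation
~$ cypher-shell -u neo4j -p '<your-password>' "CALL dbms.components() YIELD name, versions, edition;"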

Ensure that the GDB database plan can fit your dataset size

In addition to ensuring compatibility with the Neo4j version, it’s important to consider the size of your dataset when deploying the database on GrapheneDB. Ideally, you would assess the size of the uncompressed dataset file and compare it against the capacities offered by our database plans. This check allows you to determine the appropriate database plan that can accommodate your dataset size.
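A quick way to estimate the uncompressed dataset size is to run du against the store folder on the origin host before compressing it. A minimal sketch, using the folder names from the example in this guide (adjust the paths to your layout):

~$ du -sh graph.db     # uncompressed size of the store folder
~$ du -sh data/        # or the size of the whole data/ folder, once you have recreated it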

For comprehensive information regarding our database plans, including their specifications, please refer to the article found here.

Migration process

I need to recreate the folder structure

For this migration process we take the example from this section, where the required folder structure needs to be created. Before you start, it is crucial to stop your origin database to ensure data consistency and avoid the process failing.

Step 1: stop the database
Step 2: export the backup file
Step 3: recreate the required folder structure

~$ mkdir -p data/databases                  # create the data/ and databases/ folders
~$ cp -R graph.db data/databases/neo4j      # copy the store folder in as the neo4j database
~$ mkdir -p data/transactions/neo4j         # create the transactions folder for the neo4j database
~$ mv data/databases/neo4j/neostore.transaction.db.* data/transactions/neo4j/
~$ mv data/databases/neo4j/checkpoint.* data/transactions/neo4j/

Step 4: compress the files

~$ tar -cjf graph.tar.bz2 data/

Step 5: import to GrapheneDB using the regular Import process. You can find a detailed guide to the import process in this article.
Step 6: once you import to your GrapheneDB database, a new system database will be created, because it was missing from the backup in our example. That means that on your first login in Neo4j Browser, you’ll need to log in with the default credentials (neo4j/neo4j). Then you’ll be able to recreate the users. You can find details on user management in this article.

My folder structure corresponds to the expected one

If your folder structure corresponds to our expected folder structure, you just need to follow the steps below. Before you start, it is crucial to stop your origin database to ensure data consistency and avoid the process failing.

Step 1: stop the database
Step 2: export the backup file
Step 3: compress the files

~$ tar -cjf graph.tar.bz2 data/

Step 4: import to GrapheneDB using the regular Import process. You can find a detailed guide to the import process in this article.

Database credentials

The system database is the database that holds the credentials.

In the case where your system database is present in the backup file and within the expected folder structure, the credentials should be migrated as well, meaning that you should be able to connect to the database with the same credentials as in the origin database.

In the case where the system database is not present, such as in our previous example, the users will need to be recreated upon first login. For context, when the system database is missing, Neo4j creates a new system database at startup, with the default user and password (neo4j/neo4j). After you log in with the default credentials, you can recreate the users. You can find details on user management in this article.
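If you prefer the command line to Neo4j Browser, users can also be recreated with cypher-shell once the default password has been changed. This is a minimal sketch, assuming cypher-shell is installed locally; the connection URL, user name, and passwords below are placeholders to replace with your own values, and depending on your Neo4j edition you may also need to grant roles to the new users:

~$ cypher-shell -a bolt+s://<your-instance-host>:<port> -u neo4j -p '<new-password>' -d system \
     "CREATE USER <username> SET PASSWORD '<initial-password>' CHANGE NOT REQUIRED;"
~$ cypher-shell -a bolt+s://<your-instance-host>:<port> -u neo4j -p '<new-password>' -d system \
     "SHOW USERS;"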

What to do if something goes wrong?

If your import process fails, it’s usually due to one of the reasons below:

The store files were copied while Neo4j was still running. Please make sure your local database is stopped before exporting.

There are store files missing within the compressed file. Make sure the archive contains the full data directory and all files inside.

The store files correspond to a different version of Neo4j than the one on GrapheneDB. Please make sure you’re importing data with the same version as your GrapheneDB instance.

The compressed file is not in a supported format. Make sure you use one of our supported formats, which include zip, tar, cpio, gz, bz2 and xz.
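Before uploading, a quick local check can catch most of these issues. This is a minimal sketch, assuming the archive is named graph.tar.bz2 as in the steps above:

~$ file graph.tar.bz2                                            # confirm the archive format is one of the supported ones
~$ tar -tjf graph.tar.bz2 | head -n 20                           # confirm the entries start with the data/ folder
~$ tar -tjf graph.tar.bz2 | grep -c "neostore.transaction.db"    # confirm the transaction files made it into the archive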
