# Complete Backup and Recovery under Linux
Caution - version-specific commands

The exact commands for backup and recovery depend on the Elastic Stack version.
We strongly recommend taking the commands from the documentation of the version corresponding to the system on which you wish to execute them.
You need a repository for both creating and restoring backups. You have to set up the repository in advance. The procedure for a complete backup and recovery divides into the following steps:

- Source system: creating backups
    - Register a repository for the backups, or determine the information about an existing backup repository.
    - Set up a snapshot policy for a complete backup, if one does not exist yet.
    - Create a manual snapshot/backup independent of the configured schedule.
- Target system: full recovery of the database from a complete backup
    - Remove the existing database and create a new, empty database.
    - Register the repository.
    - Restore the configuration from the latest snapshot/backup.
    - Activate the security, if required.
    - Restore the data.
    - Grant access to Elasticsearch from outside again (if you blocked it for the recovery).
## Source System: Creating Complete Snapshots/Backups

### Registering a Repository
See also Snapshot Repositories
- On the management server, stop Elasticsearch and Kibana:
    - Kibana:

        ```shell
        sudo systemctl stop seal-kibana
        ```

    - Elasticsearch:

        ```shell
        sudo systemctl stop seal-elasticsearch
        ```

- Customize the Elasticsearch configuration:
    - In an editor, open the Elasticsearch configuration file:

        ```
        /opt/seal/etc/seal-elasticsearch/elasticsearch.yml
        ```

    - Enter the path of the repository for the backups, e.g. `backups`. The first two lines belong to the default configuration and are only shown for orientation:

        ```yaml
        path.data: /opt/seal/data/seal-elasticsearch
        path.logs: /var/log/seal/seal-elasticsearch
        path.repo: /opt/seal/data/seal-elasticsearch/backups
        ```

    - Save the file and exit.

- On the management server, restart Elasticsearch and Kibana:
    - Elasticsearch:

        ```shell
        sudo systemctl start seal-elasticsearch
        ```

    - Kibana:

        ```shell
        sudo systemctl start seal-kibana
        ```
- Register a repository for storing snapshots:
    - In a browser, open the DevTools Console of Kibana:

        ```
        http://localhost:5601/app/dev_tools#/console
        ```

      Hint - copy the commands

        - Copy the following sample commands one by one. You can use the browser cache to copy, modify and save the commands.
        - Paste the commands into the DevTools Console.
        - Replace the sample names by your own names.
        - Execute each command by clicking the green arrow on the right in the first line of the command.

    - Register the repository:

      The following command registers a repository named `my_repository`. This means a subdirectory named `my_repository_location` will be created in the `backups` repository path you have defined above in the `path.repo` key of the `elasticsearch.yml` file. Alternatively, you can specify an absolute path here, which has to start with the repository path.

        ```
        PUT /_snapshot/my_repository
        {
          "type": "fs",
          "settings": {
            "location": "my_repository_location"
          }
        }
        ```

    - Verify the repository to grant access to the linked file system:

        ```
        POST /_snapshot/my_repository/_verify
        ```
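The constraint on the `location` setting can be made explicit: a relative location is resolved below `path.repo`, while an absolute location must lie inside the `path.repo` directory. A minimal sketch of this rule, using the sample paths from above:

```python
from pathlib import PurePosixPath

def location_allowed(location: str, repo_path: str) -> bool:
    """Check whether a snapshot repository location is acceptable:
    either a relative subdirectory, or an absolute path inside path.repo."""
    loc = PurePosixPath(location)
    if not loc.is_absolute():
        return True  # relative locations are resolved under path.repo
    repo = PurePosixPath(repo_path)
    return repo == loc or repo in loc.parents

repo = "/opt/seal/data/seal-elasticsearch/backups"
print(location_allowed("my_repository_location", repo))          # relative subdirectory
print(location_allowed(repo + "/my_repository_location", repo))  # absolute, inside path.repo
print(location_allowed("/tmp/elsewhere", repo))                  # outside path.repo
```

This is only an illustration of the path rule, not the actual check Elasticsearch performs.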
### Setting up a Snapshot Policy

Set up a snapshot policy for a full backup.
- In a browser, open the DevTools Console of Kibana:

    ```
    http://localhost:5601/app/dev_tools#/console
    ```

- Set up a snapshot policy for a full backup:

  With this snapshot policy, you back up all data and the associated database configuration (global state). The snapshots created are suitable for completely restoring the last backed-up state. The schedule is a cron expression; `0 30 1 * * ?` creates a snapshot every day at 01:30.

    ```
    PUT /_slm/policy/nightly-global-snapshots
    {
      "schedule": "0 30 1 * * ?",
      "name": "<nightly-global-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": "*",
        "include_global_state": true
      },
      "retention": {
        "expire_after": "30d",
        "min_count": 5,
        "max_count": 50
      }
    }
    ```
### Or: Determining the Repository Information

If the repository and snapshot policies have been set up some time ago, you can determine the repository name and path as follows.
- Determine the name of the repository:
    - Determine the configuration of the `nightly-global-snapshots` snapshot policy:

        ```
        GET /_slm/policy/nightly-global-snapshots
        ```

    - Or determine all snapshot policies:

        ```
        GET /_slm/policy
        ```

      In the following sample response, you find the name of the associated repository in the `policy.repository` key:

        ```
        {
          "nightly-global-snapshots" : {
            "version" : 1,
            "modified_date_millis" : 1668622663378,
            "policy" : {
              "name" : "<nightly-global-snap-{now/d}>",
              "schedule" : "0 30 1 * * ?",
              "repository" : "my_repository",
              ...
            }
          }
        }
        ```

- Determine the path of the repository in the file system:

  Use the repository name determined above to request the associated configuration:

    ```
    GET /_snapshot/my_repository
    ```

  In the following sample response, you find the repository path in the `settings.location` key:

    ```
    {
      "my_repository" : {
        "type" : "fs",
        "uuid" : "d0Wq2tr6Ssm1Mv_wDC8dNg",
        "settings" : {
          "location" : "my_repository_location"
        }
      }
    }
    ```
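The two lookups above read one key each from the responses. A minimal sketch of extracting them, using the sample responses reduced to the keys of interest:

```python
import json

# GET /_slm/policy response, reduced to the key used here (illustrative).
slm_response = json.loads("""
{
  "nightly-global-snapshots": {
    "policy": {"repository": "my_repository"}
  }
}
""")

def repository_of_policy(slm: dict, policy_name: str) -> str:
    """Read the repository name from the policy.repository key."""
    return slm[policy_name]["policy"]["repository"]

repo_name = repository_of_policy(slm_response, "nightly-global-snapshots")

# GET /_snapshot/my_repository response, reduced likewise (illustrative).
snapshot_response = {
    "my_repository": {"type": "fs", "settings": {"location": "my_repository_location"}}
}

def location_of_repository(resp: dict, repo: str) -> str:
    """Read the file system location from the settings.location key."""
    return resp[repo]["settings"]["location"]

print(repo_name)                                             # my_repository
print(location_of_repository(snapshot_response, repo_name))  # my_repository_location
```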
### Creating an Unscheduled Snapshot/Backup

You can create snapshots/backups independent of the configured schedule.
- You have two options to create a snapshot:
    - In the DevTools Console of Kibana:
        - In a browser, open the DevTools Console of Kibana:

            ```
            http://localhost:5601/app/dev_tools#/console
            ```

        - Trigger an unscheduled snapshot/backup:

            ```
            POST /_slm/policy/nightly-global-snapshots/_execute
            ```

    - In the Kibana user interface:
        - In a browser, open the Kibana user interface:

            ```
            http://localhost:5601/app/management/data/snapshot_restore/policies
            ```

        - In the line with the snapshot policy you set up, click the arrow on the right. Its tooltip shows "run now".

- Check the result:
    - In a browser, open the view of available snapshots in the Kibana user interface:

        ```
        http://localhost:5601/app/management/data/snapshot_restore/snapshots
        ```

      You will find the newly created snapshot in the left column of the list.

    - Click the snapshot.

      You will find information about the snapshot displayed on the right, and the list of saved indices below.
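Instead of checking in the user interface, you can also inspect the snapshot listing returned by the snapshot API and look at each snapshot's `state` field. A minimal sketch with an illustrative response excerpt (the snapshot names are made up):

```python
# Illustrative excerpt of a snapshot listing; each entry carries a state field.
response = {
    "snapshots": [
        {"snapshot": "nightly-global-snap-2022.11.16-abc", "state": "SUCCESS"},
        {"snapshot": "nightly-global-snap-2022.11.17-def", "state": "IN_PROGRESS"},
    ]
}

def successful_snapshots(resp: dict) -> list[str]:
    """Return the names of all snapshots that completed successfully."""
    return [s["snapshot"] for s in resp["snapshots"] if s["state"] == "SUCCESS"]

print(successful_snapshots(response))  # ['nightly-global-snap-2022.11.16-abc']
```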
## Target System: Complete Recovery from a Full Backup

Caution - new, fresh database required

The existing Elasticsearch database needs to be discarded!
You do not merge any data; you restore the Elasticsearch database from the latest backed-up state. Ideally, that state is as fresh as possible.
### Cleaning an Existing Database

You have to clean your existing database and then import the configuration and data from your backup.
- On the PLOSSYS server, stop Filebeat:

    ```shell
    sudo systemctl stop seal-filebeat
    ```

- On the management server, stop Elasticsearch and Kibana:
    - Kibana:

        ```shell
        sudo systemctl stop seal-kibana
        ```

    - Elasticsearch:

        ```shell
        sudo systemctl stop seal-elasticsearch
        ```

- Adjust the Elasticsearch configuration:

  Create a new, empty database into which you can import your backup.

    - In an editor, open the Elasticsearch configuration file:

        ```
        /opt/seal/etc/seal-elasticsearch/elasticsearch.yml
        ```

    - Enter the path of the repository for the backups, e.g. `backups`. The first two lines belong to the default configuration and are only shown for orientation:

        ```yaml
        path.data: /opt/seal/data/seal-elasticsearch
        path.logs: /var/log/seal/seal-elasticsearch
        path.repo: /opt/seal/data/seal-elasticsearch/backups
        ```

    - Deactivate the access from outside:

      Any data Filebeat tries to upload to the Elasticsearch database during the recovery process will be lost. Instead of stopping every single Filebeat you have installed on remote systems, you can deactivate the access to the database from outside. Replace

        ```yaml
        network.host: 0.0.0.0
        ```

      by

        ```yaml
        network.host: localhost
        ```

      Caution - later reactivation

      Keep in mind that you must reactivate the access from outside after finishing the restoration!

    - Allow the deletion of indices using wildcards:

      If the target system already contains SEAL-specific data, you must delete these old SEAL indices before restoring the database. If you do this via the DevTools Console, you have to explicitly allow the use of wildcards when deleting indices. Insert the following line into the configuration file:

        ```yaml
        action.destructive_requires_name: false
        ```

      Caution - later resetting

      Remember to remove this setting later!

    - Save the file and exit.

- On the management server, restart Elasticsearch and Kibana:
    - Elasticsearch:

        ```shell
        sudo systemctl start seal-elasticsearch
        ```

    - Kibana:

        ```shell
        sudo systemctl start seal-kibana
        ```

- Make sure the database is empty before you start restoring the backup:

  If the target system already contains SEAL-specific data, you must delete the old SEAL indices before the data restoration.

    - You can delete the SEAL indices via the Kibana user interface:

        ```
        http://localhost:5601/app/management/data/index_management/indices?filter=seal.
        ```

      Select the SEAL indices and delete them to empty the database.

    - Alternatively, you can delete them via the DevTools Console using the following command:

        ```
        DELETE seal-*
        ```

      You can check the result under the URL mentioned above.

      Hint - wildcards

      If this command does not work as desired, check whether the `action.destructive_requires_name` cluster parameter is set properly, see above.

  Caution - data loss

  If your Elasticsearch database already contains SEAL-specific data, you must delete it first. These old data will be lost; they are supposed to be part of your backup data.
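The effect of the wildcard deletion above can be sketched locally: given a list of index names, `DELETE seal-*` only selects the ones matching the `seal-*` pattern. The index names below are illustrative:

```python
from fnmatch import fnmatch

# Illustrative index names; only the seal-* entries would be deleted.
indices = [
    "seal-plossys-2022.11.15",
    "seal-plossys-2022.11.16",
    ".kibana_1",           # system index, must not be deleted
    "my-unrelated-index",  # untouched by the seal-* pattern
]

def matching_indices(names: list[str], pattern: str) -> list[str]:
    """Return the index names a wildcard pattern would select."""
    return [n for n in names if fnmatch(n, pattern)]

print(matching_indices(indices, "seal-*"))
```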
### Registering the Backup Repository
- Register the repository that contains the snapshots to be restored:
    - In a browser, open the DevTools Console of Kibana:

        ```
        http://localhost:5601/app/dev_tools#/console
        ```

    - Register the repository:

      The following command registers a repository named `my_repository`. It points to the subdirectory named `my_repository_location` in the `backups` repository path you have defined before in the `path.repo` key of the `elasticsearch.yml` file. Alternatively, you can specify an absolute path here, which has to start with the repository path.

        ```
        PUT /_snapshot/my_repository
        {
          "type": "fs",
          "settings": {
            "location": "my_repository_location"
          }
        }
        ```

    - Verify the repository to grant access to the linked file system:

        ```
        POST /_snapshot/my_repository/_verify
        ```
### Restoring the Configuration

Restore the system configuration, i.e. users, index patterns, index templates, etc., from the latest snapshot/backup.

Hint - restoring only data

If you wish to restore just the data, you can skip this part. In this case, make sure the required configuration has already been set up by calling the load-config configuration script, cf. https://seal-elasticstack.docs.sealsystems.de/linux/configuration/config_elastic_stack_scripts.html, and continue with Refreshing the Kibana Connection Settings.
- In a browser, open the DevTools Console of Kibana:

    ```
    http://localhost:5601/app/dev_tools#/console
    ```

  Hint - copy the commands

    - Copy the following sample commands one by one. You can use the browser cache to copy, modify and save the commands.
    - Paste the commands into the DevTools Console.
    - Replace the sample names by your own names.
    - Execute each command by clicking the green arrow on the right in the first line of the command.

- Find the latest snapshot:

    ```
    GET /_snapshot/my_repository/nightly-global-snap*?size=1&sort=start_time&order=desc
    ```

  Or search for the 10 most recent backups of various snapshot policies in descending order:

    ```
    GET /_snapshot/my_repository/*?size=10&sort=start_time&order=desc
    ```

- Select the latest snapshot from the list, e.g. `nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw` from `snapshots[0].snapshot`:

    ```
    {
      "snapshots" : [
        {
          "snapshot" : "nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw",
          "uuid" : "2765_TIYRvywZ-UUT1yHUg",
          "repository" : "my_repository",
          ...
        }
      ]
    }
    ```

- Start the restoration of the configuration, excluding the SEAL indices:

  Caution - configuration only

  You must disable the restoration of the aliases here. The system aliases have already been created when the new, empty Elasticsearch database was built. These aliases must not be overwritten.

    ```
    POST /_snapshot/my_repository/nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw/_restore
    {
      "indices": "-*",
      "include_global_state": true,
      "include_aliases": false
    }
    ```
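Because the listing is sorted by `start_time` in descending order, the latest snapshot is always the first entry. A minimal sketch of picking it and building the restore path, using the sample snapshot name from above:

```python
# Illustrative excerpt of the sorted snapshot listing; with order=desc,
# the newest snapshot comes first.
response = {
    "snapshots": [
        {
            "snapshot": "nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw",
            "repository": "my_repository",
        },
    ]
}

def latest_snapshot(resp: dict) -> str:
    """Return snapshots[0].snapshot, the newest snapshot in a descending listing."""
    return resp["snapshots"][0]["snapshot"]

name = latest_snapshot(response)

# The selected name is then used in the restore request path.
restore_path = f"/_snapshot/my_repository/{name}/_restore"
print(restore_path)
```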
### Refreshing the Kibana Connection Settings

Hint - access to Kibana

After the configuration has been restored from the backup, the previous access settings from Kibana to Elasticsearch are lost and you have to reconfigure them. You may recognize this by an error the Kibana user interface reports.

If the target system has been set up with the automatic security configuration, you have to execute some commands again to reestablish the connection from Kibana to Elasticsearch.
- On the management server, stop Kibana:

    ```shell
    sudo systemctl stop seal-kibana
    ```

- Create a new enrollment token:

    ```shell
    sudo ES_PATH_CONF=/opt/seal/etc/seal-elasticsearch /opt/seal/seal-elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana --force
    ```

- Import the new enrollment token into Kibana:

    ```shell
    sudo KBN_PATH_CONF=/opt/seal/etc /opt/seal/seal-kibana/bin/kibana-setup --silent --enrollment-token <enrollment-token>
    ```

- On the management server, restart Kibana:

    ```shell
    sudo systemctl start seal-kibana
    ```

- In a browser, open the DevTools Console of Kibana:

    ```
    http://localhost:5601/app/dev_tools#/console
    ```

  Enter user and password from the restored configuration.

- On the management server, restart Elasticsearch:

    ```shell
    sudo systemctl start seal-elasticsearch
    ```
### Restoring the Data

Restore the SEAL indices from the same snapshot as the configuration.
- In a browser, open the DevTools Console of Kibana:

    ```
    http://localhost:5601/app/dev_tools#/console
    ```

- Start the restoration of the data, including the aliases:

  Caution - with aliases

  You must enable the restoration of the aliases here. Otherwise the index lifecycle policies can no longer work properly, and indices can become too large because they are not rolled over anymore.

    ```
    POST /_snapshot/my_repository/nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw/_restore
    {
      "indices": "seal-*",
      "include_global_state": false,
      "include_aliases": true
    }
    ```
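The two restore calls are deliberately complementary. A minimal sketch of the two request bodies, with values taken from the commands above, makes the split explicit:

```python
# Phase 1: restore only the global state (configuration), no indices, no aliases.
config_restore = {
    "indices": "-*",               # exclude all indices
    "include_global_state": True,  # users, templates, index patterns, ...
    "include_aliases": False,      # system aliases already exist, keep them
}

# Phase 2: restore only the SEAL data indices together with their aliases.
data_restore = {
    "indices": "seal-*",            # only the SEAL indices
    "include_global_state": False,  # configuration was restored in phase 1
    "include_aliases": True,        # required so ILM keeps rolling indices over
}

# The two phases must not overlap: each flag is enabled in exactly one phase.
print(config_restore["include_global_state"] != data_restore["include_global_state"])  # True
print(config_restore["include_aliases"] != data_restore["include_aliases"])            # True
```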
### Enabling Access to Elasticsearch
- On the management server, stop Elasticsearch and Kibana:
    - Kibana:

        ```shell
        sudo systemctl stop seal-kibana
        ```

    - Elasticsearch:

        ```shell
        sudo systemctl stop seal-elasticsearch
        ```

- In an editor, open the Elasticsearch configuration file:

    ```
    /opt/seal/etc/seal-elasticsearch/elasticsearch.yml
    ```

- Reactivate the access from outside. Replace

    ```yaml
    network.host: localhost
    ```

  by

    ```yaml
    network.host: 0.0.0.0
    ```

- Save the file and exit.
- On the management server, restart Elasticsearch and Kibana:
    - Elasticsearch:

        ```shell
        sudo systemctl start seal-elasticsearch
        ```

    - Kibana:

        ```shell
        sudo systemctl start seal-kibana
        ```

- On the PLOSSYS server, check the connection data:
    - In the filebeat.yml, update the access data, i.e. host, user, password, API key, etc.
    - Check the connection to Elasticsearch:

        ```shell
        sudo -u seal /opt/seal/seal-filebeat/filebeat -c /opt/seal/etc/filebeat.yml test output
        ```

- On the PLOSSYS server, restart Filebeat:

    ```shell
    sudo systemctl start seal-filebeat
    ```