Complete Backup and Recovery under Linux


Caution - version specific commands

The exact commands for the backup and recovery depend on the Elastic Stack version.

We strongly recommend you take the commands from the documentation of the version corresponding to the system on which you wish to execute them.

You need a repository for both creating and restoring backups. You have to set up the repository in advance. The procedure for a complete backup and recovery consists of the following steps:

  1. Source system: Creating backups

    1. Register a repository for the backups or determine information about an existing backup repository.

    2. Set up a snapshot policy for a complete backup, if one does not exist yet.

    3. Create a manual snapshot/backup independent of the configured schedule.

  2. Target system: Full Recovery of the database from a complete backup

    1. Remove the existing database and create a new "empty" database.

    2. Register the repository.

    3. Restore the configuration from the latest snapshot/backup.

    4. Activate security, if required.

    5. Restore the data.

    6. Grant access to Elasticsearch from outside again (if you blocked it during the recovery).


Source System: Creating Complete Snapshots/Backups

Registering a Repository

See also Snapshot Repositories

  1. On the management server, stop Elasticsearch and Kibana:

    • Kibana:

      sudo systemctl stop seal-kibana
      
    • Elasticsearch:

      sudo systemctl stop seal-elasticsearch
      
  2. Customize the Elasticsearch configuration:

    1. In an editor, open the Elasticsearch configuration file:

      /opt/seal/etc/seal-elasticsearch/elasticsearch.yml
      
    2. Enter the path of the repository for the backups, e.g. backups, in the path.repo key.

      The first two lines belong to the default configuration and are shown for orientation only.

      path.data: /opt/seal/data/seal-elasticsearch
      path.logs: /var/log/seal/seal-elasticsearch
      path.repo: /opt/seal/data/seal-elasticsearch/backups
      
    3. Save the file and exit.

  3. On the management server, restart Elasticsearch and Kibana:

    • Elasticsearch:

      sudo systemctl start seal-elasticsearch
      
    • Kibana:

      sudo systemctl start seal-kibana
      
  4. Register a repository for storing snapshots:

    1. In a browser, open the DevTools Console of Kibana:

      http://localhost:5601/app/dev_tools#/console
      

    Hint - copy the commands

    1. Copy the following sample commands one by one.

      You can copy the commands into a text editor to modify and save them.

    2. Paste the commands in the DevTools Console.

    3. Replace the sample names with your own names.

    4. Execute the commands by clicking the green arrow on the right in the first line of the command.
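
    Hint - curl alternative

    Every Console command shown here can also be sent with curl from a shell. A minimal sketch for the repository registration below, assuming Elasticsearch answers on http://localhost:9200 without TLS and authentication; if security is enabled, add -u <user> and use https:// instead:

      curl -X PUT "http://localhost:9200/_snapshot/my_repository" \
        -H "Content-Type: application/json" \
        -d '{ "type": "fs", "settings": { "location": "my_repository_location" } }'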

    1. Register the repository:

      The following command registers a repository named my_repository.

      This means a subdirectory named my_repository_location will be created in the backups repository path you defined above in the path.repo key of the elasticsearch.yml file.

      Alternatively, you can specify an absolute path here, which has to start with the repository path.

      PUT /_snapshot/my_repository
      {
        "type": "fs",
        "settings": {
          "location": "my_repository_location"
        }
      }
      
    2. Verify the repository:

      This checks that the repository's file system location is accessible.

      POST /_snapshot/my_repository/_verify
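

      A successful verification lists the nodes that were able to access the repository. A typical, abbreviated response; node ID and name are placeholders and will differ on your system:

      {
        "nodes" : {
          "aBcDeFgHiJkLmNoPqRsTuV" : {
            "name" : "management-server"
          }
        }
      }

      If the verification fails, check that the location exists below the configured path.repo directory and is writable by the Elasticsearch service user.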
      

Setting up a Snapshot Policy

Set up a snapshot policy for a full backup.

  1. In a browser, open the DevTools Console of Kibana:

    http://localhost:5601/app/dev_tools#/console
    
  2. Set up a snapshot policy for a full backup.

    With this snapshot policy, you back up all data and the associated database configuration (global state). The cron expression 0 30 1 * * ? in the schedule key triggers the snapshot every night at 1:30 a.m. The snapshots created are suitable for completely restoring the last backed-up state.

    PUT /_slm/policy/nightly-global-snapshots
    {
      "schedule": "0 30 1 * * ?",
      "name": "<nightly-global-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": "*",
        "include_global_state": true
      },
      "retention": {
        "expire_after": "30d",
        "min_count": 5,
        "max_count": 50
      }
    }
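

    Optionally, check that snapshot lifecycle management is running; otherwise the policy will not be triggered on schedule. This is a standard Elasticsearch API, not specific to this setup:

    GET /_slm/status

    A response containing "operation_mode" : "RUNNING" means the scheduled snapshots will be created.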
    

Or: Determining the Repository Information

If repository and snapshot policies have been set up some time ago, you can find out the repository name and path.

  1. Determine the name of the repository:

    • Determine the configuration of the nightly-global-snapshots snapshot policy:

      GET /_slm/policy/nightly-global-snapshots
      
    • Or determine all snapshot policies:

      GET /_slm/policy
      

    In the following sample response, you find the name of the associated repository in the policy.repository key:

    {
      "nightly-global-snapshots" : {
        "version" : 1,
        "modified_date_millis" : 1668622663378,
        "policy" : {
          "name" : "<nightly-global-snap-{now/d}>",
          "schedule" : "0 30 1 * * ?",
          "repository" : "my_repository",
        ...
        }
      }
    }
    
  2. Determine the path of the repository in the file system:

    Use the repository name determined above to request the associated configuration:

    GET /_snapshot/my_repository
    

    In the following sample response, you find the associated repository path in the settings.location key:

    {
      "my_repository" : {
        "type" : "fs",
        "uuid" : "d0Wq2tr6Ssm1Mv_wDC8dNg",
        "settings" : {
          "location" : "my_repository_location"
        }
      }
    }
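

    If you do not know the repository name at all, you can list all registered repositories in one request:

    GET /_snapshot/_all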
    

Creating an Unscheduled Snapshot/Backup

You can create snapshots/backups independent of the configured schedule.

  1. You have two options to create a snapshot:

    1. In the DevTools Console of Kibana:

      1. In a browser, open the DevTools Console of Kibana:

        http://localhost:5601/app/dev_tools#/console
        
      2. Trigger an unscheduled snapshot/backup (a sample response is shown after this procedure):

        POST /_slm/policy/nightly-global-snapshots/_execute
        
    2. In the Kibana user interface:

      1. In a browser, open the Kibana user interface:

        http://localhost:5601/app/management/data/snapshot_restore/policies
        
      2. In the line with the snapshot policy set up, click the arrow on the right. Its tooltip shows "Run now".

  2. Check the result:

    1. In a browser, open the view of available snapshots in the Kibana user interface:

      http://localhost:5601/app/management/data/snapshot_restore/snapshots
      

      You will find the newly created snapshot in the left column of the list.

    2. Click the snapshot.

      You will find information about the snapshot displayed on the right, and the list of saved indices below.
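
If you triggered the snapshot via the _execute command, its response already contains the name of the newly created snapshot, for example:

  {
    "snapshot_name" : "nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw"
  }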


Target system: Complete Recovery from a Full Backup

Caution - new, fresh database required

The existing Elasticsearch database needs to be discarded!

You do not merge any data; you restore the Elasticsearch database from the latest backed-up state. Ideally, that state is as fresh as possible.


Cleaning an Existing Database

You have to clean your existing database and then import the data and configuration from your backup.

  1. On the PLOSSYS server, stop Filebeat:

    sudo systemctl stop seal-filebeat
    
  2. On the management server, stop Elasticsearch and Kibana:

    • Kibana:

      sudo systemctl stop seal-kibana
      
    • Elasticsearch:

      sudo systemctl stop seal-elasticsearch
      
  3. Adjust the Elasticsearch configuration:

    Create a new, empty database, into which you can import your backup.

    1. In an editor, open the Elasticsearch configuration file:

      /opt/seal/etc/seal-elasticsearch/elasticsearch.yml
      
    2. Enter the path of the repository for the backups, e.g. backups, in the path.repo key.

      The first two lines belong to the default configuration and are shown for orientation only.

      path.data: /opt/seal/data/seal-elasticsearch
      path.logs: /var/log/seal/seal-elasticsearch
      path.repo: /opt/seal/data/seal-elasticsearch/backups
      
    3. Deactivate the access from outside:

      Any data Filebeat tries to upload to the Elasticsearch database during the recovery process will be lost.

      Instead of stopping every single Filebeat you have installed on remote systems, you can deactivate the access to the database from outside:

      Replace

      network.host: 0.0.0.0
      

      by

      network.host: localhost
      

      Caution - later reactivation

      Keep in mind that you must reactivate the access from outside after finishing the restoration!

    4. Allow the deletion of indices using wildcards:

      If the target system already contains SEAL-specific data, you must delete these old SEAL indices before restoring the database.

      If you do this via the DevTools console, you have to explicitly allow the use of wildcards when deleting indices.

      Insert the following line into the configuration file:

      action.destructive_requires_name: false
      

      Caution - later resetting

      Remember to remove this setting later!

    5. Save the file and exit.

  4. On the management server, restart Elasticsearch and Kibana:

    • Elasticsearch:

      sudo systemctl start seal-elasticsearch
      
    • Kibana:

      sudo systemctl start seal-kibana
      
  5. Make sure the database is empty before you start restoring the backup.

    If the target system already contains SEAL-specific data, you must delete the old SEAL indices before the data restoration.

    • You can delete the SEAL indices via the Kibana User Interface:

      http://localhost:5601/app/management/data/index_management/indices?filter=seal.
      

      Select the SEAL indices and delete them to empty the database.

    • Alternatively, you can also delete them via the DevTools console using the following command:

      DELETE seal-*
      

      You can check the result under the URL mentioned above.

      Hint - wildcards

      If this command does not work as desired, check whether the action.destructive_requires_name cluster parameter is set properly, see above.

    Caution - data loss

    If your Elasticsearch database already contains SEAL-specific data, you must delete it first. This old data will be lost; it is supposed to be part of your backup data.
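
    To confirm that the cleanup succeeded, list the remaining SEAL indices in the DevTools Console; an empty result means the database is ready for the restore:

      GET /_cat/indices/seal-*?v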


Registering the Backup Repository
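
The fs repository type reads the snapshot files directly from the file system. This procedure therefore assumes that the backup files created on the source system are already available under the repository path of the target system, e.g. on shared storage or as a copy. A minimal sketch for copying them with rsync, where source-host is a placeholder for your source management server:

  sudo rsync -a source-host:/opt/seal/data/seal-elasticsearch/backups/ /opt/seal/data/seal-elasticsearch/backups/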

  1. Register the repository that contains the snapshots to be restored:

    1. In a browser, open the DevTools Console of Kibana:

      http://localhost:5601/app/dev_tools#/console
      
    2. Register the repository:

      The following command registers a repository named my_repository.

      This creates or uses a subdirectory named my_repository_location in the backups repository path you defined in the path.repo key of the elasticsearch.yml file before.

      Alternatively, you can specify an absolute path here, which has to start with the repository path.

      PUT /_snapshot/my_repository
      {
        "type": "fs",
        "settings": {
          "location": "my_repository_location"
        }
      }
      
    3. Verify the repository:

      This checks that the repository's file system location is accessible.

      POST /_snapshot/my_repository/_verify
      

Restoring the Configuration

Restore the system configuration, i.e. users, index patterns, index templates, etc., from the latest snapshot/backup.

Hint - restoring only data

If you wish to restore just the data, you can skip this part.

In this case, make sure the required configuration has already been set up by calling the load-config configuration script (see https://seal-elasticstack.docs.sealsystems.de/linux/configuration/config_elastic_stack_scripts.html), and continue with Refreshing the Kibana Connection Settings.

  1. In a browser, open the DevTools Console of Kibana:

    http://localhost:5601/app/dev_tools#/console
    

    Hint - copy the commands

    1. Copy the following sample commands one by one.

      You can copy the commands into a text editor to modify and save them.

    2. Paste the commands in the DevTools Console.

    3. Replace the sample names with your own names.

    4. Execute the commands by clicking the green arrow on the right in the first line of the command.

  2. Find the latest snapshot:

    GET /_snapshot/my_repository/nightly-global-snap*?size=1&sort=start_time&order=desc
    

    Or search for the 10 most recent backups of various snapshot policies in descending order:

    GET /_snapshot/my_repository/*?size=10&sort=start_time&order=desc
    
  3. Select the latest snapshot from the list, e.g. nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw from snapshots[0].snapshot:

    {
      "snapshots" : [
        {
          "snapshot" : "nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw",
          "uuid" : "2765_TIYRvywZ-UUT1yHUg",
          "repository" : "my_repository",
          ...
        }
      ]
    }
    
  4. Start the restoration of the configuration, excluding all indices:

    Caution - configuration only

    You must disable the restoration of the aliases here. The system aliases were already created when the new, empty Elasticsearch database was built. These aliases must not be overwritten.

    POST /_snapshot/my_repository/nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw/_restore
    {
      "indices": "-*",
      "include_global_state": true,
      "include_aliases": false
    }
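

    Without the wait_for_completion=true query parameter, the request returns as soon as the restore has started, not when it has finished:

    {
      "accepted" : true
    }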
    

Refreshing the Kibana Connection Settings

Hint - access to Kibana

After the configuration has been restored from the backup, the previous access settings from Kibana to Elasticsearch are lost and you have to reconfigure them. You may recognize this by an error reported in the Kibana user interface.

If the target system has been set up with the automatic security configuration, you have to execute some commands again to reestablish the connection from Kibana to Elasticsearch.

  1. On the management server, stop Kibana:

    sudo systemctl stop seal-kibana
    
  2. Create a new enrollment token:

    sudo ES_PATH_CONF=/opt/seal/etc/seal-elasticsearch /opt/seal/seal-elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana --force
    
  3. Import the new enrollment token into Kibana:

    sudo KBN_PATH_CONF=/opt/seal/etc /opt/seal/seal-kibana/bin/kibana-setup --silent --enrollment-token <enrollment-token>    
    
  4. On the management server, restart Kibana:

    sudo systemctl start seal-kibana
    
  5. In a browser, open the DevTools Console of Kibana:

    http://localhost:5601/app/dev_tools#/console
    

    Enter the user and password from the restored configuration.

  6. On the management server, restart Elasticsearch:

    sudo systemctl start seal-elasticsearch
    

Restoring the Data

Restore the SEAL indices from the same snapshot as the configuration.

  1. In a browser, open the DevTools Console of Kibana:

    http://localhost:5601/app/dev_tools#/console
    
  2. Start the restoration of the data, including the aliases:

    Caution - with aliases

    You must enable the restoration of the aliases here. Otherwise, the index lifecycle policies can no longer work properly, and indices can become too large because they are not rolled over anymore.

    POST /_snapshot/my_repository/nightly-global-snap-2022.1-16-8fn7qrfsrguogphyhciynw/_restore
    {
      "indices": "seal-*",
      "include_global_state": false,
      "include_aliases": true
    }
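

    The restore runs in the background. You can monitor the progress of the restored indices in the DevTools Console, for example:

    GET /_cat/recovery/seal-*?v&active_only=true

    An empty result means that no restore operations are active anymore.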
    

Enabling Access to Elasticsearch

  1. On the management server, stop Elasticsearch and Kibana:

    • Kibana:

      sudo systemctl stop seal-kibana
      
    • Elasticsearch:

      sudo systemctl stop seal-elasticsearch
      
  2. In an editor, open the Elasticsearch configuration file:

    /opt/seal/etc/seal-elasticsearch/elasticsearch.yml
    
  3. Reactivate the access from outside:

    Replace

    network.host: localhost
    

    by

    network.host: 0.0.0.0

    If you inserted the line action.destructive_requires_name: false during the database cleanup, also remove it now, as announced in the Caution there.
  4. Save the file and exit.

  5. On the management server, restart Elasticsearch and Kibana:

    • Elasticsearch:

      sudo systemctl start seal-elasticsearch
      
    • Kibana:

      sudo systemctl start seal-kibana
      
  6. On the PLOSSYS server, check the connection data:

    1. In the filebeat.yml, update the access data, i.e. host, user, password, API key, etc.

    2. Check the connection to Elasticsearch:

      sudo -u seal /opt/seal/seal-filebeat/filebeat -c /opt/seal/etc/filebeat.yml test output
      
  7. On the PLOSSYS server, restart Filebeat:

    sudo systemctl start seal-filebeat
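

    After Filebeat has been running for a few minutes, you can optionally confirm in the DevTools Console that new data is arriving, e.g. by checking the document counts of the SEAL indices:

    GET /_cat/indices/seal-*?v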
    
