[DOCS] Overhaul snapshot and restore docs (#79081)
Makes several changes to consolidate snapshot and backup-related docs. Highlights:

* Adds info about supported ESS snapshot repository types
* Adds docs for Kibana's Snapshot and Restore feature
* Combines tutorial pages related to taking and managing snapshots
* Consolidates explanations of the snapshot process
* Incorporates SLM into the snapshot tutorial
* Removes duplicate "back up a cluster" pages
parent 987010db47
commit 659e0d3fd3
@@ -68,3 +68,7 @@ ifeval::["{source_branch}"=="7.x"]
:apm-server-ref-v: {apm-server-ref-m}
:apm-overview-ref-v: {apm-overview-ref-m}
endif::[]

+// Max recommended snapshots in a snapshot repo.
+// Used in the snapshot/restore docs.
+:max-snapshot-count: 200
@@ -215,9 +215,9 @@ include a port. For example, the endpoint may be `172.17.0.2` or
`172.17.0.2:9000`. You may also need to set `s3.client.CLIENT_NAME.protocol` to
`http` if the endpoint does not support HTTPS.

-https://minio.io[Minio] is an example of a storage system that provides an
+https://minio.io[MinIO] is an example of a storage system that provides an
S3-compatible API. The `repository-s3` plugin allows {es} to work with
-Minio-backed repositories as well as repositories stored on AWS S3. Other
+MinIO-backed repositories as well as repositories stored on AWS S3. Other
S3-compatible storage systems may also work with {es}, but these are not
covered by the {es} test suite.
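To make the wiring concrete, a hedged sketch of a repository that references a named S3 client pointed at such an endpoint. The repository name, bucket, and client name are hypothetical; the `s3.client.minio.*` values they rely on would be set in `elasticsearch.yml` as described above.

[source,console]
----
PUT _snapshot/my_minio_repo
{
  "type": "s3",
  "settings": {
    "bucket": "es-snapshots", <1>
    "client": "minio" <2>
  }
}
----
<1> Hypothetical bucket that already exists on the MinIO server.
<2> Uses the `s3.client.minio.*` settings from `elasticsearch.yml` instead of the `default` client.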
@@ -21,12 +21,10 @@ to achieve high availability despite failures.
to serve searches from nearby clients.

* The last line of defence against data loss is to take
-<<backup-cluster,regular snapshots>> of your cluster so that you can restore
-a completely fresh copy of it elsewhere if needed.
+<<snapshots-take-snapshot,regular snapshots>> of your cluster so that you can
+restore a completely fresh copy of it elsewhere if needed.
--

include::high-availability/cluster-design.asciidoc[]

-include::high-availability/backup-cluster.asciidoc[]
-
include::ccr/index.asciidoc[]
@@ -1,163 +0,0 @@
[role="xpack"]
[[security-backup]]
=== Back up a cluster's security configuration
++++
<titleabbrev>Back up the security configuration</titleabbrev>
++++

Security configuration information resides in two places:
<<backup-security-file-based-configuration,files>> and
<<backup-security-index-configuration,indices>>.

[discrete]
[[backup-security-file-based-configuration]]
==== Back up file-based security configuration

{es} {security-features} are configured using the <<security-settings,
`xpack.security` namespace>> inside the `elasticsearch.yml` and
`elasticsearch.keystore` files. In addition, there are several other
<<security-files, extra configuration files>> inside the same `ES_PATH_CONF`
directory. These files define roles and role mappings and configure the
<<file-realm,file realm>>. Some of the
settings specify file paths to security-sensitive data, such as TLS keys and
certificates for the HTTP client and inter-node communication and private key files for
the <<ref-saml-settings, SAML>>, <<ref-oidc-settings, OIDC>>, and
<<ref-kerberos-settings, Kerberos>> realms. All these are also stored inside
`ES_PATH_CONF`; the path settings are relative.

IMPORTANT: The `elasticsearch.keystore`, TLS keys, and the SAML, OIDC, and Kerberos
realms' private key files require confidentiality. This is crucial when files
are copied to the backup location, as this increases the surface for malicious
snooping.

To back up all this configuration you can use a <<backup-cluster-configuration,
conventional file-based backup>>, as described in the previous section.

[NOTE]
====

* File backups must run on every cluster node.
* File backups will store non-security configuration as well. Backing up
only the {security-features} configuration is not supported. A backup is a
point-in-time record of the complete configuration.

====

[discrete]
[[backup-security-index-configuration]]
==== Back up index-based security configuration

{es} {security-features} store system configuration data inside a
dedicated index. This index is named `.security-6` in the {es} 6.x versions and
`.security-7` in the 7.x releases. The `.security` alias always points to the
appropriate index. This index contains the data which is not available in
configuration files and *cannot* be reliably backed up using standard
filesystem tools. This data describes:

* the definition of users in the native realm (including hashed passwords)
* role definitions (defined via the <<security-api-put-role,create roles API>>)
* role mappings (defined via the
<<security-api-put-role-mapping,create role mappings API>>)
* application privileges
* API keys

The `.security` index thus contains resources and definitions in addition to
configuration information. All of that information is required in a complete
{security-features} backup.

Use the <<modules-snapshots, standard {es} snapshot functionality>> to back up
`.security`, as you would for any <<backup-cluster-data, other data index>>.
For convenience, here are the complete steps:

. Create a repository that you can use to back up the `.security` index.
It is preferable to have a <<backup-security-repos, dedicated repository>> for
this special index. If you wish, you can also snapshot the system indices for other {stack} components to this repository.
+
--
[source,console]
-----------------------------------
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
-----------------------------------

The user calling this API must have the elevated `manage` cluster privilege to
prevent non-administrators exfiltrating data.

--

. Create a user and assign it only the built-in `snapshot_user` role.
+
--
The following example creates a new user `snapshot_user` in the
<<native-realm,native realm>>, but it is not important which
realm the user is a member of:

[source,console]
--------------------------------------------------
POST /_security/user/snapshot_user
{
  "password" : "secret",
  "roles" : [ "snapshot_user" ]
}
--------------------------------------------------
// TEST[skip:security is not enabled in this fixture]

--

. Create incremental snapshots authorized as `snapshot_user`.
+
--
The following example shows how to use the create snapshot API to back up
the `.security` index to the `my_backup` repository:

[source,console]
--------------------------------------------------
PUT /_snapshot/my_backup/snapshot_1
{
  "indices": ".security",
  "include_global_state": true <1>
}
--------------------------------------------------
// TEST[continued]

<1> This parameter value captures all the persistent settings stored in the
global cluster metadata as well as other configurations such as aliases and
stored scripts. Note that this includes non-security configuration and that it
complements but does not replace the
<<backup-cluster-configuration, filesystem configuration files backup>>.

--

IMPORTANT: The index format is only compatible within a single major version,
and cannot be restored onto a version earlier than the version from which it
originated. For example, you can restore a security snapshot from 6.6.0 into a
6.7.0 cluster, but you cannot restore it to a cluster running {es} 6.5.0 or 7.0.0.

[discrete]
[[backup-security-repos]]
===== Controlling access to the backup repository

The snapshot of the security index will typically contain sensitive data such
as user names and password hashes. Because passwords are stored using
<<hashing-settings, cryptographic hashes>>, the disclosure of a snapshot would
not automatically enable a third party to authenticate as one of your users or
use API keys. However, it would disclose confidential information.

It is also important that you protect the integrity of these backups in case
you ever need to restore them. If a third party is able to modify the stored
backups, they may be able to install a back door that would grant access if the
snapshot is loaded into an {es} cluster.

We recommend that you:

* Snapshot the `.security` index in a dedicated repository, where read and write
access is strictly restricted and audited.
* If there are indications that the snapshot has been read, change the passwords
of the users in the native realm and revoke API keys.
* If there are indications that the snapshot has been tampered with, do not
restore it. There is currently no option for the restore process to detect
malicious tampering.
@@ -1,60 +0,0 @@
[[backup-cluster-configuration]]
=== Back up a cluster's configuration
++++
<titleabbrev>Back up the cluster configuration</titleabbrev>
++++

In addition to backing up the data in a cluster, it is important to back up its
configuration--especially when the cluster becomes large and difficult to
reconstruct.

Configuration information resides in
<<config-files-location, regular text files>> on every cluster node. Sensitive
setting values, such as passwords for the {watcher} notification servers, are
specified inside a binary secure container, the
<<secure-settings, elasticsearch.keystore>> file. Some setting values are
file paths to the associated configuration data, such as the ingest geo ip
database. All these files are contained inside the `ES_PATH_CONF` directory.

NOTE: All changes to configuration files are done by manually editing the files
or using command line utilities, but *not* through APIs. In practice, these
changes are infrequent after the initial setup.

We recommend that you take regular (ideally, daily) backups of your {es} config
(`$ES_PATH_CONF`) directory using the file backup software of your choice.

TIP: We recommend that you have a configuration management plan for these
configuration files. You may wish to check them into version control, or
provision them through your choice of configuration management tool.

Some of these files may contain sensitive data such as passwords and TLS keys,
therefore you should investigate whether your backup software and/or storage
solution are able to encrypt this data.

Some settings in configuration files might be overridden by
<<cluster-update-settings,cluster settings>>. You can capture these settings in
a *data* backup snapshot by specifying the `include_global_state: true` (default)
parameter for the snapshot API. Alternatively, you can extract these
configuration values in text format by using the
<<cluster-get-settings, get settings API>>:

[source,console]
--------------------------------------------------
GET _cluster/settings?pretty&flat_settings&filter_path=persistent
--------------------------------------------------

You can store the output of this as a file together with the rest of the
configuration files.

[NOTE]
====

* Transient settings are not considered for backup.
* {es} {security-features} store configuration data such as role definitions and
API keys inside a dedicated special index. This "system" data
complements the <<secure-settings, security settings>> configuration and should
be <<backup-security-index-configuration, backed up as well>>.
* Other {stack} components, like Kibana and {ml-cap}, store their configuration
data inside other dedicated indices. From the {es} perspective these are just data,
so you can use the regular <<backup-cluster-data, data backup>> process.

====
@@ -1,27 +0,0 @@
[[backup-cluster-data]]
=== Back up a cluster's data
++++
<titleabbrev>Back up the data</titleabbrev>
++++

To back up your cluster's data, you can use the <<modules-snapshots,snapshot API>>.

include::../snapshot-restore/index.asciidoc[tag=snapshot-intro]

[TIP]
====
If your cluster has {es} {security-features} enabled, when you back up your data
the snapshot API call must be authorized.

The `snapshot_user` role is a reserved role that can be assigned to the user
who is calling the snapshot endpoint. This is the only role necessary if all the user
does is periodic snapshots as part of the backup procedure. This role includes
the privileges to list all the existing snapshots (of any repository) as
well as list and view settings of all indices, including the `.security` index.
It does *not* grant privileges to create repositories, restore snapshots, or
search within indices. Hence, the user can view and snapshot all indices, but cannot
access or modify any data.

For more information, see <<security-privileges>>
and <<built-in-roles>>.
====
@@ -1,18 +0,0 @@
[[backup-cluster]]
== Back up a cluster

include::../snapshot-restore/index.asciidoc[tag=backup-warning]

To have a complete backup for your cluster:

. <<backup-cluster-data,Back up the data>>
. <<backup-cluster-configuration,Back up the cluster configuration>>
. <<security-backup,Back up the security configuration>>

To restore your cluster from a backup, see <<restore-entire-cluster>>.

include::backup-cluster-data.asciidoc[]
include::backup-cluster-config.asciidoc[]
include::backup-and-restore-security-config.asciidoc[]
@@ -20,8 +20,8 @@ default policies are configured automatically.
image:images/ilm/index-lifecycle-policies.png[]

[TIP]
-To automatically back up your indices and manage snapshots,
-use <<getting-started-snapshot-lifecycle-management,snapshot lifecycle policies>>.
+To automatically back up your indices and manage snapshots, use
+<<automate-snapshots-slm,snapshot lifecycle policies>>.

* <<overview-index-lifecycle-management>>
* <<ilm-concepts>>
@@ -9,7 +9,7 @@ You can stop {ilm} to suspend management operations for all indices.
For example, you might stop {ilm} when performing scheduled maintenance or making
changes to the cluster that could impact the execution of {ilm-init} actions.

-IMPORTANT: When you stop {ilm-init}, <<snapshot-lifecycle-management, {slm-init}>>
+IMPORTANT: When you stop {ilm-init}, <<automate-snapshots-slm,{slm-init}>>
operations are also suspended.
No snapshots will be taken as scheduled until you restart {ilm-init}.
In-progress snapshots are not affected.
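For reference, a minimal sketch of the stop and start calls this note refers to:

[source,console]
----
POST _ilm/stop

POST _ilm/start
----

Stopping {ilm-init} suspends the scheduled {slm-init} snapshots described above; starting it resumes them.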
@@ -3,6 +3,60 @@

The following pages have moved or been deleted.

+// [START] Snapshot and restore
+
+[role="exclude",id="snapshot-lifecycle-management"]
+=== {slm-cap} ({slm-init})
+
+See <<automate-snapshots-slm>>.
+
+[role="exclude",id="getting-started-snapshot-lifecycle-management"]
+=== Tutorial: Automate backups with {slm-init}
+
+See <<automate-snapshots-slm>>.
+
+[role="exclude",id="slm-and-security"]
+=== Security and {slm-init}
+
+See <<slm-security>>.
+
+[role="exclude",id="slm-retention"]
+=== Snapshot retention
+
+See <<slm-retention-task>>.
+
+[role="exclude",id="delete-snapshots"]
+=== Delete a snapshot
+
+See <<delete-snapshot>>.
+
+[role="exclude",id="snapshots-monitor-snapshot-restore"]
+=== Monitor snapshot progress
+
+See <<monitor-snapshot>>.
+
+[role="exclude",id="backup-cluster"]
+=== Back up a cluster
+
+See <<snapshots-take-snapshot>>.
+
+[role="exclude",id="backup-cluster-data"]
+=== Back up a cluster's data
+
+See <<snapshots-take-snapshot>>.
+
+[role="exclude",id="backup-cluster-configuration"]
+=== Back up a cluster's configuration
+
+See <<back-up-config-files>>.
+
+[role="exclude",id="security-backup"]
+=== Back up a cluster's security configuration
+
+See <<back-up-config-files>> and <<cluster-state-snapshots>>.
+
+// [END] Snapshot and restore
+
[role="exclude",id="configuring-tls-docker"]
== Encrypting communications in an {es} Docker Container
@@ -844,7 +898,7 @@ See <<snapshot-restore>>.
[role="exclude",id="_repository_plugins"]
==== Repository plugins

-See <<snapshots-repository-plugins>>.
+See <<self-managed-repo-types>>.

[role="exclude",id="_changing_index_settings_during_restore"]
==== Change index settings during restore
@@ -1727,3 +1781,5 @@ See <<security-api-kibana-enrollment>>.

+See the <<sql-search-api-request-body,request body parameters>> for the
+<<sql-search-api,SQL search API>>.
@@ -84,7 +84,7 @@ Use any of the following repository types with searchable snapshots:

You can also use alternative implementations of these repository types, for
instance
-{plugins}/repository-s3-client.html#repository-s3-compatible-services[Minio],
+{plugins}/repository-s3-client.html#repository-s3-compatible-services[MinIO],
as long as they are fully compatible. Use the <<repo-analysis-api>> API
to analyze your repository's suitability for use with searchable snapshots.
// end::searchable-snapshot-repo-types[]
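As a sketch of that analysis step, the repository analysis API can be called directly. The repository name is a placeholder and the parameter values are small, arbitrary examples:

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=100&max_blob_size=10mb
----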
@@ -279,7 +279,7 @@ multiple clusters and use <<modules-cross-cluster-search,{ccs}>> or
[[back-up-restore-searchable-snapshots]]
=== Back up and restore {search-snaps}

-You can use <<snapshot-lifecycle-management,regular snapshots>> to back up a
+You can use <<snapshots-take-snapshot,regular snapshots>> to back up a
cluster containing {search-snap} indices. When you restore a snapshot
containing {search-snap} indices, these indices are restored as {search-snap}
indices again.
@@ -1,15 +1,22 @@
[role="xpack"]
-[[slm-settings]]
-=== {slm-cap} settings in {es}
-[subs="attributes"]
-++++
-<titleabbrev>{slm-cap} settings</titleabbrev>
-++++
+[[snapshot-settings]]
+=== Snapshot and restore settings

-These are the settings available for configuring
-<<snapshot-lifecycle-management, {slm}>> ({slm-init}).
+The following cluster settings configure <<snapshot-restore,snapshot and
+restore>>.

-==== Cluster-level settings
+[[snapshot-max-concurrent-ops]]
+`snapshot.max_concurrent_operations`::
+(<<dynamic-cluster-setting,Dynamic>>, integer) Maximum number of concurrent
+snapshot operations. Defaults to `1000`.
++
+This limit applies in total to all ongoing snapshot creation, cloning, and
+deletion operations. {es} will reject any operations that would exceed this
+limit.
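Because `snapshot.max_concurrent_operations` is dynamic, it can be changed on a live cluster. A minimal sketch, with `500` as an arbitrary example value:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "snapshot.max_concurrent_operations": 500
  }
}
----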
+==== {slm-init} settings
+
+The following cluster settings configure <<snapshot-lifecycle-management,{slm}
+({slm-init})>>.
+
[[slm-history-index-enabled]]
`slm.history_index_enabled`::
@@ -94,7 +94,7 @@ include::settings/security-settings.asciidoc[]

include::modules/indices/request_cache.asciidoc[]

-include::settings/slm-settings.asciidoc[]
+include::settings/snapshot-settings.asciidoc[]

include::settings/transform-settings.asciidoc[]
@@ -3,7 +3,10 @@
==== Cluster backups

In a disaster, <<snapshot-restore,snapshots>> can prevent permanent data loss.
-<<snapshot-lifecycle-management,{slm-cap}>> is the easiest way to take regular
-backups of your cluster. For more information, see <<backup-cluster>>.
+<<automate-snapshots-slm,{slm-cap}>> is the easiest way to take regular
+backups of your cluster. For more information, see <<snapshots-take-snapshot>>.

[WARNING]
====
include::../../snapshot-restore/index.asciidoc[tag=backup-warning]
====
@@ -3,8 +3,9 @@
== {slm-cap} APIs

You use the following APIs to set up policies to automatically take snapshots and
control how long they are retained.
-For more information about {slm} ({slm-init}), see <<snapshot-lifecycle-management>>.
+
+For more information about {slm} ({slm-init}), see <<automate-snapshots-slm>>.

[discrete]
[[slm-api-policy-endpoint]]
@@ -58,7 +58,7 @@ If successful, this request returns the generated snapshot name:

The snapshot is taken in the background. You can use the
<<snapshot-lifecycle-management-api,snapshot APIs>> to
-<<snapshots-monitor-snapshot-restore,monitor the status of the snapshot>>.
+<<monitor-snapshot,monitor the status of the snapshot>>.

To see the status of a policy's most recent snapshot, you can use the
<<slm-api-get-policy,get snapshot lifecycle policy API>>.
@@ -11,9 +11,9 @@ information about the latest snapshot attempts.
[[slm-api-get-request]]
==== {api-request-title}

-`GET /_slm/policy/<policy-id>`
+`GET _slm/policy/<policy-id>`

-`GET /_slm/policy`
+`GET _slm/policy`

[[slm-api-get-lifecycle-prereqs]]
==== {api-prereq-title}
@@ -44,44 +44,44 @@ Comma-separated list of snapshot lifecycle policy IDs.

////
[source,console]
---------------------------------------------------
-PUT /_slm/policy/daily-snapshots
+----
+PUT _slm/policy/daily-snapshots
{
-  "schedule": "0 30 1 * * ?", <1>
-  "name": "<daily-snap-{now/d}>", <2>
-  "repository": "my_repository", <3>
-  "config": { <4>
-    "indices": ["data-*", "important"], <5>
+  "schedule": "0 30 1 * * ?",
+  "name": "<daily-snap-{now/d}>",
+  "repository": "my_repository",
+  "config": {
+    "indices": ["data-*", "important"],
    "ignore_unavailable": false,
    "include_global_state": false
  },
-  "retention": { <6>
-    "expire_after": "30d", <7>
-    "min_count": 5, <8>
-    "max_count": 50 <9>
+  "retention": {
+    "expire_after": "30d",
+    "min_count": 5,
+    "max_count": 50
  }
}
---------------------------------------------------
+----
// TEST[setup:setup-repository]
////

Get the `daily-snapshots` policy:

[source,console]
---------------------------------------------------
-GET /_slm/policy/daily-snapshots?human
---------------------------------------------------
+----
+GET _slm/policy/daily-snapshots?human
+----
// TEST[continued]

This request returns the following response:

[source,console-result]
---------------------------------------------------
+----
{
-  "daily-snapshots" : {
-    "version": 1, <1>
-    "modified_date": "2019-04-23T01:30:00.000Z", <2>
-    "modified_date_millis": 1556048137314,
+  "daily-snapshots": {
+    "version": 1, <1>
+    "modified_date": "2099-05-06T01:30:00.000Z", <2>
+    "modified_date_millis": 4081757400000,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<daily-snap-{now/d}>",
@@ -104,12 +104,16 @@ This request returns the following response:
      "snapshots_deleted": 0,
      "snapshot_deletion_failures": 0
    },
-    "next_execution": "2019-04-24T01:30:00.000Z", <3>
-    "next_execution_millis": 1556048160000
+    "next_execution": "2099-05-07T01:30:00.000Z", <3>
+    "next_execution_millis": 4081843800000
  }
}
---------------------------------------------------
-// TESTRESPONSE[s/"modified_date": "2019-04-23T01:30:00.000Z"/"modified_date": $body.daily-snapshots.modified_date/ s/"modified_date_millis": 1556048137314/"modified_date_millis": $body.daily-snapshots.modified_date_millis/ s/"next_execution": "2019-04-24T01:30:00.000Z"/"next_execution": $body.daily-snapshots.next_execution/ s/"next_execution_millis": 1556048160000/"next_execution_millis": $body.daily-snapshots.next_execution_millis/]
+----
+// TESTRESPONSE[s/"version": 1/"version": $body.daily-snapshots.version/]
+// TESTRESPONSE[s/"modified_date": "2099-05-06T01:30:00.000Z"/"modified_date": $body.daily-snapshots.modified_date/]
+// TESTRESPONSE[s/"modified_date_millis": 4081757400000/"modified_date_millis": $body.daily-snapshots.modified_date_millis/]
+// TESTRESPONSE[s/"next_execution": "2099-05-07T01:30:00.000Z"/"next_execution": $body.daily-snapshots.next_execution/]
+// TESTRESPONSE[s/"next_execution_millis": 4081843800000/"next_execution_millis": $body.daily-snapshots.next_execution_millis/]
<1> The version of the snapshot policy; only the latest version is stored and incremented when the policy is updated.
<2> The last time this policy was modified.
<3> The next time this policy will be executed.
@@ -119,7 +123,7 @@ This request returns the following response:
===== Get all policies

[source,console]
---------------------------------------------------
-GET /_slm/policy
---------------------------------------------------
+----
+GET _slm/policy
+----
// TEST[continued]
@@ -15,9 +15,9 @@ Creates or updates a snapshot lifecycle policy.
[[slm-api-put-prereqs]]
==== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the
`manage_slm` cluster privilege and the `manage` index privilege
for any included indices to use this API.
For more information, see <<security-privileges>>.

[[slm-api-put-desc]]
@@ -63,13 +63,13 @@ include::{es-repo-dir}/snapshot-restore/apis/create-snapshot-api.asciidoc[tag=sn
(Required, string)
Name automatically assigned to each snapshot created by the policy.
<<date-math-index-names,Date math>> is supported.
To prevent conflicting snapshot names, a UUID is automatically appended to each
snapshot name.

`repository`::
(Required, string)
Repository used to store snapshots created by this policy. This repository must
exist prior to the policy's creation. You can create a repository using the
<<modules-snapshots,snapshot repository API>>.

[[slm-api-put-retention]]
@@ -86,6 +86,7 @@ Time period after which a snapshot is considered expired and eligible for
deletion. {slm-init} deletes expired snapshots based on the
<<slm-retention-schedule,`slm.retention_schedule`>>.

+// To update {max-snapshot-count}, see docs/Version.asciidoc
`max_count`::
(Optional, integer)
Maximum number of snapshots to retain, even if the snapshots have not yet
@@ -94,9 +95,8 @@ policy retains the most recent snapshots and deletes older snapshots. This limit
only includes snapshots with a <<get-snapshot-api-response-state,`state`>> of
`SUCCESS`.
+
-NOTE: The maximum number of snapshots in a repository should not exceed `200`. This ensures that the snapshot repository metadata does not
-grow to a size which might destabilize the master node. If the `max_count` setting is not set, this limit should be enforced by configuring
-other retention rules such that the repository size does not exceed `200` snapshots.
+NOTE: This value should not exceed {max-snapshot-count}. See
+<<snapshot-retention-limits>>.

`min_count`::
(Optional, integer)
@@ -1,184 +0,0 @@
[role="xpack"]
[[getting-started-snapshot-lifecycle-management]]
=== Tutorial: Automate backups with {slm-init}

This tutorial demonstrates how to automate daily backups of {es} data streams and indices using an {slm-init} policy.
The policy takes <<modules-snapshots, snapshots>> of all data streams and indices in the cluster
and stores them in a local repository.
It also defines a retention policy and automatically deletes snapshots
when they are no longer needed.

To manage snapshots with {slm-init}, you:

. <<slm-gs-register-repository, Register a repository>>.
. <<slm-gs-create-policy, Create an {slm-init} policy>>.

To test the policy, you can manually trigger it to take an initial snapshot.

[discrete]
[[slm-gs-register-repository]]
==== Register a repository

To use {slm-init}, you must have a snapshot repository configured.
The repository can be local (shared filesystem) or remote (cloud storage).
Remote repositories can reside on S3, HDFS, Azure, Google Cloud Storage,
or any other platform supported by a {plugins}/repository.html[repository plugin].
Remote repositories are generally used for production deployments.

For this tutorial, you can register a local repository from
{kibana-ref}/snapshot-repositories.html[{kib} Management]
or use the create or update repository API:

[source,console]
-----------------------------------
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
-----------------------------------

[discrete]
[[slm-gs-create-policy]]
==== Set up a snapshot policy

Once you have a repository in place,
you can define an {slm-init} policy to take snapshots automatically.
The policy defines when to take snapshots, which data streams or indices should be included,
and what to name the snapshots.
A policy can also specify a <<slm-retention,retention policy>> and
automatically delete snapshots when they are no longer needed.

TIP: Don't be afraid to configure a policy that takes frequent snapshots.
Snapshots are incremental and make efficient use of storage.

You can define and manage policies through {kib} Management or with the create
or update policy API.

For example, you could define a `nightly-snapshots` policy
to back up all of your data streams and indices daily at 1:30AM UTC.

A create or update policy request defines the policy configuration in JSON:

[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?", <1>
  "name": "<nightly-snap-{now/d}>", <2>
  "repository": "my_repository", <3>
  "config": { <4>
    "indices": ["*"] <5>
  },
  "retention": { <6>
    "expire_after": "30d", <7>
    "min_count": 5, <8>
    "max_count": 50 <9>
  }
}
--------------------------------------------------
// TEST[continued]
<1> When the snapshot should be taken in
<<schedule-cron,Cron syntax>>: daily at 1:30AM UTC
<2> How to name the snapshot: use
<<date-math-index-names,date math>> to include the current date in the snapshot name
<3> Where to store the snapshot
<4> The configuration to be used for the snapshot requests (see below)
<5> Which data streams or indices to include in the snapshot: all data streams and indices
<6> Optional retention policy: keep snapshots for 30 days,
retaining at least 5 and no more than 50 snapshots regardless of age

You can specify additional snapshot configuration options to customize how snapshots are taken.
For example, you could configure the policy to fail the snapshot
if one of the specified data streams or indices is missing.
For more information about snapshot options, see <<snapshots-take-snapshot,snapshot requests>>.

[discrete]
[[slm-gs-test-policy]]
==== Test the snapshot policy

A snapshot taken by {slm-init} is just like any other snapshot.
You can view information about snapshots in {kib} Management or
get info with the <<snapshots-monitor-snapshot-restore, snapshot APIs>>.
In addition, {slm-init} keeps track of policy successes and failures so you
have insight into how the policy is working. If the policy has executed at
least once, the <<slm-api-get-policy, get policy>> API returns additional metadata
that shows if the snapshot succeeded.

You can manually execute a snapshot policy to take a snapshot immediately.
This is useful for taking snapshots before making a configuration change,
upgrading, or to test a new policy.
Manually executing a policy does not affect its configured schedule.

Instead of waiting for 1:30 a.m., tell {slm-init} to take a snapshot
using the policy's configuration right now:

[source,console]
--------------------------------------------------
POST /_slm/policy/nightly-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't easily handle snapshots from docs tests]

After forcing the `nightly-snapshots` policy to run,
you can retrieve the policy to get success or failure information.

[source,console]
--------------------------------------------------
GET /_slm/policy/nightly-snapshots?human
--------------------------------------------------
// TEST[continued]

Only the most recent success and failure are returned,
but all policy executions are recorded in the `.slm-history*` indices.
The response also shows when the policy is scheduled to execute next.

NOTE: The response shows if the policy succeeded in _initiating_ a snapshot.
However, that does not guarantee that the snapshot completed successfully.
It is possible for the initiated snapshot to fail if, for example, the connection to a remote
repository is lost while copying files.

[source,console-result]
--------------------------------------------------
{
  "nightly-snapshots" : {
    "version": 1,
    "modified_date": "2019-04-23T01:30:00.000Z",
    "modified_date_millis": 1556048137314,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<nightly-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": ["*"]
      },
      "retention": {
        "expire_after": "30d",
        "min_count": 5,
        "max_count": 50
      }
    },
    "last_success": { <1>
      "snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <2>
      "time_string": "2019-04-24T16:43:49.316Z",
      "time": 1556124229316
    },
    "last_failure": { <3>
      "snapshot_name": "nightly-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
      "time_string": "2019-04-02T01:30:00.000Z",
      "time": 1556042030000,
      "details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
    },
    "next_execution": "2019-04-24T01:30:00.000Z", <4>
    "next_execution_millis": 1556048160000
  }
}
--------------------------------------------------
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]

<1> Information about the last time the policy successfully initiated a snapshot
<2> The name of the snapshot that was successfully initiated
<3> Information about the last time the policy failed to initiate a snapshot
<4> The next time the policy will execute
@@ -1,20 +0,0 @@
[role="xpack"]
[[snapshot-lifecycle-management]]
== {slm-init}: Manage the snapshot lifecycle

You can set up snapshot lifecycle policies to automate the timing, frequency, and retention of snapshots.
Snapshot policies can apply to multiple data streams and indices.

The {slm} ({slm-init}) <<snapshot-lifecycle-management-api, CRUD APIs>> provide
the building blocks for the snapshot policy features that are part of {kib} Management.
{kibana-ref}/snapshot-repositories.html[Snapshot and Restore] makes it easy to
set up policies, register snapshot repositories, view and manage snapshots, and restore data streams or indices.

You can stop and restart {slm-init} to temporarily pause automatic backups while performing
upgrades or other maintenance.

include::getting-started-slm.asciidoc[]

include::slm-security.asciidoc[]

include::slm-retention.asciidoc[]
@@ -1,91 +0,0 @@
[role="xpack"]
[[slm-retention]]
=== Snapshot retention

You can include a retention policy in an {slm-init} policy to automatically delete old snapshots.
Retention runs as a cluster-level task and is not associated with a particular policy's schedule.
The retention criteria are evaluated as part of the retention task, not when the policy executes.
For the retention task to automatically delete snapshots,
you need to include a <<slm-api-put-retention,`retention`>> object in your {slm-init} policy.

To control when the retention task runs, configure
<<slm-retention-schedule,`slm.retention_schedule`>> in the cluster settings.
You can define the schedule as a periodic or absolute <<schedule-cron, cron schedule>>.
The <<slm-retention-duration,`slm.retention_duration`>> setting limits how long
{slm-init} should spend deleting old snapshots.

You can update the schedule and duration dynamically with the
<<cluster-update-settings, update settings>> API.
You can run the retention task manually with the
<<slm-api-execute-retention,execute retention>> API.
The retention task only considers snapshots initiated through {slm-init} policies,
either according to the policy schedule or through the
<<slm-api-execute-lifecycle, execute lifecycle>> API.
Manual snapshots are ignored and don't count toward the retention limits.

To retrieve information about the snapshot retention task history,
use the <<slm-api-get-stats, get stats>> API:

////
[source,console]
--------------------------------------------------
PUT /_slm/policy/daily-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "my_repository",
  "retention": { <1>
    "expire_after": "30d", <2>
    "min_count": 5, <3>
    "max_count": 50 <4>
  }
}
--------------------------------------------------
// TEST[setup:setup-repository]
<1> Optional retention configuration
<2> Keep snapshots for 30 days
<3> Always keep at least 5 successful snapshots
<4> Keep no more than 50 successful snapshots
////

[source,console]
--------------------------------------------------
GET /_slm/stats
--------------------------------------------------
// TEST[continued]

The response includes the following statistics:

[source,js]
--------------------------------------------------
{
  "retention_runs": 13, <1>
  "retention_failed": 0, <2>
  "retention_timed_out": 0, <3>
  "retention_deletion_time": "1.4s", <4>
  "retention_deletion_time_millis": 1404,
  "policy_stats": [
    {
      "policy": "daily-snapshots",
      "snapshots_taken": 1,
      "snapshots_failed": 1,
      "snapshots_deleted": 0, <5>
      "snapshot_deletion_failures": 0 <6>
    }
  ],
  "total_snapshots_taken": 1,
  "total_snapshots_failed": 1,
  "total_snapshots_deleted": 0, <7>
  "total_snapshot_deletion_failures": 0 <8>
}
--------------------------------------------------
// TESTRESPONSE[skip:this is not actually running retention]
<1> Number of times retention has been run
<2> Number of times retention failed while running
<3> Number of times retention hit the `slm.retention_duration` time limit and had to stop before deleting all eligible snapshots
<4> Total time spent deleting snapshots by the retention process
<5> Number of snapshots created by the "daily-snapshots" policy that have been deleted
<6> Number of snapshots that failed to be deleted
<7> Total number of snapshots deleted across all policies
<8> Total number of snapshot deletion failures across all policies
@@ -1,58 +0,0 @@
[[slm-and-security]]
=== Security and {slm-init}

The following cluster privileges control access to the {slm-init} actions when
{es} {security-features} are enabled:

`manage_slm`:: Allows a user to perform all {slm-init} actions, including creating and updating policies
and starting and stopping {slm-init}.

`read_slm`:: Allows a user to perform all read-only {slm-init} actions,
such as getting policies and checking the {slm-init} status.

`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any
index, whether or not they have access to that index.

You can create and manage roles to assign these privileges through {kib} Management.

To grant the privileges necessary to create and manage {slm-init} policies and snapshots,
you can set up a role with the `manage_slm` and `cluster:admin/snapshot/*` cluster privileges
and full access to the {slm-init} history indices.

For example, the following request creates an `slm-admin` role:

[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
  "cluster": ["manage_slm", "cluster:admin/snapshot/*"],
  "indices": [
    {
      "names": [".slm-history-*"],
      "privileges": ["all"]
    }
  ]
}
-----------------------------------
// TEST[skip:security is not enabled here]

To grant read-only access to {slm-init} policies and the snapshot history,
you can set up a role with the `read_slm` cluster privilege and read access
to the {slm} history indices.

For example, the following request creates an `slm-read-only` role:

[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
  "cluster": ["read_slm"],
  "indices": [
    {
      "names": [".slm-history-*"],
      "privileges": ["read"]
    }
  ]
}
-----------------------------------
// TEST[skip:security is not enabled here]
@@ -5,7 +5,8 @@
++++

Triggers the review of a snapshot repository's contents and deletes any stale
-data that is not referenced by existing snapshots.
+data that is not referenced by existing snapshots. See
+<<snapshots-repository-cleanup>>.

////
[source,console]
@@ -37,27 +38,6 @@ POST /_snapshot/my_repository/_cleanup
* If the {es} {security-features} are enabled, you must have the `manage`
<<privileges-list-cluster,cluster privilege>> to use this API.

-[[clean-up-snapshot-repo-api-desc]]
-==== {api-description-title}
-
-Over time, snapshot repositories can accumulate stale data that is no longer
-referenced by existing snapshots.
-
-While this unreferenced data does not negatively impact the performance or
-safety of a snapshot repository, it can lead to more storage use than necessary.
-
-You can use the clean up snapshot repository API to detect and delete this
-unreferenced data.
-
-[TIP]
-====
-Most cleanup operations performed by this API are performed automatically when
-a snapshot is deleted from a repository.
-
-If you regularly delete snapshots, calling this API may only reduce your storage
-slightly or not at all.
-====
-
[[clean-up-snapshot-repo-api-path-params]]
==== {api-path-parms-title}
@@ -4,8 +4,8 @@
<titleabbrev>Create snapshot</titleabbrev>
++++

-Takes a <<snapshot-restore,snapshot>> of a cluster or specified data streams and
-indices.
+<<snapshots-take-snapshot,Takes a snapshot>> of a cluster or specified data
+streams and indices.

////
[source,console]
@@ -41,33 +41,6 @@ PUT /_snapshot/my_repository/my_snapshot
`create_snapshot` or `manage` <<privileges-list-cluster,cluster privilege>> to
use this API.

-[[create-snapshot-api-desc]]
-==== {api-description-title}
-
-You can use the create snapshot API to create a <<snapshot-restore,snapshot>>, which is a
-backup taken from a running {es} cluster.
-
-By default, a snapshot includes all data streams and open indices in the
-cluster, as well as the cluster state. You can change this behavior by
-specifying a list of data streams and indices to back up in the body of the
-snapshot request.
-
-NOTE: You must register a snapshot repository before performing snapshot and
-restore operations. Use the <<put-snapshot-repo-api,create or update snapshot
-repository API>> to register new repositories and update existing ones.
-
-The snapshot process is incremental. When creating a snapshot, {es} analyzes the list of files that are already stored in the repository and copies only files that were created or changed since the last snapshot. This process allows multiple snapshots to be preserved in the repository in a compact form.
-
-The snapshot process is executed in non-blocking fashion, so all indexing and searching operations can run concurrently against the data stream or index that {es} is snapshotting.
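A small illustration of that non-blocking behavior: the `wait_for_completion` query parameter defaults to `false`, so a create snapshot request normally returns as soon as the snapshot is initiated. Setting it to `true` instead blocks until the snapshot finishes (names are placeholders):

[source,console]
----
PUT /_snapshot/my_repository/my_snapshot?wait_for_completion=true
----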
-A snapshot represents a point-in-time view of the moment when the snapshot was created. No records that were added to a data stream or index after the snapshot process started will be present in the snapshot.
-
-For primary shards that have not been started and are not currently relocating, the snapshot process starts immediately. If shards are in the process of starting or relocating, {es} waits for these processes to complete before taking a snapshot.
-
-IMPORTANT: While a snapshot of a particular shard is being created, this shard cannot be moved to another node. Relocating a shard during the snapshot process can interfere with rebalancing and allocation filtering. {es} can move a shard to another node (according to the current allocation filtering settings and rebalancing algorithm) only after the snapshot process completes.
-
-Besides creating a copy of each data stream and index, the snapshot process can also store global cluster metadata, including persistent cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of the snapshot.
-
[[create-snapshot-api-path-params]]
==== {api-path-parms-title}
@@ -148,9 +121,9 @@ To exclude all data streams and indices, use `-*` or `none`.
[id="{page-id}-feature-states"]
`feature_states`::
(Optional, array of strings)
-Feature states to include in the snapshot. To get a list of possible feature
-state values and their descriptions, use the <<get-features-api,get features
-API>>. Each feature state includes one or more system indices.
+<<feature-state,Feature states>> to include in the snapshot. To get a list of
+possible values and their descriptions, use the <<get-features-api,get features
+API>>.
+
If `include_global_state` is `true`, the snapshot includes all feature states by
default. If `include_global_state` is `false`, the snapshot includes no feature
@@ -42,19 +42,6 @@ DELETE /_snapshot/my_repository/my_snapshot
* If the {es} {security-features} are enabled, you must have the `manage`
<<privileges-list-cluster,cluster privilege>> to use this API.

-[[delete-snapshot-api-desc]]
-==== {api-description-title}
-
-Use the delete snapshot API to delete a <<snapshot-restore,snapshot>>, which is a backup taken from a running {es} cluster.
-
-When deleting a snapshot from a repository, {es} deletes all files that are associated with the snapshot and not used by any other snapshots. All files that are shared with at least one other existing snapshot are left intact.
-
-If you attempt to delete a snapshot while it is being created, the snapshot process aborts and all associated files will be deleted.
-
-To delete multiple snapshots in a single request, separate the snapshot names with a comma or use a wildcard (`*`).
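A brief sketch of the multi-snapshot forms just described; repository and snapshot names are placeholders:

[source,console]
----
DELETE /_snapshot/my_repository/snapshot_1,snapshot_2

DELETE /_snapshot/my_repository/snap*
----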
-
-TIP: Use the delete snapshot API to cancel long-running snapshot operations that were started by mistake.
-
[[delete-snapshot-api-path-params]]
==== {api-path-parms-title}
@@ -43,17 +43,6 @@ GET /_snapshot/my_repository/my_snapshot
`monitor_snapshot`, `create_snapshot`, or `manage`
<<privileges-list-cluster,cluster privilege>> to use this API.

-[[get-snapshot-api-desc]]
-==== {api-description-title}
-
-Use the get snapshot API to return information about one or more snapshots, including:
-
-* Start and end time values
-* Version of {es} that created the snapshot
-* List of included indices
-* Current state of the snapshot
-* List of failures that occurred during the snapshot
-
[[get-snapshot-api-path-params]]
==== {api-path-parms-title}
@@ -222,7 +211,7 @@ Maximum number of segments per shard in this index snapshot.
====

`data_streams`::
-(array)
+(array of strings)
List of <<data-streams,data streams>> included in the snapshot.

`include_global_state`::
@@ -231,12 +220,19 @@ Indicates whether the current cluster state is included in the snapshot.

[[get-snapshot-api-feature-states]]
`feature_states`::
-(array)
-List of feature states which were included when the snapshot was taken,
-including the list of system indices included as part of the feature state. The
-`feature_name` field of each can be used in the `feature_states` parameter when
-restoring the snapshot to restore a subset of feature states. Only present if
-the snapshot includes one or more feature states.
+(array of objects) <<feature-state,Feature states>> in the snapshot.
+Only present if the snapshot contains one or more feature states.
++
+.Properties of `feature_states` objects
+[%collapsible%open]
+====
+`feature_name`::
+(string) Name of the feature, as returned by the <<get-features-api,get features
+API>>.
+
+`indices`::
+(array of strings) Indices in the feature state.
+====

`start_time`::
(string)
@ -95,7 +95,18 @@ Use the get snapshot status API to retrieve detailed information about snapshots

If you specify both the repository name and snapshot, the request retrieves detailed status information for the given snapshot, even if it's not currently running.

include::{es-ref-dir}/snapshot-restore/monitor-snapshot-restore.asciidoc[tag=get-snapshot-status-warning]
[WARNING]
====
Using the API to return the status of any snapshots other than currently running
snapshots can be expensive. The API requires a read from the repository for each
shard in each snapshot. For example, if you have 100 snapshots with 1,000 shards
each, an API request that includes all snapshots will require 100,000 reads (100
snapshots * 1,000 shards).

Depending on the latency of your storage, such requests can take an extremely
long time to return results. These requests can also tax machine resources
and, when using cloud storage, incur high processing costs.
====
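If you only need the status of currently running snapshots, omit the snapshot name so the request avoids these repository reads. A minimal sketch, assuming a repository named `my_repository`:

[source,console]
----
GET /_snapshot/my_repository/_status
----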

[[get-snapshot-status-api-path-params]]
==== {api-path-parms-title}
@ -30,17 +30,11 @@ PUT /_snapshot/my_repository

* If the {es} {security-features} are enabled, you must have the `manage`
<<privileges-list-cluster,cluster privilege>> to use this API.

[[put-snapshot-repo-api-desc]]
==== {api-description-title}

A snapshot repository must be registered before you can perform
<<snapshot-restore,snapshot and restore>> operations. You can use the put
snapshot repository API to register new repositories and update existing ones.
See <<snapshots-register-repository>>.

TIP: Because snapshot formats can change between major versions of
{es}, we recommend registering a new snapshot repository for each major version.
See <<snapshot-restore-version-compatibility>>.
// tag::put-repo-api-prereqs[]
* To register a snapshot repository, the cluster's global metadata must be
writeable. Ensure there aren't any <<cluster-read-only,cluster blocks>> that
prevent write access.
// end::put-repo-api-prereqs[]
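Registering and updating use the same request format. As a sketch — assuming a registered shared file system repository named `my_repository` that you want to switch to read-only — a second PUT replaces the repository's settings:

[source,console]
----
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location",
    "readonly": true
  }
}
----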

[[put-snapshot-repo-api-path-params]]
==== {api-path-parms-title}
@ -229,6 +223,8 @@ type. See <<snapshots-source-only-repository>>.

[%collapsible%open]
====
`url`::
+
---
(Required, string)
URL location of the root of the shared filesystem repository. The following
protocols are supported:
|
|||
* `https`
|
||||
* `jar`
|
||||
|
||||
URLs using the `http`, `https`, or `ftp` protocols must be explicitly allowed
|
||||
with the <<repositories-url-allowed,`repositories.url.allowed_urls`>> cluster
|
||||
setting. This setting supports wildcards in the place of a host, path, query, or
|
||||
fragment in the URL.
|
||||
|
||||
URLs using the `file` protocol must point to the location of a shared filesystem
|
||||
accessible to all master and data nodes in the cluster. This location must be
|
||||
registered in the `path.repo` setting.
|
||||
registered in the `path.repo` setting. You don't need to register URLs using the
|
||||
`ftp`, `http`, `https`, or `jar` protocols in the `path.repo` setting.
|
||||
---
|
||||
|
||||
URLs using the `http`, `https`, or `ftp` protocols must be explicitly allowed with the
|
||||
`repositories.url.allowed_urls` setting. This setting supports wildcards in the
|
||||
place of a host, path, query, or fragment in the URL.
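For example, a minimal sketch of this setting in `elasticsearch.yml`, using hypothetical hosts and paths:

[source,yaml]
----
repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
----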
`http_max_retries`::
(Optional, integer) Maximum number of retries for `http` and `https` URLs.
Defaults to `5`.

`http_socket_timeout`::
(Optional, <<time-units,time value>>) Maximum wait time for data transfers over
a connection. Defaults to `50s`.
====
--
|
@ -70,10 +70,10 @@ POST /_snapshot/my_repository/my_snapshot/_restore
|
|||
[[restore-snapshot-api-prereqs]]
|
||||
==== {api-prereq-title}
|
||||
|
||||
// tag::restore-prereqs[]
|
||||
* If you use the {es} security features, you must have the `manage` or
|
||||
`cluster:admin/snapshot/*` cluster privilege to restore a snapshot.
|
||||
* If you use {es} security features, you must have the `manage` or
|
||||
`cluster:admin/snapshot/*` cluster privilege to use this API.
|
||||
|
||||
// tag::restore-prereqs[]
|
||||
* You can only restore a snapshot to a running cluster with an elected
|
||||
<<master-node,master node>>. The snapshot's repository must be
|
||||
<<snapshots-register-repository,registered>> and available to the cluster.
|
||||
|
@ -81,10 +81,24 @@ POST /_snapshot/my_repository/my_snapshot/_restore

* The snapshot and cluster versions must be compatible. See
<<snapshot-restore-version-compatibility>>.

* If you restore a data stream, ensure the cluster contains a
<<create-index-template,matching index template>> with data stream enabled.
Without a matching index template, a data stream can't roll over or create
backing indices.
* To restore a snapshot, the cluster's global metadata must be writable. Ensure
there aren't any <<cluster-read-only,cluster blocks>> that prevent writes. The
restore operation ignores <<index-modules-blocks,index blocks>>.

* Before you restore a data stream, ensure the cluster contains a
<<create-index-template,matching index template>> with data stream enabled. To
check, use {kib}'s <<manage-index-templates,**Index Management**>> feature or
the <<indices-get-template,get index template API>>:
+
[source,console]
----
GET /_index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream
----
+
If no such template exists, you can <<create-index-template,create one>> or
<<restore-entire-cluster,restore a cluster state>> that
contains one. Without a matching index template, a data stream can't roll over
or create backing indices.

* If your snapshot contains data from App Search or Workplace Search, ensure
you've restored the {enterprise-search-ref}/encryption-keys.html[Enterprise
@ -151,8 +165,7 @@ The cluster state includes:

* <<indices-templates-v1,Legacy index templates>>
* <<ingest,Ingest pipelines>>
* <<index-lifecycle-management,{ilm-init} policies>>
* For snapshots taken after 7.12.0, data stored in system indices, such as
Watches and task records.
* For snapshots taken after 7.12.0, <<feature-state,feature states>>
// end::cluster-state-contents[]

If `include_global_state` is `true` then the restore operation merges the
@ -162,20 +175,18 @@ It completely removes all persistent settings, non-legacy index templates,
ingest pipelines and {ilm-init} lifecycle policies that exist in your cluster
and replaces them with the corresponding items from the snapshot.

You can use the `feature_states` parameter to configure how system indices
are restored from the cluster state.
Use the `feature_states` parameter to configure how feature states are restored.
--

[[restore-snapshot-api-feature-states]]
`feature_states`::
(Optional, array of strings)
A comma-separated list of feature states you wish to restore. Each feature state contains one or more system indices. The list of feature states
available in a given snapshot are returned by the <<get-snapshot-api-feature-states, Get Snapshot API>>. Note that feature
states restored this way will completely replace any existing configuration, rather than returning an error if the system index already exists.
Providing an empty array will restore no feature states, regardless of the value of `include_global_state`.
<<feature-state,Feature states>> to restore.
+
By default, all available feature states will be restored if `include_global_state` is `true`, and no feature states will be restored if
`include_global_state` is `false`.
If `include_global_state` is `true`, the request restores all feature states
in the snapshot by default. If `include_global_state` is `false`, the request
restores no feature states by default. To restore no feature states, regardless
of the `include_global_state` value, specify an empty array (`[]`).
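A minimal sketch of restoring a single feature state and nothing else — assuming the snapshot contains a hypothetical `geoip` feature state:

[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_restore
{
  "feature_states": [ "geoip" ],
  "include_global_state": false,
  "indices": "-*"
}
----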

[[restore-snapshot-api-index-settings]]
`index_settings`::
@ -4,7 +4,8 @@

<titleabbrev>Verify snapshot repository</titleabbrev>
++++

Verifies that a snapshot repository is functional.
Verifies that a snapshot repository is functional. See
<<snapshots-repository-verification>>.

////
[source,console]
@ -36,21 +37,6 @@ POST /_snapshot/my_repository/_verify

* If the {es} {security-features} are enabled, you must have the `manage`
<<privileges-list-cluster,cluster privilege>> to use this API.

[[verify-snapshot-repo-api-desc]]
==== {api-description-title}

By default, <<put-snapshot-repo-api,create or update snapshot repository API>>
requests verify that a repository is functional on all master and data nodes in
the cluster.

You can skip this verification using the create or update snapshot repository
API's `verify` parameter. You can then use the verify snapshot repository API to
manually verify the repository.

If verification is successful, the verify snapshot repository API returns a list
of nodes connected to the snapshot repository. If verification fails, the API
returns an error.
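A minimal sketch of manual verification, assuming a repository registered as `my_unverified_backup` with `?verify=false`:

[source,console]
----
POST /_snapshot/my_unverified_backup/_verify
----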

[[verify-snapshot-repo-api-path-params]]
==== {api-path-parms-title}
@ -1,47 +0,0 @@

[[delete-snapshots]]
== Delete a snapshot

////
[source,console]
-----------------------------------
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true

PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true

PUT /_snapshot/my_backup/snapshot_3?wait_for_completion=true
-----------------------------------
// TESTSETUP

////

Use the <<delete-snapshot-api,delete snapshot API>> to delete a snapshot
from the repository:

[source,console]
----
DELETE /_snapshot/my_backup/snapshot_1
----

When a snapshot is deleted from a repository, {es} deletes all files associated with the
snapshot that are not in-use by other snapshots.

If the delete snapshot operation starts while the snapshot is being
created, the snapshot process halts and all files created as part of the snapshotting process are
removed. Use the <<delete-snapshot-api,Delete snapshot API>> to cancel long running snapshot operations that were
started by mistake.

To delete multiple snapshots from a repository, separate snapshot names by commas or use wildcards:

[source,console]
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_2,snapshot_3
DELETE /_snapshot/my_backup/snap*
-----------------------------------
@ -1,77 +1,114 @@

[[snapshot-restore]]
= Snapshot and restore

[partintro]
--
A snapshot is a backup of a running {es} cluster. You can use snapshots to:

// tag::snapshot-intro[]
A _snapshot_ is a backup taken from a running {es} cluster.
You can take snapshots of an entire cluster, including all its data streams and
indices. You can also take snapshots of only specific data streams or indices in
the cluster.
* Regularly back up a cluster with no downtime
* Recover data after deletion or a hardware failure
* Transfer data between clusters
* Reduce your storage costs by using <<searchable-snapshots,searchable
snapshots>> in the cold and frozen data tiers

You must
<<snapshots-register-repository, register a snapshot repository>>
before you can <<snapshots-take-snapshot, create snapshots>>.
[discrete]
[[snapshot-workflow]]
== The snapshot workflow

Snapshots can be stored in either local or remote repositories.
Remote repositories can reside on Amazon S3, HDFS, Microsoft Azure,
Google Cloud Storage,
and other platforms supported by a {plugins}/repository.html[repository plugin].
// end::snapshot-intro[]
{es} stores snapshots in an off-cluster storage location called a snapshot
repository. Before you can take or restore snapshots, you must
<<snapshots-register-repository,register a snapshot repository>> on the cluster.
{es} supports several repository types with cloud storage options, including:

{es} takes snapshots incrementally: the snapshotting process only copies data
to the repository that was not already copied there by an earlier snapshot,
avoiding unnecessary duplication of work or storage space. This means you can
safely take snapshots very frequently with minimal overhead. This
incrementality only applies within a single repository because no data is
shared between repositories. Snapshots are also logically independent from each
other, even within a single repository: deleting a snapshot does not affect the
integrity of any other snapshot.
* AWS S3
* Google Cloud Storage (GCS)
* Microsoft Azure

// tag::restore-intro[]
You can <<snapshots-restore-snapshot,restore snapshots>> to a running cluster, which includes all data streams and indices in the snapshot
by default.
However, you can choose to restore only the cluster state or specific data
streams or indices from a snapshot.
// end::restore-intro[]
After you register a snapshot repository, you can use
<<snapshot-lifecycle-management,{slm} ({slm-init})>> to automatically take and
manage snapshots. You can then <<snapshots-restore-snapshot,restore a snapshot>>
to recover or transfer its data.

You can use
<<getting-started-snapshot-lifecycle-management, {slm}>>
to automatically take and manage snapshots.
[discrete]
[[snapshot-contents]]
== Snapshot contents

// tag::backup-warning[]
WARNING: **The only reliable and supported way to back up a cluster is by
taking a snapshot**. You cannot back up an {es} cluster by making copies of the
data directories of its nodes. There are no supported methods to restore any
data from a filesystem-level backup. If you try to restore a cluster from such
a backup, it may fail with reports of corruption or missing files or other data
inconsistencies, or it may appear to have succeeded having silently lost some
of your data.
// end::backup-warning[]
By default, a snapshot of a cluster contains the cluster state, all data
streams, and all indices, including system indices. The cluster state includes:

A copy of the data directories of a cluster's nodes does not work as a backup
because it is not a consistent representation of their contents at a single
point in time. You cannot fix this by shutting down nodes while making the
copies, nor by taking atomic filesystem-level snapshots, because {es} has
consistency requirements that span the whole cluster. You must use the built-in
snapshot functionality for cluster backups.
include::apis/restore-snapshot-api.asciidoc[tag=cluster-state-contents]

You can also take snapshots of only specific data streams or indices in the
cluster. A snapshot that includes a data stream or index automatically includes
its aliases. When you restore a snapshot, you can choose whether to restore
these aliases.

Snapshots don't contain or back up:

* Transient cluster settings
* Registered snapshot repositories
* Node configuration files

[discrete]
[[feature-state]]
=== Feature states

A feature state contains the indices and data streams used to store
configurations, history, and other data for an Elastic feature, such as {es}
security or {kib}.

A feature state typically includes one or more <<system-indices,system indices
or system data streams>>. It may also include regular indices and data streams
used by the feature. For example, a feature state may include a regular index
that contains the feature's execution history. Storing this history in a regular
index lets you more easily search it.

[discrete]
[[how-snapshots-work]]
== How snapshots work

Snapshots are automatically deduplicated to save storage space and reduce network
transfer costs. To back up an index, a snapshot makes a copy of the index's
<<near-real-time,segments>> and stores them in the snapshot repository. Since
segments are immutable, the snapshot only needs to copy any new segments created
since the repository's last snapshot.

Each snapshot is also logically independent. When you delete a snapshot, {es}
only deletes the segments used exclusively by that snapshot. {es} doesn't delete
segments used by other snapshots in the repository.

[discrete]
[[snapshots-shard-allocation]]
=== Snapshots and shard allocation

A snapshot copies segments from an index's primary shards. When you start a
snapshot, {es} immediately starts copying the segments of any available primary
shards. If a shard is starting or relocating, {es} will wait for these processes
to complete before copying the shard's segments. If one or more primary shards
aren't available, the snapshot attempt fails.

Once a snapshot begins copying a shard's segments, {es} won't move the shard to
another node, even if rebalancing or shard allocation settings would typically
trigger reallocation. {es} will only move the shard after the snapshot finishes
copying the shard's data.

[discrete]
[[snapshot-start-stop-times]]
=== Snapshot start and stop times

A snapshot doesn't represent a cluster at a precise point in time. Instead, each
snapshot includes a start and end time. The snapshot represents a view of each
shard's data at some point between these two times.

[discrete]
[[snapshot-restore-version-compatibility]]
=== Version compatibility
== Snapshot compatibility

IMPORTANT: Version compatibility refers to the underlying Lucene index
compatibility. Follow the <<setup-upgrade,Upgrade documentation>>
when migrating between versions.
To restore a snapshot to a cluster, the versions for the snapshot, cluster, and
any restored indices must be compatible.

A snapshot contains a copy of the on-disk data structures that comprise an
index or a data stream's backing indices. This means that snapshots can only be restored to versions of
{es} that can read the indices.
[discrete]
[[snapshot-cluster-compatibility]]
=== Snapshot version compatibility

The following table indicates snapshot compatibility between versions. The first column denotes the base version that you can restore snapshots from.

// tag::snapshot-compatibility-matrix[]
[cols="6"]
|===
| 5+^h| Cluster version
@ -82,17 +119,9 @@ The following table indicates snapshot compatibility between versions. The first
^| *6.x* -> ^|{no-icon} ^|{no-icon} ^|{yes-icon} ^|{yes-icon} ^|{no-icon}
^| *7.x* -> ^|{no-icon} ^|{no-icon} ^|{no-icon} ^|{yes-icon} ^|{yes-icon}
|===
// end::snapshot-compatibility-matrix[]

The following conditions apply for restoring snapshots and indices across versions:

* *Snapshots*: You cannot restore snapshots from later {es} versions into a cluster running an earlier {es} version. For example, you cannot restore a snapshot taken in 7.6.0 to a cluster running 7.5.0.
* *Indices*: You cannot restore indices into a cluster running a version of {es} that is more than _one major version_ newer than the version of {es} used to snapshot the indices. For example, you cannot restore indices from a snapshot taken in 5.0 to a cluster running 7.0.
+
[NOTE]
====
The one caveat is that snapshots taken by {es} 2.0 can be restored in clusters running {es} 5.0.
====
You can't restore a snapshot to an earlier version of {es}. For example, you
can't restore a snapshot taken in 7.6.0 to a cluster running 7.5.0.

ifeval::["{release-state}"!="released"]
[[snapshot-prerelease-build-compatibility]]
@ -111,32 +140,53 @@ succeed having silently lost some data. You should discard your repository
before using a different build.
endif::[]

Each snapshot can contain indices created in various versions of {es}. This
includes backing indices created for data streams. When restoring a snapshot, it
must be possible to restore all of these indices into the target cluster. If any
indices in a snapshot were created in an incompatible version, you will not be
able to restore the snapshot.
[discrete]
[[snapshot-index-compatibility]]
=== Index compatibility

IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you
won't be able to restore snapshots after you upgrade if they contain indices
created in a version that's incompatible with the upgrade version.
A cluster is only compatible with indices created in the previous major version
of {es}. Any data stream or index you restore from a snapshot must be compatible
with the current cluster's version. If you try to restore an index created in an
incompatible version, the restore attempt will fail.

If you end up in a situation where you need to restore a snapshot of a data stream or index
that is incompatible with the version of the cluster you are currently running,
you can restore it on the latest compatible version and use
<<reindex-from-remote,reindex-from-remote>> to rebuild the data stream or index on the current
version. Reindexing from remote is only possible if the original data stream or index has
source enabled. Retrieving and reindexing the data can take significantly
longer than simply restoring a snapshot. If you have a large amount of data, we
recommend testing the reindex from remote process with a subset of your data to
understand the time requirements before proceeding.
A snapshot can contain indices created in a previous major version. For example,
a snapshot of a 6.x cluster can contain an index created in 5.x. If you try to
restore the 5.x index to a 7.x cluster, the restore attempt will fail. Keep this
in mind if you take a snapshot before upgrading a cluster.

--
As a workaround, you can first restore the data stream or index to another
cluster running the latest version of {es} that's compatible with both the index
and your current cluster. You can then use
<<reindex-from-remote,reindex-from-remote>> to rebuild the data stream or index
on your current cluster. Reindex from remote is only possible if the index's
<<mapping-source-field,`_source`>> is enabled.

Reindexing from remote can take significantly longer than restoring a snapshot.
Before you start, test the reindex from remote process with a subset of the data
to estimate your time requirements.
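A minimal sketch of such a reindex-from-remote request — assuming a hypothetical intermediate cluster at `http://otherhost:9200` (which must be listed in the local cluster's `reindex.remote.whitelist` setting) holding the restored index `my-restored-index`:

[source,console]
----
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://otherhost:9200"
    },
    "index": "my-restored-index"
  },
  "dest": {
    "index": "my-new-index"
  }
}
----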

[discrete]
[[other-backup-methods]]
== Other backup methods

// tag::backup-warning[]
**Taking a snapshot is the only reliable and supported way to back up a
cluster.** You cannot back up an {es} cluster by making copies of the data
directories of its nodes. There are no supported methods to restore any data
from a filesystem-level backup. If you try to restore a cluster from such a
backup, it may fail with reports of corruption or missing files or other data
inconsistencies, or it may appear to have succeeded having silently lost some of
your data.
// end::backup-warning[]

A copy of the data directories of a cluster's nodes does not work as a backup
because it is not a consistent representation of their contents at a single
point in time. You cannot fix this by shutting down nodes while making the
copies, nor by taking atomic filesystem-level snapshots, because {es} has
consistency requirements that span the whole cluster. You must use the built-in
snapshot functionality for cluster backups.

include::register-repository.asciidoc[]
include::take-snapshot.asciidoc[]
include::monitor-snapshot-restore.asciidoc[]
include::delete-snapshot.asciidoc[]
include::restore-snapshot.asciidoc[]
include::../slm/index.asciidoc[]
include::../searchable-snapshots/index.asciidoc[]

@ -1,153 +0,0 @@
[[snapshots-monitor-snapshot-restore]]
== Monitor snapshot progress

Use the <<get-snapshot-api,get snapshot API>> or the
<<get-snapshot-status-api,get snapshot status API>> to monitor the
progress of snapshot operations. Both APIs support the
`wait_for_completion` parameter that blocks the client until the
operation finishes, which is the simplest method of being notified
about operation completion.

////
[source,console]
-----------------------------------
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}

PUT /_snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "my_other_backup_location"
  }
}

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true

PUT /_snapshot/my_backup/some_other_snapshot?wait_for_completion=true
-----------------------------------
// TESTSETUP

////

Use the `_current` parameter to retrieve all currently running
snapshots in the cluster:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_current
-----------------------------------

Including a snapshot name in the request retrieves information about a single snapshot:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------

This request retrieves basic information about the snapshot, including start and end time, version of
{es} that created the snapshot, the list of included data streams and indices, the current state of the
snapshot, and the list of failures that occurred during the snapshot.

Similar to repositories, you can retrieve information about multiple snapshots in a single request, and wildcards are supported:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
-----------------------------------

Separate repository names with commas or use wildcards to retrieve snapshots from multiple repositories:

[source,console]
-----------------------------------
GET /_snapshot/_all
GET /_snapshot/my_backup,my_fs_backup
GET /_snapshot/my*/snap*
-----------------------------------

Add the `_all` parameter to the request to list all snapshots currently stored in the repository:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_all
-----------------------------------

This request fails if some of the snapshots are unavailable. Use the boolean parameter `ignore_unavailable` to
return all snapshots that are currently available.

Getting all snapshots in the repository can be costly on cloud-based repositories,
both from a cost and performance perspective. If the only information required is
the snapshot names or UUIDs in the repository and the data streams and indices in each snapshot, then
the optional boolean parameter `verbose` can be set to `false` to execute a more
performant and cost-effective retrieval of the snapshots in the repository.

NOTE: Setting `verbose` to `false` omits additional information
about the snapshot, such as metadata, start and end time, number of shards that include the snapshot, and error messages. The default value of the `verbose` parameter is `true`.

[discrete]
[[get-snapshot-detailed-status]]
=== Retrieving snapshot status
To retrieve more detailed information about snapshots, use the <<get-snapshot-status-api,get snapshot status API>>. While the get snapshot request returns only basic information about the snapshot in progress, the snapshot status request returns a
complete breakdown of the current state for each shard participating in the snapshot.

// tag::get-snapshot-status-warning[]
[WARNING]
====
Using the get snapshot status API to return any status results other than the currently running snapshots (`_current`) can be very expensive. Each request to retrieve snapshot status results in file reads from every shard in a snapshot, for each snapshot. Such requests are taxing to machine resources and can also incur high processing costs when running in the cloud.

For example, if you have 100 snapshots with 1,000 shards each, the API request will result in 100,000 file reads (100 snapshots * 1,000 shards). Depending on the latency of your file storage, the request can take extremely long to retrieve results.
====
// end::get-snapshot-status-warning[]

The following request retrieves all currently running snapshots with
detailed status information:

[source,console]
-----------------------------------
GET /_snapshot/_status
-----------------------------------

By specifying a repository name, it's possible
to limit the results to a particular repository:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/_status
-----------------------------------

If both repository name and snapshot name are specified, the request
returns detailed status information for the given snapshot, even
if it's not currently running:

[source,console]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------

[discrete]
[[get-snapshot-stop-snapshot]]
=== Stop snapshot operations
To stop a currently running snapshot that was started by mistake or is taking unusually long, use
the <<delete-snapshot-api,delete snapshot API>>.
This operation checks whether the deleted snapshot is currently running. If it is, the delete snapshot operation stops
that snapshot before deleting the snapshot data from the repository.

[source,console]
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_1
-----------------------------------

[discrete]
[[get-snapshot-cluster-blocks]]
=== Effect of cluster blocks on snapshot and restore
Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
repositories require global metadata write access. The snapshot operation requires that all indices, backing indices, and their metadata (including
global metadata) are readable. The restore operation requires the global metadata to be writable. However,
the index-level blocks are ignored during restore because indices are essentially recreated during restore.
A repository's content is not part of the cluster, and therefore cluster blocks do not affect internal
repository operations such as listing or deleting snapshots from an already registered repository.
@ -4,104 +4,132 @@

<titleabbrev>Register a repository</titleabbrev>
++++

You must register a snapshot repository before you can perform snapshot and
restore operations. Use the <<put-snapshot-repo-api,create or update snapshot
repository API>> to register or update a snapshot repository. We recommend
creating a new snapshot repository for each major version. The valid repository
settings depend on the repository type.
This guide shows you how to register a snapshot repository. A snapshot
repository is an off-cluster storage location for your snapshots. You must
register a repository before you can take or restore snapshots.

If you register the same snapshot repository with multiple clusters, only
one cluster should have write access to the repository. All other clusters
connected to that repository should set the repository to `readonly` mode.
In this guide, you’ll learn how to:

[IMPORTANT]
====
The snapshot format can change across major versions, so if you have
clusters on different versions trying to write to the same repository, snapshots
written by one version may not be visible to the other and the repository could
be corrupted. While setting the repository to `readonly` on all but one of the
clusters should work with multiple clusters differing by one major version, it
is not a supported configuration.
====
* Register a snapshot repository
* Verify that a repository is functional
* Clean up a repository to remove unneeded files

[source,console]
-----------------------------------
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
-----------------------------------
// TESTSETUP
[discrete]
[[snapshot-repo-prereqs]]
=== Prerequisites

Use the <<get-snapshot-api,get snapshot API>> to retrieve information about a registered repository:
// tag::kib-snapshot-prereqs[]
* To use {kib}'s **Snapshot and Restore** feature, you must have the following
permissions:

[source,console]
-----------------------------------
GET /_snapshot/my_backup
-----------------------------------
** <<privileges-list-cluster,Cluster privileges>>: `monitor`, `manage_slm`,
`cluster:admin/snapshot`, and `cluster:admin/repository`

This request returns the following response:
** <<privileges-list-indices,Index privilege>>: `all` on the `monitor` index
// end::kib-snapshot-prereqs[]

[source,console-result]
-----------------------------------
{
  "my_backup": {
    "type": "fs",
    "uuid": "0JLknrXbSUiVPuLakHjBrQ",
    "settings": {
      "location": "my_backup_location"
    }
  }
}
-----------------------------------
// TESTRESPONSE[s/"uuid": "0JLknrXbSUiVPuLakHjBrQ"/"uuid": $body.my_backup.uuid/]
include::apis/put-repo-api.asciidoc[tag=put-repo-api-prereqs]

To retrieve information about multiple repositories, specify a comma-delimited
list of repositories. You can also use a wildcard (`*`) when
specifying repository names. For example, the following request retrieves
information about all of the snapshot repositories that start with `repo` or
contain `backup`:
[discrete]
[[snapshot-repo-considerations]]
=== Considerations

[source,console]
-----------------------------------
GET /_snapshot/repo*,*backup*
-----------------------------------
When registering a snapshot repository, keep the following in mind:

To retrieve information about all registered snapshot repositories, omit the
repository name:
* Each snapshot repository is separate and independent. {es} doesn't share
data between repositories.

[source,console]
-----------------------------------
GET /_snapshot
-----------------------------------
* {blank}
+
--
// tag::multi-cluster-repo[]
If you register the same snapshot repository with multiple clusters, only one
cluster should have write access to the repository. On other clusters, register
the repository as read-only.

Alternatively, you can specify `_all`:
This prevents multiple clusters from writing to the repository at the same time
and corrupting the repository’s contents.
// end::multi-cluster-repo[]
--

[source,console]
-----------------------------------
GET /_snapshot/_all
-----------------------------------
* Use a different snapshot repository for each major version of {es}. Mixing
snapshots from different major versions can corrupt a repository’s contents.

You can unregister a repository using the
<<delete-snapshot-repo-api,delete snapshot repository API>>:
[discrete]
[[manage-snapshot-repos]]
=== Manage snapshot repositories

[source,console]
-----------------------------------
DELETE /_snapshot/my_backup
-----------------------------------
You can register and manage snapshot repositories in two ways:

When a repository is unregistered, {es} only removes the reference to the
location where the repository is storing the snapshots. The snapshots themselves
are left untouched and in place.
* {kib}'s **Snapshot and Restore** feature
* {es}'s <<snapshot-restore-repo-apis,snapshot repository management APIs>>

To manage repositories in {kib}, go to the main menu and click **Stack
Management** > **Snapshot and Restore** > **Repositories**. To register a
snapshot repository, click **Register repository**.
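If you prefer the API, a minimal sketch of registering a shared file system repository — assuming the `my_backup_location` path is listed in each node's `path.repo` setting — looks like this:

[source,console]
----
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
----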

[discrete]
[[snapshot-repo-types]]
=== Snapshot repository types

Supported snapshot repository types vary based on your deployment type.

[discrete]
[[ess-repo-types]]
==== {ess} repository types

{ess-trial}[{ess} deployments] automatically register the
{cloud}/ec-snapshot-restore.html[`found-snapshots`] repository. {ess} uses this
repository and the `cloud-snapshot-policy` to take periodic snapshots of your
cluster. You can also use the `found-snapshots` repository for your own
<<automate-snapshots-slm,{slm-init} policies>> or to store searchable snapshots.

The `found-snapshots` repository is specific to each deployment. However, you
can restore snapshots from another deployment's `found-snapshots` repository if
the deployments are under the same account and in the same region. See
{cloud}/ec_share_a_repository_across_clusters.html[Share a repository across
clusters].

{ess} deployments also support the following repository types:

* {cloud}/ec-aws-custom-repository.html[AWS S3]
* {cloud}/ec-gcs-snapshotting.html[Google Cloud Storage (GCS)]
* {cloud}/ec-azure-snapshotting.html[Microsoft Azure]
* <<snapshots-source-only-repository>>

[discrete]
[[self-managed-repo-types]]
==== Self-managed repository types

If you run {es} on your own hardware, you can use the following built-in
snapshot repository types:

* <<snapshots-filesystem-repository,Shared file system>>
* <<snapshots-read-only-repository>>
* <<snapshots-source-only-repository>>

[[snapshots-repository-plugins]]
Other repository types are available through official plugins:

* {plugins}/repository-s3.html[AWS S3]
* {plugins}/repository-gcs.html[Google Cloud Storage (GCS)]
* {plugins}/repository-hdfs.html[Hadoop Distributed File System (HDFS)]
* {plugins}/repository-azure.html[Microsoft Azure]

You can also use alternative implementations of these repository types, such as
MinIO, as long as they're compatible. To verify a repository's compatibility,
see <<snapshots-repository-verification>>.

[discrete]
[[snapshots-filesystem-repository]]
=== Shared file system repository
==== Shared file system repository

Use a shared file system repository (`"type": "fs"`) to store snapshots on a
// tag::on-prem-repo-type[]
NOTE: This repository type is only available if you run {es} on your own
hardware. If you use {ess}, see <<ess-repo-types>>.
// end::on-prem-repo-type[]

Use a shared file system repository to store snapshots on a
shared file system.

To register a shared file system repository, first mount the file system to the
@ -123,24 +151,20 @@ include::{es-repo-dir}/tab-widgets/register-fs-repo-widget.asciidoc[]

[discrete]
[[snapshots-read-only-repository]]
=== Read-only URL repository
==== Read-only URL repository

If you register the same snapshot repository with multiple clusters, only one
cluster should have write access to the repository. Having multiple clusters
write to the repository at the same time risks corrupting the contents of the
repository.
include::register-repository.asciidoc[tag=on-prem-repo-type]

To reduce this risk, you can use URL repositories (`"type": "url"`) to give one
or more clusters read-only access to a shared file system repository. As URL
repositories are always read-only, they are a safer and more convenient
alternative to registering a read-only shared filesystem repository.
You can use a URL repository to give a cluster read-only access to a shared file
system. Since URL repositories are always read-only, they're a safer and more
convenient alternative to registering a read-only shared filesystem repository.

The URL specified in the `url` parameter should point to the root of the shared
filesystem repository.
Use {kib} or the <<put-snapshot-repo-api,create snapshot repository API>> to
register a URL repository.

[source,console]
----
PUT /_snapshot/my_read_only_url_repository
PUT _snapshot/my_read_only_url_repository
{
  "type": "url",
  "settings": {
@ -150,62 +174,28 @@ PUT /_snapshot/my_read_only_url_repository
----
// TEST[skip:no access to url file path]

The following settings are supported:

`url`::
(Required)
URL where the snapshots are stored.

The `url` parameter supports the following protocols:

* `file`
* `ftp`
* `http`
* `https`
* `jar`

`http_max_retries`::

Specifies the maximum number of retries that are performed in case of transient failures for `http` and `https` URLs.
The default value is `5`.

`http_socket_timeout`::

Specifies the maximum time to wait for data to be transferred over a connection before timing out. The default value is `50s`.

URLs using the `file` protocol must point to the location of a shared filesystem
accessible to all master and data nodes in the cluster. This location must be
registered in the `path.repo` setting, similar to a
<<snapshots-filesystem-repository,shared file system repository>>.

URLs using the `ftp`, `http`, or `https` protocols must be explicitly allowed with the
`repositories.url.allowed_urls` setting. This setting supports wildcards (`*`)
in place of a host, path, query, or fragment in the URL. For example:

[source,yaml]
----
repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
----

NOTE: URLs using the `ftp`, `http`, `https`, or `jar` protocols do not need to
be registered in the `path.repo` setting.

[discrete]
[[snapshots-source-only-repository]]
=== Source only repository
==== Source-only repository

A source repository enables you to create minimal, source-only snapshots that take up to 50% less space on disk.
Source only snapshots contain stored fields and index metadata. They do not include index or doc values structures
and are not searchable when restored. After restoring a source-only snapshot, you must <<docs-reindex,reindex>>
the data into a new index.
You can use a source-only repository to take minimal, source-only snapshots that
use up to 50% less disk space than regular snapshots.

Source repositories delegate to another snapshot repository for storage.
Unlike other repository types, a source-only repository doesn't directly store
snapshots. It delegates storage to another registered snapshot repository.

When you take a snapshot using a source-only repository, {es} creates a
source-only snapshot in the delegated storage repository. This snapshot only
contains stored fields and metadata. It doesn't include index or doc values
structures and isn't immediately searchable when restored. To search the
restored data, you first have to <<docs-reindex,reindex>> it into a new data
stream or index.

[IMPORTANT]
==================================================

Source only snapshots are only supported if the `_source` field is enabled and no source-filtering is applied.
When you restore a source only snapshot:
Source-only snapshots are only supported if the `_source` field is enabled and no source-filtering is applied.
When you restore a source-only snapshot:

* The restored index is read-only and can only serve `match_all` search or scroll requests to enable reindexing.
@ -216,11 +206,13 @@ When you restore a source only snapshot:

==================================================

When you create a source repository, you must specify the type and name of the delegate repository
where the snapshots will be stored:
Before registering a source-only repository, use {kib} or the
<<put-snapshot-repo-api,create snapshot repository API>> to register a snapshot
repository of another type to use for storage. Then register the source-only
repository and specify the delegated storage repository in the request.

[source,console]
-----------------------------------
----
PUT _snapshot/my_src_only_repository
{
  "type": "source",
@ -229,78 +221,83 @@ PUT _snapshot/my_src_only_repository
    "location": "my_backup_location"
  }
}
-----------------------------------
----
// TEST[continued]

[discrete]
[[snapshots-repository-plugins]]
=== Repository plugins

Other repository backends are available in these official plugins:

* {plugins}/repository-s3.html[repository-s3] for S3 repository support
* {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments
* {plugins}/repository-azure.html[repository-azure] for Azure storage repositories
* {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories

[discrete]
[[snapshots-repository-verification]]
When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional
on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository
verification when registering or updating a repository:
=== Verify a repository

When you register a snapshot repository, {es} automatically verifies that the
repository is available and functional on all master and data nodes.

To disable this verification, set the <<put-snapshot-repo-api,create snapshot
repository API>>'s `verify` query parameter to `false`. You can't disable
repository verification in {kib}.

[source,console]
-----------------------------------
PUT /_snapshot/my_unverified_backup?verify=false
----
PUT _snapshot/my_unverified_backup?verify=false
{
  "type": "fs",
  "settings": {
    "location": "my_unverified_backup_location"
  }
}
-----------------------------------
----
// TEST[continued]

The verification process can also be executed manually by running the following command:
If needed, you can manually run the repository verification check. To verify a
repository in {kib}, go to the **Repositories** list page and click the name of
a repository. Then click **Verify repository**. You can also use the
<<verify-snapshot-repo-api,verify snapshot repository API>>.

[source,console]
-----------------------------------
POST /_snapshot/my_unverified_backup/_verify
-----------------------------------
----
POST _snapshot/my_unverified_backup/_verify
----
// TEST[continued]

It returns a list of nodes where the repository was successfully verified, or an error message if the verification process failed.
If successful, the request returns a list of nodes used to verify the
repository. If verification fails, the request returns an error.

If desired, you can also test a repository more thoroughly using the
<<repo-analysis-api,Repository analysis API>>.
You can test a repository more thoroughly using the
<<repo-analysis-api,repository analysis API>>.

[discrete]
[[snapshots-repository-cleanup]]
=== Repository cleanup
=== Clean up a repository

Repositories can over time accumulate data that is not referenced by any existing snapshot. This is a result of the data safety guarantees
the snapshot functionality provides in failure scenarios during snapshot creation and the decentralized nature of the snapshot creation
process. This unreferenced data does in no way negatively impact the performance or safety of a snapshot repository but leads to higher
than necessary storage use. In order to clean up this unreferenced data, users can call the cleanup endpoint for a repository, which will
trigger a complete accounting of the repository's contents and subsequent deletion of all unreferenced data that was found.
than necessary storage use. To remove this unreferenced data, you can run a cleanup operation on the repository. This will
trigger a complete accounting of the repository's contents and delete any unreferenced data.

To run the repository cleanup operation in {kib}, go to the **Repositories**
list page and click the name of a repository. Then click **Clean up
repository**.

You can also use the <<clean-up-snapshot-repo-api,clean up snapshot repository
API>>.

[source,console]
-----------------------------------
POST /_snapshot/my_repository/_cleanup
-----------------------------------
----
POST _snapshot/my_repository/_cleanup
----
// TEST[continued]

The response to a cleanup request looks as follows:
The API returns:

[source,console-result]
--------------------------------------------------
----
{
  "results": {
    "deleted_bytes": 20,
    "deleted_blobs": 5
  }
}
--------------------------------------------------
----

Depending on the concrete repository implementation, the numbers shown for bytes free as well as the number of blobs removed will either
be an approximation or an exact result. Any non-zero value for the number of blobs removed implies that unreferenced blobs were found and
@ -312,7 +309,7 @@ and should lower your frequency of invoking it accordingly.

[discrete]
[[snapshots-repository-backup]]
=== Repository backup
=== Back up a repository

You may wish to make an independent backup of your repository, for instance so
that you have an archive copy of its contents that you can use to recreate the
@ -23,6 +23,8 @@ errors>>.

[[restore-snapshot-prereqs]]
=== Prerequisites

include::register-repository.asciidoc[tag=kib-snapshot-prereqs]

include::apis/restore-snapshot-api.asciidoc[tag=restore-prereqs]

[discrete]
@ -65,7 +67,7 @@ GET _snapshot
// TEST[setup:setup-snapshots]

Then use the get snapshot API to get a list of snapshots in a specific
repository.
repository. This also returns each snapshot's contents.

[source,console]
----
@ -235,9 +237,10 @@ POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore

[[restore-feature-state]]
=== Restore a feature state

You can restore a feature state to recover system indices, system data streams,
and other configuration data for a feature from a snapshot. Restoring a feature
state is the preferred way to restore system indices and system data streams.
You can restore a <<feature-state,feature state>> to recover system indices,
system data streams, and other configuration data for a feature from a snapshot.
Restoring a feature state is the preferred way to restore system indices and
system data streams.

If you restore a snapshot's cluster state, the operation restores all feature
states in the snapshot by default. Similarly, if you don't restore a snapshot's
@ -290,8 +293,8 @@ POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore

=== Restore an entire cluster

In some cases, you need to restore an entire cluster from a snapshot, including
the cluster state and all feature states. These cases should be rare, such as in
the event of a catastrophic failure.
the cluster state and all <<feature-state,feature states>>. These cases should
be rare, such as in the event of a catastrophic failure.

Restoring an entire cluster involves deleting important system indices,
including those used for authentication. Consider whether you can restore
@ -529,8 +532,12 @@ After all primary shards are recovered, the replication process creates and
distributes replicas across eligible data nodes. When replication is complete,
the cluster health status typically becomes `green`.

You can monitor the cluster health status using the <<cluster-health,cluster
health API>>.
Once you start a restore in {kib}, you’re navigated to the **Restore Status**
page. You can use this page to track the current state for each shard in the
snapshot.

You can also monitor snapshot recovery using {es} APIs. To monitor the cluster
health status, use the <<cluster-health,cluster health API>>.

[source,console]
----
@ -562,7 +569,7 @@ use the <<cluster-allocation-explain,cluster allocation explanation API>>.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
@ -602,14 +609,15 @@ clusters].

Snapshots aren't tied to a particular cluster or a cluster name. You can create
a snapshot in one cluster and restore it in another
<<snapshot-restore-version-compatibility,compatible cluster>>. The topology of
the clusters doesn't need to match.
<<snapshot-restore-version-compatibility,compatible cluster>>. Any data stream
or index you restore from a snapshot must also be compatible with the current
cluster’s version. The topology of the clusters doesn't need to match.

To restore a snapshot, its repository must be
<<snapshots-register-repository,registered>> and available to the new cluster.
If the original cluster still has write access to the repository, register the
repository in `readonly` mode. This prevents multiple clusters from writing to
the repository at the same time and corrupting the repository's contents.
repository as read-only. This prevents multiple clusters from writing to the
repository at the same time and corrupting the repository's contents.
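
For example, a minimal sketch of registering a shared file system repository as
read-only on the new cluster. The repository name and location are
illustrative, and mirror the read-only registration shown in the file system
repository docs later in this commit.

[source,console]
----
# Register the existing repository in read-only mode so this cluster
# can restore from it without writing to it.
PUT _snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location",
    "readonly": true
  }
}
----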

Before you start a restore operation, ensure the new cluster has enough capacity
for any data streams or indices you want to restore. If the new cluster has a
@ -1,162 +1,611 @@

[[snapshots-take-snapshot]]
== Create a snapshot

A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
cluster.

Use the <<put-snapshot-repo-api,create or update snapshot repository API>> to
register or update a snapshot repository, and then use the
<<create-snapshot-api,create snapshot API>> to create a snapshot in a
repository.

The following request creates a snapshot with the name `snapshot_1` in the repository `my_backup`:

////
[source,console]
-----------------------------------
PUT /_snapshot/my_backup
----
PUT _slm/policy/nightly-snapshots
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": "*",
    "include_global_state": true
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
-----------------------------------
----
// TEST[setup:setup-repository]
// TESTSETUP
////

[source,console]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
-----------------------------------
This guide shows you how to take snapshots of a running cluster. You can later
<<snapshots-restore-snapshot,restore a snapshot>> to recover or transfer its
data.

The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot
initialization (default) or wait for snapshot completion. During snapshot initialization, information about all
previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
even minutes) for this request to return even if the `wait_for_completion` parameter is set to `false`.
In this guide, you’ll learn how to:

By default, a snapshot backs up all data streams and open indices in the cluster. You can change this behavior by
specifying the list of data streams and indices in the body of the snapshot request:
* Automate snapshot creation and retention with {slm} ({slm-init})
* Manually take a snapshot
* Monitor a snapshot's progress
* Delete or cancel a snapshot
* Back up cluster configuration files

The guide also provides tips for creating dedicated cluster state snapshots and
taking snapshots at different time intervals.

[discrete]
[[create-snapshot-prereqs]]
=== Prerequisites

include::register-repository.asciidoc[tag=kib-snapshot-prereqs]

* You can only take a snapshot from a running cluster with an elected
<<master-node,master node>>.

* A snapshot repository must be <<snapshots-register-repository,registered>> and
available to the cluster.

* The cluster's global metadata must be readable. To include an index in a
snapshot, the index and its metadata must also be readable. Ensure there aren't
any <<cluster-read-only,cluster blocks>> or <<index-modules-blocks,index
blocks>> that prevent read access. See the check after this list.
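
As a quick check, you can retrieve any `index.blocks.*` settings for an index
with the get settings API. This is only a sketch; `my-index` is a placeholder
for your own index name, and an empty response means no index blocks are set.

[source,console]
----
# Returns any index.blocks.* settings configured on my-index.
GET my-index/_settings/index.blocks.*
----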

[discrete]
[[create-snapshot-considerations]]
=== Considerations

* Each snapshot must have a unique name within its repository. Attempts to
create a snapshot with the same name as an existing snapshot will fail.

* Snapshots are automatically deduplicated. You can take frequent snapshots with
little impact on your storage overhead.

* Each snapshot is logically independent. You can delete a snapshot without
affecting other snapshots.

* Taking a snapshot can temporarily pause shard allocations.
See <<snapshots-shard-allocation>>.

* Taking a snapshot doesn't block indexing or other requests. However, the
snapshot won't include changes made after the snapshot process starts.

* You can take multiple snapshots at the same time. The
<<snapshot-max-concurrent-ops,`snapshot.max_concurrent_operations`>> cluster
setting limits the maximum number of concurrent snapshot operations. See the
example after this list.

* If you include a data stream in a snapshot, the snapshot also includes the
stream’s backing indices and metadata.
+
You can also include only specific backing indices in a snapshot. However, the
snapshot won't include the data stream’s metadata or its other backing indices.

* A snapshot can include a data stream but exclude specific backing indices.
When you restore such a data stream, it will contain only the backing indices in
the snapshot. If the stream’s original write index is not in the snapshot, the
most recent backing index from the snapshot becomes the stream’s write index.
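
For example, a sketch of raising the concurrent operation limit with the
cluster update settings API. The value `3000` is only an illustration; tune it
for your own cluster.

[source,console]
----
# Dynamically raise the limit on concurrent snapshot operations.
PUT _cluster/settings
{
  "persistent": {
    "snapshot.max_concurrent_operations": 3000
  }
}
----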

[discrete]
[[automate-snapshots-slm]]
=== Automate snapshots with {slm-init}

{slm-cap} ({slm-init}) is the easiest way to regularly back up a cluster. An
{slm-init} policy automatically takes snapshots on a preset schedule. The policy
can also delete snapshots based on retention rules you define.

TIP: {ess} deployments automatically include the `cloud-snapshot-policy`
{slm-init} policy. {ess} uses this policy to take periodic snapshots of your
cluster. For more information, see the {cloud}/ec-snapshot-restore.html[{ess}
snapshot documentation].

[discrete]
[[slm-security]]
==== {slm-init} security

The following <<privileges-list-cluster,cluster privileges>> control access to
the {slm-init} actions when {es} {security-features} are enabled:

`manage_slm`::
Allows a user to perform all {slm-init} actions, including
creating and updating policies and starting and stopping {slm-init}.

`read_slm`::
Allows a user to perform all read-only {slm-init} actions, such as getting
policies and checking the {slm-init} status.

`cluster:admin/snapshot/*`::
Allows a user to take and delete snapshots of any index, whether or not they
have access to that index.

You can create and manage roles to assign these privileges through {kib}
Management.

To grant the privileges necessary to create and manage {slm-init} policies and
snapshots, you can set up a role with the `manage_slm` and
`cluster:admin/snapshot/*` cluster privileges and full access to the {slm-init}
history indices.

For example, the following request creates an `slm-admin` role:

[source,console]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
----
POST _security/role/slm-admin
{
  "indices": "data_stream_1,index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false,
  "metadata": {
    "taken_by": "kimchy",
    "taken_because": "backup before upgrading"
  "cluster": [ "manage_slm", "cluster:admin/snapshot/*" ],
  "indices": [
    {
      "names": [ ".slm-history-*" ],
      "privileges": [ "all" ]
    }
  ]
}
----
// TEST[skip:security is not enabled here]

To grant read-only access to {slm-init} policies and the snapshot history,
you can set up a role with the `read_slm` cluster privilege and read access
to the {slm} history indices.

For example, the following request creates a `slm-read-only` role:

[source,console]
----
POST _security/role/slm-read-only
{
  "cluster": [ "read_slm" ],
  "indices": [
    {
      "names": [ ".slm-history-*" ],
      "privileges": [ "read" ]
    }
  ]
}
----
// TEST[skip:security is not enabled here]

[discrete]
[[create-slm-policy]]
==== Create an {slm-init} policy

To manage {slm-init} in {kib}, go to the main menu and click **Stack
Management** > **Snapshot and Restore** > **Policies**. To create a policy,
click **Create policy**.

You can also manage {slm-init} using the
<<snapshot-lifecycle-management-api,{slm-init} APIs>>. To create a policy, use
the <<slm-api-put-policy,create {slm-init} policy API>>.

The following request creates a policy that backs up the cluster state, all data
streams, and all indices daily at 1:30 a.m. UTC.

[source,console]
----
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?", <1>
  "name": "<nightly-snap-{now/d}>", <2>
  "repository": "my_repository", <3>
  "config": {
    "indices": "*", <4>
    "include_global_state": true <5>
  },
  "retention": { <6>
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
-----------------------------------
// TEST[skip:cannot complete subsequent snapshot]
----

Use the `indices` parameter to list the data streams and indices that should be included in the snapshot. This parameter supports
<<api-multi-index,multi-target syntax>>, although the options that control the behavior of multi-index syntax
must be supplied in the body of the request, rather than as request parameters.

Data stream backups include the stream's backing indices and metadata, such as
the current <<data-streams-generation,generation>> and timestamp field.

You can also choose to include only specific backing indices in a snapshot.
However, these backups do not include the associated data stream's
metadata or its other backing indices.

Snapshots can also include a data stream but exclude specific backing indices.
When you restore the data stream, it will contain only backing indices present
in the snapshot. If the stream's original write index is not in the snapshot,
the most recent backing index from the snapshot becomes the stream's write index.
<1> When to take snapshots, written in <<schedule-cron,Cron syntax>>.
<2> Snapshot name. Supports <<api-date-math-index-names,date math>>. To prevent
naming conflicts, the policy also appends a UUID to each snapshot name.
<3> <<snapshots-register-repository,Registered snapshot repository>> used to
store the policy's snapshots.
<4> Data streams and indices to include in the policy's snapshots. This
configuration includes all data streams and indices, including system
indices.
<5> If `true`, the policy's snapshots include the cluster state. This also
includes all feature states by default. To only include specific feature
states, see <<back-up-specific-feature-state>>.
<6> Optional retention rules. This configuration keeps snapshots for 30 days,
retaining at least 5 and no more than 50 snapshots regardless of age. See
<<slm-retention-task>> and <<snapshot-retention-limits>>.

[discrete]
[[create-snapshot-process-details]]
=== Snapshot process details
[[manually-run-slm-policy]]
==== Manually run an {slm-init} policy

The snapshot process works by taking a byte-for-byte copy of the files that
make up each index or data stream and placing these copies in the repository.
These files are mostly written by Lucene and contain a compact representation
of all the data in each index or data stream in a form that is designed to be
searched efficiently. This means that when you restore an index or data stream
from a snapshot there is no need to rebuild these search-focused data
structures. It also means that you can use <<searchable-snapshots>> to directly
search the data in the repository.
You can manually run an {slm-init} policy to immediately create a snapshot. This
is useful for testing a new policy or taking a snapshot before an upgrade.
Manually running a policy doesn't affect its snapshot schedule.

The snapshot process is incremental: {es} compares the files that make up the
index or data stream against the files that already exist in the repository
and only copies files that were created or changed
since the last snapshot. Snapshots are very space-efficient since they reuse
any files copied to the repository by earlier snapshots.

Snapshotting does not interfere with ongoing indexing or searching operations.
A snapshot captures a view of each shard at some point in time between the
start and end of the snapshotting process. The snapshot may not include
documents added to a data stream or index after the snapshot process starts.

You can start multiple snapshot operations at the same time. Concurrent snapshot
operations are limited by the `snapshot.max_concurrent_operations` cluster
setting, which defaults to `1000`. This limit applies in total to all ongoing snapshot
creation, cloning, and deletion operations. {es} will reject any operations
that would exceed this limit.

The snapshot process starts immediately for the primary shards that have been
started and are not relocating at the moment. {es} waits for relocation or
initialization of shards to complete before snapshotting them.

Besides creating a copy of each data stream and index, the snapshot process can
also store global cluster metadata, which includes persistent cluster settings,
templates, and data stored in system indices, such as Watches and task records,
regardless of whether those system indices are named in the `indices` section
of the request. You can also use the create snapshot
API's <<create-snapshot-api-feature-states,`feature_states`>> parameter to
include only a subset of system indices in the snapshot. Snapshots do not
store transient settings or registered snapshot repositories.

While a snapshot of a particular shard is being created, the shard cannot be
moved to another node, which can interfere with rebalancing and allocation
filtering. {es} can only move the shard to another node (according to the current
allocation filtering settings and rebalancing algorithm) after the snapshot
process is finished.

You can use the <<get-snapshot-api,Get snapshot API>> to retrieve information
about ongoing and completed snapshots. See
<<snapshots-monitor-snapshot-restore,Monitor snapshot and restore progress>>.

[discrete]
[[create-snapshot-options]]
=== Options for creating a snapshot

The create snapshot request supports the
`ignore_unavailable` option. Setting it to `true` will cause data streams and indices that do not exist to be ignored during snapshot
creation. By default, when the `ignore_unavailable` option is not set and a data stream or index is missing, the snapshot request will fail.

By setting `include_global_state` to `false` it's possible to prevent the cluster global state from being stored as part of
the snapshot.

IMPORTANT: The global cluster state includes the cluster's index
templates, such as those <<create-index-template,matching a data
stream>>. If your snapshot includes data streams, we recommend storing the
global state as part of the snapshot. This lets you later restore any
templates required for a data stream.

By default, the entire snapshot will fail if one or more indices participating in the snapshot do not have
all primary shards available. You can change this behavior by setting `partial` to `true`. The `expand_wildcards`
option can be used to control whether hidden and closed indices will be included in the snapshot, and defaults to `all`.

Use the `metadata` field to attach arbitrary metadata to the snapshot,
such as who took the snapshot,
why it was taken, or any other data that might be useful.

Snapshot names can be automatically derived using <<date-math-index-names,date math expressions>>, similarly to when creating
new indices. Special characters must be URI encoded.

For example, use the <<create-snapshot-api,create snapshot API>> to create
a snapshot with the current day in the name, such as `snapshot-2020.07.11`:
To run a policy in {kib}, go to the **Policies** page and click the run icon
under the **Actions** column. You can also use the
<<slm-api-execute-lifecycle,execute {slm-init} policy API>>.

[source,console]
-----------------------------------
PUT /_snapshot/my_backup/<snapshot-{now/d}>
PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
-----------------------------------
// TEST[continued]
----
POST _slm/policy/nightly-snapshots/_execute
----
// TEST[skip:we can't easily handle snapshots from docs tests]

NOTE: You can also create snapshots that are copies of part of an existing snapshot using the <<clone-snapshot-api,clone snapshot API>>.
The snapshot process runs in the background. To monitor its progress, see
<<monitor-snapshot>>.

[discrete]
[[slm-retention-task]]
==== {slm-init} retention

{slm-init} snapshot retention is a cluster-level task that runs separately from
a policy's snapshot schedule. To control when the {slm-init} retention task
runs, configure the <<slm-retention-schedule,`slm.retention_schedule`>> cluster
setting.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "slm.retention_schedule" : "0 30 1 * * ?"
  }
}
----

To immediately run the retention task, use the
<<slm-api-execute-retention,execute {slm-init} retention policy API>>.

[source,console]
----
POST _slm/_execute_retention
----

An {slm-init} policy's retention rules only apply to snapshots created using the
policy. Other snapshots don't count toward the policy's retention limits.
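
For example, because the `nightly-snapshots` policy above names its snapshots
with a `nightly-snap-` prefix, a sketch of listing only that policy's snapshots
uses a wildcard in the get snapshot API. The repository name is a placeholder.

[source,console]
----
# List only snapshots whose names match the policy's naming pattern.
GET _snapshot/my_repository/nightly-snap-*
----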

[discrete]
[[snapshot-retention-limits]]
==== Snapshot retention limits

While not a hard limit, a snapshot repository shouldn't contain more than
{max-snapshot-count} snapshots at a time. This ensures the repository's metadata
doesn't grow to a size that may destabilize the master node. We recommend you
set up your {slm-init} policy's retention rules to enforce this limit.
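
To check how close a repository is to this limit, one option is the
<<cat-snapshots,cat snapshots API>>, which returns one row per snapshot in the
repository.

[source,console]
----
GET _cat/snapshots/my_repository?v=true
----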

[discrete]
[[manually-create-snapshot]]
=== Manually create a snapshot

To take a snapshot without an {slm-init} policy, use the
<<create-snapshot-api,create snapshot API>>. The snapshot name supports
<<api-date-math-index-names,date math>>.

[source,console]
----
# PUT _snapshot/my_repository/<my_snapshot_{now/d}>
PUT _snapshot/my_repository/%3Cmy_snapshot_%7Bnow%2Fd%7D%3E
----
// TEST[s/3E/3E?wait_for_completion=true/]

Depending on its size, a snapshot can take a while to complete. By default,
the create snapshot API only initiates the snapshot process, which runs in the
background. To block the client until the snapshot finishes, set the
`wait_for_completion` query parameter to `true`.

[source,console]
----
PUT _snapshot/my_repository/my_snapshot?wait_for_completion=true
----

You can also clone an existing snapshot using the <<clone-snapshot-api,clone
snapshot API>>.
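
For example, a sketch of cloning a single index from an existing snapshot into
a new snapshot in the same repository. The snapshot and index names are
placeholders.

[source,console]
----
# Clone part of source_snapshot into target_snapshot. The clone reuses the
# existing segment files in the repository rather than copying them.
PUT _snapshot/my_repository/source_snapshot/_clone/target_snapshot
{
  "indices": "my-index"
}
----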

[discrete]
[[monitor-snapshot]]
=== Monitor a snapshot

To monitor any currently running snapshots, use the <<get-snapshot-api,get
snapshot API>> with the `_current` request path parameter.

[source,console]
----
GET _snapshot/my_repository/_current
----

To get a complete breakdown of each shard participating in any currently running
snapshots, use the <<get-snapshot-status-api,get snapshot status API>>.

[source,console]
----
GET _snapshot/_status
----

[discrete]
[[check-slm-history]]
==== Check {slm-init} history

Use the <<slm-api-get-policy,get {slm-init} policy API>> to check when an
{slm-init} policy last successfully started the snapshot process. A successful
start doesn't guarantee the snapshot completed.

[source,console]
----
GET _slm/policy/nightly-snapshots
----

To get more information about a cluster's {slm-init} execution history,
including stats for each {slm-init} policy, use the <<slm-api-get-stats,get
{slm-init} stats API>>. The API also returns information about the cluster's
snapshot retention task history.

[source,console]
----
GET _slm/stats
----

[discrete]
[[delete-snapshot]]
=== Delete or cancel a snapshot

To delete a snapshot in {kib}, go to the **Snapshots** page and click the trash
icon under the **Actions** column. You can also use the
<<delete-snapshot-api,delete snapshot API>>.

[source,console]
----
DELETE _snapshot/my_repository/my_snapshot_2099.05.06
----
// TEST[setup:setup-snapshots]

If you delete a snapshot that's in progress, {es} cancels it. The snapshot
process halts and deletes any files created for the snapshot. Deleting a
snapshot doesn't delete files used by other snapshots.

[discrete]
[[back-up-config-files]]
=== Back up configuration files

If you run {es} on your own hardware, we recommend that, in addition to
snapshots, you take regular backups of the files in each node's `$ES_PATH_CONF`
directory using the file backup software of your choice. Snapshots don't back
up these files.

Depending on your setup, some of these configuration files may contain sensitive
data, such as passwords or keys. If so, consider encrypting your file backups.

[discrete]
[[back-up-specific-feature-state]]
=== Back up a specific feature state

By default, a snapshot that includes the cluster state also includes all
<<feature-state,feature states>>. Similarly, a snapshot that excludes the
cluster state excludes all feature states by default.

You can also configure a snapshot to only include specific feature states,
regardless of the cluster state.

To get a list of available features, use the <<get-features-api,get features
API>>.

[source,console]
----
GET _features
----

The API returns:

[source,console-result]
----
{
  "features": [
    {
      "name": "tasks",
      "description": "Manages task results"
    },
    {
      "name": "kibana",
      "description": "Manages Kibana configuration and reports"
    },
    {
      "name": "security",
      "description": "Manages configuration for Security features, such as users and roles"
    },
    ...
  ]
}
----
// TESTRESPONSE[skip:response may vary based on features in test cluster]

To include a specific feature state in a snapshot, specify the feature `name` in
the `feature_states` array.

For example, the following {slm-init} policy only includes feature states for
the {kib} and {es} security features in its snapshots.

[source,console]
----
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 2 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": "*",
    "include_global_state": true,
    "feature_states": [
      "kibana",
      "security"
    ]
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
----

Any index or data stream that's part of the feature state appears in the
snapshot's contents. For example, if you back up the `security` feature state,
the `security-*` system indices appear in the <<get-snapshot-api,get snapshot
API>>'s response under both `indices` and `feature_states`.
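
You can take the same kind of snapshot without {slm-init}. As a sketch, the
create snapshot API accepts the same `feature_states` array; the snapshot name
here is a placeholder.

[source,console]
----
# Take a one-off snapshot that includes the kibana and security feature
# states along with all data streams and indices.
PUT _snapshot/my_repository/my_feature_snapshot
{
  "indices": "*",
  "include_global_state": true,
  "feature_states": [ "kibana", "security" ]
}
----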

[discrete]
[[cluster-state-snapshots]]
=== Dedicated cluster state snapshots

Some feature states contain sensitive data. For example, the `security` feature
state includes system indices that may contain user names and encrypted password
hashes.

To better protect this data, consider creating a dedicated repository and
{slm-init} policy for snapshots of the cluster state. This lets you strictly
limit and audit access to the repository.

For example, the following {slm-init} policy only backs up the cluster state.
The policy stores these snapshots in a dedicated repository.

[source,console]
----
PUT _slm/policy/nightly-cluster-state-snapshots
{
  "schedule": "0 30 2 * * ?",
  "name": "<nightly-cluster-state-snap-{now/d}>",
  "repository": "my_secure_repository",
  "config": {
    "include_global_state": true, <1>
    "indices": "-*" <2>
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
----
// TEST[s/my_secure_repository/my_repository/]

<1> Includes the cluster state. This also includes all feature states by
default.
<2> Excludes regular data streams and indices.

If you take dedicated snapshots of the cluster state, you’ll need to exclude the
cluster state and system indices from your other snapshots. For example:

[source,console]
----
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 2 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "include_global_state": false, <1>
    "indices": "*,-.*" <2>
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
----

<1> Excludes the cluster state. This also excludes all feature states by
default.
<2> Includes all data streams and indices except system indices and other
indices that begin with a dot (`.`).

[discrete]
[[create-snapshots-different-time-intervals]]
=== Create snapshots at different time intervals

If you only use a single {slm-init} policy, it can be difficult to take frequent
snapshots and retain snapshots with longer time intervals.

For example, a policy that takes snapshots every 30 minutes with a maximum of
100 snapshots will only keep snapshots for approximately two days. While this
setup is great for backing up recent changes, it doesn't let you restore data
from a previous week or month.

To fix this, you can create multiple {slm-init} policies with the same snapshot
repository that run on different schedules. Since a policy's retention rules
only apply to its snapshots, a policy won't delete a snapshot created by another
policy. However, you'll need to ensure the total number of snapshots in the
repository doesn't exceed the <<snapshot-retention-limits,{max-snapshot-count}
snapshot soft limit>>.

For example, the following {slm-init} policy takes hourly snapshots with a
maximum of 24 snapshots. The policy keeps its snapshots for one day.

[source,console]
----
PUT _slm/policy/hourly-snapshots
{
  "name": "<hourly-snapshot-{now/d}>",
  "schedule": "0 0 * * * ?",
  "repository": "my_repository",
  "config": {
    "indices": "*",
    "include_global_state": true
  },
  "retention": {
    "expire_after": "1d",
    "min_count": 1,
    "max_count": 24
  }
}
----

The following policy takes nightly snapshots in the same snapshot repository.
The policy keeps its snapshots for one month.

[source,console]
----
PUT _slm/policy/daily-snapshots
{
  "name": "<daily-snapshot-{now/d}>",
  "schedule": "0 45 23 * * ?", <1>
  "repository": "my_repository",
  "config": {
    "indices": "*",
    "include_global_state": true
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 1,
    "max_count": 31
  }
}
----

<1> Runs at 11:45 p.m. UTC every day.

The following policy creates monthly snapshots in the same repository. The
policy keeps its snapshots for one year.

[source,console]
----
PUT _slm/policy/monthly-snapshots
{
  "name": "<monthly-snapshot-{now/d}>",
  "schedule": "0 56 23 1 * ?", <1>
  "repository": "my_repository",
  "config": {
    "indices": "*",
    "include_global_state": true
  },
  "retention": {
    "expire_after": "366d",
    "min_count": 1,
    "max_count": 12
  }
}
----

<1> Runs on the first of the month at 11:56 p.m. UTC.
@ -9,13 +9,15 @@ path:
    - /mount/long_term_backups
----

After restarting each node, use the <<put-snapshot-repo-api,create or update
snapshot repository>> API to register the file system repository. Specify the
file system's path in `settings.location`:
// tag::register-fs-repo[]
After restarting each node, use {kib} or the <<put-snapshot-repo-api,create
snapshot repository API>> to register the repository. When registering the
repository, specify the file system's path:
// end::register-fs-repo[]

[source,console]
----
PUT /_snapshot/my_fs_backup
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
@ -26,12 +28,14 @@ PUT /_snapshot/my_fs_backup
----
// TEST[skip:no access to path]

If you specify a relative path in `settings.location`, {es} resolves the path
using the first value in the `path.repo` setting.
// tag::relative-path[]
If you specify a relative path, {es} resolves the path using the first value in
the `path.repo` setting.
// end::relative-path[]

[source,console]
----
PUT /_snapshot/my_fs_backup
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
@ -45,6 +49,27 @@ PUT /_snapshot/my_fs_backup
<1> The first value in the `path.repo` setting is `/mount/backups`. This
relative path, `my_fs_backup_location`, resolves to
`/mount/backups/my_fs_backup_location`.

include::{es-repo-dir}/snapshot-restore/register-repository.asciidoc[tag=multi-cluster-repo]

// tag::fs-repo-read-only[]
To register a file system repository as read-only using the create snapshot
repository API, set the `readonly` parameter to `true`. Alternatively, you can
register a <<snapshots-read-only-repository,URL repository>> for the file
system.
// end::fs-repo-read-only[]

[source,console]
----
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "my_fs_backup_location",
    "readonly": true
  }
}
----
// end::unix[]
@ -64,13 +89,11 @@ path:
<1> DOS path
<2> UNC path

After restarting each node, use the <<put-snapshot-repo-api,create or update
snapshot repository>> API to register the file system repository. Specify the
file system's path in `settings.location`:
include::register-fs-repo.asciidoc[tag=register-fs-repo]

[source,console]
----
PUT /_snapshot/my_fs_backup
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
@ -81,12 +104,11 @@ PUT /_snapshot/my_fs_backup
----
// TEST[skip:no access to path]

If you specify a relative path in `settings.location`, {es} resolves the path
using the first value in the `path.repo` setting.
include::register-fs-repo.asciidoc[tag=relative-path]

[source,console]
----
PUT /_snapshot/my_fs_backup
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
@ -100,4 +122,21 @@ PUT /_snapshot/my_fs_backup
<1> The first value in the `path.repo` setting is `E:\Mount\Backups`. This
relative path, `My_fs_backup_location`, resolves to
`E:\Mount\Backups\My_fs_backup_location`.
// end::win[]

include::{es-repo-dir}/snapshot-restore/register-repository.asciidoc[tag=multi-cluster-repo]

include::register-fs-repo.asciidoc[tag=fs-repo-read-only]

[source,console]
----
PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "my_fs_backup_location",
    "readonly": true
  }
}
----
// TEST[skip:no access to path]
// end::win[]