Provide tracing implementation using OpenTelemetry + APM agent (#88443)

Part of #84369. Implement the `Tracer` interface by providing a
module that uses OpenTelemetry, along with Elastic's APM
agent for Java.

See the file `TRACING.md` for background on the changes and the
reasoning for some of the implementation decisions.

The configuration mechanism is the most fiddly part of this PR. The
Security Manager permissions required by the APM Java agent make
it impractical to start the agent from within Elasticsearch
programmatically, so it must be configured when the ES JVM starts.
That means that the startup CLI needs to assemble the required JVM
options.

To complicate matters further, the APM agent needs a secret token
in order to ship traces to the APM server. We can't use Java system
properties to configure this, since the secret would then be
readable by all code in Elasticsearch. It therefore has to be
configured in a dedicated config file. This in itself is awkward,
since we don't want to leave secrets in config files. Therefore,
we pull the APM secret token from the keystore, write it to a config
file, then delete the config file after ES starts.

There's a further issue with the config file. Any options we set
in the APM agent config file cannot later be reconfigured via system
properties, so we need to make sure that only "static" configuration
goes into the config file.

I generated most of the files under `qa/apm` using an APM test
utility (I can't remember which one now, unfortunately). The goal
is to set up a complete system so that traces can be captured in
APM server, and the results inspected in Elasticsearch.
Rory Hunter 2022-08-03 14:13:31 +01:00 committed by GitHub
parent 30142ea1e0
commit 512bfebc10
40 changed files with 3046 additions and 94 deletions

TRACING.md Normal file

@ -0,0 +1,156 @@
# Tracing in Elasticsearch
Elasticsearch is instrumented using the [OpenTelemetry][otel] API, which allows
us to gather traces and analyze what Elasticsearch is doing.
## How is tracing implemented?
The Elasticsearch server code contains a [`tracing`][tracing] package, which is
an abstraction over the OpenTelemetry API. All locations in the code that
perform instrumentation and tracing must use these abstractions.
Separately, there is the [`apm`](./modules/apm/) module, which works with the
OpenTelemetry API directly to record trace data. Underneath the OTel API, we
use Elastic's [APM agent for Java][agent], which attaches at runtime to the
Elasticsearch JVM and removes the need for Elasticsearch to hard-code the use of
an OTel implementation. Note that while it is possible to programmatically start
the APM agent, the Security Manager permissions required make this essentially
impossible.
## How is tracing configured?
You must supply configuration and credentials for the APM server (see below).
You must also set `tracing.apm.enabled` to `true`, but this can be toggled at
runtime.
All APM settings live under `tracing.apm`. All settings related to the Java agent
go under `tracing.apm.agent`. Anything you set under there will be propagated to
the agent.
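
For example, a minimal `elasticsearch.yml` fragment might look like this (the URL and sample rate are illustrative values, not defaults):

```yaml
# Enable tracing (this particular setting can also be toggled at runtime)
tracing.apm.enabled: true

# Anything under tracing.apm.agent.* is passed straight to the APM Java agent
tracing.apm.agent.server_url: "https://apm.example.com:8200"
tracing.apm.agent.transaction_sample_rate: "0.2"
```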
For agent settings that can be changed dynamically, you can use the cluster
settings REST API. For example, to change the sampling rate:
```shell
curl -XPUT \
  -H "Content-type: application/json" \
  -u "$USERNAME:$PASSWORD" \
  -d '{ "persistent": { "tracing.apm.agent.transaction_sample_rate": "0.75" } }' \
  https://localhost:9200/_cluster/settings
```
### More details about configuration
For context, the APM agent pulls configuration from [multiple
sources][agent-config], with a hierarchy that means, for example, that options
set in the config file cannot be overridden via system properties.
Now, in order to send tracing data to the APM server, ES needs to be configured with
either a `secret_key` or an `api_key`. We could configure these in the agent via
system properties, but then their values would be available to any Java code in
Elasticsearch that can read system properties.
Instead, when Elasticsearch bootstraps itself, it compiles all APM settings
together, including any `secret_key` or `api_key` values from the ES keystore,
and writes out a temporary APM config file containing all static configuration
(i.e. values that cannot change after the agent starts). This file is deleted
as soon as possible after ES starts up. Settings that are not sensitive and can
be changed dynamically are configured via system properties. Calls to the ES
settings REST API are translated into system property writes, which the agent
later picks up and applies.
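
As a sketch, the generated temporary file might look something like this (the values are illustrative, not what any given node would actually write), while dynamic settings are instead passed as `-Delastic.apm.*` system properties:

```properties
# Automatically generated by Elasticsearch, do not edit!
secret_token=<value read from the tracing.apm.secret_token keystore entry>
service_name=elasticsearch
service_version=8.5.0
environment=dev
```

A dynamic setting such as `transaction_sample_rate` would instead appear on the JVM command line as `-Delastic.apm.transaction_sample_rate=0.75`.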
## Where is tracing data sent?
You need to have an APM server running somewhere. For example, you can create a
deployment in [Elastic Cloud](https://www.elastic.co/cloud/) with Elastic's APM
integration.
## What do we trace?
We primarily trace "tasks". The tasks framework in Elasticsearch allows work to
be scheduled for execution, cancelled, executed in a different thread pool, and
so on. Tracing a task results in a "span", which represents the execution of the
task in the tracing system. We also instrument REST requests, which are not (at
present) modelled by tasks.
A span can be associated with a parent span, which allows all spans in, for
example, a REST request to be grouped together. Spans can track work across
different Elasticsearch nodes.
Elasticsearch also supports distributed tracing via [W3C Trace Context][w3c]
headers. If clients of Elasticsearch send these headers with their requests,
then that data will be forwarded to the APM server in order to yield a trace
across systems.
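
For example, a client could pass a W3C-format `traceparent` header on any request (the ids below are the example values from the W3C specification, not real trace ids):

```shell
curl -H "traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01" \
  https://localhost:9200/_cluster/health
```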
In rare circumstances, it is possible to avoid tracing a task using
`TaskManager#register(String,String,TaskAwareRequest,boolean)`. For example,
Machine Learning uses tasks to record which models are loaded on each node. Such
tasks are long-lived and are not suitable candidates for APM tracing.
## Thread contexts and nested spans
When a span is started, Elasticsearch tracks information about that span in the
current [thread context][thread-context]. If a new thread context is created,
then the current span information must not be propagated directly but instead renamed, so
that (1) it doesn't interfere when new trace information is set in the context,
and (2) the previous trace information is available to establish a parent /
child span relationship. This is done with `ThreadContext#newTraceContext()`.
Sometimes we need to detach new spans from their parent. For example, creating
an index starts some related background tasks, but these shouldn't be associated
with the REST request, otherwise all the background task spans will be
associated with the REST request for as long as Elasticsearch is running.
`ThreadContext` provides the `clearTraceContext()` method for this purpose.
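
The rename-then-set idea can be illustrated with a self-contained sketch. The map-based context and key names below are illustrative only; Elasticsearch's real `ThreadContext` is considerably more involved:

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the "rename, don't propagate" idea behind
// ThreadContext#newTraceContext(). All names here are illustrative,
// not Elasticsearch's actual API.
public class TraceContextDemo {
    /** Creates a child context, moving the current trace id aside under a "parent." prefix. */
    static Map<String, String> newTraceContext(Map<String, String> current) {
        Map<String, String> child = new HashMap<>();
        String traceId = current.get("trace.id");
        if (traceId != null) {
            // (1) the "trace.id" slot is now free for the child's own span, and
            // (2) the parent's id remains available to establish the parent/child link.
            child.put("parent.trace.id", traceId);
        }
        return child;
    }

    public static void main(String[] args) {
        Map<String, String> parent = Map.of("trace.id", "abc123");
        Map<String, String> child = newTraceContext(parent);
        System.out.println(child); // the parent id survives under the renamed key
    }
}
```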
## How do I trace something that isn't a task?
First work out if you can turn it into a task. No, really.
If you can't do that, you'll need to ensure that your class can get access to a
`Tracer` instance (this is available to inject, or you'll need to pass it when
your class is created). Then you need to call the appropriate methods on the
tracer when a span should start and end. You'll also need to manage the creation
of new trace contexts when child spans need to be created.
## What additional attributes should I set?
That's up to you. Be careful not to capture anything that could leak sensitive
or personal information.
## What is "scope" and when should I use it?
Usually you won't need to.
That said, sometimes you may want more details to be captured about a particular
section of code. You can think of "scope" as representing the currently active
tracing context. Using scope allows the APM agent to do the following:
* Automatically correlate the "active span" with logging, where logs have also
  been captured.
* Capture any exceptions thrown while the span is active, and link those
  exceptions to the span.
* Link samples from the sampling profiler to the active span (if any), so that
  the agent can automatically generate extra spans without manual
  instrumentation.
However, a scope must be closed in the same thread in which it was opened, which
cannot be guaranteed when using tasks, making scope largely useless to
Elasticsearch.
In the OpenTelemetry documentation, spans, scope and context are fairly
straightforward to use, since `Scope` is an `AutoCloseable` and so can be
easily created and cleaned up using try-with-resources blocks. Unfortunately,
Elasticsearch is a complex piece of software, and also extremely asynchronous,
so the typical OpenTelemetry examples do not work.
Nonetheless, it is possible to manually use scope where we need more detail by
explicitly opening a scope via the `Tracer`.
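
The try-with-resources shape from the OpenTelemetry docs looks roughly like this. This is a self-contained sketch in which `Scope` is modelled by a plain `AutoCloseable` and a `ThreadLocal`; it is not the real `io.opentelemetry` API:

```java
// Sketch of the try-with-resources scope pattern. The ThreadLocal stands in
// for the tracing context that a real Scope manipulates.
public class ScopeDemo {
    static final ThreadLocal<String> ACTIVE_SPAN = new ThreadLocal<>();

    /** Makes a span "current" for the calling thread; closing restores the previous one. */
    static AutoCloseable makeCurrent(String spanId) {
        final String previous = ACTIVE_SPAN.get();
        ACTIVE_SPAN.set(spanId);
        return () -> ACTIVE_SPAN.set(previous);
    }

    public static void main(String[] args) throws Exception {
        try (AutoCloseable scope = makeCurrent("span-1")) {
            // While the scope is open, logging and exceptions on this thread
            // can be correlated with the active span.
            System.out.println("active: " + ACTIVE_SPAN.get());
        }
        // The scope must be closed on the same thread that opened it, which is
        // exactly the guarantee Elasticsearch's task framework cannot make.
        System.out.println("active after close: " + ACTIVE_SPAN.get());
    }
}
```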
[otel]: https://opentelemetry.io/
[thread-context]: ./server/src/main/java/org/elasticsearch/common/util/concurrent/ThreadContext.java
[w3c]: https://www.w3.org/TR/trace-context/
[tracing]: ./server/src/main/java/org/elasticsearch/tracing/
[config]: ./modules/apm/src/main/config/elasticapm.properties
[agent-config]: https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html
[agent]: https://www.elastic.co/guide/en/apm/agent/java/current/index.html


@ -0,0 +1,290 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.server.cli;
import org.elasticsearch.Build;
import org.elasticsearch.Version;
import org.elasticsearch.cli.ExitCodes;
import org.elasticsearch.cli.UserException;
import org.elasticsearch.common.settings.KeyStoreWrapper;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.core.Nullable;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
/**
* This class is responsible for working out if APM tracing is configured and if so, preparing
* a temporary config file for the APM Java agent and CLI options to the JVM to configure APM.
* APM doesn't need to be enabled, as that can be toggled at runtime, but some
* configuration, e.g. the server URL and secret token, can only be provided when Elasticsearch starts.
*/
class APMJvmOptions {
/**
* Contains agent configuration that must always be applied, and cannot be overridden.
*/
// tag::noformat
private static final Map<String, String> STATIC_CONFIG = Map.of(
// Identifies the version of Elasticsearch in the captured trace data.
"service_version", Version.CURRENT.toString(),
// Configures a log file to write to. `_AGENT_HOME_` is a placeholder used
// by the agent. Don't disable writing to a log file, as the agent will then
// require extra Security Manager permissions when it tries to do something
// else, and it's just painful.
"log_file", "_AGENT_HOME_/../../logs/apm.log",
// ES does not use auto-instrumentation.
"instrument", "false"
);
/**
* Contains default configuration that will be used unless overridden by explicit configuration.
*/
private static final Map<String, String> CONFIG_DEFAULTS = Map.of(
// This is used to keep all the errors and transactions of a service
// together and is the primary filter in the Elastic APM user interface.
//
// You can optionally also set `service_node_name`, which is used to
// distinguish between different nodes of a service, therefore it should
// be unique for each JVM within a service. If not set, data
// aggregations will be done based on a container ID (where valid) or on
// the reported hostname (automatically discovered or manually
// configured through hostname). However, if this node's `node.name` is
// set, then that value is used for the `service_node_name`.
"service_name", "elasticsearch",
// An arbitrary string that identifies this deployment environment. For
// example, "dev", "staging" or "prod". Can be anything you like, but must
// have the same value across different systems in the same deployment
// environment.
"environment", "dev",
// Logging configuration. Unless you need detailed logs about what the APM
// is doing, leave this value alone.
"log_level", "error",
"application_packages", "org.elasticsearch,org.apache.lucene",
"metrics_interval", "120s",
"breakdown_metrics", "false",
"central_config", "false"
);
// end::noformat
/**
* Lists all APM configuration keys that are not dynamic and must be configured via the config file.
*/
private static final List<String> STATIC_AGENT_KEYS = List.of(
"api_key",
"aws_lambda_handler",
"breakdown_metrics",
"classes_excluded_from_instrumentation",
"cloud_provider",
"data_flush_timeout",
"disable_metrics",
"disable_send",
"enabled",
"enable_public_api_annotation_inheritance",
"environment",
"global_labels",
"hostname",
"include_process_args",
"log_ecs_formatter_allow_list",
"log_ecs_reformatting_additional_fields",
"log_ecs_reformatting_dir",
"log_file",
"log_file_size",
"log_format_file",
"log_format_sout",
"max_queue_size",
"metrics_interval",
"plugins_dir",
"profiling_inferred_spans_lib_directory",
"secret_token",
"service_name",
"service_node_name",
"service_version",
"stress_monitoring_interval",
"trace_methods_duration_threshold",
"use_jaxrs_path_as_transaction_name",
"verify_server_cert"
);
/**
* This method works out if APM tracing is enabled, and if so, prepares a temporary config file
* for the APM Java agent and CLI options to the JVM to configure APM. The config file is temporary
* because it will be deleted once Elasticsearch starts.
*
* @param settings the Elasticsearch settings to consider
* @param keystore a wrapper to access the keystore, or null if there is no keystore
* @param tmpdir Elasticsearch's temporary directory, where the config file will be written
*/
static List<String> apmJvmOptions(Settings settings, @Nullable KeyStoreWrapper keystore, Path tmpdir) throws UserException,
IOException {
final Path agentJar = findAgentJar();
if (agentJar == null) {
return List.of();
}
final Map<String, String> propertiesMap = extractApmSettings(settings);
// No point doing anything if we don't have a destination for the trace data, and it can't be configured dynamically
if (propertiesMap.containsKey("server_url") == false && propertiesMap.containsKey("server_urls") == false) {
return List.of();
}
if (propertiesMap.containsKey("service_node_name") == false) {
final String nodeName = settings.get("node.name");
if (nodeName != null) {
propertiesMap.put("service_node_name", nodeName);
}
}
if (keystore != null) {
extractSecureSettings(keystore, propertiesMap);
}
final Map<String, String> dynamicSettings = extractDynamicSettings(propertiesMap);
final Path tmpProperties = writeApmProperties(tmpdir, propertiesMap);
final List<String> options = new ArrayList<>();
// Use an agent argument to specify the config file instead of e.g. `-Delastic.apm.config_file=...`
// because then the agent won't try to reload the file, and we can remove it after startup.
options.add("-javaagent:" + agentJar + "=c=" + tmpProperties);
dynamicSettings.forEach((key, value) -> options.add("-Delastic.apm." + key + "=" + value));
return options;
}
private static void extractSecureSettings(KeyStoreWrapper keystore, Map<String, String> propertiesMap) {
final Set<String> settingNames = keystore.getSettingNames();
for (String key : List.of("api_key", "secret_token")) {
if (settingNames.contains("tracing.apm." + key)) {
try (SecureString token = keystore.getString("tracing.apm." + key)) {
propertiesMap.put(key, token.toString());
}
}
}
}
/**
* Removes settings that can be changed dynamically at runtime from the supplied map, and returns
* those settings in a new map.
*/
private static Map<String, String> extractDynamicSettings(Map<String, String> propertiesMap) {
final Map<String, String> cliOptionsMap = new HashMap<>();
final Iterator<Map.Entry<String, String>> propertiesIterator = propertiesMap.entrySet().iterator();
while (propertiesIterator.hasNext()) {
final Map.Entry<String, String> entry = propertiesIterator.next();
if (STATIC_AGENT_KEYS.contains(entry.getKey()) == false) {
propertiesIterator.remove();
cliOptionsMap.put(entry.getKey(), entry.getValue());
}
}
return cliOptionsMap;
}
private static Map<String, String> extractApmSettings(Settings settings) throws UserException {
final Map<String, String> propertiesMap = new HashMap<>();
final Settings agentSettings = settings.getByPrefix("tracing.apm.agent.");
agentSettings.keySet().forEach(key -> propertiesMap.put(key, String.valueOf(agentSettings.get(key))));
// These settings must not be changed
for (String key : STATIC_CONFIG.keySet()) {
if (propertiesMap.containsKey(key)) {
throw new UserException(
ExitCodes.CONFIG,
"Do not set a value for [tracing.apm.agent." + key + "], as this is configured automatically by Elasticsearch"
);
}
}
CONFIG_DEFAULTS.forEach(propertiesMap::putIfAbsent);
propertiesMap.putAll(STATIC_CONFIG);
return propertiesMap;
}
/**
* Writes a Java properties file with data from the supplied map to a temporary config file, and returns
* the file that was created.
*
* @param tmpdir the directory for the file
* @param propertiesMap the data to write
* @return the file that was created
* @throws IOException if writing the file fails
*/
private static Path writeApmProperties(Path tmpdir, Map<String, String> propertiesMap) throws IOException {
final Properties p = new Properties();
p.putAll(propertiesMap);
final Path tmpFile = Files.createTempFile(tmpdir, ".elstcapm.", ".tmp");
try (OutputStream os = Files.newOutputStream(tmpFile)) {
p.store(os, " Automatically generated by Elasticsearch, do not edit!");
}
return tmpFile;
}
/**
* The JVM argument that configures the APM agent needs to specify the agent jar path, so this method
* finds the jar by inspecting the filesystem.
* @return the agent jar file
* @throws IOException if a problem occurs reading the filesystem
*/
@Nullable
private static Path findAgentJar() throws IOException, UserException {
final Path apmModule = Path.of(System.getProperty("user.dir")).resolve("modules/apm");
if (Files.notExists(apmModule)) {
if (Build.CURRENT.isProductionRelease()) {
throw new UserException(
ExitCodes.CODE_ERROR,
"Expected to find [apm] module in [" + apmModule + "]! Installation is corrupt"
);
}
return null;
}
try (var apmStream = Files.list(apmModule)) {
final List<Path> paths = apmStream.filter(
path -> path.getFileName().toString().matches("elastic-apm-agent-\\d+\\.\\d+\\.\\d+\\.jar")
).toList();
if (paths.size() > 1) {
throw new UserException(
ExitCodes.CODE_ERROR,
"Found multiple [elastic-apm-agent] jars under [" + apmModule + "]! Installation is corrupt."
);
}
if (paths.isEmpty()) {
throw new UserException(
ExitCodes.CODE_ERROR,
"Found no [elastic-apm-agent] jar under [" + apmModule + "]! Installation is corrupt."
);
}
return paths.get(0);
}
}
}


@ -8,8 +8,10 @@
package org.elasticsearch.server.cli;
import org.elasticsearch.bootstrap.ServerArgs;
import org.elasticsearch.cli.ExitCodes;
import org.elasticsearch.cli.UserException;
import org.elasticsearch.common.settings.KeyStoreWrapper;
import java.io.BufferedReader;
import java.io.IOException;
@ -68,16 +70,17 @@ final class JvmOptionsParser {
* files in the {@code jvm.options.d} directory, and the options given by the {@code ES_JAVA_OPTS} environment
* variable.
*
* @param configDir the ES config dir
* @param tmpDir the directory that should be passed to {@code -Djava.io.tmpdir}
* @param envOptions the options passed through the ES_JAVA_OPTS env var
* @param keystore the installation's keystore
* @param configDir the ES config dir
* @param tmpDir the directory that should be passed to {@code -Djava.io.tmpdir}
* @param envOptions the options passed through the ES_JAVA_OPTS env var
* @return the list of options to put on the Java command line
* @throws InterruptedException if the java subprocess is interrupted
* @throws IOException if there is a problem reading any of the files
* @throws UserException if there is a problem parsing the `jvm.options` file or `jvm.options.d` files
* @throws IOException if there is a problem reading any of the files
* @throws UserException if there is a problem parsing the `jvm.options` file or `jvm.options.d` files
*/
static List<String> determineJvmOptions(Path configDir, Path tmpDir, String envOptions) throws InterruptedException, IOException,
UserException {
static List<String> determineJvmOptions(ServerArgs args, KeyStoreWrapper keystore, Path configDir, Path tmpDir, String envOptions)
throws InterruptedException, IOException, UserException {
final JvmOptionsParser parser = new JvmOptionsParser();
@ -86,7 +89,7 @@ final class JvmOptionsParser {
substitutions.put("ES_PATH_CONF", configDir.toString());
try {
return parser.jvmOptions(configDir, envOptions, substitutions);
return parser.jvmOptions(args, keystore, configDir, tmpDir, envOptions, substitutions);
} catch (final JvmOptionsFileParserException e) {
final String errorMessage = String.format(
Locale.ROOT,
@ -115,8 +118,14 @@ final class JvmOptionsParser {
}
}
private List<String> jvmOptions(final Path config, final String esJavaOpts, final Map<String, String> substitutions)
throws InterruptedException, IOException, JvmOptionsFileParserException {
private List<String> jvmOptions(
ServerArgs args,
KeyStoreWrapper keystore,
final Path config,
Path tmpDir,
final String esJavaOpts,
final Map<String, String> substitutions
) throws InterruptedException, IOException, JvmOptionsFileParserException, UserException {
final List<String> jvmOptions = readJvmOptionsFiles(config);
@ -132,12 +141,15 @@ final class JvmOptionsParser {
final List<String> ergonomicJvmOptions = JvmErgonomics.choose(substitutedJvmOptions);
final List<String> systemJvmOptions = SystemJvmOptions.systemJvmOptions();
final List<String> apmOptions = APMJvmOptions.apmJvmOptions(args.nodeSettings(), keystore, tmpDir);
final List<String> finalJvmOptions = new ArrayList<>(
systemJvmOptions.size() + substitutedJvmOptions.size() + ergonomicJvmOptions.size()
systemJvmOptions.size() + substitutedJvmOptions.size() + ergonomicJvmOptions.size() + apmOptions.size()
);
finalJvmOptions.addAll(systemJvmOptions); // add the system JVM options first so that they can be overridden
finalJvmOptions.addAll(substitutedJvmOptions);
finalJvmOptions.addAll(ergonomicJvmOptions);
finalJvmOptions.addAll(apmOptions);
return finalJvmOptions;
}


@ -27,7 +27,6 @@ import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.env.Environment;
import org.elasticsearch.monitor.jvm.JvmInfo;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
@ -76,15 +75,21 @@ class ServerCli extends EnvironmentAwareCommand {
validateConfig(options, env);
// setup security
final SecureString keystorePassword = getKeystorePassword(env.configFile(), terminal);
env = autoConfigureSecurity(terminal, options, processInfo, env, keystorePassword);
try (KeyStoreWrapper keystore = KeyStoreWrapper.load(env.configFile())) {
// setup security
final SecureString keystorePassword = getKeystorePassword(keystore, terminal);
env = autoConfigureSecurity(terminal, options, processInfo, env, keystorePassword);
// install/remove plugins from elasticsearch-plugins.yml
syncPlugins(terminal, env, processInfo);
if (keystore != null) {
keystore.decrypt(keystorePassword.getChars());
}
ServerArgs args = createArgs(options, env, keystorePassword, processInfo);
this.server = startServer(terminal, processInfo, args);
// install/remove plugins from elasticsearch-plugins.yml
syncPlugins(terminal, env, processInfo);
ServerArgs args = createArgs(options, env, keystorePassword, processInfo);
this.server = startServer(terminal, processInfo, args, keystore);
}
if (options.has(daemonizeOption)) {
server.detach();
@ -122,13 +127,11 @@ class ServerCli extends EnvironmentAwareCommand {
}
}
private static SecureString getKeystorePassword(Path configDir, Terminal terminal) throws IOException {
try (KeyStoreWrapper keystore = KeyStoreWrapper.load(configDir)) {
if (keystore != null && keystore.hasPassword()) {
return new SecureString(terminal.readSecret(KeyStoreWrapper.PROMPT));
} else {
return new SecureString(new char[0]);
}
private static SecureString getKeystorePassword(KeyStoreWrapper keystore, Terminal terminal) {
if (keystore != null && keystore.hasPassword()) {
return new SecureString(terminal.readSecret(KeyStoreWrapper.PROMPT));
} else {
return new SecureString(new char[0]);
}
}
@ -226,7 +229,8 @@ class ServerCli extends EnvironmentAwareCommand {
}
// protected to allow tests to override
protected ServerProcess startServer(Terminal terminal, ProcessInfo processInfo, ServerArgs args) throws UserException {
return ServerProcess.start(terminal, processInfo, args);
protected ServerProcess startServer(Terminal terminal, ProcessInfo processInfo, ServerArgs args, KeyStoreWrapper keystore)
throws UserException {
return ServerProcess.start(terminal, processInfo, args, keystore);
}
}


@ -15,6 +15,7 @@ import org.elasticsearch.cli.ProcessInfo;
import org.elasticsearch.cli.Terminal;
import org.elasticsearch.cli.UserException;
import org.elasticsearch.common.io.stream.OutputStreamStreamOutput;
import org.elasticsearch.common.settings.KeyStoreWrapper;
import org.elasticsearch.core.IOUtils;
import org.elasticsearch.core.PathUtils;
import org.elasticsearch.core.SuppressForbidden;
@ -36,7 +37,7 @@ import static org.elasticsearch.server.cli.ProcessUtil.nonInterruptible;
/**
* A helper to control a {@link Process} running the main Elasticsearch server.
*
* <p> The process can be started by calling {@link #start(Terminal, ProcessInfo, ServerArgs)}.
* <p> The process can be started by calling {@link #start(Terminal, ProcessInfo, ServerArgs, KeyStoreWrapper)}.
* The process is controlled by internally sending arguments and control signals on stdin,
* and receiving control signals on stderr. The start method does not return until the
* server is ready to process requests and has exited the bootstrap thread.
@ -66,7 +67,8 @@ public class ServerProcess {
// this allows mocking the process building by tests
interface OptionsBuilder {
List<String> getJvmOptions(Path configDir, Path tmpDir, String envOptions) throws InterruptedException, IOException, UserException;
List<String> getJvmOptions(ServerArgs args, KeyStoreWrapper keyStoreWrapper, Path configDir, Path tmpDir, String envOptions)
throws InterruptedException, IOException, UserException;
}
// this allows mocking the process building by tests
@ -77,14 +79,16 @@ public class ServerProcess {
/**
* Start a server in a new process.
*
* @param terminal A terminal to connect the standard inputs and outputs to for the new process.
* @param processInfo Info about the current process, for passing through to the subprocess.
* @param args Arguments to the server process.
* @param terminal A terminal to connect the standard inputs and outputs to for the new process.
* @param processInfo Info about the current process, for passing through to the subprocess.
* @param args Arguments to the server process.
* @param keystore A keystore for accessing secrets.
* @return A running server process that is ready for requests
* @throws UserException If the process failed during bootstrap
*/
public static ServerProcess start(Terminal terminal, ProcessInfo processInfo, ServerArgs args) throws UserException {
return start(terminal, processInfo, args, JvmOptionsParser::determineJvmOptions, ProcessBuilder::start);
public static ServerProcess start(Terminal terminal, ProcessInfo processInfo, ServerArgs args, KeyStoreWrapper keystore)
throws UserException {
return start(terminal, processInfo, args, keystore, JvmOptionsParser::determineJvmOptions, ProcessBuilder::start);
}
// package private so tests can mock options building and process starting
@ -92,6 +96,7 @@ public class ServerProcess {
Terminal terminal,
ProcessInfo processInfo,
ServerArgs args,
KeyStoreWrapper keystore,
OptionsBuilder optionsBuilder,
ProcessStarter processStarter
) throws UserException {
@ -100,7 +105,7 @@ public class ServerProcess {
boolean success = false;
try {
jvmProcess = createProcess(processInfo, args.configDir(), optionsBuilder, processStarter);
jvmProcess = createProcess(args, keystore, processInfo, args.configDir(), optionsBuilder, processStarter);
errorPump = new ErrorPumpThread(terminal.getErrorWriter(), jvmProcess.getErrorStream());
errorPump.start();
sendArgs(args, jvmProcess.getOutputStream());
@ -193,6 +198,8 @@ public class ServerProcess {
}
private static Process createProcess(
ServerArgs args,
KeyStoreWrapper keystore,
ProcessInfo processInfo,
Path configDir,
OptionsBuilder optionsBuilder,
@ -204,7 +211,7 @@ public class ServerProcess {
envVars.put("LIBFFI_TMPDIR", tempDir.toString());
}
List<String> jvmOptions = optionsBuilder.getJvmOptions(configDir, tempDir, envVars.remove("ES_JAVA_OPTS"));
List<String> jvmOptions = optionsBuilder.getJvmOptions(args, keystore, configDir, tempDir, envVars.remove("ES_JAVA_OPTS"));
// also pass through distribution type
jvmOptions.add("-Des.distribution.type=" + processInfo.sysprops().get("es.distribution.type"));


@ -436,7 +436,7 @@ public class ServerCliTests extends CommandTestCase {
}
@Override
protected ServerProcess startServer(Terminal terminal, ProcessInfo processInfo, ServerArgs args) {
protected ServerProcess startServer(Terminal terminal, ProcessInfo processInfo, ServerArgs args, KeyStoreWrapper keystore) {
if (argsValidator != null) {
argsValidator.accept(args);
}


@ -92,7 +92,7 @@ public class ServerProcessTests extends ESTestCase {
envVars.clear();
esHomeDir = createTempDir();
nodeSettings = Settings.builder();
optionsBuilder = (configDir, tmpDir, envOptions) -> new ArrayList<>();
optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> new ArrayList<>();
processValidator = null;
mainCallback = null;
}
@ -201,7 +201,7 @@ public class ServerProcessTests extends ESTestCase {
process = new MockElasticsearchProcess();
return process;
};
return ServerProcess.start(terminal, pinfo, args, optionsBuilder, starter);
return ServerProcess.start(terminal, pinfo, args, null, optionsBuilder, starter);
}
public void testProcessBuilder() throws Exception {
@ -253,7 +253,9 @@ public class ServerProcessTests extends ESTestCase {
}
public void testOptionsBuildingInterrupted() throws Exception {
optionsBuilder = (configDir, tmpDir, envOptions) -> { throw new InterruptedException("interrupted while get jvm options"); };
optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> {
throw new InterruptedException("interrupted while get jvm options");
};
var e = expectThrows(RuntimeException.class, () -> runForeground());
assertThat(e.getCause().getMessage(), equalTo("interrupted while get jvm options"));
}
@ -277,7 +279,7 @@ public class ServerProcessTests extends ESTestCase {
}
public void testTempDir() throws Exception {
optionsBuilder = (configDir, tmpDir, envOptions) -> {
optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> {
assertThat(tmpDir.toString(), Files.exists(tmpDir), is(true));
assertThat(tmpDir.getFileName().toString(), startsWith("elasticsearch-"));
return new ArrayList<>();
@ -289,7 +291,7 @@ public class ServerProcessTests extends ESTestCase {
Path baseTmpDir = createTempDir();
sysprops.put("os.name", "Windows 10");
sysprops.put("java.io.tmpdir", baseTmpDir.toString());
optionsBuilder = (configDir, tmpDir, envOptions) -> {
optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> {
assertThat(tmpDir.toString(), Files.exists(tmpDir), is(true));
assertThat(tmpDir.getFileName().toString(), equalTo("elasticsearch"));
assertThat(tmpDir.getParent().toString(), equalTo(baseTmpDir.toString()));
@@ -301,7 +303,7 @@
public void testTempDirOverride() throws Exception {
Path customTmpDir = createTempDir();
envVars.put("ES_TMPDIR", customTmpDir.toString());
-optionsBuilder = (configDir, tmpDir, envOptions) -> {
+optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> {
assertThat(tmpDir.toString(), equalTo(customTmpDir.toString()));
return new ArrayList<>();
};
@@ -327,7 +329,7 @@
public void testCustomJvmOptions() throws Exception {
envVars.put("ES_JAVA_OPTS", "-Dmyoption=foo");
-optionsBuilder = (configDir, tmpDir, envOptions) -> {
+optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> {
assertThat(envOptions, equalTo("-Dmyoption=foo"));
return new ArrayList<>();
};
@@ -336,7 +338,7 @@
}
public void testCommandLineSysprops() throws Exception {
-optionsBuilder = (configDir, tmpDir, envOptions) -> List.of("-Dfoo1=bar", "-Dfoo2=baz");
+optionsBuilder = (args, keystore, configDir, tmpDir, envOptions) -> List.of("-Dfoo1=bar", "-Dfoo2=baz");
processValidator = pb -> {
assertThat(pb.command(), contains("-Dfoo1=bar"));
assertThat(pb.command(), contains("-Dfoo2=baz"));


@@ -35,7 +35,7 @@ class WindowsServiceDaemon extends EnvironmentAwareCommand {
@Override
public void execute(Terminal terminal, OptionSet options, Environment env, ProcessInfo processInfo) throws Exception {
var args = new ServerArgs(false, true, null, new SecureString(""), env.settings(), env.configFile());
-this.server = ServerProcess.start(terminal, processInfo, args);
+this.server = ServerProcess.start(terminal, processInfo, args, null);
// start does not return until the server is ready, and we do not wait for the process
}


@@ -0,0 +1,6 @@
pr: 88443
summary: Provide tracing implementation using OpenTelemetry and APM Java agent
area: Infra/Core
type: feature
issues:
- 84369

modules/apm/build.gradle

@@ -0,0 +1,24 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0; you may not use this file except in compliance with the Elastic License
* 2.0.
*/
apply plugin: 'elasticsearch.internal-es-plugin'
esplugin {
name 'apm'
description 'Provides APM integration for Elasticsearch'
classname 'org.elasticsearch.tracing.apm.APM'
}
dependencies {
implementation "io.opentelemetry:opentelemetry-api:1.15.0"
implementation "io.opentelemetry:opentelemetry-context:1.15.0"
implementation "io.opentelemetry:opentelemetry-semconv:1.15.0-alpha"
runtimeOnly "co.elastic.apm:elastic-apm-agent:1.33.0"
}
tasks.named("dependencyLicenses").configure {
mapping from: /opentelemetry-.*/, to: 'opentelemetry'
}


@@ -0,0 +1 @@
2a36d1338fde40250a6c0ffe9bfc8cf96a3fd962


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Elastic and contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,465 @@
Elastic APM Java Agent
Copyright 2018-2022 Elasticsearch B.V.
###############################################################################
This product includes software licensed under the Apache License 2.0 from the
following sources:
- stagemonitor - Copyright 2014-2017 iSYS Software GmbH
- micrometer
- https://github.com/raphw/weak-lock-free
- openzipkin/brave
- LMAX Disruptor - Copyright 2011 LMAX Ltd.
- Byte Buddy (https://bytebuddy.net) - Copyright Rafael Winterhalter
- JCTools
- https://github.com/jvm-profiling-tools/async-profiler
- https://github.com/real-logic/agrona
- Apache Log4j 2 - https://logging.apache.org/log4j/2.x/license.html
------------------------------------------------------------------------------
stagemonitor NOTICE
stagemonitor
Copyright 2014-2017 iSYS Software GmbH
This product includes software developed at
iSYS Software GmbH (http://www.isys-software.de/).
This product bundles jQuery treetable 3.2.0, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/jquery-treetable/jquery.treetable.js
This product bundles Twitter Bootstrap 3.3.2, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/bootstrap/bootstrap.min.css
This product bundles typeahead.js-bootstrap3.less, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/typeahead.css
This product bundles typeahead.js, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/typeahead.jquery.min.js
This product bundles Handlebars 1.3.0, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/handlebars.min.js
This product bundles jQuery 1.11.1, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/jquery.1.11.1.min.js
This product bundles jQuery serializeObject, which is available under the "BSD" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/jquery.serialize-object.min.js
This product bundles Bootstrap Growl 2.0.1, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/bootstrap/bootstrap-growl.min.js
This product bundles Animate.css, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/animate/animate.min.css
This product native sigar bindings, which are available under the "Apache 2.0" license. For details, see
stagemonitor-os/src/main/resources/sigar
This product bundles DataTables 1.10.3, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/datatables/jquery.dataTables.min.js
This product bundles Flot, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/flot/jquery.flot.min.js
This product bundles Flot, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/flot/jquery.flot.resize.min.js
This product bundles Flot stack plugin, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/flot/jquery.flot.stack.original.js
This product bundles Flot time plugin, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/flot/jquery.flot.time.min.js
This product bundles Flot tooltip plugin, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/stagemonitor/static/flot/jquery.flot.tooltip.min.js
This product bundles weasel, which is available under the "MIT" license. For details, see
stagemonitor-web-servlet/src/main/resources/eum.debug.js
This product includes code derived from the Metrics project, which is available under the "Apache 2.0" license. For details, see
stagemonitor-web-servlet/src/main/java/org/stagemonitor/web/metrics/StagemonitorMetricsServlet.java
This product includes code derived from https://github.com/raphw/weak-lock-free/, which is available under the "Apache 2.0" license. For details, see
stagemonitor-core/src/main/java/org/stagemonitor/core/instrument/WeakConcurrentMap.java
This product includes code derived from https://github.com/prometheus/client_java/blob/master/simpleclient_dropwizard/src/main/java/io/prometheus/client/dropwizard/DropwizardExports.java, which is available under the "Apache 2.0" license. For details, see
stagemonitor-core/src/main/java/org/stagemonitor/core/metrics/prometheus/StagemonitorPrometheusCollector.java
This product includes code from https://github.com/uber/jaeger-client-java, which is available under the "MIT" license.
stagemonitor/stagemonitor-tracing/src/main/java/org/stagemonitor/tracing/utils/RateLimiter.java
This product includes code derived from Google Guava, which is available under the "Apache 2.0" license. For details, see
stagemonitor-core/src/main/java/org/stagemonitor/core/util/InetAddresses.java
stagemonitor-core/src/main/java/org/stagemonitor/core/util/Ints.java
stagemonitor-core/src/main/java/org/stagemonitor/core/util/Assert.java
This product includes code from Spring Framework, which is available under the "Apache 2.0" license. For details, see
stagemonitor-web-servlet/src/main/java/org/stagemonitor/web/servlet/util/AntPathMatcher.java
------------------------------------------------------------------------------
async-profiler NOTICE
async-profiler
Copyright 2018 - 2020 Andrei Pangin
This product includes software licensed under CDDL 1.0 from the
following sources:
This product includes a specialized C++ port of the FlameGraph script, licensed under CDDL, available at
https://github.com/brendangregg/FlameGraph/blob/master/flamegraph.pl
Copyright 2016 Netflix, Inc.
Copyright 2011 Joyent, Inc. All rights reserved.
Copyright 2011 Brendan Gregg. All rights reserved.
CDDL HEADER START
The contents of this file are subject to the terms of the
Common Development and Distribution License (the "License").
You may not use this file except in compliance with the License.
You can obtain a copy of the license at docs/cddl1.txt or
http://opensource.org/licenses/CDDL-1.0.
See the License for the specific language governing permissions
and limitations under the License.
When distributing Covered Code, include this CDDL HEADER in each
file and include the License file at docs/cddl1.txt.
If applicable, add the following below this CDDL HEADER, with the
fields enclosed by brackets "[]" replaced with your own identifying
information: Portions Copyright [yyyy] [name of copyright owner]
CDDL HEADER END
------------------------------------------------------------------------------
Apache Log4j NOTICE
Apache Log4j
Copyright 1999-2021 Apache Software Foundation
This product includes software developed at
The Apache Software Foundation (http://www.apache.org/).
ResolverUtil.java
Copyright 2005-2006 Tim Fennell
Dumbster SMTP test server
Copyright 2004 Jason Paul Kitchen
TypeUtil.java
Copyright 2002-2012 Ramnivas Laddad, Juergen Hoeller, Chris Beams
picocli (http://picocli.info)
Copyright 2017 Remko Popma
------------------------------------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
###############################################################################
This product includes code from https://github.com/ngs-doo/dsl-json,
under The BSD 3-Clause License:
Copyright (c) 2015, Nova Generacija Softvera d.o.o.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of Nova Generacija Softvera d.o.o. nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
###############################################################################
This product includes code from slf4j, under MIT License.
It also includes code that is based on some slf4j interfaces.
Copyright (c) 2004-2011 QOS.ch
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###############################################################################
This product includes code from HdrHistogram, dual-licensed under CC0 and BSD 2-Clause License
The code in this repository code was Written by Gil Tene, Michael Barker,
and Matt Warren, and released to the public domain, as explained at
http://creativecommons.org/publicdomain/zero/1.0/
For users of this code who wish to consume it under the "BSD" license
rather than under the public domain or CC0 contribution text mentioned
above, the code found under this directory is *also* provided under the
following license (commonly referred to as the BSD 2-Clause License). This
license does not detract from the above stated release of the code into
the public domain, and simply represents an additional license granted by
the Author.
-----------------------------------------------------------------------------
** Beginning of "BSD 2-Clause License" text. **
Copyright (c) 2012, 2013, 2014, 2015, 2016 Gil Tene
Copyright (c) 2014 Michael Barker
Copyright (c) 2014 Matt Warren
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
###############################################################################


@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@ -0,0 +1 @@
549bf64119e092bb9917e73601e9ebb508b5a2e3


@ -0,0 +1 @@
4401a67f7aef7af786012965408ecb5172f19e2f


@ -0,0 +1 @@
ffbb3697b70da736b72bd55ed45005049dc8df54


@ -0,0 +1,107 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.tracing.apm;
import org.apache.lucene.util.SetOnce;
import org.elasticsearch.client.internal.Client;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.io.stream.NamedWriteableRegistry;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.env.NodeEnvironment;
import org.elasticsearch.plugins.NetworkPlugin;
import org.elasticsearch.plugins.Plugin;
import org.elasticsearch.plugins.TracerPlugin;
import org.elasticsearch.repositories.RepositoriesService;
import org.elasticsearch.script.ScriptService;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.tracing.Tracer;
import org.elasticsearch.watcher.ResourceWatcherService;
import org.elasticsearch.xcontent.NamedXContentRegistry;
import java.util.Collection;
import java.util.List;
import java.util.function.Supplier;
/**
* This module integrates Elastic's APM product with Elasticsearch. Elasticsearch has
* a {@link org.elasticsearch.tracing.Tracer} interface, which this module implements via
* {@link APMTracer}. We use the OpenTelemetry API to capture "spans", and attach the
Elastic APM Java agent to ship those spans to an APM server. Although it is possible to
* programmatically attach the agent, the Security Manager permissions required for this
make this approach prohibitively difficult.
* <p>
* All settings are found under the <code>tracing.apm.</code> prefix. Any setting under
* the <code>tracing.apm.agent.</code> prefix will be forwarded on to the APM Java agent
* by setting appropriate system properties. Some settings can only be set once, and must be
* set when the agent starts. We therefore also create and configure a config file in
* the {@code APMJvmOptions} class, which we then delete when Elasticsearch starts, so that
* sensitive settings such as <code>secret_token</code> or <code>api_key</code> are not
* left on disk.
* <p>
* When settings are reconfigured using the settings REST API, the new values will again
* be passed via system properties to the Java agent, which periodically checks for changes
* and applies the new settings values, provided those settings can be dynamically updated.
*/
public class APM extends Plugin implements NetworkPlugin, TracerPlugin {
private final SetOnce<APMTracer> tracer = new SetOnce<>();
private final Settings settings;
public APM(Settings settings) {
this.settings = settings;
}
@Override
public Tracer getTracer(Settings settings) {
final APMTracer apmTracer = new APMTracer(settings);
tracer.set(apmTracer);
return apmTracer;
}
@Override
public Collection<Object> createComponents(
Client client,
ClusterService clusterService,
ThreadPool threadPool,
ResourceWatcherService resourceWatcherService,
ScriptService scriptService,
NamedXContentRegistry xContentRegistry,
Environment environment,
NodeEnvironment nodeEnvironment,
NamedWriteableRegistry namedWriteableRegistry,
IndexNameExpressionResolver indexNameExpressionResolver,
Supplier<RepositoriesService> repositoriesServiceSupplier,
Tracer unused
) {
final APMTracer apmTracer = tracer.get();
apmTracer.setClusterName(clusterService.getClusterName().value());
apmTracer.setNodeName(clusterService.getNodeName());
final APMAgentSettings apmAgentSettings = new APMAgentSettings();
apmAgentSettings.syncAgentSystemProperties(settings);
apmAgentSettings.addClusterSettingsListeners(clusterService, apmTracer);
return List.of(apmTracer);
}
@Override
public List<Setting<?>> getSettings() {
return List.of(
APMAgentSettings.APM_ENABLED_SETTING,
APMAgentSettings.APM_TRACING_NAMES_INCLUDE_SETTING,
APMAgentSettings.APM_TRACING_NAMES_EXCLUDE_SETTING,
APMAgentSettings.APM_AGENT_SETTINGS,
APMAgentSettings.APM_SECRET_TOKEN_SETTING,
APMAgentSettings.APM_API_KEY_SETTING
);
}
}
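The Javadoc above describes how any setting under the `tracing.apm.agent.` prefix is forwarded to the APM Java agent as a system property. A hypothetical, standalone sketch of that mapping (the class and method names here are illustrative, not part of this PR) — a dynamic agent setting name becomes an `elastic.apm.*` system property, which the agent periodically polls for changes:

```java
public class AgentSettingMappingExample {

    // Translate an Elasticsearch dynamic agent setting name into the
    // system property name that the APM Java agent actually watches.
    static String toSystemProperty(String esSetting) {
        String key = esSetting.substring("tracing.apm.agent.".length());
        return "elastic.apm." + key;
    }

    public static void main(String[] args) {
        String prop = toSystemProperty("tracing.apm.agent.transaction_sample_rate");
        // The agent picks this up on its next config poll, provided the
        // option is dynamically updatable.
        System.setProperty(prop, "0.5");
        System.out.println(prop + " = " + System.getProperty(prop));
    }
}
```

This is why only "static" options go into the generated config file: anything written there cannot later be overridden via these system properties.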


@ -0,0 +1,156 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.tracing.apm;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.ClusterSettings;
import org.elasticsearch.common.settings.SecureSetting;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Setting;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.core.SuppressForbidden;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.function.Function;
import static org.elasticsearch.common.settings.Setting.Property.NodeScope;
import static org.elasticsearch.common.settings.Setting.Property.OperatorDynamic;
/**
* This class is responsible for APM settings, both for Elasticsearch and the APM Java agent.
* The methods could all be static, but they are instance methods to make unit testing easier.
*/
class APMAgentSettings {
private static final Logger LOGGER = LogManager.getLogger(APMAgentSettings.class);
/**
* Sensible defaults that Elasticsearch configures. This cannot be done via the APM agent
* config file, as then their values could not be overridden dynamically via system properties.
*/
// tag::noformat
static Map<String, String> APM_AGENT_DEFAULT_SETTINGS = Map.of(
"transaction_sample_rate", "0.2"
);
// end::noformat
void addClusterSettingsListeners(ClusterService clusterService, APMTracer apmTracer) {
final ClusterSettings clusterSettings = clusterService.getClusterSettings();
clusterSettings.addSettingsUpdateConsumer(APM_ENABLED_SETTING, enabled -> {
apmTracer.setEnabled(enabled);
// The agent records data other than spans, e.g. JVM metrics, so we toggle this setting in order to
// minimise its impact on a running Elasticsearch.
this.setAgentSetting("recording", Boolean.toString(enabled));
});
clusterSettings.addSettingsUpdateConsumer(APM_TRACING_NAMES_INCLUDE_SETTING, apmTracer::setIncludeNames);
clusterSettings.addSettingsUpdateConsumer(APM_TRACING_NAMES_EXCLUDE_SETTING, apmTracer::setExcludeNames);
clusterSettings.addAffixMapUpdateConsumer(APM_AGENT_SETTINGS, map -> map.forEach(this::setAgentSetting), (x, y) -> {});
}
/**
* Copies APM settings from the provided settings object into the corresponding system properties.
* @param settings the settings to apply
*/
void syncAgentSystemProperties(Settings settings) {
this.setAgentSetting("recording", Boolean.toString(APM_ENABLED_SETTING.get(settings)));
// Apply default values for some system properties. Although the settings in
// APM_AGENT_DEFAULT_SETTINGS provide default values for the corresponding Elasticsearch
// settings, those defaults are never propagated to the agent unless we explicitly set them here.
APM_AGENT_DEFAULT_SETTINGS.keySet()
.forEach(
key -> this.setAgentSetting(key, APM_AGENT_SETTINGS.getConcreteSetting(APM_AGENT_SETTINGS.getKey() + key).get(settings))
);
// Then apply values from the settings in the cluster state
APM_AGENT_SETTINGS.getAsMap(settings).forEach(this::setAgentSetting);
}
/**
* Copies a setting to the APM agent's system properties under <code>elastic.apm</code>, either
* by setting the property if {@code value} has a value, or by deleting the property if it doesn't.
* @param key the config key to set, without any prefix
* @param value the value to set, or <code>null</code>
*/
@SuppressForbidden(reason = "Need to be able to manipulate APM agent-related properties to set them dynamically")
void setAgentSetting(String key, String value) {
final String completeKey = "elastic.apm." + Objects.requireNonNull(key);
AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
if (value == null || value.isEmpty()) {
LOGGER.trace("Clearing system property [{}]", completeKey);
System.clearProperty(completeKey);
} else {
LOGGER.trace("Setting system property [{}] to [{}]", completeKey, value);
System.setProperty(completeKey, value);
}
return null;
});
}
private static final String APM_SETTING_PREFIX = "tracing.apm.";
/**
* A list of APM agent config keys that should never be configured by the user.
*/
private static final List<String> PROHIBITED_AGENT_KEYS = List.of(
// ES generates a config file and sets this value
"config_file",
// ES controls this via `tracing.apm.enabled`
"recording"
);
static final Setting.AffixSetting<String> APM_AGENT_SETTINGS = Setting.prefixKeySetting(
APM_SETTING_PREFIX + "agent.",
(qualifiedKey) -> {
final String[] parts = qualifiedKey.split("\\.");
final String key = parts[parts.length - 1];
final String defaultValue = APM_AGENT_DEFAULT_SETTINGS.getOrDefault(key, "");
return new Setting<>(qualifiedKey, defaultValue, (value) -> {
if (PROHIBITED_AGENT_KEYS.contains(key)) {
throw new IllegalArgumentException("Explicitly configuring [" + qualifiedKey + "] is prohibited");
}
return value;
}, Setting.Property.NodeScope, Setting.Property.OperatorDynamic);
}
);
static final Setting<List<String>> APM_TRACING_NAMES_INCLUDE_SETTING = Setting.listSetting(
APM_SETTING_PREFIX + "names.include",
Collections.emptyList(),
Function.identity(),
OperatorDynamic,
NodeScope
);
static final Setting<List<String>> APM_TRACING_NAMES_EXCLUDE_SETTING = Setting.listSetting(
APM_SETTING_PREFIX + "names.exclude",
Collections.emptyList(),
Function.identity(),
OperatorDynamic,
NodeScope
);
static final Setting<Boolean> APM_ENABLED_SETTING = Setting.boolSetting(
APM_SETTING_PREFIX + "enabled",
false,
OperatorDynamic,
NodeScope
);
static final Setting<SecureString> APM_SECRET_TOKEN_SETTING = SecureSetting.secureString(APM_SETTING_PREFIX + "secret_token", null);
static final Setting<SecureString> APM_API_KEY_SETTING = SecureSetting.secureString(APM_SETTING_PREFIX + "api_key", null);
}
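The `PROHIBITED_AGENT_KEYS` check above rejects agent options that only Elasticsearch itself may set. A minimal, self-contained sketch of that validation logic (class and method names here are hypothetical, for illustration only):

```java
import java.util.List;

public class ProhibitedKeyCheckExample {

    // Agent config keys that Elasticsearch manages itself and users must not set.
    static final List<String> PROHIBITED = List.of("config_file", "recording");

    // Mirrors the affix-setting parser: take the last dotted segment as the
    // agent key, and reject it if it is ES-managed.
    static String validate(String qualifiedKey, String value) {
        String[] parts = qualifiedKey.split("\\.");
        String key = parts[parts.length - 1];
        if (PROHIBITED.contains(key)) {
            throw new IllegalArgumentException("Explicitly configuring [" + qualifiedKey + "] is prohibited");
        }
        return value;
    }

    public static void main(String[] args) {
        // Allowed: an ordinary agent option passes through unchanged.
        System.out.println(validate("tracing.apm.agent.server_url", "http://localhost:8200"));
        // Rejected: `recording` is controlled via `tracing.apm.enabled` instead.
        try {
            validate("tracing.apm.agent.recording", "true");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Doing the check inside the setting's parser means an operator gets a clear error at settings-update time, rather than the agent silently misbehaving.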


@ -0,0 +1,394 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.tracing.apm;
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanBuilder;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.TextMapGetter;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.lucene.util.automaton.Automata;
import org.apache.lucene.util.automaton.Automaton;
import org.apache.lucene.util.automaton.CharacterRunAutomaton;
import org.apache.lucene.util.automaton.Operations;
import org.apache.lucene.util.automaton.RegExp;
import org.elasticsearch.Version;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.Maps;
import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.core.Nullable;
import org.elasticsearch.core.Releasable;
import org.elasticsearch.tasks.Task;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_ENABLED_SETTING;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_TRACING_NAMES_EXCLUDE_SETTING;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_TRACING_NAMES_INCLUDE_SETTING;
/**
* This is an implementation of the {@link org.elasticsearch.tracing.Tracer} interface, which uses
* the OpenTelemetry API to capture spans.
* <p>
* This module doesn't provide an implementation of the OTel API. Normally that would mean that the
 * API's default, no-op implementation would be used. However, when the APM Java agent is attached, it
* intercepts the {@link GlobalOpenTelemetry} class and provides its own implementation instead.
*/
public class APMTracer extends AbstractLifecycleComponent implements org.elasticsearch.tracing.Tracer {
private static final Logger logger = LogManager.getLogger(APMTracer.class);
/** Holds in-flight span information. */
private final Map<String, Context> spans = ConcurrentCollections.newConcurrentMap();
private volatile boolean enabled;
private volatile APMServices services;
private List<String> includeNames;
private List<String> excludeNames;
/** Built using {@link #includeNames} and {@link #excludeNames}, and filters out spans based on their name. */
private volatile CharacterRunAutomaton filterAutomaton;
private String clusterName;
private String nodeName;
public void setClusterName(String clusterName) {
this.clusterName = clusterName;
}
public void setNodeName(String nodeName) {
this.nodeName = nodeName;
}
/**
* This class is used to make all OpenTelemetry services visible at once
*/
record APMServices(Tracer tracer, OpenTelemetry openTelemetry) {}
public APMTracer(Settings settings) {
this.includeNames = APM_TRACING_NAMES_INCLUDE_SETTING.get(settings);
this.excludeNames = APM_TRACING_NAMES_EXCLUDE_SETTING.get(settings);
this.filterAutomaton = buildAutomaton(includeNames, excludeNames);
this.enabled = APM_ENABLED_SETTING.get(settings);
}
void setEnabled(boolean enabled) {
this.enabled = enabled;
if (enabled) {
this.services = createApmServices();
} else {
destroyApmServices();
}
}
void setIncludeNames(List<String> includeNames) {
this.includeNames = includeNames;
this.filterAutomaton = buildAutomaton(includeNames, excludeNames);
}
void setExcludeNames(List<String> excludeNames) {
this.excludeNames = excludeNames;
this.filterAutomaton = buildAutomaton(includeNames, excludeNames);
}
@Override
protected void doStart() {
if (enabled) {
this.services = createApmServices();
}
}
@Override
protected void doStop() {
destroyApmServices();
}
@Override
protected void doClose() {}
private APMServices createApmServices() {
assert this.enabled;
assert this.services == null;
return AccessController.doPrivileged((PrivilegedAction<APMServices>) () -> {
var openTelemetry = GlobalOpenTelemetry.get();
var tracer = openTelemetry.getTracer("elasticsearch", Version.CURRENT.toString());
return new APMServices(tracer, openTelemetry);
});
}
private void destroyApmServices() {
this.services = null;
this.spans.clear(); // discard in-flight spans
}
@Override
public void startTrace(ThreadContext threadContext, String spanId, String spanName, @Nullable Map<String, Object> attributes) {
assert threadContext != null;
assert spanId != null;
assert spanName != null;
// If tracing has been disabled, return immediately
var services = this.services;
if (services == null) {
return;
}
if (filterAutomaton.run(spanName) == false) {
logger.trace("Skipping tracing [{}] [{}] as it has been filtered out", spanId, spanName);
return;
}
spans.computeIfAbsent(spanId, _spanId -> AccessController.doPrivileged((PrivilegedAction<Context>) () -> {
logger.trace("Tracing [{}] [{}]", spanId, spanName);
final SpanBuilder spanBuilder = services.tracer.spanBuilder(spanName);
// A span can have a parent span, which here is modelled through a parent span context.
// Setting this is important for seeing a complete trace in the APM UI.
final Context parentContext = getParentContext(threadContext);
if (parentContext != null) {
spanBuilder.setParent(parentContext);
}
setSpanAttributes(threadContext, attributes, spanBuilder);
final Span span = spanBuilder.startSpan();
final Context contextForNewSpan = Context.current().with(span);
updateThreadContext(threadContext, services, contextForNewSpan);
return contextForNewSpan;
}));
}
private static void updateThreadContext(ThreadContext threadContext, APMServices services, Context context) {
// The new span context can be used as the parent context directly within the same Java process...
threadContext.putTransient(Task.APM_TRACE_CONTEXT, context);
// ...whereas for tasks sent to other ES nodes, we need to put trace HTTP headers into the threadContext so
// that they can be propagated.
services.openTelemetry.getPropagators().getTextMapPropagator().inject(context, threadContext, (tc, key, value) -> {
if (isSupportedContextKey(key)) {
tc.putHeader(key, value);
}
});
}
private Context getParentContext(ThreadContext threadContext) {
// https://github.com/open-telemetry/opentelemetry-java/discussions/2884#discussioncomment-381870
// If you just want to propagate across threads within the same process, you don't need context propagators (extract/inject).
// You can just pass the Context object directly to another thread (it is immutable and thus thread-safe).
// Attempt to fetch a local parent context first, otherwise look for a remote parent
Context parentContext = threadContext.getTransient("parent_" + Task.APM_TRACE_CONTEXT);
if (parentContext == null) {
final String traceParentHeader = threadContext.getTransient("parent_" + Task.TRACE_PARENT_HTTP_HEADER);
final String traceStateHeader = threadContext.getTransient("parent_" + Task.TRACE_STATE);
if (traceParentHeader != null) {
final Map<String, String> traceContextMap = Maps.newMapWithExpectedSize(2);
// traceparent and tracestate should match the keys used by W3CTraceContextPropagator
traceContextMap.put(Task.TRACE_PARENT_HTTP_HEADER, traceParentHeader);
if (traceStateHeader != null) {
traceContextMap.put(Task.TRACE_STATE, traceStateHeader);
}
parentContext = services.openTelemetry.getPropagators()
.getTextMapPropagator()
.extract(Context.current(), traceContextMap, new MapKeyGetter());
}
}
return parentContext;
}
/**
* Most of the examples of how to use the OTel API look something like this, where the span context
* is automatically propagated:
*
* <pre>{@code
* Span span = tracer.spanBuilder("parent").startSpan();
* try (Scope scope = span.makeCurrent()) {
* // ...do some stuff, possibly creating further spans
* } finally {
* span.end();
* }
* }</pre>
* This typically isn't useful in Elasticsearch, because a {@link Scope} can't be used across threads.
* However, if a scope is active, then the APM agent can capture additional information, so this method
* exists to make it possible to use scopes in the few situations where it makes sense.
*
* @param spanId the ID of a currently-open span for which to open a scope.
* @return a releasable that closes the scope when you are finished with it.
*/
@Override
public Releasable withScope(String spanId) {
final Context context = spans.get(spanId);
if (context != null) {
var scope = context.makeCurrent();
return scope::close;
}
return () -> {};
}
private void setSpanAttributes(ThreadContext threadContext, @Nullable Map<String, Object> spanAttributes, SpanBuilder spanBuilder) {
if (spanAttributes != null) {
for (Map.Entry<String, Object> entry : spanAttributes.entrySet()) {
final String key = entry.getKey();
final Object value = entry.getValue();
if (value instanceof String) {
spanBuilder.setAttribute(key, (String) value);
} else if (value instanceof Long) {
spanBuilder.setAttribute(key, (Long) value);
} else if (value instanceof Integer) {
spanBuilder.setAttribute(key, (Integer) value);
} else if (value instanceof Double) {
spanBuilder.setAttribute(key, (Double) value);
} else if (value instanceof Boolean) {
spanBuilder.setAttribute(key, (Boolean) value);
} else {
throw new IllegalArgumentException(
"span attributes do not support value type of [" + value.getClass().getCanonicalName() + "]"
);
}
}
final boolean isHttpSpan = spanAttributes.keySet().stream().anyMatch(key -> key.startsWith("http."));
spanBuilder.setSpanKind(isHttpSpan ? SpanKind.SERVER : SpanKind.INTERNAL);
} else {
spanBuilder.setSpanKind(SpanKind.INTERNAL);
}
spanBuilder.setAttribute(org.elasticsearch.tracing.Tracer.AttributeKeys.NODE_NAME, nodeName);
spanBuilder.setAttribute(org.elasticsearch.tracing.Tracer.AttributeKeys.CLUSTER_NAME, clusterName);
final String xOpaqueId = threadContext.getHeader(Task.X_OPAQUE_ID_HTTP_HEADER);
if (xOpaqueId != null) {
spanBuilder.setAttribute("es.x-opaque-id", xOpaqueId);
}
}
@Override
public void addError(String spanId, Throwable throwable) {
final var span = Span.fromContextOrNull(spans.get(spanId));
if (span != null) {
span.recordException(throwable);
}
}
@Override
public void setAttribute(String spanId, String key, boolean value) {
final var span = Span.fromContextOrNull(spans.get(spanId));
if (span != null) {
span.setAttribute(key, value);
}
}
@Override
public void setAttribute(String spanId, String key, double value) {
final var span = Span.fromContextOrNull(spans.get(spanId));
if (span != null) {
span.setAttribute(key, value);
}
}
@Override
public void setAttribute(String spanId, String key, long value) {
final var span = Span.fromContextOrNull(spans.get(spanId));
if (span != null) {
span.setAttribute(key, value);
}
}
@Override
public void setAttribute(String spanId, String key, String value) {
final var span = Span.fromContextOrNull(spans.get(spanId));
if (span != null) {
span.setAttribute(key, value);
}
}
@Override
public void stopTrace(String spanId) {
final var span = Span.fromContextOrNull(spans.remove(spanId));
if (span != null) {
logger.trace("Finishing trace [{}]", spanId);
span.end();
}
}
@Override
public void addEvent(String spanId, String eventName) {
final var span = Span.fromContextOrNull(spans.get(spanId));
if (span != null) {
span.addEvent(eventName);
}
}
private static class MapKeyGetter implements TextMapGetter<Map<String, String>> {
@Override
public Iterable<String> keys(Map<String, String> carrier) {
return carrier.keySet().stream().filter(APMTracer::isSupportedContextKey).collect(Collectors.toSet());
}
@Override
public String get(Map<String, String> carrier, String key) {
return carrier.get(key);
}
}
private static boolean isSupportedContextKey(String key) {
return Task.TRACE_PARENT_HTTP_HEADER.equals(key) || Task.TRACE_STATE.equals(key);
}
// VisibleForTesting
Map<String, Context> getSpans() {
return spans;
}
private static CharacterRunAutomaton buildAutomaton(List<String> includeNames, List<String> excludeNames) {
Automaton includeAutomaton = patternsToAutomaton(includeNames);
Automaton excludeAutomaton = patternsToAutomaton(excludeNames);
if (includeAutomaton == null) {
includeAutomaton = Automata.makeAnyString();
}
final Automaton finalAutomaton = excludeAutomaton == null
? includeAutomaton
: Operations.minus(includeAutomaton, excludeAutomaton, Operations.DEFAULT_DETERMINIZE_WORK_LIMIT);
return new CharacterRunAutomaton(finalAutomaton);
}
private static Automaton patternsToAutomaton(List<String> patterns) {
final List<Automaton> automata = patterns.stream().map(s -> {
final String regex = s.replaceAll("\\.", "\\\\.").replaceAll("\\*", ".*");
return new RegExp(regex).toAutomaton();
}).toList();
if (automata.isEmpty()) {
return null;
}
if (automata.size() == 1) {
return automata.get(0);
}
return Operations.union(automata);
}
}
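The `patternsToAutomaton` method above turns simple wildcard patterns into regexes by escaping literal dots and expanding `*` before building Lucene automata. A standalone sketch of just that string transformation (the class name is illustrative, not part of the PR):

```java
// Illustrative sketch of the pattern-to-regex step in patternsToAutomaton:
// escape literal '.' characters, then expand '*' wildcards into '.*'.
public class PatternSketch {
    static String toRegex(String pattern) {
        return pattern.replaceAll("\\.", "\\\\.").replaceAll("\\*", ".*");
    }

    public static void main(String[] args) {
        System.out.println(toRegex("name-b*"));        // name-b.*
        System.out.println(toRegex("cluster.health")); // cluster\.health
    }
}
```

So an include pattern like `name-b*` matches `name-bbb` but not `name-ccc`, as exercised by the tests below.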


@@ -0,0 +1,22 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
grant {
permission java.lang.RuntimePermission "accessSystemModules";
permission java.lang.RuntimePermission "createClassLoader";
permission java.lang.RuntimePermission "getClassLoader";
permission java.util.PropertyPermission "elastic.apm.*", "write";
};
grant codeBase "${codebase.elastic-apm-agent}" {
permission java.lang.RuntimePermission "accessDeclaredMembers";
permission java.lang.RuntimePermission "setContextClassLoader";
permission java.lang.RuntimePermission "setFactory";
permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
permission java.net.SocketPermission "*", "connect,resolve";
};


@@ -0,0 +1,89 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.tracing.apm;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.test.ESTestCase;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_AGENT_SETTINGS;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_ENABLED_SETTING;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
public class APMAgentSettingsTests extends ESTestCase {
/**
* Check that when the tracer is enabled, it also sets the APM agent's recording system property to true.
*/
public void test_whenTracerEnabled_setsRecordingProperty() {
APMAgentSettings apmAgentSettings = spy(new APMAgentSettings());
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), true).build();
apmAgentSettings.syncAgentSystemProperties(settings);
verify(apmAgentSettings).setAgentSetting("recording", "true");
}
/**
* Check that when the tracer is disabled, it also sets the APM agent's recording system property to false.
*/
public void test_whenTracerDisabled_setsRecordingProperty() {
APMAgentSettings apmAgentSettings = spy(new APMAgentSettings());
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), false).build();
apmAgentSettings.syncAgentSystemProperties(settings);
verify(apmAgentSettings).setAgentSetting("recording", "false");
}
/**
* Check that when cluster settings are synchronised with the system properties, default values are
* applied.
*/
public void test_whenTracerCreated_defaultSettingsApplied() {
APMAgentSettings apmAgentSettings = spy(new APMAgentSettings());
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), true).build();
apmAgentSettings.syncAgentSystemProperties(settings);
verify(apmAgentSettings).setAgentSetting("transaction_sample_rate", "0.2");
}
/**
* Check that when cluster settings are synchronised with the system properties, values in the settings
* are reflected in the system properties, overwriting default values.
*/
public void test_whenTracerCreated_clusterSettingsOverrideDefaults() {
APMAgentSettings apmAgentSettings = spy(new APMAgentSettings());
Settings settings = Settings.builder()
.put(APM_ENABLED_SETTING.getKey(), true)
.put(APM_AGENT_SETTINGS.getKey() + "transaction_sample_rate", "0.75")
.build();
apmAgentSettings.syncAgentSystemProperties(settings);
// This happens twice because we first apply the default settings, whose values are overridden
// from the cluster settings, then we apply all the APM-agent related settings, not just the
// ones with default values. Although there is some redundancy here, it only happens at startup
// for a very small number of settings.
verify(apmAgentSettings, times(2)).setAgentSetting("transaction_sample_rate", "0.75");
}
/**
* Check that when cluster settings are synchronised with the system properties, agent settings other
* than those with default values are set.
*/
public void test_whenTracerCreated_clusterSettingsAlsoApplied() {
APMAgentSettings apmAgentSettings = spy(new APMAgentSettings());
Settings settings = Settings.builder()
.put(APM_ENABLED_SETTING.getKey(), true)
.put(APM_AGENT_SETTINGS.getKey() + "span_compression_enabled", "true")
.build();
apmAgentSettings.syncAgentSystemProperties(settings);
verify(apmAgentSettings).setAgentSetting("span_compression_enabled", "true");
}
}
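These tests verify calls to `setAgentSetting`, whose body is not shown in this hunk. A hedged sketch of what it plausibly does, given that the security policy above grants `PropertyPermission "elastic.apm.*", "write"` and the Elastic APM Java agent reads configuration from `elastic.apm.*` system properties (the method body here is an assumption, not the actual implementation):

```java
// Hedged sketch: push an agent option into the "elastic.apm.*" system
// property namespace, which the APM Java agent polls for dynamic options.
// This mirrors the PropertyPermission granted in the module's policy file.
public class AgentSettingSketch {
    static void setAgentSetting(String key, String value) {
        System.setProperty("elastic.apm." + key, value);
    }

    public static void main(String[] args) {
        setAgentSetting("recording", "true");
        System.out.println(System.getProperty("elastic.apm.recording"));
    }
}
```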


@@ -0,0 +1,174 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.tracing.apm;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.test.ESTestCase;
import java.util.List;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_ENABLED_SETTING;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_TRACING_NAMES_EXCLUDE_SETTING;
import static org.elasticsearch.tracing.apm.APMAgentSettings.APM_TRACING_NAMES_INCLUDE_SETTING;
import static org.hamcrest.Matchers.aMapWithSize;
import static org.hamcrest.Matchers.anEmptyMap;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.not;
import static org.hamcrest.Matchers.notNullValue;
public class APMTracerTests extends ESTestCase {
/**
* Check that the tracer doesn't create spans when tracing is disabled.
*/
public void test_onTraceStarted_withTracingDisabled_doesNotStartTrace() {
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), false).build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name1", null);
assertThat(apmTracer.getSpans(), anEmptyMap());
}
/**
* Check that the tracer doesn't create spans if a Traceable's span name is filtered out.
*/
public void test_onTraceStarted_withSpanNameOmitted_doesNotStartTrace() {
Settings settings = Settings.builder()
.put(APM_ENABLED_SETTING.getKey(), true)
.putList(APM_TRACING_NAMES_INCLUDE_SETTING.getKey(), List.of("filtered*"))
.build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name1", null);
assertThat(apmTracer.getSpans(), anEmptyMap());
}
/**
* Check that when a trace is started, the tracer starts a span and records it.
*/
public void test_onTraceStarted_startsTrace() {
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), true).build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name1", null);
assertThat(apmTracer.getSpans(), aMapWithSize(1));
assertThat(apmTracer.getSpans(), hasKey("id1"));
}
/**
* Check that when a trace is started, the tracer ends the span and removes the record of it.
*/
public void test_onTraceStopped_stopsTrace() {
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), true).build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name1", null);
apmTracer.stopTrace("id1");
assertThat(apmTracer.getSpans(), anEmptyMap());
}
/**
* Check that when a trace is started, then the thread context is updated with tracing information.
* <p>
* We expect the APM agent to inject the {@link Task#TRACE_PARENT_HTTP_HEADER} and {@link Task#TRACE_STATE}
* headers into the context. It does so in a running node, but not in these unit tests, so here we can
* only check that the local context object is added.
*/
public void test_whenTraceStarted_threadContextIsPopulated() {
Settings settings = Settings.builder().put(APM_ENABLED_SETTING.getKey(), true).build();
APMTracer apmTracer = buildTracer(settings);
ThreadContext threadContext = new ThreadContext(settings);
apmTracer.startTrace(threadContext, "id1", "name1", null);
assertThat(threadContext.getTransient(Task.APM_TRACE_CONTEXT), notNullValue());
}
/**
* Check that when a tracer has a list of include names configured, then those
* names are used to filter spans.
*/
public void test_whenTraceStarted_andSpanNameIncluded_thenSpanIsStarted() {
final List<String> includePatterns = List.of(
// exact name
"name-aaa",
// wildcard
"name-b*"
);
Settings settings = Settings.builder()
.put(APM_ENABLED_SETTING.getKey(), true)
.putList(APM_TRACING_NAMES_INCLUDE_SETTING.getKey(), includePatterns)
.build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name-aaa", null);
apmTracer.startTrace(new ThreadContext(settings), "id2", "name-bbb", null);
apmTracer.startTrace(new ThreadContext(settings), "id3", "name-ccc", null);
assertThat(apmTracer.getSpans(), hasKey("id1"));
assertThat(apmTracer.getSpans(), hasKey("id2"));
assertThat(apmTracer.getSpans(), not(hasKey("id3")));
}
/**
* Check that when a tracer has a list of include and exclude names configured, and
* a span matches both, then the exclude filters take precedence.
*/
public void test_whenTraceStarted_andSpanNameIncludedAndExcluded_thenSpanIsNotStarted() {
final List<String> includePatterns = List.of("name-a*");
final List<String> excludePatterns = List.of("name-a*");
Settings settings = Settings.builder()
.put(APM_ENABLED_SETTING.getKey(), true)
.putList(APM_TRACING_NAMES_INCLUDE_SETTING.getKey(), includePatterns)
.putList(APM_TRACING_NAMES_EXCLUDE_SETTING.getKey(), excludePatterns)
.build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name-aaa", null);
assertThat(apmTracer.getSpans(), not(hasKey("id1")));
}
/**
* Check that when a tracer has a list of exclude names configured, then those
* names are used to filter spans.
*/
public void test_whenTraceStarted_andSpanNameExcluded_thenSpanIsNotStarted() {
final List<String> excludePatterns = List.of(
// exact name
"name-aaa",
// wildcard
"name-b*"
);
Settings settings = Settings.builder()
.put(APM_ENABLED_SETTING.getKey(), true)
.putList(APM_TRACING_NAMES_EXCLUDE_SETTING.getKey(), excludePatterns)
.build();
APMTracer apmTracer = buildTracer(settings);
apmTracer.startTrace(new ThreadContext(settings), "id1", "name-aaa", null);
apmTracer.startTrace(new ThreadContext(settings), "id2", "name-bbb", null);
apmTracer.startTrace(new ThreadContext(settings), "id3", "name-ccc", null);
assertThat(apmTracer.getSpans(), not(hasKey("id1")));
assertThat(apmTracer.getSpans(), not(hasKey("id2")));
assertThat(apmTracer.getSpans(), hasKey("id3"));
}
private APMTracer buildTracer(Settings settings) {
APMTracer tracer = new APMTracer(settings);
tracer.doStart();
return tracer;
}
}

qa/apm/build.gradle

@@ -0,0 +1,49 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
import org.elasticsearch.gradle.Architecture
import org.elasticsearch.gradle.VersionProperties
import static org.elasticsearch.gradle.internal.distribution.InternalElasticsearchDistributionTypes.DOCKER;
apply plugin: 'elasticsearch.standalone-rest-test'
apply plugin: 'elasticsearch.test.fixtures'
apply plugin: 'elasticsearch.internal-distribution-download'
testFixtures.useFixture()
dependencies {
testImplementation project(':client:rest-high-level')
}
dockerCompose {
environment.put 'STACK_VERSION', VersionProperties.elasticsearch
// retainContainersOnStartupFailure = true
}
elasticsearch_distributions {
docker {
type = DOCKER
architecture = Architecture.current()
version = VersionProperties.getElasticsearch()
failIfUnavailable = false // This ensures we skip this testing if Docker is unavailable
}
}
tasks.named("preProcessFixture").configure {
dependsOn elasticsearch_distributions.matching { it.architecture == Architecture.current() }
}
tasks.register("integTest", Test) {
outputs.doNotCacheIf('Build cache is disabled for Docker tests') { true }
maxParallelForks = '1'
include '**/*IT.class'
}
tasks.named("check").configure {
dependsOn "integTest"
}


@@ -0,0 +1,34 @@
---
apm_server:
cluster: ['manage_ilm', 'manage_security', 'manage_api_key']
indices:
- names: ['apm-*', 'logs-apm*', 'metrics-apm*', 'traces-apm*']
privileges: ['write', 'create_index', 'manage', 'manage_ilm']
applications:
- application: 'apm'
privileges: ['sourcemap:write', 'event:write', 'config_agent:read']
resources: '*'
beats:
cluster: ['manage_index_templates', 'monitor', 'manage_ingest_pipelines', 'manage_ilm', 'manage_security', 'manage_api_key']
indices:
- names: ['filebeat-*', 'shrink-filebeat-*']
privileges: ['all']
filebeat:
cluster: ['manage_index_templates', 'monitor', 'manage_ingest_pipelines', 'manage_ilm']
indices:
- names: ['filebeat-*', 'shrink-filebeat-*']
privileges: ['all']
heartbeat:
cluster: ['manage_index_templates', 'monitor', 'manage_ingest_pipelines', 'manage_ilm']
indices:
- names: ['heartbeat-*', 'shrink-heartbeat-*']
privileges: ['all']
metricbeat:
cluster: ['manage_index_templates', 'monitor', 'manage_ingest_pipelines', 'manage_ilm']
indices:
- names: ['metricbeat-*', 'shrink-metricbeat-*']
privileges: ['all']
opbeans:
indices:
- names: ['opbeans-*']
privileges: ['write', 'read']


@@ -0,0 +1,2 @@
elastic/fleet-server/elastic-package-fleet-server-token:{PBKDF2_STRETCH}10000$PNiVyY96dHwRfoDszBvYPAz+mSLbC+NhtPh63dblDZU=$dAY1tXX1U5rXB+2Lt7m0L2LUNSb1q5nRaIqPNZTBxb8=
elastic/kibana/elastic-package-kibana-token:{PBKDF2_STRETCH}10000$wIEFHIIIZ2ap0D0iQsyw0MfB7YuFA1bHnXAmlCoL4Gg=$YxvIJnasjLZyDQZpmFBiJHdR/CGXd5BnVm013Jty6p0=


@@ -0,0 +1,9 @@
admin:$2a$10$xiY0ZzOKmDDN1p3if4t4muUBwh2.bFHADoMRAWQgSClm4ZJ4132Y.
apm_server_user:$2a$10$iTy29qZaCSVn4FXlIjertuO8YfYVLCbvoUAJ3idaXfLRclg9GXdGG
apm_user_ro:$2a$10$hQfy2o2u33SapUClsx8NCuRMpQyHP9b2l4t3QqrBA.5xXN2S.nT4u
beats_user:$2a$10$LRpKi4/Q3Qo4oIbiu26rH.FNIL4aOH4aj2Kwi58FkMo1z9FgJONn2
filebeat_user:$2a$10$sFxIEX8tKyOYgsbJLbUhTup76ssvSD3L4T0H6Raaxg4ewuNr.lUFC
heartbeat_user:$2a$10$nKUGDr/V5ClfliglJhfy8.oEkjrDtklGQfhd9r9NoFqQeoNxr7uUK
kibana_system_user:$2a$10$nN6sRtQl2KX9Gn8kV/.NpOLSk6Jwn8TehEDnZ7aaAgzyl/dy5PYzW
metricbeat_user:$2a$10$5PyTd121U2ZXnFk9NyqxPuLxdptKbB8nK5egt6M5/4xrKUkk.GReG
opbeans_user:$2a$10$iTy29qZaCSVn4FXlIjertuO8YfYVLCbvoUAJ3idaXfLRclg9GXdGG


@@ -0,0 +1,13 @@
apm_server:apm_server_user
apm_system:apm_server_user
apm_user:apm_server_user,apm_user_ro
beats:beats_user
beats_system:beats_user,filebeat_user,heartbeat_user,metricbeat_user
filebeat:filebeat_user
heartbeat:heartbeat_user
ingest_admin:apm_server_user
kibana_system:kibana_system_user
kibana_user:apm_server_user,apm_user_ro,beats_user,filebeat_user,heartbeat_user,metricbeat_user,opbeans_user
metricbeat:metricbeat_user
opbeans:opbeans_user
superuser:admin


@@ -0,0 +1,78 @@
xpack.fleet.packages:
- name: system
version: latest
- name: elastic_agent
version: latest
- name: apm
version: latest
- name: fleet_server
version: latest
xpack.fleet.agentPolicies:
- name: Fleet Server + APM policy
id: fleet-server-apm-policy
description: Fleet server policy with APM and System logs and metrics enabled
namespace: default
is_default_fleet_server: true
is_managed: false
monitoring_enabled:
- logs
- metrics
package_policies:
- name: system-1
package:
name: system
- name: apm-1
package:
name: apm
inputs:
- type: apm
keep_enabled: true
vars:
- name: host
value: 0.0.0.0:8200
frozen: true
- name: url
value: "${ELASTIC_APM_SERVER_URL}"
frozen: true
- name: enable_rum
value: true
frozen: true
- name: read_timeout
value: 1m
frozen: true
- name: shutdown_timeout
value: 2m
frozen: true
- name: write_timeout
value: 1m
frozen: true
- name: rum_allow_headers
value:
- x-custom-header
frozen: true
- name: secret_token
value: "${ELASTIC_APM_SECRET_TOKEN}"
frozen: true
- name: tls_enabled
value: ${ELASTIC_APM_TLS}
frozen: true
- name: tls_certificate
value: /usr/share/apmserver/config/certs/tls.crt
frozen: true
- name: tls_key
value: /usr/share/apmserver/config/certs/tls.key
frozen: true
- name: Fleet Server
package:
name: fleet_server
inputs:
- type: fleet-server
keep_enabled: true
vars:
- name: host
value: 0.0.0.0
frozen: true
- name: port
value: 8220
frozen: true

qa/apm/docker-compose.yml

@@ -0,0 +1,154 @@
version: "2.4"
networks:
default:
name: apm-integration-testing
services:
apmserver:
depends_on:
kibana:
condition: service_healthy
environment:
FLEET_ELASTICSEARCH_HOST: null
FLEET_SERVER_ELASTICSEARCH_INSECURE: "1"
FLEET_SERVER_ENABLE: "1"
FLEET_SERVER_HOST: 0.0.0.0
FLEET_SERVER_INSECURE_HTTP: "1"
FLEET_SERVER_POLICY_ID: fleet-server-apm-policy
FLEET_SERVER_PORT: "8220"
FLEET_SERVER_SERVICE_TOKEN: AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL2VsYXN0aWMtcGFja2FnZS1mbGVldC1zZXJ2ZXItdG9rZW46bmgtcFhoQzRRQ2FXbms2U0JySGlWQQ
KIBANA_FLEET_HOST: null
KIBANA_FLEET_SERVICE_TOKEN: AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL2VsYXN0aWMtcGFja2FnZS1mbGVldC1zZXJ2ZXItdG9rZW46bmgtcFhoQzRRQ2FXbms2U0JySGlWQQ
KIBANA_FLEET_SETUP: "1"
healthcheck:
test: /bin/true
image: docker.elastic.co/beats/elastic-agent:${STACK_VERSION}
labels:
- co.elastic.apm.stack-version=${STACK_VERSION}
logging:
driver: json-file
options:
max-file: "5"
max-size: 2m
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./scripts/tls/apmserver/cert.crt:/usr/share/apmserver/config/certs/tls.crt
- ./scripts/tls/apmserver/key.pem:/usr/share/apmserver/config/certs/tls.key
elasticsearch:
environment:
- action.destructive_requires_name=false
- bootstrap.memory_lock=true
- cluster.name=docker-cluster
- cluster.routing.allocation.disk.threshold_enabled=false
- discovery.type=single-node
- ES_JAVA_OPTS=-Xms1g -Xmx1g
- indices.id_field_data.enabled=true
- ingest.geoip.downloader.enabled=false
- path.repo=/usr/share/elasticsearch/data/backups
- xpack.license.self_generated.type=trial
- xpack.monitoring.collection.enabled=true
- xpack.security.authc.anonymous.roles=remote_monitoring_collector
- xpack.security.authc.api_key.enabled=true
- xpack.security.authc.realms.file.file1.order=0
- xpack.security.authc.realms.native.native1.order=1
- xpack.security.authc.token.enabled=true
- xpack.security.enabled=true
# APM specific settings. We don't configure `secret_key` because Kibana is configured with a blank key
- tracing.apm.enabled=true
- tracing.apm.agent.server_url=http://apmserver:8200
# Send traces to APM server aggressively
- tracing.apm.agent.metrics_interval=1s
# Record everything
- tracing.apm.agent.transaction_sample_rate=1
- tracing.apm.agent.log_level=debug
healthcheck:
interval: 20s
retries: 10
test: curl -s -k http://localhost:9200/_cluster/health | grep -vq '"status":"red"'
image: elasticsearch:test
labels:
- co.elastic.apm.stack-version=${STACK_VERSION}
- co.elastic.metrics/module=elasticsearch
- co.elastic.metrics/metricsets=node,node_stats
- co.elastic.metrics/hosts=http://$${data.host}:9200
logging:
driver: json-file
options:
max-file: "5"
max-size: 2m
ports:
# - 127.0.0.1:9200:9200
- "9200"
ulimits:
memlock:
hard: -1
soft: -1
volumes:
- ./config/elasticsearch/roles.yml:/usr/share/elasticsearch/config/roles.yml
- ./config/elasticsearch/users:/usr/share/elasticsearch/config/users
- ./config/elasticsearch/users_roles:/usr/share/elasticsearch/config/users_roles
- ./config/elasticsearch/service_tokens:/usr/share/elasticsearch/config/service_tokens
kibana:
depends_on:
elasticsearch:
condition: service_healthy
environment:
ELASTICSEARCH_HOSTS: http://elasticsearch:9200
ELASTICSEARCH_PASSWORD: changeme
ELASTICSEARCH_USERNAME: kibana_system_user
ELASTIC_APM_SECRET_TOKEN: ""
ELASTIC_APM_SERVER_URL: http://apmserver:8200
ELASTIC_APM_TLS: "false"
SERVER_HOST: 0.0.0.0
SERVER_NAME: kibana.example.org
STATUS_ALLOWANONYMOUS: "true"
TELEMETRY_ENABLED: "false"
XPACK_APM_SERVICEMAPENABLED: "true"
XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY: fhjskloppd678ehkdfdlliverpoolfcr
XPACK_FLEET_AGENTS_ELASTICSEARCH_HOSTS: '["http://elasticsearch:9200"]'
# XPACK_FLEET_REGISTRYURL: https://epr-snapshot.elastic.co
XPACK_MONITORING_ENABLED: "true"
XPACK_REPORTING_ROLES_ENABLED: "false"
XPACK_SECURITY_ENCRYPTIONKEY: fhjskloppd678ehkdfdlliverpoolfcr
XPACK_SECURITY_LOGINASSISTANCEMESSAGE: Login&#32;details:&#32;`admin/changeme`.&#32;Further&#32;details&#32;[here](https://github.com/elastic/apm-integration-testing#logging-in).
XPACK_SECURITY_SESSION_IDLETIMEOUT: 1M
XPACK_SECURITY_SESSION_LIFESPAN: 3M
XPACK_XPACK_MAIN_TELEMETRY_ENABLED: "false"
healthcheck:
interval: 10s
retries: 30
start_period: 10s
test: curl -s -k http://kibana:5601/api/status | grep -q 'All services are available'
image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
labels:
- co.elastic.apm.stack-version=${STACK_VERSION}
logging:
driver: json-file
options:
max-file: "5"
max-size: 2m
# ports:
# - 127.0.0.1:5601:5601
volumes:
- ./config/kibana/kibana-8.yml:/usr/share/kibana/config/kibana.yml
# Rather than mess around with threads in the test, just run `curl` in a
# loop to generate traces with a known path
tracegenerator:
depends_on:
apmserver:
condition: service_healthy
elasticsearch:
condition: service_healthy
kibana:
condition: service_healthy
# Official curl image
image: curlimages/curl
command: /bin/sh -c "while true; do curl -s -k -u admin:changeme http://elasticsearch:9200/_nodes/stats >/dev/null ; sleep 3; done"
volumes:
esdata:
driver: local


@@ -0,0 +1,27 @@
-----BEGIN CERTIFICATE-----
MIIEpjCCAo4CCQDR9oXvJbopHjANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDDAph
cG0tc2VydmVyMB4XDTE5MTExOTE1MjE0NVoXDTI5MTExNjE1MjE0NVowFTETMBEG
A1UEAwwKYXBtLXNlcnZlcjCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIB
ANduj3tyeBIHj0Bf5aKMRImhRbkAaQ2p6T0WsHKlicd1P4/D5l783+vVsbwprRqR
qXAUsUWcUSYJXBX1qtC2MtKqi4xYUTAyQV5dgrMoCV+vtZY31SK4kolumd1vVMh+
po+IwueLvLMFK1tQGIXlJblSDYVauIt5rp79IIhWOY/YpcQy9RaxykljTYTbPjLW
m3T92bow1nLh5GL3ThJEAkLO+hkJv9716+YRWYtPcojiGzpLjFgF50MoP4Lilm9U
r2tBnqpvb2PwE1kkly8DDBtcg+HM4tgGsbdWo2Pgp82ARV4DL+JlNJ+SVQZAmTbc
3LMwxnUJtuKMeh2rwb9HOyuONXfF1PiEzyDhAlabyS6toAGy1mlMAop1ClO1wV5O
Ayy47TeD6ziNyMKB7/XHdW4rb16K6j6EV27Bg2ZK6Vrfkwm3aRbpztfVRMX+HMUp
ktH+V2OwJoP7l7lzw/q8yMdopG57zRJa1dx8NWP/UKi8Ej+87DYyWJODiNHD7PM7
9vfd47lNcWxw+p7ntEpnn6EeW2r7SlmfhtdIxL2DiTiKAq9Ktyi9cFnGnDfSDJST
T1G1vIDdG33Vt2Y5+wqzCGbYyMsAOaMdXZSeniXXFR4GX7iz+AGoKojBbmoo9VqP
mvbudNU+ysha4IJvTfOczJZgstxCXG+MXbEXFSgysImFAgMBAAEwDQYJKoZIhvcN
AQELBQADggIBAFh2YxRT6PaAXDq38rm25I91fCP9PzVPDuIkn9wl85e7avuh6FZi
R0nQG6+lB1i8XSm9UMl9+ISjE+EQqry6KB6mDsakGOsDuEUdZiw3sGJIUWQkQArB
ym5DqxKpeZBeVHBxnrEbQBV8s0j8uxd7X1E0ImfMKbKfNr/B5qPRXkREvydLWYvq
8yMcUPu1MiZFUgAGr9Py39kW3lbRPWZii/2bN8AB9h6gAhq5TiennfgJZsRiuSta
w/TmOcAuz4e/KPIzfvL/YCWbLyJ2vrIQeOc4N7jZfqMmLKgYCRyjI7+amfuyKPBW
J4psfJ0ssHdTxAUK65vghJ2s6FLvU3HoxzetZsJp5kj6CKYaFYkB4NkkYnlY8MP/
T68oOmdYwwwrcBmDtZwoppRb5zhev5k3aykgZ/B/vqVJE9oIPkp/7wqEP1WqSiUe
AgyQBu8UN4ho2Rf6nZezZ4cjW/0WyhGOHQBFmwPI2MBGsQxF2PF4lKkJtaywIEm7
4UsEQYK7Hf2J2OccWGvfo5HZ5tsSbuOGAf0bfHfaBQBsvzWet+TO6XX9VrWjnAKl
bH+mInmnd9v2oABFl9Djv/Cw+lEAxxkCTW+DcwdEFJREPab5xhQDEpQQ/Ef0ihvg
/ZtJQeoOYfrLN6K726QmoRWxvqxLyWK3gztcO1svHqr/cMt3ooLJEaqU
-----END CERTIFICATE-----


@@ -0,0 +1,52 @@
-----BEGIN PRIVATE KEY-----
MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDXbo97cngSB49A
X+WijESJoUW5AGkNqek9FrBypYnHdT+Pw+Ze/N/r1bG8Ka0akalwFLFFnFEmCVwV
9arQtjLSqouMWFEwMkFeXYKzKAlfr7WWN9UiuJKJbpndb1TIfqaPiMLni7yzBStb
UBiF5SW5Ug2FWriLea6e/SCIVjmP2KXEMvUWscpJY02E2z4y1pt0/dm6MNZy4eRi
904SRAJCzvoZCb/e9evmEVmLT3KI4hs6S4xYBedDKD+C4pZvVK9rQZ6qb29j8BNZ
JJcvAwwbXIPhzOLYBrG3VqNj4KfNgEVeAy/iZTSfklUGQJk23NyzMMZ1CbbijHod
q8G/RzsrjjV3xdT4hM8g4QJWm8kuraABstZpTAKKdQpTtcFeTgMsuO03g+s4jcjC
ge/1x3VuK29eiuo+hFduwYNmSula35MJt2kW6c7X1UTF/hzFKZLR/ldjsCaD+5e5
c8P6vMjHaKRue80SWtXcfDVj/1CovBI/vOw2MliTg4jRw+zzO/b33eO5TXFscPqe
57RKZ5+hHltq+0pZn4bXSMS9g4k4igKvSrcovXBZxpw30gyUk09RtbyA3Rt91bdm
OfsKswhm2MjLADmjHV2Unp4l1xUeBl+4s/gBqCqIwW5qKPVaj5r27nTVPsrIWuCC
b03znMyWYLLcQlxvjF2xFxUoMrCJhQIDAQABAoICAQCfClIGsoUN2mLZBXLDw4W9
jT+pyjHEEpHLtXphyO+kPlzER71Elq7AriveW24d1TcfNUeBulr2F6bR12FZX4i5
mYoX/AND73Xusl4Q4Re6ej82PNWuIlCcAPi6Trxqn4VbJX2t7q1KBCDz8neIMZjd
7UNqFYV0Akr1uK1RuUYZebk21N+29139O8A4upp6cZCml9kq6W8HtNgkb6pFNcvt
gluELHxnn2mdmWVfwTEu+K1dJfTf7svB+m6Ys6qXWg9+wRzfehDj2JKQFsE9xaQk
dvItulIlZRvB28YXr/xxa6bKNtQc8NYej6sRSJNTu017RCDeumM3cLmeOfR4v59f
tkMWnFcA3ykmsaK2FiQyX+MoWvs5vdT7/yNIfz3a4MErcWg8z3FDbffKfbhgsb+2
z4Ub6fIRKZykW2ajN7t0378bMmJ3rPT66QF40aNNeWasF3EHcwekDPpsHIBJoY4G
9aG6uTUmRkC+NGeP9HroxkvDo2NbXn8XGOEJS64rwsME3CsUi1A5ZY0XLTxYptH6
X2TfC5oTmnsYB/wWqo26bTJc0bwDOueQWYap0aVtv3f/0tzueKepCbxdeG4ikA0U
2t3F+OUmoCZ5D0p+6zLvrTUPhPCFEynp+vGUvmbwozYi0NWzFyFqlvqRG1KLIVLG
ZRyTMYuZ/cWkv1SJYbEcaQKCAQEA/9HaJg2YACv7rx6/FesE/81u16OYTaahHngW
4M+5rT0+fNKYH/fYkwavQ/Gr6FSTls7F+8K9DVwoGLZRQ3t6epCXqGqX0uaY+iSH
O8eezXVnHzUaVE4KlwJY9xZ+K1iIf5zUb5hpaQI0jKS/igcxFAsutWiyenrz8eQp
MAycZmzkQMLbUsa1t6y0VaEaC4YMHyQ9ag2eMfqbG27plFQbYxllHXowGMFXPheY
xACwo5V5tJUgRP+HlrI4rf0vadMgVIKxVSUiqIzGREIkYrTAshFjkpHR5/R8s/kH
Xm8q2gdoJltBFJzA2B8MHXVi7mYDBlUmBoRKhzkl/TSray9j7wKCAQEA15VsNQZu
cZluboz/R4EDbEm1po2UBcNNiu/fgJ8BDUkLzJESIITY41fgvBbTun1fiuGeE+El
0o1w4hQhIiV1KAB44w69fJR0VELfMZiIcd8kd0sDgPPVrd1MzzKPZ9yg4mbEkCCO
V/EoTi8Ut27sMcl8059qm1qq7I5pzHwSziNa087m+5VdfmvJZJVipudngZ3QmRgU
KKcBhgFFSkncYezoq2XQfRcqkk0sORxDvsMmRInyHZh0l9zv46ihgTvErlCHtizV
V4HNO4OPz7FxUZ04iWSGZs4snu1cW2j+lbKuOkADveBYVmCcdZ3R0SH+A5skL0zG
tm6z0TNP/kFlywKCAQEA+lTdFu2od0qTADujG4yemL7rn2J8EEhlU86J/LXo6UiM
FFNz/5xltwIMkf00jqXswt9WR9W5cBBlQEFwZgu3v6YscebU6NE0k1sZZnshv8YK
AjTRrfusSzdF3YyKLFp3QAE0tHs9cz9wMsyojiYZdZa3v1dTh503h9YQI+/DQEuA
VIsZWfgPLEx5L231cZ9bz0GEQ3pN+nRUQdUYB0kCf8gC9YRy+lZ/y8gFeo9+SqVj
sj1XlY1DnkiKRGAEfJbYBTra0woCz1LqVTMwLdLY2adAe9XrxQKu4OJovpUkJrSm
yxnzJnt6DkLbdRxAki8K+LBsBGaCE67tqMhYkguOywKCAQAslEl77YiJFSEw2xcu
wg7jJZrahgxF5Mz0HgYporek96Xo91a4QsBWwqVGP7IoriRDo8P8eGJJ19Wv6lmv
pe9EBlT5HuMwD8K+adWde907Ltlrkad30vQsr8ZiUiI1Z/oc1wNuikzlAolDIZk3
FUjiQrf9SsnQtj8CC7D1B/MbjVQK2I4LGCftLHzIv9tWiCNvOiMYhVIl1eMKwtiB
NCTOWx8B0lv6gf/boPm0FZQsrk4LfjsCw7PYc2dnvEcpYiKZqS1nDn5PShgWZm4m
lJrKNairQI5KU/gGJS8j9+ItMnW0tegQK4QY2IGCENCCXnUYacxhu46byuiEKggw
m3VhAoIBAQCQa90StsZHqZ+J83do3kpvD+O5nURPnckznC2WJgraW49k5vltnJTT
zkFTqHMLfmYwAz1o15sPCqlkMD+fEUzg6Hpzxm7dOUppkf5KFbD7AnsYU9U8LamJ
HaET7Dq5TpjG7uoaHZZjs7cCHcWu2E8nIezyAtZ+rbTg/qW7bYMAlJTkerznGuDU
v0hNzCr/81o5rbX0UhetcmKVOprUSWzfrw5ElLhAtzM7zivbZSnsOny8pC33FtQ5
iQbVcNGUjfFCM95ZipxxN9z0FwxpJ1paCPGYA86u2olWl/VnVPqEj7WYzO8H5W2q
aXpWH6HVf6B10pQrWWwUAAHyqYS5bZkQ
-----END PRIVATE KEY-----

@@ -0,0 +1,209 @@
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.tracing.apm;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.common.settings.SecureString;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.ThreadContext;
import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.core.CheckedRunnable;
import org.elasticsearch.test.rest.ESRestTestCase;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.stream.Collectors;
import static org.hamcrest.Matchers.empty;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasKey;
import static org.hamcrest.Matchers.not;
/**
* Tests around Elasticsearch's tracing support using APM.
*/
public class ApmIT extends ESRestTestCase {
private static final String DATA_STREAM = "traces-apm-default";
/**
* Check that if we send HTTP traffic to Elasticsearch, then traces are captured in APM server. The traces are generated in
* a separate Docker container, which continually fetches `/_nodes/stats`. We check for the following:
* <ul>
* <li>A transaction for the REST API call
* <li>A span for the task started by the REST call
* <li>A child span started by the above span
* </ul>
* <p>This proves that the hierarchy of spans is being correctly captured.
*/
public void testCapturesTracesForHttpTraffic() throws Exception {
checkTracesDataStream();
assertTracesExist();
}
private void checkTracesDataStream() throws Exception {
assertBusy(() -> {
final Response response = performRequestTolerantly(new Request("GET", "/_data_stream/" + DATA_STREAM));
assertOK(response);
}, 1, TimeUnit.MINUTES);
}
private void assertTracesExist() throws Exception {
// First look for a transaction for the REST calls that we make via the `tracegenerator` Docker container
final AtomicReference<String> transactionId = new AtomicReference<>();
assertBusy(() -> {
final Request tracesSearchRequest = new Request("GET", "/" + DATA_STREAM + "/_search");
tracesSearchRequest.setJsonEntity("""
{
"query": {
"match": { "transaction.name": "GET /_nodes/stats" }
}
}""");
final Response tracesSearchResponse = performRequestTolerantly(tracesSearchRequest);
assertOK(tracesSearchResponse);
final List<Map<String, Object>> documents = getDocuments(tracesSearchResponse);
assertThat(documents, not(empty()));
final Map<String, Object> tx = documents.get(0);
check(tx, "http.request.method", "GET");
check(tx, "http.response.status_code", 200);
check(tx, "labels.es_cluster_name", "docker-cluster");
check(tx, "labels.http_request_headers_authorization", "[REDACTED]");
check(tx, "span.kind", "SERVER");
check(tx, "transaction.result", "HTTP 2xx");
check(tx, "url.path", "/_nodes/stats");
final String txId = pluck(tx, "transaction.id");
transactionId.set(txId);
}, 1, TimeUnit.MINUTES);
// Then look for the task that the REST call starts
final AtomicReference<String> monitorNodeStatsSpanId = new AtomicReference<>();
assertBusy(() -> {
final List<Map<String, Object>> documents = searchByParentId(transactionId.get());
assertThat(documents, not(empty()));
final Map<String, Object> spansByName = documents.stream().collect(Collectors.toMap(d -> pluck(d, "span.name"), d -> d));
assertThat(spansByName, hasKey("cluster:monitor/nodes/stats"));
@SuppressWarnings("unchecked")
final Map<String, Object> span = (Map<String, Object>) spansByName.get("cluster:monitor/nodes/stats");
check(span, "span.kind", "INTERNAL");
final String spanId = pluck(span, "span.id");
monitorNodeStatsSpanId.set(spanId);
}, 1, TimeUnit.MINUTES);
// Finally look for the child task that the task above started
assertBusy(() -> {
final List<Map<String, Object>> documents = searchByParentId(monitorNodeStatsSpanId.get());
assertThat(documents, not(empty()));
final Map<String, Object> spansByName = documents.stream().collect(Collectors.toMap(d -> pluck(d, "span.name"), d -> d));
assertThat(spansByName, hasKey("cluster:monitor/nodes/stats[n]"));
}, 1, TimeUnit.MINUTES);
}
@SuppressWarnings("unchecked")
private <T> T pluck(Map<String, Object> map, String path) {
String[] parts = path.split("\\.");
Object result = map;
for (String part : parts) {
result = ((Map<String, ?>) result).get(part);
}
return (T) result;
}
private List<Map<String, Object>> searchByParentId(String parentId) throws IOException {
final Request searchRequest = new Request("GET", "/" + DATA_STREAM + "/_search");
searchRequest.setJsonEntity("""
{
"query": {
"match": { "parent.id": "%s" }
}
}""".formatted(parentId));
final Response response = performRequestTolerantly(searchRequest);
assertOK(response);
return getDocuments(response);
}
/**
* We don't need to clean up the cluster, particularly as we have Kibana and APM server using ES as well as our test, so declare
* that we need to preserve the cluster in order to prevent the usual cleanup logic from running (and inevitably failing).
*/
@Override
protected boolean preserveClusterUponCompletion() {
return true;
}
/**
* Turns exceptions into assertion failures so that {@link #assertBusy(CheckedRunnable)} can still retry.
*/
private Response performRequestTolerantly(Request request) {
try {
return client().performRequest(request);
} catch (Exception e) {
throw new AssertionError(e);
}
}
/**
* Customizes the client settings to use the same username / password that is configured in Docker.
*/
@Override
protected Settings restClientSettings() {
String token = basicAuthHeaderValue("admin", new SecureString("changeme".toCharArray()));
return Settings.builder().put(ThreadContext.PREFIX + ".Authorization", token).build();
}
/**
* Constructs the correct cluster address by looking up the dynamic port that Elasticsearch is exposed on.
*/
@Override
protected String getTestRestCluster() {
return "localhost:" + getProperty("test.fixtures.elasticsearch.tcp.9200");
}
@SuppressWarnings("unchecked")
private List<Map<String, Object>> getDocuments(Response response) throws IOException {
final Map<String, Object> stringObjectMap = ESRestTestCase.entityAsMap(response);
return (List<Map<String, Object>>) XContentMapValues.extractValue("hits.hits._source", stringObjectMap);
}
private String getProperty(String key) {
String value = System.getProperty(key);
if (value == null) {
throw new IllegalStateException(
"Could not find system properties from test.fixtures. "
+ "This test expects to run with the elasticsearch.test.fixtures Gradle plugin"
);
}
return value;
}
private <T> void check(Map<String, Object> doc, String path, T expected) {
assertThat(pluck(doc, path), equalTo(expected));
}
}
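The `pluck` helper in the test above walks a nested document map using a dotted path. A self-contained sketch of the same idea (the class name and sample data are illustrative, not part of the Elasticsearch test):

```java
import java.util.Map;

/**
 * Standalone sketch of the dotted-path lookup used by ApmIT's pluck()
 * helper: each dot-separated segment descends one level into a nested map.
 */
public class DottedPathLookup {
    @SuppressWarnings("unchecked")
    public static Object pluck(Map<String, Object> map, String path) {
        Object result = map;
        for (String part : path.split("\\.")) {
            result = ((Map<String, ?>) result).get(part);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = Map.of("transaction", Map.of("id", "abc123"));
        System.out.println(pluck(doc, "transaction.id")); // prints abc123
    }
}
```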

@@ -485,6 +485,7 @@ public class ActionModule extends AbstractModule {
actionPlugins.stream().flatMap(p -> p.getRestHeaders().stream()),
Stream.of(
new RestHeaderDefinition(Task.X_OPAQUE_ID_HTTP_HEADER, false),
new RestHeaderDefinition(Task.TRACE_STATE, false),
new RestHeaderDefinition(Task.TRACE_PARENT_HTTP_HEADER, false),
new RestHeaderDefinition(Task.X_ELASTIC_PRODUCT_ORIGIN_HTTP_HEADER, false)
)

@@ -184,7 +184,8 @@ public class PolicyUtil {
new RuntimePermission("createClassLoader"),
new RuntimePermission("getFileStoreAttributes"),
new RuntimePermission("accessUserInformation"),
new AuthPermission("modifyPrivateCredentials")
new AuthPermission("modifyPrivateCredentials"),
new RuntimePermission("accessSystemModules")
);
PermissionCollection modulePermissionCollection = new Permissions();
namedPermissions.forEach(modulePermissionCollection::add);

@@ -87,7 +87,9 @@ import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.common.util.BigArrays;
import org.elasticsearch.common.util.PageCacheRecycler;
import org.elasticsearch.core.IOUtils;
import org.elasticsearch.core.PathUtils;
import org.elasticsearch.core.Releasables;
import org.elasticsearch.core.SuppressForbidden;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.discovery.DiscoveryModule;
import org.elasticsearch.env.Environment;
@@ -397,6 +399,8 @@ public class Node implements Closeable {
);
}
deleteTemporaryApmConfig(jvmInfo);
this.pluginsService = pluginServiceCtor.apply(tmpSettings);
final Settings settings = mergePluginSettings(pluginsService.pluginMap(), tmpSettings);
@@ -422,7 +426,9 @@ public class Node implements Closeable {
Task.HEADERS_TO_COPY.stream()
).collect(Collectors.toSet());
final TaskManager taskManager = new TaskManager(settings, threadPool, taskHeaders);
final Tracer tracer = getTracer(pluginsService, settings);
final TaskManager taskManager = new TaskManager(settings, threadPool, taskHeaders, tracer);
// register the node.data, node.ingest, node.master, node.remote_cluster_client settings here so we can mark them private
final List<Setting<?>> additionalSettings = new ArrayList<>(pluginsService.flatMap(Plugin::getSettings).toList());
@@ -691,8 +697,6 @@ public class Node implements Closeable {
shardLimitValidator
);
final Tracer tracer = getTracer(pluginsService, clusterService, settings);
Collection<Object> pluginComponents = pluginsService.flatMap(
p -> p.createComponents(
client,
@@ -1101,14 +1105,45 @@ public class Node implements Closeable {
}
}
private Tracer getTracer(PluginsService pluginsService, ClusterService clusterService, Settings settings) {
/**
* If the JVM was started with the Elastic APM agent and a config file argument was specified, then
* delete the config file. The agent only reads it once, when supplied in this fashion, and it
* may contain a secret token.
*/
@SuppressForbidden(reason = "Cannot guarantee that the temp config path is relative to the environment")
private void deleteTemporaryApmConfig(JvmInfo jvmInfo) {
for (String inputArgument : jvmInfo.getInputArguments()) {
if (inputArgument.startsWith("-javaagent:")) {
final String agentArg = inputArgument.substring(11);
final String[] parts = agentArg.split("=", 2);
if (parts[0].matches("modules/x-pack-apm-integration/elastic-apm-agent-\\d+\\.\\d+\\.\\d+\\.jar")) {
if (parts.length == 2 && parts[1].startsWith("c=")) {
final Path apmConfig = PathUtils.get(parts[1].substring(2));
if (apmConfig.getFileName().toString().matches("^\\.elstcapm\\..*\\.tmp")) {
try {
Files.deleteIfExists(apmConfig);
} catch (IOException e) {
logger.error(
"Failed to delete temporary APM config file [" + apmConfig + "], reason: [" + e.getMessage() + "]",
e
);
}
}
}
return;
}
}
}
}
private Tracer getTracer(PluginsService pluginsService, Settings settings) {
final List<TracerPlugin> tracerPlugins = pluginsService.filterPlugins(TracerPlugin.class);
if (tracerPlugins.size() > 1) {
throw new IllegalStateException("A single TracerPlugin was expected but got: " + tracerPlugins);
}
return tracerPlugins.isEmpty() ? Tracer.NOOP : tracerPlugins.get(0).getTracer(clusterService, settings);
return tracerPlugins.isEmpty() ? Tracer.NOOP : tracerPlugins.get(0).getTracer(settings);
}
private HealthService createHealthService(

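`Node#deleteTemporaryApmConfig` above works by recognising a `-javaagent:<jar>=c=<config>` JVM argument and extracting the config path. A standalone sketch of that parsing, reusing the jar-path pattern from the diff (the surrounding class is hypothetical):

```java
import java.util.Optional;

/**
 * Sketch of extracting the temporary APM config path from a JVM
 * "-javaagent:<jar>=c=<config>" argument, mirroring the checks in
 * Node#deleteTemporaryApmConfig.
 */
public class ApmAgentArgParser {
    private static final String PREFIX = "-javaagent:";

    public static Optional<String> configPath(String inputArgument) {
        if (inputArgument.startsWith(PREFIX) == false) {
            return Optional.empty();
        }
        // Split "<jar>=c=<config>" into the agent jar path and its options
        final String[] parts = inputArgument.substring(PREFIX.length()).split("=", 2);
        if (parts.length == 2
            && parts[0].matches("modules/x-pack-apm-integration/elastic-apm-agent-\\d+\\.\\d+\\.\\d+\\.jar")
            && parts[1].startsWith("c=")) {
            return Optional.of(parts[1].substring(2));
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(configPath(
            "-javaagent:modules/x-pack-apm-integration/elastic-apm-agent-1.33.0.jar=c=/tmp/.elstcapm.1234.tmp"
        )); // prints Optional[/tmp/.elstcapm.1234.tmp]
    }
}
```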
@@ -8,10 +8,9 @@
package org.elasticsearch.plugins;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.tracing.Tracer;
public interface TracerPlugin {
Tracer getTracer(ClusterService clusterService, Settings settings);
Tracer getTracer(Settings settings);
}
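The change above narrows `TracerPlugin#getTracer` to take only `Settings`, dropping the `ClusterService` parameter. A minimal sketch of the narrowed contract, using stand-in types rather than the real Elasticsearch classes:

```java
import java.util.Map;

// Hypothetical plugin returning the no-op tracer; Tracer and the
// Map-based "settings" below are stand-ins for the real Elasticsearch types.
public class NoopTracerPlugin implements TracerPlugin {
    @Override
    public Tracer getTracer(Map<String, String> settings) {
        return Tracer.NOOP;
    }

    public static void main(String[] args) {
        System.out.println(new NoopTracerPlugin().getTracer(Map.of()) == Tracer.NOOP); // prints true
    }
}

// Stand-in for org.elasticsearch.tracing.Tracer.
interface Tracer {
    Tracer NOOP = new Tracer() {};
}

// Stand-in for the narrowed plugin contract: the tracer is now built
// from settings alone, with no ClusterService parameter.
interface TracerPlugin {
    Tracer getTracer(Map<String, String> settings);
}
```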

@@ -71,6 +71,10 @@ grant codeBase "${codebase.jna}" {
permission java.lang.RuntimePermission "accessDeclaredMembers";
};
grant codeBase "${codebase.log4j-api}" {
permission java.lang.RuntimePermission "getClassLoader";
};
//// Everything else:
grant {

@@ -60,6 +60,7 @@ import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.nullValue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyMap;
import static org.mockito.ArgumentMatchers.anyString;
@@ -128,22 +129,6 @@ public class RestControllerTests extends ESTestCase {
restHeaders.put("header.2", Collections.singletonList("true"));
restHeaders.put("header.3", Collections.singletonList("false"));
RestRequest fakeRequest = new FakeRestRequest.Builder(xContentRegistry()).withHeaders(restHeaders).build();
final RestController spyRestController = spy(restController);
when(spyRestController.getAllHandlers(null, fakeRequest.rawPath())).thenReturn(new Iterator<>() {
@Override
public boolean hasNext() {
return false;
}
@Override
public MethodHandlers next() {
return new MethodHandlers("/").addMethod(GET, RestApiVersion.current(), (request, channel, client) -> {
assertEquals("true", threadContext.getHeader("header.1"));
assertEquals("true", threadContext.getHeader("header.2"));
assertNull(threadContext.getHeader("header.3"));
});
}
});
AssertingChannel channel = new AssertingChannel(fakeRequest, false, RestStatus.BAD_REQUEST);
restController.dispatchRequest(fakeRequest, channel, threadContext);
// the rest controller relies on the caller to stash the context, so we should expect these values here as we didn't stash the
@@ -204,39 +189,26 @@ public class RestControllerTests extends ESTestCase {
}
/**
* Check that dispatching a request causes relevant trace headers to be put into the thread context.
* Check that the REST controller picks up and propagates W3C trace context headers via the {@link ThreadContext}.
* @see <a href="https://www.w3.org/TR/trace-context/">Trace Context - W3C Recommendation</a>
*/
public void testTraceParentAndTraceId() {
final ThreadContext threadContext = client.threadPool().getThreadContext();
Set<RestHeaderDefinition> headers = Set.of(new RestHeaderDefinition(Task.TRACE_PARENT_HTTP_HEADER, false));
final RestController restController = new RestController(headers, null, null, circuitBreakerService, usageService, tracer);
Map<String, List<String>> restHeaders = new HashMap<>();
restHeaders.put(
Task.TRACE_PARENT_HTTP_HEADER,
Collections.singletonList("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
);
final String traceParentValue = "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01";
restHeaders.put(Task.TRACE_PARENT_HTTP_HEADER, Collections.singletonList(traceParentValue));
RestRequest fakeRequest = new FakeRestRequest.Builder(xContentRegistry()).withHeaders(restHeaders).build();
final RestController spyRestController = spy(restController);
when(spyRestController.getAllHandlers(null, fakeRequest.rawPath())).thenReturn(new Iterator<>() {
@Override
public boolean hasNext() {
return false;
}
@Override
public MethodHandlers next() {
return new MethodHandlers("/").addMethod(GET, RestApiVersion.current(), (request, channel, client) -> {
assertEquals("0af7651916cd43dd8448eb211c80319c", threadContext.getHeader(Task.TRACE_ID));
assertNull(threadContext.getHeader(Task.TRACE_PARENT_HTTP_HEADER));
});
}
});
AssertingChannel channel = new AssertingChannel(fakeRequest, false, RestStatus.BAD_REQUEST);
restController.dispatchRequest(fakeRequest, channel, threadContext);
// the rest controller relies on the caller to stash the context, so we should expect these values here as we didn't stash the
// context in this test
assertEquals("0af7651916cd43dd8448eb211c80319c", threadContext.getHeader(Task.TRACE_ID));
assertNull(threadContext.getHeader(Task.TRACE_PARENT_HTTP_HEADER));
assertThat(threadContext.getHeader(Task.TRACE_ID), equalTo("0af7651916cd43dd8448eb211c80319c"));
assertThat(threadContext.getHeader(Task.TRACE_PARENT_HTTP_HEADER), nullValue());
assertThat(threadContext.getTransient("parent_" + Task.TRACE_PARENT_HTTP_HEADER), equalTo(traceParentValue));
}
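The rewritten test above checks that a `traceparent` header value is decomposed into a trace id. A self-contained sketch of that decomposition following the W3C field layout (`version-traceid-parentid-flags`); the class name is illustrative:

```java
/**
 * Sketch of splitting a W3C traceparent header into its fields:
 * version "-" trace-id "-" parent-id "-" trace-flags.
 * See https://www.w3.org/TR/trace-context/
 */
public class TraceParent {
    public static String traceId(String traceparent) {
        final String[] fields = traceparent.split("-");
        if (fields.length != 4 || fields[1].length() != 32) {
            throw new IllegalArgumentException("malformed traceparent: " + traceparent);
        }
        return fields[1]; // the 32-hex-digit trace id
    }

    public static void main(String[] args) {
        // The same example value used in the test above
        System.out.println(traceId("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"));
        // prints 0af7651916cd43dd8448eb211c80319c
    }
}
```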
public void testRequestWithDisallowedMultiValuedHeaderButSameValues() {
@@ -985,7 +957,6 @@ public class RestControllerTests extends ESTestCase {
boolean getSendResponseCalled() {
return getRestResponse() != null;
}
}
private static final class ExceptionThrowingChannel extends AbstractRestChannel {