diff --git a/content/en/administrators_guide/plan.md b/content/en/administrators_guide/plan.md
index 5ae799f7fd8..eeb39f18be1 100644
--- a/content/en/administrators_guide/plan.md
+++ b/content/en/administrators_guide/plan.md
@@ -256,7 +256,7 @@ Centrally administer and manage all of your Datadog Agents with [Fleet Automatio

 ### Remote Configuration

-Use Datadog's [Remote Configuration][35] (enabled by default), to remotely configure and change the behavior of Datadog components (for example, Agents, tracing libraries, and Observability Pipelines Worker) deployed in your infrastructure. For more information, see [supported products and capabilities][36].
+Use Datadog's [Remote Configuration][35] (enabled by default) to remotely configure and change the behavior of Datadog components (for example, Agents, SDKs, and Observability Pipelines Worker) deployed in your infrastructure. For more information, see [supported products and capabilities][36].

 ### Notebooks

diff --git a/content/en/agent/configuration/network.md b/content/en/agent/configuration/network.md
index f3d82a9d3cd..8d54770c882 100644
--- a/content/en/agent/configuration/network.md
+++ b/content/en/agent/configuration/network.md
@@ -268,7 +268,7 @@ The APM receiver and the DogStatsD ports are located in the **Trace Collection C

 # receiver_port: 8126
 {{< /code-block >}}
-context_info for instrumentation, the APM tracer overwrites it.
+context_info for instrumentation, the Datadog SDK overwrites it.

DD_REMOTE_CONFIG_ENABLED=true and DD_EXPERIMENTAL_FLAGGING_PROVIDER_ENABLED=true. Without the experimental flag, the feature flagging system does not start and the Provider returns the programmatic default.

-CORECLR_PROFILER or COR_PROFILER if you installed the tracer using the MSI.
+CORECLR_PROFILER or COR_PROFILER if you installed the SDK using the MSI.

-CORECLR_PROFILER or COR_PROFILER if you installed the tracer using the MSI.
+CORECLR_PROFILER or COR_PROFILER if you installed the SDK using the MSI.

-CORECLR_PROFILER or COR_PROFILER if you installed the tracer using the MSI.
+CORECLR_PROFILER or COR_PROFILER if you installed the SDK using the MSI.

traceSampleRate to the previously configured sessionSampleRate. For instance, if you used to have sessionSampleRate set to 10% and you bump it to 100% for RUM without Limits, decrease the traceSampleRate from 100% to 10% accordingly to ingest the same amount of traces.

-    Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.
+    Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the SDK, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.

-    Note: As the App and API Protection GCP Service Extensions integration is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.
+    Note: As the App and API Protection GCP Service Extensions integration is built on top of the Datadog Go Tracer, it generally follows the same release process as the SDK, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.

-    Note: As the Datadog SPOA is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, v2.4.0). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.
+    Note: As the Datadog SPOA is built on top of the Datadog Go Tracer, it generally follows the same release process as the SDK, and its Docker images are tagged with the corresponding tracer version (for example, v2.4.0). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.

-    Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.
+    Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the SDK, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.

-    Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the tracer, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.
+    Note: As the Datadog External Processor is built on top of the Datadog Go Tracer, it generally follows the same release process as the SDK, and its Docker images are tagged with the corresponding tracer version (for example, v2.2.2). In some cases, early release versions might be published between official tracer releases, and these images are tagged with a suffix such as -docker.1.
}}
1. Restart your Agent.
-## Review Remote Configuration status of tracing libraries
+## Review Remote Configuration status of SDKs
-You can gain visibility into the Remote Configuration status of your Tracer libraries through the [Remote Configuration UI][5].
+You can gain visibility into the Remote Configuration status of your SDKs through the [Remote Configuration UI][5].
-The following table describes the meaning of each tracing library status:
+The following table describes the meaning of each SDK status:
- | Tracing library Status| Description |
+ | SDK Status | Description |
|------------------|--------------------------------------------------|
- | CONNECTED | The tracing library is successfully connected to the Remote Configuration service through the associated Agent. This is the optimal state you want your tracing library to be in for Remote Configuration. |
- | UNAUTHORIZED | The tracing library is associated with an Agent which doesn't have `APM Remote Configuration Read` permission on its API key. To fix the issue, you need to enable Remote Configuration capability on the API Key used by the Agent associated with the tracing library.|
- | CONNECTION ERROR | The tracing library deployed in your environment is associated with an Agent that has `remote_config.enabled` set to true in its `datadog.yaml` configuration file, however, the Agent cannot be found in the Remote Configuration service. The most likely cause of this is that the associated Agent is unable to reach Remote Configuration [endpoints][6]. To fix the issue, you need to allow outbound HTTPS access to Remote Configuration endpoints from your environment.
- | DISABLED | The tracing library deployed in your environment is associated with an Agent that has `remote_config.enabled` set to false in its `datadog.yaml` configuration file. This could be set deliberately or mistakenly. To enable Remote Configuration on the associated Agent, set `remote_config.enabled` to true. |
- | NOT CONNECTED | The tracing library cannot be found in the Remote Configuration service and is associated with an Agent that could have `remote_config.enabled` set to true or false in its `datadog.yaml` configuration file. Check your local Agent configuration or your proxy settings.|
- | UNSUPPORTED AGENT | The tracing library is associated with an Agent which is not Remote Configuration capable. To fix this issue, update the associated Agent software to the latest available version. |
- | NOT DETECTED | The tracing library does not support Remote Configuration. To fix this issue, update the tracing library software to the latest available version. |
- | UNKNOWN | The tracing library status is unknown, and it can't be determined if an Agent is associated with the tracing library. For example, this could be because the Agent is deployed on a fully managed serverless container service like AWS Fargate. |
+ | CONNECTED | The SDK is successfully connected to the Remote Configuration service through the associated Agent. This is the optimal state for the SDK to be in for Remote Configuration. |
+ | UNAUTHORIZED | The SDK is associated with an Agent that doesn't have the `APM Remote Configuration Read` permission on its API key. To fix the issue, enable the Remote Configuration capability on the API key used by the Agent associated with the SDK. |
+ | CONNECTION ERROR | The SDK deployed in your environment is associated with an Agent that has `remote_config.enabled` set to true in its `datadog.yaml` configuration file; however, the Agent cannot be found in the Remote Configuration service. The most likely cause is that the associated Agent cannot reach the Remote Configuration [endpoints][6]. To fix the issue, allow outbound HTTPS access to the Remote Configuration endpoints from your environment. |
+ | DISABLED | The SDK deployed in your environment is associated with an Agent that has `remote_config.enabled` set to false in its `datadog.yaml` configuration file. This could be set deliberately or mistakenly. To enable Remote Configuration on the associated Agent, set `remote_config.enabled` to true. |
+ | NOT CONNECTED | The SDK cannot be found in the Remote Configuration service and is associated with an Agent that could have `remote_config.enabled` set to true or false in its `datadog.yaml` configuration file. Check your local Agent configuration or your proxy settings. |
+ | UNSUPPORTED AGENT | The SDK is associated with an Agent that is not Remote Configuration capable. To fix this issue, update the associated Agent to the latest available version. |
+ | NOT DETECTED | The SDK does not support Remote Configuration. To fix this issue, update the SDK to the latest available version. |
+ | UNKNOWN | The SDK status is unknown, and it can't be determined whether an Agent is associated with the SDK. For example, this could be because the Agent is deployed on a fully managed serverless container service such as AWS Fargate. |
## Opting out of Remote Configuration for Fleet Automation
diff --git a/content/en/tracing/guide/resource_based_sampling.md b/content/en/tracing/guide/resource_based_sampling.md
index 50dec3d77f7..b0cce542f06 100644
--- a/content/en/tracing/guide/resource_based_sampling.md
+++ b/content/en/tracing/guide/resource_based_sampling.md
@@ -25,7 +25,7 @@ Remote configuration allows you to dynamically set ingestion [sampling rates by
### Tracing library version
-Find below the minimum tracing library version required for the feature:
+The following table lists the minimum SDK version required for this feature:
Language | Minimum version required
----------|--------------------------
@@ -47,7 +47,7 @@ To see configured sampling rates by resource, navigate to the Ingestion controls
- The `Ingested bytes` column surfaces the ingested bytes from spans of the service and resource, while the `Downstream bytes` column surfaces the ingested bytes from spans where the sampling decision is made starting from that service and resource, including bytes from downstream services in the call chain.
- The `Configuration` column surfaces where the resource sampling rate is being applied from:
- `Automatic` if the [default head-based sampling mechanism][8] from the Agent applies.
- - `Local Configured` if a [sampling rule][7] was set locally in the tracing library.
+ - `Local Configured` if a [sampling rule][7] was set locally in the SDK.
- `Remote Configured` if a remote sampling rule was set from the Datadog UI. To learn how to configure sampling rules from the Ingestion Control page, read the section on [remotely configuring sampling rules](#remotely-configure-sampling-rules-for-the-service).
## Remotely configure sampling rules for the service
diff --git a/content/en/tracing/guide/send_traces_to_agent_by_api.md b/content/en/tracing/guide/send_traces_to_agent_by_api.md
index c041d6175de..cd44e0b82ad 100644
--- a/content/en/tracing/guide/send_traces_to_agent_by_api.md
+++ b/content/en/tracing/guide/send_traces_to_agent_by_api.md
@@ -17,7 +17,7 @@ aliases:
Datadog APM allows you to collect performance metrics by tracing your code to determine which parts of your application are slow or inefficient.
-Tracing data is sent from your instrumented code to the Datadog Agent through an HTTP API. Datadog tracing libraries simplify sending metrics to the Datadog Agent. However you might want to interact directly with the API to instrument applications that cannot use the libraries or are written in languages that don't yet have an official Datadog tracing library.
+Tracing data is sent from your instrumented code to the Datadog Agent through an HTTP API. Datadog SDKs simplify sending metrics to the Datadog Agent. However, you might want to interact directly with the API to instrument applications that cannot use the SDKs or are written in languages that don't yet have an official Datadog SDK.
The tracing API is an Agent API rather than a service-side API. Submit your traces to the `http://localhost:8126/v0.3/traces` local endpoint so your Agent can forward them to Datadog.
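As an illustration, here is a minimal Python sketch of assembling and submitting such a payload with only the standard library. The helper names are hypothetical, and the exact span fields and `PUT` verb are assumptions based on the v0.3 endpoint described in this guide; prefer an official Datadog SDK where one exists.

```python
import json
import urllib.request

def build_trace_payload(trace_id, span_id, service, name, resource, start_ns, duration_ns):
    """Build the list-of-traces structure expected by the Agent's /v0.3/traces
    endpoint: a list of traces, where each trace is a list of span dictionaries."""
    span = {
        "trace_id": trace_id,
        "span_id": span_id,
        "name": name,
        "resource": resource,
        "service": service,
        "start": start_ns,       # span start, in nanoseconds since the epoch
        "duration": duration_ns, # span duration, in nanoseconds
    }
    return [[span]]

def send_traces(payload, host="localhost", port=8126):
    """Submit the payload to a local Agent; succeeds only if an Agent is running."""
    req = urllib.request.Request(
        f"http://{host}:{port}/v0.3/traces",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)

payload = build_trace_payload(
    trace_id=123456, span_id=1, service="my-service",
    name="http.request", resource="GET /home",
    start_ns=1700000000000000000, duration_ns=5_000_000,
)
```

`send_traces(payload)` is not invoked here because it requires a running local Agent.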
@@ -43,7 +43,7 @@ and each span is a dictionary with a `trace_id`, `span_id`, `resource` and so on
### Model
Makefile also sets the DD_TRACE_SAMPLE_RATE environment variable to 1, which represents a 100% sample rate. A 100% sample rate ensures that all requests to the notes service are sent to the Datadog backend for analysis and display for the purposes of this tutorial. In an actual production or high-volume environment, you wouldn't specify this high of a rate. Setting a high sample rate with this variable in the application overrides the Agent configuration and results in a very large volume of data being sent to Datadog. For most use cases, allow the Agent to automatically determine the sampling rate.

-    Note: Always use the commands above to completely stop and restart IIS to enable the tracer. Avoid using the IIS Manager GUI application or iisreset.exe.
+    Note: Always use the commands above to completely stop and restart IIS to enable the SDK. Avoid using the IIS Manager GUI application or iisreset.exe.

-    Note: Always use the commands above to completely stop and restart IIS to enable the tracer. Avoid using the IIS Manager GUI application or iisreset.exe.
+    Note: Always use the commands above to completely stop and restart IIS to enable the SDK. Avoid using the IIS Manager GUI application or iisreset.exe.
TracerSettings before creating the Tracer. Changes made to TracerSettings properties after the Tracer is created are ignored.
@@ -83,7 +83,7 @@ Tracer.Configure(settings);
{{% tab "JSON file" %}}
-To configure the tracer using a JSON file, create `datadog.json` in the instrumented application's directory. The root JSON object must be an object with a key-value pair for each setting. For example:
+To configure the SDK using a JSON file, create `datadog.json` in the instrumented application's directory. The root of the file must be a JSON object with a key-value pair for each setting. For example:
```json
{
@@ -234,7 +234,7 @@ Available since version `2.42.0`
**Default**: `%ProgramData%\Datadog .NET Tracer\logs\` on Windows, `/var/log/datadog/dotnet` on Linux
`DD_TRACE_LOGFILE_RETENTION_DAYS`
-: During the tracer's startup, this configuration uses the tracer's current log directory to delete log files the same age and older than the given number of days. Added in version 2.19.0.
@@ -169,7 +169,7 @@ The value should be a JSON string that applies the necessary security context to
### Custom instrumentation
-Custom instrumentation still requires you to import the tracing library. Configuration variables like .NET's `DD_TRACE_METHODS` remain available for defining custom spans.
+Custom instrumentation still requires you to import the SDK. Configuration variables like .NET's `DD_TRACE_METHODS` remain available for defining custom spans.
## Environment-specific troubleshooting
@@ -300,7 +300,7 @@ SSI does not inject into applications that already use a `-javaagent` option or
### Ruby
-Ruby injection modifies the `Gemfile` to add the Datadog tracing library. If injection support is later removed, the application may fail to start due to the missing dependency.
+Ruby injection modifies the `Gemfile` to add the Datadog SDK. If injection support is later removed, the application may fail to start due to the missing dependency.
To resolve this, restore the original `Gemfile`. If you still want to use APM after removing injection, run `bundle install` to download the gem.
diff --git a/content/en/tracing/trace_collection/span_links/_index.md b/content/en/tracing/trace_collection/span_links/_index.md
index 0e49f5d77d3..ef07a677473 100644
--- a/content/en/tracing/trace_collection/span_links/_index.md
+++ b/content/en/tracing/trace_collection/span_links/_index.md
@@ -56,9 +56,9 @@ If your application is instrumented with:
**Note**: This section documents minimum support for generating span links with Datadog APM client libraries (with the OpenTelemetry API). Span links generated by the OpenTelemetry SDK are sent to Datadog through [OTLP Ingest][8].
-Agent v7.52.0 or greater is required to generate span links using [Datadog tracing libraries][7]. Support for span links was introduced in the following releases:
+Agent v7.52.0 or greater is required to generate span links using [Datadog SDKs][7]. Support for span links was introduced in the following releases:
-| Language | Minimum tracing library version |
+| Language | Minimum SDK version |
|-----------|---------------------------------|
| C++/Proxy | Not yet supported |
| Go | 1.61.0 |
diff --git a/content/en/tracing/trace_collection/trace_context_propagation/_index.md b/content/en/tracing/trace_collection/trace_context_propagation/_index.md
index 9aa3c7684fe..064279f1574 100644
--- a/content/en/tracing/trace_collection/trace_context_propagation/_index.md
+++ b/content/en/tracing/trace_collection/trace_context_propagation/_index.md
@@ -23,7 +23,7 @@ further_reading:
text: 'Interoperability of OpenTelemetry API and Datadog instrumented traces'
---
-Trace Context propagation is the mechanism of passing tracing information like Trace ID, Span ID, and sampling decisions from one part of a distributed application to another. This enables all traces (and additional telemetry) in a request to be correlated. When automatic instrumentation is enabled, trace context propagation is handled automatically by the APM SDK.
+Trace Context propagation is the mechanism of passing tracing information like Trace ID, Span ID, and sampling decisions from one part of a distributed application to another. This enables all traces (and additional telemetry) in a request to be correlated. When automatic instrumentation is enabled, trace context propagation is handled automatically by the Datadog SDK.
By default, the Datadog SDK extracts and injects distributed tracing headers using the following formats:
@@ -311,7 +311,7 @@ This function's optional argument accepts an array of injection style names. It
{{% collapse-content title="RabbitMQ" level="h4" %}}
-The PHP APM SDK supports automatic tracing of the `php-amqplib/php-amqplib` library (version 0.87.0+). However, in some cases, your distributed trace may be disconnected. For example, when reading messages from a distributed queue using the `basic_get` method outside an existing trace, you need to add a custom trace around the `basic_get` call and corresponding message processing:
+The PHP SDK supports automatic tracing of the `php-amqplib/php-amqplib` library (version 0.87.0+). However, in some cases, your distributed trace may be disconnected. For example, when reading messages from a distributed queue using the `basic_get` method outside an existing trace, you need to add a custom trace around the `basic_get` call and corresponding message processing:
```php
// Create a surrounding trace
diff --git a/content/en/tracing/trace_collection/trace_context_propagation/ruby_v1.md b/content/en/tracing/trace_collection/trace_context_propagation/ruby_v1.md
index 06a71590806..ee518b677c0 100644
--- a/content/en/tracing/trace_collection/trace_context_propagation/ruby_v1.md
+++ b/content/en/tracing/trace_collection/trace_context_propagation/ruby_v1.md
@@ -13,7 +13,7 @@ further_reading:
### Headers extraction and injection
-Datadog APM tracer supports [B3][6] and [W3C Trace Context][7] header extraction and injection for distributed tracing.
+The Datadog SDK supports [B3][6] and [W3C Trace Context][7] header extraction and injection for distributed tracing.
Distributed headers injection and extraction is controlled by configuring injection and extraction styles. The following styles are supported:
@@ -51,7 +51,7 @@ Datadog.configure do |c|
end
```
-For more information about trace context propagation configuration, read [the Distributed Tracing section][1] in the Ruby Tracing Library Configuration docs.
+For more information about trace context propagation configuration, read [the Distributed Tracing section][1] in the Ruby SDK Configuration docs.
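As background on the W3C Trace Context format mentioned above: a `traceparent` header packs the trace context into four dash-separated fields (version, 128-bit trace ID, 64-bit parent span ID, trace flags). This standalone Python sketch is for illustration only and is not part of any Datadog SDK:

```python
def parse_traceparent(header):
    """Split a W3C `traceparent` header into its four dash-separated fields:
    version, 128-bit trace ID, 64-bit parent span ID, and trace flags."""
    version, trace_id, parent_id, flags = header.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent header")
    return {
        "version": version,
        "trace_id": int(trace_id, 16),
        "parent_id": int(parent_id, 16),
        # bit 0 of the flags field is the "sampled" flag
        "sampled": bool(int(flags, 16) & 0x01),
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```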
## Further Reading
diff --git a/content/en/tracing/trace_collection/tracing_naming_convention/_index.md b/content/en/tracing/trace_collection/tracing_naming_convention/_index.md
index 0fdd784494b..8e52e905c6d 100644
--- a/content/en/tracing/trace_collection/tracing_naming_convention/_index.md
+++ b/content/en/tracing/trace_collection/tracing_naming_convention/_index.md
@@ -14,7 +14,7 @@ further_reading:
## Overview
-[Datadog tracing libraries][1] provide out-of-the-box support for instrumenting a variety of libraries.
+[Datadog SDKs][1] provide out-of-the-box support for instrumenting a variety of libraries.
These instrumentations generate spans to represent logical units of work in distributed systems.
Each span consists of [span tags][2] to provide additional information on the unit of work happening in the system. Naming conventions describe the name and content that can be used in span events.
diff --git a/content/en/tracing/trace_explorer/span_tags_attributes.md b/content/en/tracing/trace_explorer/span_tags_attributes.md
index 7c9c0917e34..aed21ccc8af 100644
--- a/content/en/tracing/trace_explorer/span_tags_attributes.md
+++ b/content/en/tracing/trace_explorer/span_tags_attributes.md
@@ -25,7 +25,7 @@ Reserved attributes are a subset of span attributes that are present on every sp
### Span attributes
-Span attributes are the content of your span. These are collected out-of-the-box in tracing libraries using automatic instrumentation, manually using custom instrumentation, or remapped in the Datadog backend based on source attributes (see [peer attributes][11], remapped from some source attributes). To search on a specific span attribute, you must prepend an `@` character at the beginning of the attribute key.
+Span attributes are the content of your span. They are collected out-of-the-box by SDKs using automatic instrumentation, added manually using custom instrumentation, or remapped in the Datadog backend from source attributes (for example, [peer attributes][11]). To search on a specific span attribute, prepend an `@` character to the attribute key.
For instance, to find spans representing calls to a `users` table from a postgres database, use the following query: `@peer.db.name:users @peer.db.system:postgres`.
diff --git a/content/en/tracing/trace_pipeline/adaptive_sampling.md b/content/en/tracing/trace_pipeline/adaptive_sampling.md
index 792d24d1031..36e8c3bcff4 100644
--- a/content/en/tracing/trace_pipeline/adaptive_sampling.md
+++ b/content/en/tracing/trace_pipeline/adaptive_sampling.md
@@ -35,7 +35,7 @@ To configure services to use adaptive sampling, follow the instructions listed b
### Tracing library versions
-The following table lists minimum tracing library versions required for adaptive sampling:
+The following table lists minimum SDK versions required for adaptive sampling:
| Language | Minimum version required |
|-------------|--------------------------|
@@ -109,7 +109,7 @@ That monthly target volume is recomputed every 30 minutes.
{{< img src="/tracing/guide/adaptive_sampling/volume_based_target_setting.png" alt="Volume based target setting" style="width:100%;">}}
If you are configuring the first service for adaptive sampling, ensure that the ingestion volume target is `>0`. For subsequent services, increase the allocated budget after each new service is onboarded to account for the new volume.
- The configured budget is only allocated to services enrolled in adaptive sampling. It does not include ingested volume from services not enrolled in adaptive sampling, local sampling rules, or other sampling mechanisms configured locally in the Agent or tracing libraries.
+ The configured budget is only allocated to services enrolled in adaptive sampling. It does not include ingested volume from services not enrolled in adaptive sampling, local sampling rules, or other sampling mechanisms configured locally in the Agent or SDKs.
## Configure adaptive sampling for a service
@@ -130,7 +130,7 @@ The table includes:
- **Downstream bytes**: Ingested bytes from spans where the sampling decision starts from that service and resource, including downstream services.
- **Configuration**: Source of the resource sampling rate:
- `AUTOMATIC`: [Default head-based sampling mechanism][8] from the Agent.
- - `CONFIGURED LOCAL`: [Sampling rule][7] set locally in the tracing library.
+ - `CONFIGURED LOCAL`: [Sampling rule][7] set locally in the SDK.
- `CONFIGURED REMOTE`: Remote sampling rule set from the Datadog UI.
- `ADAPTIVE REMOTE`: Adaptive sampling rules set by Datadog.
diff --git a/content/en/tracing/trace_pipeline/ingestion_controls.md b/content/en/tracing/trace_pipeline/ingestion_controls.md
index fe3337fa024..351036d6840 100644
--- a/content/en/tracing/trace_pipeline/ingestion_controls.md
+++ b/content/en/tracing/trace_pipeline/ingestion_controls.md
@@ -60,7 +60,7 @@ Traffic Breakdown
: A detailed breakdown of traffic sampled and unsampled for traces starting from the service. See [Traffic breakdown](#traffic-breakdown) for more information.
Ingestion Configuration
-: Shows `Automatic` if the [default head-based sampling mechanism][4] from the Agent applies. If the ingestion was configured with [trace sampling rules][8], the service is marked as `Configured`; a `Local` label is set when the sampling rule is applied from configuration in the tracing library, a `Remote` label is set when the sampling rule is applied remotely, from the UI. For more information about configuring ingestion for a service, read about [changing the default ingestion rate](#configure-the-service-ingestion-rate).
+: Shows `Automatic` if the [default head-based sampling mechanism][4] from the Agent applies. If ingestion was configured with [trace sampling rules][8], the service is marked as `Configured`: a `Local` label is set when the sampling rule is applied from configuration in the SDK, and a `Remote` label is set when the sampling rule is applied remotely from the UI. For more information about configuring ingestion for a service, read about [changing the default ingestion rate](#configure-the-service-ingestion-rate).
Infrastructure
: Hosts, containers, and functions on which the service is running.
@@ -86,7 +86,7 @@ The breakdown is composed of the following parts:
1. By default, the [Agent automatically sets a sampling rate][4] on services, depending on service traffic.
2. The service is configured to ingest a certain percentage of traces using [sampling rules][8].
-- **Complete traces dropped by the tracer rate limiter** (orange): When you choose to manually set the service ingestion rate as a percentage with trace sampling rules, a rate limiter is automatically enabled, set to 100 traces per second by default. See the [rate limiter][8] documentation to change this rate.
+- **Complete traces dropped by the SDK rate limiter** (orange): When you choose to manually set the service ingestion rate as a percentage with trace sampling rules, a rate limiter is automatically enabled, set to 100 traces per second by default. See the [rate limiter][8] documentation to change this rate.
- **Traces dropped due to the Agent CPU or RAM limit** (red): This mechanism may drop spans and create incomplete traces. To fix this, increase the CPU and memory allocation for the infrastructure that the Agent runs on.
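For intuition, the rate limiter described in the breakdown above caps ingestion at a fixed number of traces per second (100 by default) and behaves like a token bucket. The sketch below is illustrative only and is not the SDKs' actual implementation:

```python
import time

class TokenBucket:
    """Token-bucket sketch of a traces-per-second rate limiter."""
    def __init__(self, rate_per_s, now=time.monotonic):
        self.rate = float(rate_per_s)
        self.tokens = float(rate_per_s)  # allow an initial burst of one second's budget
        self.now = now
        self.last = now()

    def allow(self):
        """Return True if one more trace may be kept right now."""
        current = self.now()
        # Refill proportionally to elapsed time, capped at one second's budget.
        self.tokens = min(self.rate, self.tokens + (current - self.last) * self.rate)
        self.last = current
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With a simulated clock, a burst of 150 sampling decisions at the same
# instant against a 100 traces-per-second limit keeps only the first 100.
clock = [0.0]
limiter = TokenBucket(100, now=lambda: clock[0])
kept = sum(limiter.allow() for _ in range(150))
```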
@@ -105,7 +105,7 @@ The table lists the applied sampling rates by resource of the service.
- The `Ingested bytes` column surfaces the ingested bytes from spans of the service and resource, while the `Downstream bytes` column surfaces the ingested bytes from spans where the sampling decision is made starting from that service and resource, including bytes from downstream services in the call chain.
- The `Configuration` column surfaces where the resource sampling rate is being applied from:
- `Automatic` if the [default head-based sampling mechanism][4] from the Agent applies.
- - `Local Configured` if a [sampling rule][8] was set locally in the tracing library.
+ - `Local Configured` if a [sampling rule][8] was set locally in the SDK.
- `Remote Configured` if a remote sampling rule was set from the Datadog UI. To learn how to configure sampling rules from the Ingestion Control page, read the section on [remotely configuring sampling rules](#configure-the-service-ingestion-rates-by-resource).
**Note**: If the service is not making sampling decisions, the service's resources will be collapsed under the `Resources not making sampling decisions` row.
@@ -120,11 +120,11 @@ If most of your service ingestion volume is due to decisions taken by upstream s
For further investigations, use the [APM Trace - Estimated Usage Dashboard][12], which provides global ingestion information as well as breakdown graphs by `service`, `env` and `ingestion reason`.
-#### Agent and tracing library versions
+#### Agent and SDK versions
-See the **Datadog Agent and tracing library versions** your service is using. Compare the versions in use to the latest released versions to make sure you are running recent and up-to-date Agents and libraries.
+See the **Datadog Agent and SDK versions** your service is using. Compare the versions in use to the latest released versions to make sure you are running recent and up-to-date Agents and SDKs.
-{{< img src="tracing/trace_indexing_and_ingestion/agent_tracer_version.png" style="width:90%;" alt="Agent and tracing library versions" >}}
+{{< img src="tracing/trace_indexing_and_ingestion/agent_tracer_version.png" style="width:90%;" alt="Agent and SDK versions" >}}
### Managing services' sampling rates
@@ -142,7 +142,7 @@ Using **Remote Configuration** for service ingestion rates has specific requirem
- [Remote Configuration][3] enabled for your Agent.
- `APM Remote Configuration Write` [permissions][20]. If you don't have these permissions, ask your Datadog admin to update your permissions from your organization settings.
-Find below the minimum tracing library version required for the feature:
+The following table lists the minimum SDK version required for this feature:
| Language | Minimum version required |
|----------|--------------------------|
diff --git a/content/en/tracing/trace_pipeline/ingestion_mechanisms.md b/content/en/tracing/trace_pipeline/ingestion_mechanisms.md
index eeaa6b0dbe8..3814331ec31 100644
--- a/content/en/tracing/trace_pipeline/ingestion_mechanisms.md
+++ b/content/en/tracing/trace_pipeline/ingestion_mechanisms.md
@@ -1,6 +1,6 @@
---
title: Ingestion Mechanisms
-description: "Overview of the mechanisms in the tracer and the Agent that control trace ingestion."
+description: "Overview of the mechanisms in the SDK and the Agent that control trace ingestion."
aliases:
- /tracing/trace_ingestion/mechanisms
further_reading:
@@ -21,7 +21,7 @@ further_reading:
{{< img src="tracing/apm_lifecycle/ingestion_sampling_rules.png" style="width:100%; background:none; border:none; box-shadow:none;" alt="Ingestion Sampling Rules" >}}
-Multiple mechanisms are responsible for choosing if spans generated by your applications are sent to Datadog (_ingested_). The logic behind these mechanisms lie in the [tracing libraries][1] and in the Datadog Agent. Depending on the configuration, all or some the traffic generated by instrumented services is ingested.
+Multiple mechanisms are responsible for choosing if spans generated by your applications are sent to Datadog (_ingested_). The logic behind these mechanisms lies in the [SDKs][1] and in the Datadog Agent. Depending on the configuration, all or some of the traffic generated by instrumented services is ingested.
Each ingested span has a unique **ingestion reason** attached, referring to one of the mechanisms described in this page. [Usage metrics][2] `datadog.estimated_usage.apm.ingested_bytes` and `datadog.estimated_usage.apm.ingested_spans` are tagged by `ingestion_reason`.
@@ -37,12 +37,12 @@ Because the decision is made at the beginning of the trace and then conveyed to
You can set sampling rates for head-based sampling in two places:
- At the **[Agent](#in-the-agent)** level (default)
-- At the **[Tracing Library](#in-tracing-libraries-user-defined-rules)** level: any tracing library mechanism overrides the Agent setup.
+- At the **[SDK](#in-sdks-user-defined-rules)** level: any SDK mechanism overrides the Agent setup.
### In the Agent
`ingestion_reason: auto`
-The Datadog Agent continuously sends sampling rates to tracing libraries to apply at the root of traces. The Agent adjusts rates to achieve a target of overall ten traces per second, distributed to services depending on the traffic.
+The Datadog Agent continuously sends sampling rates to SDKs to apply at the root of traces. The Agent adjusts these rates to achieve an overall target of ten traces per second, distributed across services depending on their traffic.
For instance, if service `A` has more traffic than service `B`, the Agent might vary the sampling rate for `A` such that `A` keeps no more than seven traces per second, and similarly adjust the sampling rate for `B` such that `B` keeps no more than three traces per second, for a total of 10 traces per second.
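The proportional split in the example above can be sketched as follows. This is a simplified illustration only, not the Agent's actual algorithm; the function name and traffic numbers are made up for the example:

```python
# Simplified sketch (not the Agent's real implementation): split a global
# target of 10 traces per second across services in proportion to the
# number of root traces each service starts.
def sampling_rates(traffic_per_sec, target_tps=10.0):
    """traffic_per_sec maps service name -> root traces started per second."""
    total = sum(traffic_per_sec.values())
    rates = {}
    for service, tps in traffic_per_sec.items():
        kept_tps = target_tps * tps / total        # this service's share of the target
        rates[service] = min(1.0, kept_tps / tps)  # probability applied at the trace root
    return rates

# Service A starts 700 traces/s, service B starts 300 traces/s:
# A keeps ~7 traces/s and B keeps ~3 traces/s, for a total of 10.
rates = sampling_rates({"A": 700.0, "B": 300.0})
```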
@@ -59,15 +59,15 @@ Set Agent's target traces-per-second in its main configuration file (`datadog.ya
```
**Notes**:
-- The traces-per-second sampling rate set in the Agent only applies to Datadog tracing libraries. It has no effect on other tracing libraries such as OpenTelemetry SDKs.
+- The traces-per-second sampling rate set in the Agent only applies to Datadog SDKs. It has no effect on other SDKs such as OpenTelemetry SDKs.
- The target is not a fixed value. In reality, it fluctuates depending on traffic spikes and other factors.
All the spans from a trace sampled using the Datadog Agent's [automatically computed sampling rates](#in-the-agent) are tagged with the ingestion reason `auto`. The `ingestion_reason` tag is also set on [usage metrics][2]. Services using the Datadog Agent default mechanism are labeled as `Automatic` in the [Ingestion Control Page][5] Configuration column.
-### In tracing libraries: user-defined rules
+### In SDKs: user-defined rules
`ingestion_reason: rule`
-For more granular control, use tracing library sampling configuration options:
+For more granular control, use SDK sampling configuration options:
- Set a specific **sampling rate to apply to the root of the trace**, by service, and/or resource name, overriding the Agent's [default mechanism](#in-the-agent).
- Set a **rate limit** on the number of ingested traces per second. The default rate limit is 100 traces per second per service instance (when using the Agent [default mechanism](#in-the-agent), the rate limiter is ignored).
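The per-service-instance rate limit described above behaves like a token bucket. The sketch below illustrates that behavior under assumed semantics; it is not the SDKs' actual implementation:

```python
import time

# Token-bucket sketch of the trace rate limiter: allow up to `limit` kept
# traces per second per service instance; past that, traces are dropped.
class TraceRateLimiter:
    def __init__(self, limit=100):
        self.limit = limit
        self.tokens = float(limit)      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the bucket size.
        self.tokens = min(float(self.limit),
                          self.tokens + (now - self.last) * self.limit)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With `limit=100`, a burst larger than the limit is trimmed: the first 100 traces in a second are kept, and further traces are dropped until tokens refill.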
@@ -105,7 +105,7 @@ Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT`
**Note**: The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
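The substitution described in the note can be generated mechanically. A small helper (hypothetical, for illustration only) that turns a legacy `DD_TRACE_SAMPLE_RATE` value into the equivalent `DD_TRACE_SAMPLING_RULES` JSON string:

```python
import json

def legacy_rate_to_rules(sample_rate):
    """Convert a legacy DD_TRACE_SAMPLE_RATE value (for example 0.1) into
    the equivalent DD_TRACE_SAMPLING_RULES JSON string."""
    return json.dumps([{"sample_rate": sample_rate}])

print(legacy_rate_to_rules(0.1))  # [{"sample_rate": 0.1}]
```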
-Read more about sampling controls in the [Java tracing library documentation][2].
+Read more about sampling controls in the [Java SDK documentation][2].
[1]: /tracing/guide/resource_based_sampling
[2]: /tracing/trace_collection/dd_libraries/java
@@ -133,7 +133,7 @@ Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT`
**Note**: The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
-Read more about sampling controls in the [Python tracing library documentation][2].
+Read more about sampling controls in the [Python SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-py/releases/tag/v2.8.0
[2]: /tracing/trace_collection/dd_libraries/python
@@ -160,7 +160,7 @@ export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 0.5}]'
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
-Read more about sampling controls in the [Ruby tracing library documentation][1].
+Read more about sampling controls in the [Ruby SDK documentation][1].
[1]: /tracing/trace_collection/dd_libraries/ruby#sampling
{{% /tab %}}
@@ -187,7 +187,7 @@ Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT`
**Note**: The use of `DD_TRACE_SAMPLE_RATE` is deprecated. Use `DD_TRACE_SAMPLING_RULES` instead. For instance, if you already set `DD_TRACE_SAMPLE_RATE` to `0.1`, set `DD_TRACE_SAMPLING_RULES` to `[{"sample_rate":0.1}]` instead.
-Read more about sampling controls in the [Go tracing library documentation][1].
+Read more about sampling controls in the [Go SDK documentation][1].
[1]: /tracing/trace_collection/dd_libraries/go
[2]: https://github.com/DataDog/dd-trace-go/releases/tag/v1.60.0
@@ -223,7 +223,7 @@ tracer.init({
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
-Read more about sampling controls in the [Node.js tracing library documentation][1].
+Read more about sampling controls in the [Node.js SDK documentation][1].
[1]: /tracing/trace_collection/dd_libraries/nodejs
{{% /tab %}}
@@ -247,7 +247,7 @@ export DD_TRACE_SAMPLE_RATE=0.1
export DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "resource":"GET /checkout", "sample_rate": 1},{"service": "my-service", "sample_rate": 0.2}]'
```
-Read more about sampling controls in the [PHP tracing library documentation][1].
+Read more about sampling controls in the [PHP SDK documentation][1].
[1]: /tracing/trace_collection/dd_libraries/php
{{% /tab %}}
@@ -299,7 +299,7 @@ $env:DD_TRACE_SAMPLING_RULES='[{"service": "my-service", "sample_rate": 0.5}]'
Configure a rate limit by setting the environment variable `DD_TRACE_RATE_LIMIT` to a number of traces per second per service instance. If no `DD_TRACE_RATE_LIMIT` value is set, a limit of 100 traces per second is applied.
-Read more about sampling controls in the [.NET tracing library documentation][1].\
+Read more about sampling controls in the [.NET SDK documentation][1].\
Read more about [configuring environment variables for .NET][2].
[1]: /tracing/trace_collection/automatic_instrumentation/dd_libraries/dotnet-core
@@ -307,7 +307,7 @@ Read more about [configuring environment variables for .NET][2].
{{% /tab %}}
{{< /tabs >}}
-**Note**: All the spans from a trace sampled using a tracing library configuration are tagged with the ingestion reason `rule`. Services configured with user-defined sampling rules are marked as `Configured` in the [Ingestion Control Page][5] Configuration column.
+**Note**: All the spans from a trace sampled using an SDK configuration are tagged with the ingestion reason `rule`. Services configured with user-defined sampling rules are marked as `Configured` in the [Ingestion Control Page][5] Configuration column.
## Error and rare traces
@@ -334,7 +334,7 @@ With Agent version 7.33 and forward, you can configure the error sampler in the
**Notes**:
1. Set the parameter to `0` to disable the error sampler.
2. The error sampler captures local traces with error spans at the Agent level. If the trace is distributed, there is no guarantee that the complete trace is sent to Datadog.
-3. By default, spans dropped by tracing library rules or custom logic such as `manual.drop` are **excluded** under the error sampler.
+3. By default, spans dropped by SDK rules or custom logic such as `manual.drop` are **excluded** under the error sampler.
#### Datadog Agent 7.42.0 and higher
@@ -342,7 +342,7 @@ The error sampling is remotely configurable if you're using the Agent version [7
#### Datadog Agent 6/7.41.0 and higher
-To override the default behavior so that spans dropped by the tracing library rules or custom logic such as `manual.drop` are **included** by the error sampler, enable the feature with: `DD_APM_FEATURES=error_rare_sample_tracer_drop` in the Datadog Agent (or the dedicated Trace Agent container within the Datadog Agent pod in Kubernetes).
+To override the default behavior so that spans dropped by the SDK rules or custom logic such as `manual.drop` are **included** by the error sampler, enable the feature with: `DD_APM_FEATURES=error_rare_sample_tracer_drop` in the Datadog Agent (or the dedicated Trace Agent container within the Datadog Agent pod in Kubernetes).
#### Datadog Agent 6/7.33 to 6/7.40.x
@@ -365,7 +365,7 @@ The rare sampling rate is remotely configurable if you're using the Agent versio
By default, the rare sampler is **not enabled**.
-**Note**: When **enabled**, spans dropped by tracing library rules or custom logic such as `manual.drop` are **excluded** under this sampler.
+**Note**: When **enabled**, spans dropped by SDK rules or custom logic such as `manual.drop` are **excluded** under this sampler.
To configure the rare sampler, update the `apm_config.enable_rare_sampler` setting in the Agent main configuration file (`datadog.yaml`) or with the environment variable `DD_APM_ENABLE_RARE_SAMPLER`:
@@ -374,7 +374,7 @@ To configure the rare sampler, update the `apm_config.enable_rare_sampler` setti
@env DD_APM_ENABLE_RARE_SAMPLER - boolean - optional - default: false
```
-To evaluate spans dropped by tracing library rules or custom logic such as `manual.drop`,
+To evaluate spans dropped by SDK rules or custom logic such as `manual.drop`,
enable the feature with: `DD_APM_FEATURES=error_rare_sample_tracer_drop` in the Trace Agent.
@@ -382,7 +382,7 @@ To evaluate spans dropped by tracing library rules or custom logic such as `manu
By default, the rare sampler is enabled.
-**Note**: When **enabled**, spans dropped by tracing library rules or custom logic such as `manual.drop` **are excluded** under this sampler. To include these spans in this logic, upgrade to Datadog Agent 6.41.0/7.41.0 or higher.
+**Note**: When **enabled**, spans dropped by SDK rules or custom logic such as `manual.drop` **are excluded** under this sampler. To include these spans in this logic, upgrade to Datadog Agent 6.41.0/7.41.0 or higher.
To change the default rare sampler settings, update the `apm_config.disable_rare_sampler` setting in the Agent main configuration file (`datadog.yaml`) or with the environment variable `DD_APM_DISABLE_RARE_SAMPLER`:
@@ -394,7 +394,7 @@ To change the default rare sampler settings, update the `apm_config.disable_rare
## Force keep and drop
`ingestion_reason: manual`
-The head-based sampling mechanism can be overridden at the tracing library level. For example, if you need to monitor a critical transaction, you can force the associated trace to be kept. On the other hand, for unnecessary or repetitive information like health checks, you can force the trace to be dropped.
+The head-based sampling mechanism can be overridden at the SDK level. For example, if you need to monitor a critical transaction, you can force the associated trace to be kept. On the other hand, for unnecessary or repetitive information like health checks, you can force the trace to be dropped.
- Set Manual Keep on a span to indicate that it and all child spans should be ingested. The resulting trace might appear incomplete in the UI if the span in question is not the root span of the trace.
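The override logic can be pictured with Datadog's documented sampling priority values (-1 user reject, 0 auto reject, 1 auto keep, 2 user keep). The helper below is illustrative only, not SDK code:

```python
# Sampling priority values as documented by Datadog; a manual decision
# (force keep / force drop) overrides the head-based sampling decision.
USER_REJECT, AUTO_REJECT, AUTO_KEEP, USER_KEEP = -1, 0, 1, 2

def effective_priority(auto_decision, manual=None):
    """auto_decision: AUTO_KEEP or AUTO_REJECT from head-based sampling.
    manual: "keep", "drop", or None when no manual override is set."""
    if manual == "keep":
        return USER_KEEP    # trace is ingested regardless of the sampling rate
    if manual == "drop":
        return USER_REJECT  # trace is dropped, e.g. for health checks
    return auto_decision
```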
@@ -694,7 +694,7 @@ Manual trace keeping should happen before context propagation. If it is kept aft
## Single spans
`ingestion_reason: single_span`
-If you need to sample a specific span, but don't need the full trace to be available, tracing libraries allow you to set a sampling rate to be configured for a single span.
+If you need to sample a specific span, but don't need the full trace to be available, SDKs allow you to configure a sampling rate for a single span.
For example, if you are building [metrics from spans][6] to monitor specific services, you can configure span sampling rules to ensure that these metrics are based on 100% of the application traffic, without having to ingest 100% of traces for all the requests flowing through the service.
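As a rough sketch of how such a rule is evaluated, the snippet below matches a span against a `DD_SPAN_SAMPLING_RULES`-style rule using glob patterns. The real SDK implementations differ in details (pattern semantics, per-rule rate limiting), so treat this as an illustration of the matching idea only:

```python
import fnmatch
import json

def matching_rule(span, rules_json):
    """Return the first rule whose service/name globs match the span, or None."""
    for rule in json.loads(rules_json):
        if (fnmatch.fnmatchcase(span["service"], rule.get("service", "*"))
                and fnmatch.fnmatchcase(span["name"], rule.get("name", "*"))):
            return rule
    return None

rules = '[{"service": "my-service", "name": "http.*", "sample_rate": 1.0, "max_per_second": 50}]'
# This span matches, so it is kept even if its enclosing trace is dropped.
rule = matching_rule({"service": "my-service", "name": "http.request"}, rules)
```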
@@ -704,7 +704,7 @@ This feature is available for Datadog Agent v[7.40.0][19]+.
{{< tabs >}}
{{% tab "Java" %}}
-Starting in tracing library [version 1.7.0][1], for Java applications, set by-service and by-operation name **span** sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
+Starting in SDK [version 1.7.0][1], for Java applications, set by-service and by-operation name **span** sampling rules with the `DD_SPAN_SAMPLING_RULES` environment variable.
For example, to collect 100% of the spans from the service named `my-service`, for the operation `http.request`, up to 50 spans per second:
@@ -712,7 +712,7 @@ For example, to collect 100% of the spans from the service named `my-service`, f
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
```
-Read more about sampling controls in the [Java tracing library documentation][2].
+Read more about sampling controls in the [Java SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-java/releases/tag/v1.7.0
[2]: /tracing/trace_collection/dd_libraries/java
@@ -727,7 +727,7 @@ For example, to collect `100%` of the spans from the service named `my-service`,
```
-Read more about sampling controls in the [Python tracing library documentation][2].
+Read more about sampling controls in the [Python SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-py/releases/tag/v1.4.0
[2]: /tracing/trace_collection/dd_libraries/python
@@ -741,7 +741,7 @@ For example, to collect `100%` of the spans from the service named `my-service`,
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
```
-Read more about sampling controls in the [Ruby tracing library documentation][2].
+Read more about sampling controls in the [Ruby SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-rb/releases/tag/v1.5.0
[2]: /tracing/trace_collection/dd_libraries/ruby#sampling
@@ -762,7 +762,7 @@ For example, to collect `100%` of the spans from the service for the resource `P
@env DD_SPAN_SAMPLING_RULES=[{"resource": "POST /api/create_issue", "tags": { "priority":"high" }, "sample_rate":1.0}]
```
-Read more about sampling controls in the [Go tracing library documentation][2].
+Read more about sampling controls in the [Go SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-go/releases/tag/v1.41.0
[2]: /tracing/trace_collection/dd_libraries/go
@@ -777,7 +777,7 @@ For example, to collect `100%` of the spans from the service named `my-service`,
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
```
-Read more about sampling controls in the [Node.js tracing library documentation][1].
+Read more about sampling controls in the [Node.js SDK documentation][1].
[1]: /tracing/trace_collection/dd_libraries/nodejs
{{% /tab %}}
@@ -790,7 +790,7 @@ For example, to collect `100%` of the spans from the service named `my-service`,
@env DD_SPAN_SAMPLING_RULES=[{"service": "my-service", "name": "http.request", "sample_rate":1.0, "max_per_second": 50}]
```
-Read more about sampling controls in the [PHP tracing library documentation][2].
+Read more about sampling controls in the [PHP SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-php/releases/tag/0.77.0
[2]: /tracing/trace_collection/dd_libraries/php
@@ -821,7 +821,7 @@ $env:DD_SPAN_SAMPLING_RULES='[{"service": "my-service", "name": "http.request",
}
```
-Read more about sampling controls in the [.NET tracing library documentation][2].
+Read more about sampling controls in the [.NET SDK documentation][2].
[1]: https://github.com/DataDog/dd-trace-dotnet/releases/tag/v2.18.0
[2]: /tracing/trace_collection/dd_libraries/dotnet-core
@@ -863,8 +863,8 @@ Some additional ingestion reasons are attributed to spans that are generated by
| Product | Ingestion Reason | Ingestion Mechanism Description |
|------------|-------------------------------------|---------------------------------|
-| Serverless | `lambda` and `xray` | Your traces received from the [Serverless applications][14] traced with Datadog Tracing Libraries or the AWS X-Ray integration. |
-| App and API Protection | `appsec` | Traces ingested from Datadog tracing libraries and flagged by [AAP][15] as a threat. |
+| Serverless | `lambda` and `xray` | Your traces received from the [Serverless applications][14] traced with Datadog SDKs or the AWS X-Ray integration. |
+| App and API Protection | `appsec` | Traces ingested from Datadog SDKs and flagged by [AAP][15] as a threat. |
| Data Observability: Jobs Monitoring | `data_jobs` | Traces ingested from the Datadog Java Tracer Spark integration or the Databricks integration. |
## Ingestion mechanisms in OpenTelemetry
diff --git a/content/en/tracing/trace_pipeline/metrics.md b/content/en/tracing/trace_pipeline/metrics.md
index 38c95fb8b0c..4d5d472ed5e 100644
--- a/content/en/tracing/trace_pipeline/metrics.md
+++ b/content/en/tracing/trace_pipeline/metrics.md
@@ -45,7 +45,7 @@ The following metrics are associated with ingested spans usage:
To control usage, use `datadog.estimated_usage.apm.ingested_bytes`. Ingestion is metered by volume, not by the number of spans or traces. This metric is tagged with `env`, `service`, and `sampling_service`. These tags help identify which environments and services contribute to the ingestion volume. For more information about the `sampling_service` dimension, read [What is the sampling service?](#what-is-the-sampling-service).
-This metric is also tagged by `ingestion_reason`, reflecting which [ingestion mechanisms][5] are responsible for sending spans to Datadog. These mechanisms are nested in the tracing libraries of the Datadog Agent. For more information about this dimension, see the [Ingestion Reasons dashboard][6].
+This metric is also tagged by `ingestion_reason`, reflecting which [ingestion mechanisms][5] are responsible for sending spans to Datadog. These mechanisms are nested in the SDKs and in the Datadog Agent. For more information about this dimension, see the [Ingestion Reasons dashboard][6].
The `datadog.estimated_usage.apm.ingested_traces` metric measures the number of requests sampled per second, and only counts traces sampled by [head-based sampling][7]. This metric is also tagged by `env` and `service` so you can spot which services are starting the most traces.
@@ -78,7 +78,7 @@ In this dashboard, you can find information about:
## APM Ingestion Reasons dashboard
-The [APM Ingestion Reasons dashboard][6] provides insights on each source of ingestion volume. Each ingestion usage metric is tagged with an `ingestion_reason` dimension, so you can see which configuration options (Datadog Agent configuration or tracing library configuration) and products (such as RUM or Synthetic Testing) are generating the most APM data.
+The [APM Ingestion Reasons dashboard][6] provides insights on each source of ingestion volume. Each ingestion usage metric is tagged with an `ingestion_reason` dimension, so you can see which configuration options (Datadog Agent configuration or SDK configuration) and products (such as RUM or Synthetic Testing) are generating the most APM data.
{{< img src="tracing/trace_indexing_and_ingestion/usage_metrics/dashboard_ingestion_reasons.png" style="width:100%;" alt="APM Ingestion Reasons Dashboard" >}}
diff --git a/content/en/tracing/troubleshooting/_index.md b/content/en/tracing/troubleshooting/_index.md
index a8fcb50021b..651e2e6af62 100644
--- a/content/en/tracing/troubleshooting/_index.md
+++ b/content/en/tracing/troubleshooting/_index.md
@@ -12,10 +12,10 @@ further_reading:
text: "Connection Errors"
- link: "/tracing/troubleshooting/tracer_startup_logs/"
tag: "Documentation"
- text: "Datadog tracer startup logs"
+ text: "Datadog SDK startup logs"
- link: "/tracing/troubleshooting/tracer_debug_logs/"
tag: "Documentation"
- text: "Datadog tracer debug logs"
+ text: "Datadog SDK debug logs"
- link: "/tracing/troubleshooting/agent_apm_metrics/"
tag: "Documentation"
text: "APM metrics sent by the Datadog Agent"
@@ -39,7 +39,7 @@ further_reading:
text: Troubleshooting APM Instrumentation on a Host
---
-If you experience unexpected behavior while using Datadog APM, read the information on this page to help resolve the issue. Datadog recommends regularly updating to the latest version of the Datadog tracing libraries you use, as each release contains improvements and fixes. If you continue to experience issues, reach out to [Datadog support][1].
+If you experience unexpected behavior while using Datadog APM, read the information on this page to help resolve the issue. Datadog recommends regularly updating to the latest version of the Datadog SDKs you use, as each release contains improvements and fixes. If you continue to experience issues, reach out to [Datadog support][1].
The following components are involved in sending APM data to Datadog:
@@ -104,7 +104,7 @@ You can use [Inferred Service dependencies (Preview)][30]. Inferred external API
Or, you can merge the service names using an environment variable such as `DD_SERVICE_MAPPING` or `DD_TRACE_SERVICE_MAPPING`, depending on the language.
-For more information, see [Configure the Datadog Tracing Library][27] or choose your language here:
+For more information, see [Configure the Datadog SDK][27] or choose your language here:
{{< tabs >}}
{{% tab "Java" %}}
@@ -300,11 +300,11 @@ There are several configuration options available to scrub sensitive data or dis
## Debugging and logging
-This section explains how to use debug and startup logs to identify and resolve issues with your Datadog tracer.
+This section explains how to use debug and startup logs to identify and resolve issues with your Datadog SDK.
{{% collapse-content title="Debug logs" level="h4" %}}
-To capture full details on the Datadog tracer, enable debug mode on your tracer by using the `DD_TRACE_DEBUG` environment variable. You might enable it for your own investigation or if Datadog support has recommended it for triage purposes. However, be sure to disable debug logging when you are finished testing to avoid the logging overhead it introduces.
+To capture full details on the Datadog SDK, enable debug mode with the `DD_TRACE_DEBUG` environment variable. You might enable it for your own investigation or if Datadog support has recommended it for triage purposes. However, be sure to disable debug logging when you are finished testing to avoid the logging overhead it introduces.
These logs can surface instrumentation errors or integration-specific errors. For details on enabling and capturing these debug logs, see the [debug mode troubleshooting page][5].
@@ -312,7 +312,7 @@ These logs can surface instrumentation errors or integration-specific errors. Fo
{{% collapse-content title="Startup logs" level="h4" %}}
-During startup, Datadog tracing libraries emit logs that reflect the configurations applied in a JSON object, as well as any errors encountered, including if the Agent can be reached in languages where this is possible. Some languages require these startup logs to be enabled with the environment variable `DD_TRACE_STARTUP_LOGS=true`. For more information, see the [Startup logs][3].
+During startup, Datadog SDKs emit logs that reflect the configurations applied in a JSON object, as well as any errors encountered, including if the Agent can be reached in languages where this is possible. Some languages require these startup logs to be enabled with the environment variable `DD_TRACE_STARTUP_LOGS=true`. For more information, see the [Startup logs][3].
{{% /collapse-content %}}
@@ -332,11 +332,11 @@ When you open a [support ticket][1], the Datadog support team may ask for the fo
1. **Links to a trace or screenshots of the issue**: This helps reproduce your issues for troubleshooting purposes.
-2. **Tracer startup logs**: Startup logs help identify tracer misconfiguration or communication issues between the tracer and the Datadog Agent. By comparing the tracer's configuration with the application or container settings, support teams can pinpoint improperly applied settings.
+2. **Tracer startup logs**: Startup logs help identify tracer misconfiguration or communication issues between the SDK and the Datadog Agent. By comparing the SDK's configuration with the application or container settings, support teams can pinpoint improperly applied settings.
3. **Tracer debug logs**: Tracer debug logs provide deeper insights than startup logs, revealing:
- Proper integration instrumentation during application traffic flow
- - Contents of spans created by the tracer
+ - Contents of spans created by the SDK
- Connection errors when sending spans to the Agent
4. **Datadog Agent flare**: [Datadog Agent flares][12] enable you to see what is happening within the Datadog Agent, for example, if traces are being rejected or malformed. This does not help if traces are not reaching the Datadog Agent, but does help identify the source of an issue, or any metric discrepancies.
@@ -345,7 +345,7 @@ When you open a [support ticket][1], the Datadog support team may ask for the fo
6. **Custom tracing code**: Custom instrumentation, configuration, and adding span tags can significantly impact trace visualizations in Datadog.
-7. **Version information**: Knowing what language, framework, Datadog Agent, and Datadog tracer versions you are using allows Support to verify [Compatibility Requirements][15], check for known issues, or recommend a version upgrades. For example:
+7. **Version information**: Knowing what language, framework, Datadog Agent, and Datadog SDK versions you are using allows Support to verify [Compatibility Requirements][15], check for known issues, or recommend a version upgrade. For example:
{{% /collapse-content %}}
diff --git a/content/en/tracing/troubleshooting/connection_errors.md b/content/en/tracing/troubleshooting/connection_errors.md
index f77b180365a..f2e93fc867d 100644
--- a/content/en/tracing/troubleshooting/connection_errors.md
+++ b/content/en/tracing/troubleshooting/connection_errors.md
@@ -1,11 +1,11 @@
---
title: APM Connection Errors
-description: Diagnose and resolve connection errors between tracing libraries and the Datadog Agent in various deployment environments.
+description: Diagnose and resolve connection errors between SDKs and the Datadog Agent in various deployment environments.
aliases:
- /tracing/faq/why-am-i-getting-errno-111-connection-refused-errors-in-my-application-logs/
---
-If the application with the tracing library cannot reach the Datadog Agent, look for connection errors in the [tracer startup logs][1] or [tracer debug logs][2], which can be found with your application logs.
+If the application with the SDK cannot reach the Datadog Agent, look for connection errors in the [tracer startup logs][1] or [tracer debug logs][2], which can be found with your application logs.
## Errors that indicate an APM Connection problem
@@ -172,11 +172,11 @@ APM Agent
```
## Troubleshooting the connection problem
-Whether it's the tracing library or the Datadog Agent displaying the error, there are a few ways to troubleshoot.
+Whether it's the SDK or the Datadog Agent displaying the error, there are a few ways to troubleshoot.
### Host-based setups
-If your application and the Datadog Agent are not containerized, the application with the tracing library should be trying to send traces to `localhost:8126` or `127.0.0.1:8126`, because that is where the Datadog Agent is listening.
+If your application and the Datadog Agent are not containerized, the application with the SDK should be trying to send traces to `localhost:8126` or `127.0.0.1:8126`, because that is where the Datadog Agent is listening.
If the Datadog Agent shows that APM is not listening, check for port conflicts with port 8126, which is what the APM component of the Datadog Agent uses by default.
@@ -205,7 +205,7 @@ If this command fails, your container cannot access the Agent. Refer to the foll
A great place to get started is the [APM in-app setup documentation][6].
-#### Review where your tracing library is trying to send traces
+#### Review where your SDK is trying to send traces
Using the error logs listed above for each language, check to see where your traces are being directed.
@@ -225,7 +225,7 @@ See the table below for example setups. Some require setting up additional netwo
**Note about web servers**: If the `agent_url` section in the [tracer startup logs][1] has a mismatch against the `DD_AGENT_HOST` environment variable that was passed in, review how environment variables are cascaded for that specific server. For example, in PHP, there's an additional setting to ensure that [Apache][18] or [Nginx][19] pick up the `DD_AGENT_HOST` environment variable correctly.
-If your tracing library is sending traces correctly based on your setup, then proceed to the next step.
+If your SDK is sending traces correctly based on your setup, then proceed to the next step.
#### Review your Datadog Agent status and configuration
diff --git a/content/en/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel.md b/content/en/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel.md
index 81601989fd7..ff6d3173788 100644
--- a/content/en/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel.md
+++ b/content/en/tracing/troubleshooting/correlated-logs-not-showing-up-in-the-trace-id-panel.md
@@ -48,7 +48,7 @@ If the **Log** section is empty for the `trace_id` option, ensure you have a sta
{{< tabs >}}
{{% tab "JSON logs" %}}
- For JSON logs, Step 1 and 2 are automatic. The tracer injects the [trace][1] and [span][2] IDs into the logs, which are automatically remapped by the [reserved attribute remappers][3].
+ For JSON logs, Steps 1 and 2 are automatic. The SDK injects the [trace][1] and [span][2] IDs into the logs, which are automatically remapped by the [reserved attribute remappers][3].
If this process is not working as expected, ensure the logs attribute's name containing the trace ID is `dd.trace_id` and verify that the attribute is correctly set in the [reserved attributes'][4] Trace ID section.
diff --git a/content/en/tracing/troubleshooting/dotnet_diagnostic_tool.md b/content/en/tracing/troubleshooting/dotnet_diagnostic_tool.md
index 31bb4813a6c..253dff0c761 100644
--- a/content/en/tracing/troubleshooting/dotnet_diagnostic_tool.md
+++ b/content/en/tracing/troubleshooting/dotnet_diagnostic_tool.md
@@ -5,13 +5,13 @@ description: Use the dd-dotnet diagnostic tool to troubleshoot .NET tracing setu
If your application does not produce traces as expected after installing the .NET tracer, run the diagnostic tool `dd-dotnet` described on this page for basic troubleshooting. It can help you determine issues with your setup, such as missing environment variables, incomplete installation, or an unreachable Agent.
-The diagnostic tool `dd-dotnet` is bundled with the tracing library starting with version 2.42.0. It is located in the tracing library's installation folder, and automatically added to the system `PATH` to be invoked from anywhere.
+The diagnostic tool `dd-dotnet` is bundled with the SDK starting with version 2.42.0. It is located in the SDK's installation folder, and automatically added to the system `PATH` to be invoked from anywhere.
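As a sketch of typical usage (the process ID below is a placeholder, and subcommand availability should be confirmed against your installed version), you might run:

```shell
# Check whether a running application is correctly instrumented
# (replace 1234 with your application's process ID)
dd-dotnet check process 1234

# Verify that the Agent is reachable from this machine
dd-dotnet check agent
```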
## Installing `dd-trace`
-**This section is for versions of the tracer older than 2.42.0.**
+**This section is for versions of the SDK older than 2.42.0.**
-Older versions of the tracer did not include the `dd-dotnet` tool. You can install the `dd-trace` tool instead. Its features and syntax are similar to `dd-dotnet`.
+Older versions of the SDK did not include the `dd-dotnet` tool. You can install the `dd-trace` tool instead. Its features and syntax are similar to `dd-dotnet`.
You can install `dd-trace` in one of the following ways:
@@ -55,7 +55,7 @@ Process name: SimpleApp
Target process is running with .NET Core
1. Checking Modules Needed so the Tracer Loads:
[SUCCESS]: The native library version 2.42.0.0 is loaded into the process.
- [SUCCESS]: The tracer version 2.42.0.0 is loaded into the process.
+ [SUCCESS]: The SDK version 2.42.0.0 is loaded into the process.
2. Checking DD_DOTNET_TRACER_HOME and related configuration value:
[SUCCESS]: DD_DOTNET_TRACER_HOME is set to 'C:\git\dd-trace-dotnet-2\shared\bin\monitoring-home\win-x64\..' and the
directory was found correctly.
@@ -95,7 +95,7 @@ Process name: SimpleApp
Target process is running with .NET Core
1. Checking Modules Needed so the Tracer Loads:
[WARNING]: The native loader library is not loaded into the process
- [WARNING]: The native tracer library is not loaded into the process
+ [WARNING]: The native SDK is not loaded into the process
[WARNING]: Tracer is not loaded into the process
2. Checking DD_DOTNET_TRACER_HOME and related configuration value:
[WARNING]: DD_DOTNET_TRACER_HOME is set to 'C:\Program Files\Datadog\.NET Tracer\' but the directory does not exist.
@@ -111,7 +111,7 @@ Tracer\win-x64\Datadog.Trace.ClrProfiler.Native.dll but the file is missing or y
[FAILURE]: The environment variable CORECLR_ENABLE_PROFILING should be set to '1' (current value: not set)
6. Checking if process tracing configuration matches Installer or Bundler:
Installer/MSI related documentation:
-https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/dotnet-core/?tab=windows#install-the-tracer
+https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/dotnet-core/?tab=windows#install-the-sdk
[FAILURE]: Unable to find Datadog .NET Tracer program, make sure the tracer has been properly installed with the MSI.
[WARNING]: The registry key SOFTWARE\Classes\CLSID\{846F5F1C-F9AE-4B07-969E-05C26BC060D8}\InprocServer32 is missing. If
using the MSI, make sure the installation was completed correctly try to repair/reinstall it.
@@ -159,7 +159,7 @@ Inspecting worker process 39852
Target process is running with .NET Framework
1. Checking Modules Needed so the Tracer Loads:
[SUCCESS]: The native library version 2.42.0.0 is loaded into the process.
- [SUCCESS]: The tracer version 2.42.0.0 is loaded into the process.
+ [SUCCESS]: The SDK version 2.42.0.0 is loaded into the process.
2. Checking DD_DOTNET_TRACER_HOME and related configuration value:
[SUCCESS]: DD_DOTNET_TRACER_HOME is set to 'C:\Program Files\Datadog\.NET Tracer\' and the directory was found
correctly.
@@ -199,7 +199,7 @@ Inspecting worker process 35152
Target process is running with .NET Framework
1. Checking Modules Needed so the Tracer Loads:
[SUCCESS]: The native library version 2.42.0.0 is loaded into the process.
- [SUCCESS]: The tracer version 2.42.0.0 is loaded into the process.
+ [SUCCESS]: The SDK version 2.42.0.0 is loaded into the process.
2. Checking DD_DOTNET_TRACER_HOME and related configuration value:
[SUCCESS]: DD_DOTNET_TRACER_HOME is set to 'C:\Program Files\Datadog\.NET Tracer\' and the directory was found
correctly.
@@ -224,7 +224,7 @@ Detected agent url: http://127.0.0.1:8126/. Note: this url may be incorrect if y
configuration file.
Connecting to Agent at endpoint http://127.0.0.1:8126/ using HTTP
Detected agent version 7.48.0
- [FAILURE]: The Datadog.Trace assembly could not be found in the GAC. Make sure the tracer has been properly installed
+ [FAILURE]: The Datadog.Trace assembly could not be found in the GAC. Make sure the SDK has been properly installed
with the MSI.
```
diff --git a/content/en/tracing/troubleshooting/quantization.md b/content/en/tracing/troubleshooting/quantization.md
index 45c174c3b66..94fb75ffc39 100644
--- a/content/en/tracing/troubleshooting/quantization.md
+++ b/content/en/tracing/troubleshooting/quantization.md
@@ -10,7 +10,7 @@ further_reading:
text: Replace tags in spans
- link: /tracing/trace_collection/library_config/
tag: Documentation
- text: Tracing Library Configuration
+ text: SDK Configuration
---
## Overview
@@ -48,7 +48,7 @@ To search for these spans in trace search, the query is `resource_name:"SELECT ?
### In-code instrumentation
-If your application runs in an agentless setup or if you prefer to make instrumentation changes more directly in your code, see [the tracer documentation of your application's runtime][3] for information on how to create custom configuration for span names and resource names.
+If your application runs in an agentless setup, or if you prefer to make instrumentation changes directly in your code, see [the SDK documentation for your application's runtime][3] for information on how to create custom configuration for span names and resource names.
### Agent configuration
@@ -82,7 +82,7 @@ Some tracers provide options to customize resource name generation directly:
The Java tracer allows customization of HTTP resource names with the `dd.trace.http.server.path-resource-name-mapping` option, which maps HTTP request paths to custom resource names using Ant-style patterns.
-For more information, read [Configuring the Java Tracing Library][4]
+For more information, read [Configuring the Java SDK][4]
[4]: /tracing/trace_collection/library_config/java/#traces
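For illustration only (the path pattern and resource name below are hypothetical, and the exact mapping syntax should be confirmed in the Java SDK configuration docs), the option can be supplied as a system property:

```shell
# Map any /users/{id}/profile request path to one resource name so that
# high-cardinality IDs do not explode the resource list.
# The pattern and value are hypothetical examples.
export JAVA_OPTS="$JAVA_OPTS -Ddd.trace.http.server.path-resource-name-mapping=/users/*/profile:UserProfile"
```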
@@ -96,7 +96,7 @@ The PHP tracer provides several options for URI normalization:
- `DD_TRACE_RESOURCE_URI_MAPPING_INCOMING` normalizes resource naming for incoming requests
- `DD_TRACE_RESOURCE_URI_MAPPING_OUTGOING` normalizes resource naming for outgoing requests
-For more information, read [Configuring the PHP Tracing Library][5]
+For more information, read [Configuring the PHP SDK][5]
[5]: /tracing/trace_collection/library_config/php/#traces
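As a sketch (the `cities/*` pattern is an illustrative example; confirm the pattern syntax against the PHP SDK configuration docs), both options are set as environment variables:

```shell
# Normalize incoming request URLs such as /cities/paris to one resource
export DD_TRACE_RESOURCE_URI_MAPPING_INCOMING="cities/*"
# Apply the same normalization to outgoing request URLs
export DD_TRACE_RESOURCE_URI_MAPPING_OUTGOING="cities/*"
```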
diff --git a/content/en/tracing/troubleshooting/tracer_debug_logs.md b/content/en/tracing/troubleshooting/tracer_debug_logs.md
index 4d9e6007d60..7a3a1f33a87 100644
--- a/content/en/tracing/troubleshooting/tracer_debug_logs.md
+++ b/content/en/tracing/troubleshooting/tracer_debug_logs.md
@@ -1,6 +1,6 @@
---
title: Tracer Debug Logs
-description: Enable and collect debug logs from APM tracers to troubleshoot configuration and connectivity issues.
+description: Enable and collect debug logs from Datadog SDKs to troubleshoot configuration and connectivity issues.
further_reading:
- link: "/tracing/troubleshooting/connection_errors/"
tag: "Documentation"
@@ -64,7 +64,7 @@ Since version `1.58.0`, you can use the `DD_LOG_FORMAT_JSON` environment variabl
{{< programming-lang lang="python" >}}
-The steps for enabling debug mode in the Datadog Python Tracer depends on the version of the tracer your application is using. Choose the scenario that applies:
+The steps for enabling debug mode in the Datadog Python Tracer depend on the version of the SDK your application is using. Choose the scenario that applies:
### Scenario 1: ddtrace version 2.x and higher
@@ -133,7 +133,7 @@ By default, all logs are processed by the default Ruby logger. When using Rails,
Datadog client log messages are marked with `[ddtrace]`, so you can isolate them from other messages.
-You can override the default logger and replace it with a custom one by using the tracer's `log` attribute:
+You can override the default logger and replace it with a custom one by using the SDK's `log` attribute:
```ruby
f = File.new(".log", "w+") # Log messages should go there
@@ -197,11 +197,11 @@ func main() {
To enable debug mode for the Datadog Node.js Tracer, use the environment variable `DD_TRACE_DEBUG=true`.
-**Note:** For versions below 2.X, debug mode could be enabled programmatically inside the tracer initialization but this is no longer supported.
+**Note:** For versions below 2.x, debug mode could be enabled programmatically inside the SDK initialization, but this is no longer supported.
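For example, a minimal shell sketch:

```shell
# Turn on debug logging in the Node.js SDK before starting the application
export DD_TRACE_DEBUG=true
```

Then start your application as usual (for example, `node server.js`, where the entry point name is a placeholder).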
**Application Logs**
-In debug mode the tracer will log debug information to `console.log()` and errors to `console.error()`. You can change this behavior by passing a custom logger to the tracer. The logger should contain `debug()` and `error()` methods that can handle messages and errors, respectively.
+In debug mode, the SDK logs debug information to `console.log()` and errors to `console.error()`. You can change this behavior by passing a custom logger to the SDK. The logger should contain `debug()` and `error()` methods that can handle messages and errors, respectively.
For example:
@@ -222,11 +222,11 @@ const tracer = require('dd-trace').init({
Then check the Agent logs to see if there is more info about your issue:
-* If the trace was sent to the Agent properly, you should see `Response from the Agent: OK` log entries. This indicates that the tracer is working properly, so the problem may be with the Agent itself. Refer to the [Agent troubleshooting guide][1] for more information.
+* If the trace was sent to the Agent properly, you should see `Response from the Agent: OK` log entries. This indicates that the SDK is working properly, so the problem may be with the Agent itself. Refer to the [Agent troubleshooting guide][1] for more information.
* If an error was reported by the Agent (or the Agent could not be reached), you will see `Error from the Agent` log entries. In this case, validate your network configuration to ensure the Agent can be reached. If you are confident the network is functional and that the error is coming from the Agent, refer to the [Agent troubleshooting guide][1].
-If neither of these log entries is present, then no request was sent to the Agent, which means that the tracer is not instrumenting your application. In this case, [contact Datadog support][2] and provide the relevant log entries with [a flare][3].
+If neither of these log entries is present, then no request was sent to the Agent, which means that the SDK is not instrumenting your application. In this case, [contact Datadog support][2] and provide the relevant log entries with [a flare][3].
For more tracer settings, check out the [API documentation][4].
@@ -260,7 +260,7 @@ Logs files are saved in the following directories by default. Use the `DD_TRACE_
**Note:** On Linux, you must create the logs directory before you enable debug mode.
-Since version `2.19.0`, you can use the `DD_TRACE_LOGFILE_RETENTION_DAYS` setting to configure the tracer to delete log files from the current logging directory on startup. The tracer deletes log files the same age and older than the given number of days, with a default value of `32`.
+Since version `2.19.0`, you can use the `DD_TRACE_LOGFILE_RETENTION_DAYS` setting to configure the SDK to delete log files from the current logging directory on startup. The SDK deletes log files that are the same age as or older than the given number of days, with a default value of `32`.
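For instance, a minimal sketch that shortens the retention window:

```shell
# Keep .NET SDK log files for 7 days instead of the default 32
export DD_TRACE_LOGFILE_RETENTION_DAYS=7
```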
For more details on how to configure the .NET Tracer, see the [Configuration][2] section.
@@ -275,7 +275,7 @@ There are two types of logs that are created in these paths:
{{< programming-lang lang="php" >}}
-To enable debug mode for the Datadog PHP Tracer, set the environment variable `DD_TRACE_DEBUG=true`. See the PHP [configuration docs][1] for details about how and when this environment variable value should be set in order to be properly handled by the tracer.
+To enable debug mode for the Datadog PHP Tracer, set the environment variable `DD_TRACE_DEBUG=true`. See the PHP [configuration docs][1] for details about how and when this environment variable value should be set in order to be properly handled by the SDK.
There are two options to route debug tracer logs to a file.
@@ -288,7 +288,7 @@ With dd-trace-php 0.98.0+, you can specify a path to a log file for certain debu
- **INI**: `datadog.trace.log_file`
**Notes**:
- - For details about where to set `DD_TRACE_LOG_FILE`, review [Configuring the PHP Tracing Library][2].
+ - For details about where to set `DD_TRACE_LOG_FILE`, review [Configuring the PHP SDK][2].
- If `DD_TRACE_LOG_FILE` is not specified, logs go to the default PHP error location (See **Option 2** for more details).
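A minimal sketch of Option 1 (the log file path is an example; ensure the PHP process can write to whatever path you choose):

```shell
# Enable debug mode and route SDK debug logs to a dedicated file
export DD_TRACE_DEBUG=true
export DD_TRACE_LOG_FILE=/var/log/datadog/php-trace.log
```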
**Option 2:**
@@ -321,14 +321,14 @@ cmake --install .build
## Review debug logs
-When debug mode for your tracer is enabled, tracer-specific log messages report how the tracer was initialized and whether traces were sent to the Agent. Debug logs are stored in a separate path depending on your logging configuration. If you enable application-level tracer information, debug logs are also sent in the flare for [supported languages](#prerequisites). The following log examples show what might appear in your log file.
+When debug mode for your SDK is enabled, SDK-specific log messages report how the SDK was initialized and whether traces were sent to the Agent. Debug logs are stored in a separate path depending on your logging configuration. If you enable application-level SDK information, debug logs are also sent in the flare for [supported languages](#prerequisites). The following log examples show what might appear in your log file.
If there are errors that you don't understand, or if traces are reported as flushed to Datadog but you cannot see them in the Datadog UI, [contact Datadog support][1] and provide the relevant log entries with [a flare][2].
{{< programming-lang-wrapper langs="java,python,ruby,go,nodejs,.NET,php" >}}
{{< programming-lang lang="java" >}}
-**Intialization log for the tracer:**
+**Initialization log for the SDK:**
```java
[main] DEBUG datadog.trace.agent.ot.DDTracer - Using config: Config(runtimeId=, serviceName=, traceEnabled=true, writerType=DDAgentWriter, agentHost=, agentPort=8126, agentUnixDomainSocket=null, prioritySamplingEnabled=true, traceResolverEnabled=true, serviceMapping={}, globalTags={env=none}, spanTags={}, jmxTags={}, excludedClasses=[], headerTags={}, httpServerErrorStatuses=[512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511], httpClientErrorStatuses=[400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499], httpClientSplitByDomain=false, partialFlushMinSpans=1000, runtimeContextFieldInjection=true, propagationStylesToExtract=[DATADOG], propagationStylesToInject=[DATADOG], jmxFetchEnabled=true, jmxFetchMetricsConfigs=[], jmxFetchCheckPeriod=null, jmxFetchRefreshBeansPeriod=null, jmxFetchStatsdHost=null, jmxFetchStatsdPort=8125, logsInjectionEnabled=false, reportHostName=false)
diff --git a/content/en/tracing/troubleshooting/tracer_startup_logs.md b/content/en/tracing/troubleshooting/tracer_startup_logs.md
index a41ad4d0fca..0c483b224c2 100644
--- a/content/en/tracing/troubleshooting/tracer_startup_logs.md
+++ b/content/en/tracing/troubleshooting/tracer_startup_logs.md
@@ -14,7 +14,7 @@ Some languages log to a separate file depending on language conventions and the
`CONFIGURATION` logs are a JSON formatted representation of settings applied to your tracer. In languages where an Agent connectivity check is performed, the configuration JSON will also include an `agent_error` key, which indicates whether the Agent is reachable.
-`DIAGNOSTICS` or `ERROR` log entries, in the languages that produce them, happen when the tracer encounters an error during application startup. If you see `DIAGNOSTICS` or `ERROR` log lines, confirm from the indicated log that settings and configurations are applied correctly.
+`DIAGNOSTICS` or `ERROR` log entries, in the languages that produce them, happen when the SDK encounters an error during application startup. If you see `DIAGNOSTICS` or `ERROR` log lines, confirm from the indicated log that settings and configurations are applied correctly.
If you do not see logs at all, ensure that your application logs are not silenced and that your log level is at least `INFO` where applicable.
@@ -29,7 +29,7 @@ If you do not see logs at all, ensure that your application logs are not silence
**Diagnostics:**
-The Java tracer does not output Diagnostics logs. For this check, run the tracer in [debug mode][1].
+The Java SDK does not output Diagnostics logs. For this check, run the SDK in [debug mode][1].
[1]: /tracing/troubleshooting/tracer_debug_logs/
@@ -49,7 +49,7 @@ Log files are saved in the following directories by default. Use the `DD_TRACE_L
**Note:** On Linux, you must create the logs directory before you enable debug mode.
-Since version `2.19.0`, you can use the `DD_TRACE_LOGFILE_RETENTION_DAYS` setting to configure the tracer to delete log files from the current logging directory on startup. The tracer deletes log files the same age and older than the given number of days, with a default value of `32`.
+Since version `2.19.0`, you can use the `DD_TRACE_LOGFILE_RETENTION_DAYS` setting to configure the SDK to delete log files from the current logging directory on startup. The SDK deletes log files that are the same age as or older than the given number of days, with a default value of `32`.
- `dotnet-tracer-managed-{processName}-{timestamp}.log` contains the configuration logs.
@@ -127,7 +127,7 @@ ddtrace.disable => Off => Off
**Configuration:**
-If the tracer is in [DEBUG mode][1], the startup logs will appear in the `error_log` once per process on the first request.
+If the SDK is in [DEBUG mode][1], the startup logs will appear in the `error_log` once per process on the first request.
```text
DATADOG TRACER CONFIGURATION - {"agent_error":"Couldn't connect to server","ddtrace.request_init_hook_reachable":false,"date":"2020-07-01T17:42:50Z","os_name":"Linux 49b1cb4bdd12 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64","os_version":"4.19.76-linuxkit","version":"1.0.0-nightly","lang":"php","lang_version":"7.4.5","env":null,"enabled":true,"service":null,"enabled_cli":false,"agent_url":"https://localhost:8126","debug":false,"analytics_enabled":false,"sample_rate":1.000000,"sampling_rules":null,"tags":null,"service_mapping":null,"distributed_tracing_enabled":true,"priority_sampling_enabled":true,"dd_version":null,"architecture":"x86_64","sapi":"cgi-fcgi","ddtrace.request_init_hook":null,"open_basedir_configured":false,"uri_fragment_regex":null,"uri_mapping_incoming":null,"uri_mapping_outgoing":null,"auto_flush_enabled":false,"generate_root_span":true,"http_client_split_by_domain":false,"measure_compile_time":true,"report_hostname_on_root_span":false,"traced_internal_functions":null,"auto_prepend_file_configured":false,"integrations_disabled":null,"enabled_from_env":true,"opcache.file_cache":null}
@@ -135,7 +135,7 @@ DATADOG TRACER CONFIGURATION - {"agent_error":"Couldn't connect to server","ddtr
**Diagnostics:**
-Failed diagnostics for the PHP tracer print in the `error_log` if the tracer is in [DEBUG mode][1].
+Failed diagnostics for the PHP tracer print in the `error_log` if the SDK is in [DEBUG mode][1].
```text
DATADOG TRACER DIAGNOSTICS - agent_error: Couldn't connect to server
@@ -174,7 +174,7 @@ The Go Tracer prints one of two possible diagnostic lines, one for when the Agen
{{< /programming-lang >}}
{{< programming-lang lang="nodejs" >}}
-Startup logs are disabled by default starting in version 2.x of the tracer. They can be enabled using the environment variable `DD_TRACE_STARTUP_LOGS=true`.
+Startup logs are disabled by default starting in version 2.x of the SDK. They can be enabled using the environment variable `DD_TRACE_STARTUP_LOGS=true`.
**Configuration:**
@@ -250,7 +250,7 @@ export DD_TRACE_STARTUP_LOGS=true
### Output
-When startup logs are enabled, the tracer outputs configuration and diagnostic information.
+When startup logs are enabled, the SDK outputs configuration and diagnostic information.
**Configuration:**
@@ -283,14 +283,14 @@ W, [2020-07-08T21:19:05.765994 #143] WARN -- ddtrace: [ddtrace] DATADOG ERROR -
**Diagnostics:**
-For C++, there are no `DATADOG TRACER DIAGNOSTICS` lines output to the tracer logs. However, if the Agent is not reachable, errors appear in your application logs. In Envoy there is an increase in the metrics `tracing.datadog.reports_failed` and `tracing.datadog.reports_dropped`.
+For C++, there are no `DATADOG TRACER DIAGNOSTICS` lines output to the SDK logs. However, if the Agent is not reachable, errors appear in your application logs. In Envoy there is an increase in the metrics `tracing.datadog.reports_failed` and `tracing.datadog.reports_dropped`.
{{< /programming-lang >}}
{{< /programming-lang-wrapper >}}
## Connection errors
-If your application or startup logs contain `DIAGNOSTICS` errors or messages that the Agent cannot be reached or connected to (varying depending on your language), it means the tracer is unable to send traces to the Datadog Agent.
+If your application or startup logs contain `DIAGNOSTICS` errors or messages that the Agent cannot be reached (the exact wording varies by language), the SDK is unable to send traces to the Datadog Agent.
If you have these errors, check that your Agent is set up to receive traces for [ECS][1], [Kubernetes][2], [Docker][3] or [any other option][4], or [contact support][5] to review your tracer and Agent configuration.
@@ -298,7 +298,7 @@ See [Connection Errors][6] for information about errors indicating that your ins
## Configuration settings
-If your logs contain only `CONFIGURATION` lines, a useful troubleshooting step is to confirm that the settings output by the tracer match the settings from your deployment and configuration of the Datadog Tracer. Additionally, if you are not seeing specific traces in Datadog, review the [Compatibility Requirements][7] section of the documentation to confirm these integrations are supported.
+If your logs contain only `CONFIGURATION` lines, a useful troubleshooting step is to confirm that the settings output by the SDK match the settings from your deployment and configuration of the Datadog SDK. Additionally, if you are not seeing specific traces in Datadog, review the [Compatibility Requirements][7] section of the documentation to confirm these integrations are supported.
If an integration you are using is not supported, or you want a fresh pair of eyes on your configuration output to understand why traces are not appearing as expected in Datadog, [contact support][5] who can help you diagnose and create a Feature Request for a new integration.
diff --git a/content/en/universal_service_monitoring/setup.md b/content/en/universal_service_monitoring/setup.md
index 06d04980820..863c1403a55 100644
--- a/content/en/universal_service_monitoring/setup.md
+++ b/content/en/universal_service_monitoring/setup.md
@@ -41,7 +41,7 @@ Additional protocols and traffic encryption methods are in