Changes from all commits (17 commits)
- 7c60f7a: Revise dotnet-trace documentation for accuracy (#52276) (gewarren, Mar 11, 2026)
- e67f3ce: Update package index with latest published versions (#52278) (azure-sdk, Mar 11, 2026)
- efa296b: Add Orleans Transport Layer Security (TLS) documentation (#49512) (Copilot, Mar 11, 2026)
- 35b0328: Update package index with latest published versions (#52281) (azure-sdk, Mar 11, 2026)
- 5d67cd4: fix typo in retrieving information stored in attributes (#52272) (zamadye, Mar 11, 2026)
- 6ab2a4c: fast follow for unexpected merge (#52274) (gewarren, Mar 11, 2026)
- b480fbb: Update package index with latest published versions (#52282) (azure-sdk, Mar 12, 2026)
- de92d0e: Add IdnMapping.GetAscii and currency symbol behavioral difference exa… (Copilot, Mar 12, 2026)
- 1da5da1: Add advanced inheritance topic links to VB Inheritance Basics (#52261) (Copilot, Mar 12, 2026)
- 31489cf: Freshness pass: Introduction to .NET (#52136) (Copilot, Mar 12, 2026)
- ae34b48: docs: Breaking change - ConfigurationBinder silently skips array elem… (Copilot, Mar 12, 2026)
- 1e50cda: AI freshness pass: update remaining 13 articles (ms.date, content, st… (Copilot, Mar 12, 2026)
- a4fcf22: Update package index with latest published versions (#52289) (azure-sdk, Mar 13, 2026)
- af400da: Add static virtual method example to static interface members tutoria… (kovan, Mar 13, 2026)
- a3dcfea: rebrand for dotnet app mod (#52003) (ninpan-ms, Mar 13, 2026)
- e106316: Update package index with latest published versions (#52294) (azure-sdk, Mar 13, 2026)
- 2be95ac: Bump the dotnet group with 3 updates (#51928) (dependabot[bot], Mar 13, 2026)
11 changes: 6 additions & 5 deletions docs/ai/azure-ai-services-authentication.md
@@ -4,6 +4,7 @@ description: Learn about the different options to authenticate to Azure OpenAI a
author: alexwolfmsft
ms.topic: concept-article
ms.date: 03/06/2026
ai-usage: ai-assisted
---

# Foundry tools authentication and authorization using .NET
@@ -34,10 +35,10 @@ builder.Services.AddAzureOpenAIChatCompletion(
var kernel = builder.Build();
```

Using keys is a straightforward option, but this approach should be used with caution. Keys aren't the recommended authentication option because they:
Keys are straightforward to use, but treat them with caution. Keys aren't the recommended authentication option because they:

- Don't follow [the principle of least privilege](/entra/identity-platform/secure-least-privileged-access). They provide elevated permissions regardless of who uses them or for what task.
- Can accidentally be checked into source control or stored in unsafe locations.
- Can accidentally end up in source control or unsafe storage locations.
- Can easily be shared with or sent to parties who shouldn't have access.
- Often require manual administration and rotation.

@@ -49,7 +50,7 @@ Microsoft Entra ID is a cloud-based identity and access management service that

- Keyless authentication using [identities](/entra/fundamentals/identity-fundamental-concepts).
- Role-based access control (RBAC) to assign identities the minimum required permissions.
- Can use the [`Azure.Identity`](/dotnet/api/overview/azure/identity-readme) client library to detect [different credentials across environments](/dotnet/api/azure.identity.defaultazurecredential) without requiring code changes.
- Lets you use the [`Azure.Identity`](/dotnet/api/overview/azure/identity-readme) client library to detect [different credentials across environments](/dotnet/api/azure.identity.defaultazurecredential) without requiring code changes.
- Automatically handles administrative maintenance tasks such as rotating underlying keys.

The workflow to implement Microsoft Entra authentication in your app generally includes the following steps:
@@ -80,7 +81,7 @@ az login

### Configure the app code

Use the [`Azure.Identity`](/dotnet/api/overview/azure/identity-readme) client library from the Azure SDK to implement Microsoft Entra authentication in your code. The `Azure.Identity` libraries include the `DefaultAzureCredential` class, which automatically discovers available Azure credentials based on the current environment and tooling available. For the full set of supported environment credentials and the order in which they are searched, see the [Azure SDK for .NET](/dotnet/api/azure.identity.defaultazurecredential) documentation.
Use the [`Azure.Identity`](/dotnet/api/overview/azure/identity-readme) client library from the Azure SDK to implement Microsoft Entra authentication in your code. The `Azure.Identity` libraries include the `DefaultAzureCredential` class, which automatically discovers available Azure credentials based on the current environment and tooling available. For the full set of supported environment credentials and the order in which `DefaultAzureCredential` searches them, see the [Azure SDK for .NET](/dotnet/api/azure.identity.defaultazurecredential) documentation.

For example, configure Azure OpenAI to authenticate using `DefaultAzureCredential` using the following code:

@@ -125,7 +126,7 @@ There are two types of managed identities you can assign to your app:
- A **system-assigned identity** is tied to your application and is deleted if your app is deleted. An app can only have one system-assigned identity.
- A **user-assigned identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.

Assign roles to a managed identity just like you would an individual user account, such as the **Cognitive Services OpenAI User** role. learn more about working with managed identities using the following resources:
Assign roles to a managed identity just like you would an individual user account, such as the **Cognitive Services OpenAI User** role. Learn more about working with managed identities using the following resources:

- [Managed identities overview](/entra/identity/managed-identities-azure-resources/overview)
- [Authenticate App Service to Azure OpenAI using Microsoft Entra ID](/dotnet/ai/how-to/app-service-aoai-auth?pivots=azure-portal)
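The keyless flow this file documents can be sketched in a few lines of C#. This is an illustrative example, not part of the PR: the endpoint URI and deployment name are placeholders, and it assumes the `Azure.AI.OpenAI` and `Azure.Identity` NuGet packages are installed.

```csharp
// Sketch: authenticate to Azure OpenAI with DefaultAzureCredential
// instead of an API key. Endpoint and deployment name are placeholders.
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;

var endpoint = new Uri("https://YOUR-RESOURCE.openai.azure.com/");

// DefaultAzureCredential probes environment variables, managed identity,
// Azure CLI login, and other credential sources in a fixed order.
var client = new AzureOpenAIClient(endpoint, new DefaultAzureCredential());

ChatClient chat = client.GetChatClient("YOUR-DEPLOYMENT");
ChatCompletion completion = chat.CompleteChat("Say hello.");
Console.WriteLine(completion.Content[0].Text);
```

Locally this picks up your `az login` session; deployed to Azure, the same code resolves to the app's managed identity with no code change.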
11 changes: 6 additions & 5 deletions docs/ai/conceptual/chain-of-thought-prompting.md
@@ -1,20 +1,21 @@
---
title: "Chain of Thought Prompting - .NET"
title: "Chain of thought prompting - .NET"
description: "Learn how chain of thought prompting can simplify prompt engineering."
ms.topic: concept-article #Don't change.
ms.date: 05/29/2025
ms.date: 03/04/2026
ai-usage: ai-assisted

#customer intent: As a .NET developer, I want to understand what chain-of-thought prompting is and how it can help me save time and get better completions out of prompt engineering.

---

# Chain of thought prompting

GPT model performance and response quality benefits from *prompt engineering*, which is the practice of providing instructions and examples to a model to prime or refine its output. As they process instructions, models make more reasoning errors when they try to answer right away rather than taking time to work out an answer. You can help the model reason its way toward correct answers more reliably by asking for the model to include its chain of thought—that is, the steps it took to follow an instruction, along with the results of each step.
GPT model performance and response quality benefit from *prompt engineering*, which is the practice of providing instructions and examples to a model to prime or refine its output. As they process instructions, models make more reasoning errors when they try to answer right away rather than taking time to work out an answer. Help the model reason its way toward correct answers more reliably by asking the model to include its chain of thought—that is, the steps it took to follow an instruction, along with the results of each step.

*Chain of thought prompting* is the practice of prompting a model to perform a task step-by-step and to present each step and its result in order in the output. This simplifies prompt engineering by offloading some execution planning to the model, and makes it easier to connect any problem to a specific step so you know where to focus further efforts.

It's generally simpler to just instruct the model to include its chain of thought, but you can use examples to show the model how to break down tasks. The following sections show both ways.
It's generally simpler to instruct the model to include its chain of thought, but you can also use examples to show the model how to break down tasks. The following sections show both ways.

## Use chain of thought prompting in instructions

@@ -27,7 +28,7 @@ Break the task into steps, and output the result of each step as you perform it.

## Use chain of thought prompting in examples

You can use examples to indicate the steps for chain of thought prompting, which the model will interpret to mean it should also output step results. Steps can include formatting cues.
Use examples to indicate the steps for chain of thought prompting, which the model interprets to mean it should also output step results. Steps can include formatting cues.

```csharp
prompt= """
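The instruction-based variant this file describes amounts to appending one sentence to the prompt. A minimal sketch, in the same style as the article's own snippets; the task wording here is illustrative, not taken from the article:

```csharp
// Sketch: chain of thought via a plain instruction appended to the task.
// The task itself is a made-up example.
string prompt = """
    Plan a three-course dinner for four guests, one of whom is vegetarian.
    Break the task into steps, and output the result of each step as you
    perform it.
    """;
```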
13 changes: 6 additions & 7 deletions docs/ai/conceptual/embeddings.md
@@ -2,19 +2,18 @@
title: "How Embeddings Extend Your AI Model's Reach"
description: "Learn how embeddings extend the limits and capabilities of AI models in .NET."
ms.topic: concept-article #Don't change.
ms.date: 05/29/2025
ms.date: 03/04/2026
ai-usage: ai-assisted

#customer intent: As a .NET developer, I want to understand how embeddings extend LLM limits and capabilities in .NET so that I have more semantic context and better outcomes for my AI apps.

---
# Embeddings in .NET

Embeddings are the way LLMs capture semantic meaning. They are numeric representations of non-numeric data that an LLM can use to determine relationships between concepts. You can use embeddings to help an AI model understand the meaning of inputs so that it can perform comparisons and transformations, such as summarizing text or creating images from text descriptions. LLMs can use embeddings immediately, and you can store embeddings in vector databases to provide semantic memory for LLMs as-needed.
Embeddings are the way LLMs capture semantic meaning. They're numeric representations of non-numeric data that an LLM can use to determine relationships between concepts. Use embeddings to help an AI model understand the meaning of inputs so that it can perform comparisons and transformations, such as summarizing text or creating images from text descriptions. LLMs can use embeddings immediately, and you can store embeddings in vector databases to provide semantic memory for LLMs as needed.

## Use cases for embeddings

This section lists the main use cases for embeddings.

### Use your own data to improve completion relevance

Use your own databases to generate embeddings for your data and integrate it with an LLM to make it available for completions. This use of embeddings is an important component of [retrieval-augmented generation](rag.md).
@@ -23,7 +22,7 @@ Use your own databases to generate embeddings for your data and integrate it wit

Use embeddings to increase the amount of context you can fit in a prompt without increasing the number of tokens required.

For example, suppose you want to include 500 pages of text in a prompt. The number of tokens for that much raw text will exceed the input token limit, making it impossible to directly include in a prompt. You can use embeddings to summarize and break down large amounts of that text into pieces that are small enough to fit in one input, and then assess the similarity of each piece to the entire raw text. Then you can choose a piece that best preserves the semantic meaning of the raw text and use it in your prompt without hitting the token limit.
For example, suppose you want to include 500 pages of text in a prompt. The number of tokens for that much raw text exceeds the input token limit, making it impossible to directly include in a prompt. You can use embeddings to summarize and break down large amounts of that text into pieces that are small enough to fit in one input, and then assess the similarity of each piece to the entire raw text. Then you can choose a piece that best preserves the semantic meaning of the raw text and use it in your prompt without hitting the token limit.

### Perform text classification, summarization, or translation

@@ -45,11 +44,11 @@ Use embeddings to help a model create code from text or vice versa, by convertin

## Choose an embedding model

You generate embeddings for your raw data by using an AI embedding model, which can encode non-numeric data into a vector (a long array of numbers). The model can also decode an embedding into non-numeric data that has the same or similar meaning as the original, raw data. There are many embedding models available for you to use, with OpenAI's `text-embedding-ada-002` model being one of the common models that's used. For more examples, see the list of [Embedding models available on Azure OpenAI](/azure/ai-services/openai/concepts/models#embeddings).
You generate embeddings for your raw data by using an AI embedding model, which can encode non-numeric data into a vector (a long array of numbers). The model can also decode an embedding into non-numeric data that has the same or similar meaning as the original, raw data. OpenAI's `text-embedding-3-small` and `text-embedding-3-large` are the currently recommended embedding models, replacing the older `text-embedding-ada-002`. For more examples, see the list of [Embedding models available on Azure OpenAI](/azure/ai-services/openai/concepts/models#embeddings).

### Store and process embeddings in a vector database

After you generate embeddings, you'll need a way to store them so you can later retrieve them with calls to an LLM. Vector databases are designed to store and process vectors, so they're a natural home for embeddings. Different vector databases offer different processing capabilities, so you should choose one based on your raw data and your goals. For information about your options, see [Vector databases for .NET + AI](vector-databases.md).
After you generate embeddings, you need a way to store them so you can later retrieve them with calls to an LLM. Vector databases are designed to store and process vectors, so they're a natural home for embeddings. Different vector databases offer different processing capabilities. Choose one based on your raw data and your goals. For information about your options, see [Vector databases for .NET + AI](vector-databases.md).

### Using embeddings in your LLM solution

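The relatedness comparisons this article attributes to embeddings are typically computed with cosine similarity between vectors. A minimal C# sketch, using tiny made-up three-dimensional vectors rather than real model output (actual embedding models return hundreds or thousands of dimensions):

```csharp
// Sketch: cosine similarity, the usual relatedness measure for embeddings.
// The vectors are illustrative placeholders, not real model output.
static double CosineSimilarity(double[] a, double[] b)
{
    double dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
}

double[] cat    = { 0.9, 0.1, 0.30 };
double[] kitten = { 0.8, 0.2, 0.35 };
double[] car    = { 0.1, 0.9, 0.50 };

// Semantically close inputs score near 1; unrelated inputs score lower.
Console.WriteLine(CosineSimilarity(cat, kitten));
Console.WriteLine(CosineSimilarity(cat, car));
```

Vector databases perform this same comparison at scale when retrieving stored embeddings for an LLM.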