90 changes: 54 additions & 36 deletions comfyui_embedded_docs/docs/ClipTextEncode/en.md
`CLIP Text Encode (CLIPTextEncode)` acts as a translator, converting your text descriptions into a format that AI can understand. This helps the AI interpret your input and generate the desired image.

Think of it as communicating with an artist who speaks a different language. The CLIP model, trained on vast image-text pairs, bridges this gap by converting your descriptions into "instructions" that the AI model can follow.

## Inputs

| Parameter | Data Type | Input Method | Default | Range | Description |
|-----------|-----------|--------------|---------|--------|-------------|
| text | STRING | Text Input | Empty | Any text | Enter the description (prompt) for the image you want to create. Supports multi-line input for detailed descriptions. |
| clip | CLIP | Model Selection | None | Loaded CLIP models | Select the CLIP model to use when translating your description into instructions for the AI model. |

## Outputs

| Output Name | Data Type | Description |
|-------------|-----------|-------------|
| CONDITIONING | CONDITIONING | The encoded representation of your description, which guides the AI model when generating an image. |

## Prompt Features

### Embedding Models

Embedding models allow you to apply specific artistic effects or styles. Supported formats include `.safetensors`, `.pt`, and `.bin`. To use an embedding model:

1. Place the file in the `ComfyUI/models/embeddings` folder.
2. Reference it in your text using `embedding:model_name`.

Example: If you have a model named `EasyNegative.pt` in your `ComfyUI/models/embeddings` folder, you can reference it like this:

```
worst quality, embedding:EasyNegative, bad quality
```

**IMPORTANT**: When using embedding models, verify that the file name is correct and that the embedding is compatible with your base model's architecture. For example, an embedding trained for SD1.5 will not work correctly with an SDXL model.

### Prompt Weight Adjustment

You can adjust the importance of certain parts of your description using parentheses. For example:

- `(beautiful:1.2)` increases the weight of "beautiful".
- `(beautiful:0.8)` decreases the weight of "beautiful".
- Plain parentheses `(beautiful)` will apply a default weight of 1.1.

You can use the keyboard shortcuts `ctrl + up/down arrow` to quickly adjust weights. The weight adjustment step size can be modified in the settings.

If you want to include literal parentheses in your prompt without changing the weight, you can escape them using a backslash e.g. `\(word\)`.
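Putting the syntax above together, a single prompt might look like this (the phrases and weight values are purely illustrative):

```
(masterpiece:1.3), a beautiful landscape, (blurry:0.7), a sign reading \(open\)
```

Here "masterpiece" is emphasized, "blurry" is de-emphasized, and the escaped parentheses around "open" are passed through literally.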

### Wildcard/Dynamic Prompts

Use `{}` to create dynamic prompts. For example, `{day|night|morning}` will randomly select one option each time the prompt is processed.

If you want to include literal curly braces in your prompt without triggering dynamic behavior, you can escape them using a backslash e.g. `\{word\}`.
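For instance, a prompt like the following (the style options are illustrative) resolves to exactly one of the listed styles each time it is processed:

```
a {photorealistic|watercolor|sketch} portrait of a cat, high quality
```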

### Comments in Prompts

You can add comments that are excluded from the prompt by using:

- `//` to comment out a single line.
- `/* */` to comment out a section or multiple lines.

Example:

```
// this line is excluded from the prompt.
a beautiful landscape, /* this part is ignored */ high quality
```