File: v2/developers/developer-journey.mdx
---
title: 'Developer Journey'
sidebarTitle: 'Developer Journey'
description: 'Choose your path as a Livepeer developer — provide workloads, consume pipelines, or contribute to the core protocol'
keywords: ["livepeer", "developers", "building on livepeer", "developer journey", "workload provider", "workload consumer", "core contributor", "BYOC", "ai pipelines"]
og:image: "/snippets/assets/domain/SHARED/LivepeerDocsLogo.svg"
---

Livepeer offers several paths depending on how you want to engage with the network. Whether you're bringing compute workloads, consuming existing AI pipelines, or contributing to the core Go implementation, there's a clear path for you.

<Tip>
Looking to run an orchestrator? Head to the [Orchestrator section](/v2/orchestrators/quickstart/overview) for setup guides and options.
</Tip>

## Pick Your Path

<Columns cols={3}>
<Card title="Workload Provider" icon="server" href="#path-1-workload-provider" arrow>
Create workloads that run on Livepeer orchestrators — build containers, deploy pipelines, and leverage the network's GPU compute.
</Card>
<Card title="Workload Consumer" icon="wand-magic-sparkles" href="#path-2-workload-consumer" arrow>
Consume existing pipeline workloads running on the Livepeer network — no infrastructure setup required.
</Card>
<Card title="Core Contributor" icon="code-branch" href="#path-3-core-contributor" arrow>
Contribute directly to go-livepeer, the Go implementation that powers the Livepeer network.
</Card>
</Columns>

---

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#1a1a1a', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2d9a67', 'lineColor': '#2d9a67', 'secondaryColor': '#0d0d0d', 'tertiaryColor': '#1a1a1a', 'background': '#0d0d0d', 'fontFamily': 'system-ui', 'clusterBkg': 'rgba(255,255,255,0.05)', 'clusterBorder': '#2d9a67' }}}%%
flowchart TD
classDef default stroke-width:2px

Start["Developer"] --> WP["Workload Provider"]
Start --> WC["Workload Consumer"]
Start --> CC["Core Contributor"]

WP --> TR["Traditional Route<br/>BYOC + Gateway"]
WP --> SC["Direct Smart Contract<br/>Interaction"]

TR --> BYOC["Build BYOC Container"]
BYOC --> COORD["Coordinate with<br/>Orchestrators"]
COORD --> GW["Run a Gateway"]

SC --> OPS["Fork livepeer-ops"]
SC --> CUSTOM["Build Your Own<br/>Tooling"]

WC --> DD["Daydream"]
WC --> EP["Embody Pipeline<br/>Consumer"]

CC --> REPO["go-livepeer Repo"]
CC --> CG["Contribution Guide"]
```

---

## Path 1: Workload Provider

As a **Workload Provider**, you create workloads that run on Livepeer orchestrators. You build the containers and pipelines — orchestrators on the network provide the GPU compute to execute them. Whether it's an AI inference pipeline, a video transcoding job, or something entirely custom, you define the workload and the network runs it.

There are two approaches depending on how much control you need.

### Option A: Traditional Route (Gateway + BYOC)

This is the standard path for getting your workloads onto orchestrators: you develop a BYOC (Bring Your Own Container) workload and run a gateway to route jobs, and orchestrators pick up and execute your containers on their GPUs.

<Steps>
<Step title="Understand the BYOC model" icon="boxes">
BYOC lets you package your workload as a sidecar container that runs alongside the go-livepeer main container on orchestrator nodes. You define what the container does — the orchestrators provide the compute.

<Card title="BYOC Documentation" icon="book-open" href="/v2/developers/ai-pipelines/byoc" arrow horizontal>
Learn how BYOC containers work and how to build one.
</Card>
</Step>
<Step title="Build your BYOC container" icon="docker">
Develop and test your sidecar container locally. This is where your workload logic lives — inference models, processing pipelines, or any custom compute task.

<Card title="BYOC Examples & Integrations" icon="github" href="https://github.com/ad-astra-video/livepeer-app-pipelines" arrow horizontal>
Reference implementations and example pipelines for building BYOC containers.
</Card>
</Step>
<Step title="Run your own gateway" icon="tower-broadcast">
Set up a Livepeer gateway node. The gateway is how you submit jobs to orchestrators and receive results back.

<Card title="Gateway Quickstart" icon="rocket" href="/v2/gateways/run-a-gateway/quickstart/quickstart-a-gateway" arrow horizontal>
Get your gateway node running.
</Card>
</Step>
<Step title="Coordinate with orchestrators" icon="arrow-up-right-from-square">
Contact orchestrators directly to get your BYOC container running on their nodes. Once they're running your container, you can route jobs to them through your gateway.

<Card title="AI Pipelines Overview" icon="brain-circuit" href="/v2/developers/ai-pipelines/overview" arrow horizontal>
Understand the full pipeline architecture.
</Card>
</Step>
</Steps>
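The sidecar pattern above can be sketched as a small HTTP worker. This is a minimal illustration rather than the actual BYOC interface: the `/process` route, port, and JSON shapes are hypothetical stand-ins for whatever contract your container actually exposes to the orchestrator.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WorkerHandler(BaseHTTPRequestHandler):
    """Hypothetical sidecar worker: accepts a JSON job, returns a JSON result."""

    def do_POST(self):
        if self.path != "/process":  # hypothetical route name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        job = json.loads(self.rfile.read(length) or b"{}")
        # Your workload logic goes here: inference, transcoding, custom compute.
        result = {"status": "ok", "echo": job.get("input")}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging in this sketch
        pass

# Inside the container you would serve on a known port (value hypothetical):
# HTTPServer(("0.0.0.0", 9876), WorkerHandler).serve_forever()
```

Exercising the worker with a plain HTTP POST locally keeps the feedback loop short before a gateway or orchestrator is involved; the reference pipelines linked above show what real payloads look like.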

### Option B: Direct Smart Contract Interaction

If you want full control over orchestrator management, you can interact with Livepeer's smart contracts directly using your own tooling. This lets you onboard orchestrators, control nodes remotely, manage payments, and build custom orchestration logic — all without going through the standard gateway flow.

A good starting point is forking **livepeer-ops**, which provides infrastructure tooling for exactly this: onboarding orchestrators, remote node management, and payment handling through direct smart contract interaction.

<Columns cols={2}>
<Card title="livepeer-ops" icon="github" href="https://github.com/its-DeFine/livepeer-ops" arrow>
Fork this to get started — includes orchestrator onboarding, remote node control, and smart contract payment tooling.
</Card>
<Card title="Embody Pipeline" icon="github" href="https://github.com/its-DeFine/Unreal_Vtuber" arrow>
A reference implementation that uses direct smart contract interaction to run a real-time avatar pipeline on Livepeer.
</Card>
</Columns>

<Tip>
You're not limited to these two options. The smart contract interface is open — you can fork livepeer-ops as a foundation, extend the Embody pipeline, or build your own tooling from scratch. Use whatever fits your architecture.
</Tip>
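Under the hood, "direct smart contract interaction" means sending `eth_call` queries and signed transactions to an Arbitrum RPC endpoint. As a hedged sketch of the lowest layer only, the snippet below builds a raw JSON-RPC request body; the contract address and calldata are placeholders, and real tooling (web3 libraries, or a livepeer-ops fork) handles ABI encoding and signing on top of this.

```python
import json

def eth_call_payload(contract: str, calldata: str, request_id: int = 1) -> str:
    """Build a raw JSON-RPC eth_call request body.

    `contract` and `calldata` are placeholders; real tooling ABI-encodes the
    function selector and arguments into `calldata` before sending.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "eth_call",
        "params": [{"to": contract, "data": calldata}, "latest"],
        "id": request_id,
    })

# Hypothetical values; substitute the real contract address and calldata.
payload = eth_call_payload("0x" + "00" * 20, "0x")
```

You would POST this body to your RPC provider's URL. Everything above this layer (contract ABIs, key management, retries) is exactly what makes forking existing tooling attractive instead of starting from scratch.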

---

## Path 2: Workload Consumer

As a **Workload Consumer**, you use existing pipeline workloads that are already running on the Livepeer network. You don't need to set up infrastructure or deploy containers — you connect to available pipelines and consume their output.

### Available Pipelines

<Columns cols={2}>
<Card title="Daydream (DaS Scope)" icon="wand-sparkles" href="#">
Consume Daydream pipeline workloads on the Livepeer network.
<Note>Link coming soon</Note>
</Card>
<Card title="Embody Pipeline" icon="user-robot" href="#">
Consume Embody pipeline workloads for real-time avatar and VTuber applications.
<Note>Link coming soon</Note>
</Card>
</Columns>
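Pipeline-specific docs are still being linked up, but consuming a hosted pipeline generally comes down to sending HTTP requests to an endpoint the pipeline exposes. The sketch below only builds such a request without sending it; the base URL, route, and parameter names are hypothetical placeholders until the pipeline docs land.

```python
import json
import urllib.request

def build_pipeline_request(base_url: str, route: str, params: dict) -> urllib.request.Request:
    """Build (but do not send) a JSON POST to a hypothetical pipeline endpoint."""
    return urllib.request.Request(
        f"{base_url}{route}",
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint and parameters; consult each pipeline's docs when published.
req = build_pipeline_request("https://gateway.example.com", "/pipeline", {"prompt": "hello"})
```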

---

## Path 3: Core Contributor

As a **Core Contributor**, you work directly on go-livepeer — the Go implementation that powers gateways, orchestrators, and the protocol itself. This path is for developers who want to improve the network at the infrastructure level.

<Columns cols={2}>
<Card title="go-livepeer" icon="github" href="https://github.com/livepeer/go-livepeer" arrow>
The official Go implementation of the Livepeer protocol. Clone the repo and start exploring.
</Card>
<Card title="Contribution Guide" icon="book-open" href="/v2/developers/guides-and-resources/contribution-guide" arrow>
Guidelines for contributing to Livepeer — coding standards, PR process, and how to get your changes merged.
</Card>
</Columns>
File: v2/orchestrators/orchestrator-journey.mdx
---
title: 'Orchestrator Journey'
sidebarTitle: 'Orchestrator Journey'
description: 'Choose your path as a Livepeer orchestrator — run a full go-livepeer node or use a lightweight split setup'
keywords: ["livepeer", "orchestrator", "go-livepeer", "OrchestratorSiphon", "GPU", "setup", "rewards", "staking"]
og:image: "/snippets/assets/domain/SHARED/LivepeerDocsLogo.svg"
---

As an orchestrator, you run a node that provides GPU compute to the Livepeer network. Orchestrators process workloads submitted by gateways — transcoding video, running AI inference, or executing BYOC containers — and earn LPT rewards and ETH fees in return.

There are two approaches depending on whether you want full control of a single setup or prefer to separate rewards management from workload processing.

## Pick Your Setup

<Columns cols={2}>
<Card title="Full go-livepeer Setup" icon="microchip" href="#option-a-full-go-livepeer-setup" arrow>
Run the full go-livepeer binary on one machine — register on-chain, stake LPT, and process workloads directly.
</Card>
<Card title="Split Setup (Siphon + go-livepeer)" icon="shield-check" href="#option-b-split-setup-orchestratorsiphon--go-livepeer" arrow>
Separate rewards and keystore management from workload processing across two machines. Avoid missing rewards.
</Card>
</Columns>

---

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#1a1a1a', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2d9a67', 'lineColor': '#2d9a67', 'secondaryColor': '#0d0d0d', 'tertiaryColor': '#1a1a1a', 'background': '#0d0d0d', 'fontFamily': 'system-ui', 'clusterBkg': 'rgba(255,255,255,0.05)', 'clusterBorder': '#2d9a67' }}}%%
flowchart TD
classDef default stroke-width:2px

Start["Orchestrator"] --> FULL["Option A:<br/>Full go-livepeer Setup"]
Start --> SPLIT["Option B:<br/>Split Setup"]

FULL --> INSTALL["Install go-livepeer"]
INSTALL --> REGISTER["Register &<br/>Stake On-Chain"]
REGISTER --> PROCESS["Process Workloads"]

SPLIT --> SIPHON["Secure Machine:<br/>OrchestratorSiphon"]
SPLIT --> GPU["GPU Machine:<br/>go-livepeer"]

SIPHON --> REWARDS["Claim Rewards<br/>Vote on Proposals"]
GPU --> WORKLOADS["Process Workloads"]
```

---

## Option A: Full go-livepeer Setup

In the standard setup, everything runs on one machine: you install the full go-livepeer binary, register on-chain, and actively process workloads. This is the path for operators who want a straightforward, all-in-one orchestrator.

<Steps>
<Step title="Install go-livepeer" icon="download">
Build from source or download a release binary. The go-livepeer CLI includes everything needed to run as an orchestrator — no additional tooling required.

<Card title="go-livepeer Repository" icon="github" href="https://github.com/livepeer/go-livepeer" arrow horizontal>
Source code, releases, and build instructions.
</Card>
</Step>
<Step title="Register and activate on-chain" icon="link">
Use the go-livepeer CLI to register as an orchestrator on the Livepeer protocol. You'll need to stake LPT and activate your node on the Arbitrum L2.

<Card title="go-livepeer Technical Docs" icon="book-open" href="https://github.com/livepeer/go-livepeer/tree/master/doc" arrow horizontal>
Docs for networking, GPU setup, payments, and Ethereum/Arbitrum configuration.
</Card>
</Step>
<Step title="Configure your GPU and start processing" icon="microchip">
Set up your GPU configuration and start your orchestrator. Once active, your node receives workloads from gateways and processes them.
</Step>
</Steps>
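Once your orchestrator is live, gateways reach it at the service address you configured. A quick sanity check, independent of go-livepeer itself, is to confirm the service port accepts TCP connections from outside. The host below is a placeholder, and 8935 is the port go-livepeer conventionally uses for its service address, so adjust if your configuration differs.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address; substitute your orchestrator's public service host/port.
# print(port_reachable("203.0.113.7", 8935))
```

Run this from a machine outside your network: a reachable port does not prove the orchestrator is healthy, but an unreachable one immediately points at firewall or NAT configuration rather than go-livepeer.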

---

## Option B: Split Setup (OrchestratorSiphon + go-livepeer)

The split setup separates two concerns across different machines:

- **Secure machine** — runs **OrchestratorSiphon**, a lightweight Python toolkit that manages your orchestrator keystore. Handles on-chain actions like claiming rewards, voting on proposals, and updating your service URI. Your keystore stays on one secure, isolated machine.
- **GPU machine** — runs **go-livepeer** to actively process workloads. No keystore needed on this box.

This avoids a common problem: missing rewards because your orchestrator node was busy processing workloads or went down temporarily. With the split setup, rewards claiming runs independently on its own machine.
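The design can be illustrated with a toy round-watcher, independent of OrchestratorSiphon's actual API: the `rounds` feed and `claim_reward` callback below are hypothetical stand-ins for polling the protocol and submitting the on-chain call. The point is that claiming keys off the protocol round, not off workload activity, so it keeps firing even when the GPU box is saturated or offline.

```python
def run_claim_loop(rounds, claim_reward):
    """Toy round-watcher: invoke claim_reward once per newly observed round.

    `rounds` is any iterable of round numbers seen on-chain (a hypothetical
    stand-in for polling the protocol); `claim_reward` is a callback that
    would perform the actual on-chain reward call.
    """
    last_claimed = None
    for current_round in rounds:
        if current_round != last_claimed:
            claim_reward(current_round)
            last_claimed = current_round

claimed = []
# Simulated feed: polling sees round 100 three times, then round 101 once.
run_claim_loop([100, 100, 100, 101], claimed.append)
# claimed is now [100, 101]: one claim per round, regardless of poll frequency.
```

Because this loop has no dependency on the GPU machine, a crash or overload there never causes a missed round, which is the failure mode the split setup is designed to avoid.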

<Columns cols={2}>
<Card title="OrchestratorSiphon" icon="github" href="https://github.com/Stronk-Tech/OrchestratorSiphon" arrow>
Lightweight keystore management — claim rewards, vote on proposals, update service URI. Runs on your secure machine.
</Card>
<Card title="go-livepeer" icon="github" href="https://github.com/livepeer/go-livepeer" arrow>
Deploy on your GPU machine for active workload processing. Point your service URI to this box.
</Card>
</Columns>

<Tip>
You can start with OrchestratorSiphon alone as a passive orchestrator to earn rewards while you set up your GPU infrastructure. When you're ready to process workloads, deploy go-livepeer on a separate machine and update your service URI to point to it.
</Tip>

---

## Not sure which setup?

If you're new to Livepeer and just want to contribute a GPU without running a full orchestrator, consider [joining an existing pool](/v2/orchestrators/quickstart/join-a-pool) instead. The [Orchestrator Quickstart](/v2/orchestrators/quickstart/overview) has a decision tree to help you choose.