
Conversation

@liguilong256 (Collaborator)

Description

This PR adds a new reinforcement learning task environment, PushTRL (Push-T).

Changes include:

  • Environment implementation: added the PushTEnv class, which defines the task of pushing a T-shaped block to a goal using a UR10 manipulator, along with the relevant task parameters.
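
A rough usage sketch for context, assuming a gym-style interface; the `make_env` factory, its import path, and the reset/step signatures below are assumptions, not APIs shown in this PR:

```python
# Hypothetical usage sketch; make_env and the gym-style loop are assumptions.
from envs.registry import make_env  # hypothetical import path

env = make_env("PushTRL")  # registered below with max_episode_steps=50
obs = env.reset()
for _ in range(50):
    action = env.action_space.sample()          # random UR10 joint targets
    obs, reward, done, info = env.step(action)
```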

Checklist

  • Added PushTRL task environment.



@register_env("PushTRL", max_episode_steps=50, override=True)
class PushTEnv(EmbodiedEnv):
Contributor

We should inherit RLEnv for now.
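
A minimal sketch of the suggested change; the import path for `RLEnv` is an assumption about this repository's layout:

```python
# Sketch only: inherit RLEnv instead of EmbodiedEnv, per the review comment.
from envs.rl_env import RLEnv  # hypothetical import path

@register_env("PushTRL", max_episode_steps=50, override=True)
class PushTEnv(RLEnv):
    ...
```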

super()._initialize_episode(env_ids=env_ids, **kwargs)
# self._draw_goal_marker()

def _step_action(self, action: EnvAction) -> EnvAction:
Contributor

We now have standard preprocess_action function, so _step_action should not be modified.
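
A sketch of the suggested refactor as a method on `PushTEnv`; the exact signature of the standard hook and the `action_scale` attribute are assumptions:

```python
# Sketch only: do task-specific scaling in the standard preprocess_action
# hook and leave _step_action untouched. The signature is an assumption.
def preprocess_action(self, action: EnvAction) -> EnvAction:
    scaled_action = action * self.action_scale  # action_scale is hypothetical
    return scaled_action
```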

self.robot.set_qpos(qpos=target_qpos)
return scaled_action

def _get_eef_pos(self) -> torch.Tensor:
Contributor

We should implement a standard eef_pose computation method. cc @yhnsu
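
One possible shape for such a shared helper, assuming the simulated robot exposes per-link world poses; all attribute names below are assumptions:

```python
# Sketch only: read the end-effector link's pose from simulator state rather
# than recomputing it in each task. robot.links and eef_link_name are
# hypothetical names.
def eef_pose(self) -> torch.Tensor:
    link = self.robot.links[self.eef_link_name]
    return torch.cat([link.pos, link.quat], dim=-1)  # (num_envs, 7): xyz + quat
```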

def evaluate(self, **kwargs) -> Dict[str, Any]:
    info = self.get_info(**kwargs)
    return {
        "success": info["success"][0].item(),
Contributor

Shouldn't `success` be a vector (one entry per parallel environment)?
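
For illustration, the vectorized variant the question points at keeps the per-environment tensor instead of collapsing it to a scalar for env 0:

```python
# Sketch only: return success for all parallel environments rather than
# indexing out environment 0 and calling .item().
def evaluate(self, **kwargs) -> Dict[str, Any]:
    info = self.get_info(**kwargs)
    return {
        "success": info["success"],  # bool tensor of shape (num_envs,)
    }
```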
