Mello Teleop #16
base: main
Conversation
* Referenced ur5e usd file from gcu_objects instead of downloads folder
* Separated teleop mode code into their own cfgs for point_pos and pack_env
* Added code back to make non-teleop modes work again
Gripper not working correctly
gcu_objects references local commit in submodule, push these changes first!
Created pull request to merge the
* Fixed import errors
* Switched to binary joint for claw
* Added totes
scripts/record_demos.py (Outdated)
Remove file, incomplete
scripts/teleop_se3_agent.py (Outdated)
```python
).repeat(num_envs, 1)

# gripper_vel = torch.zeros((delta_pose.shape[0], 1), dtype=torch.float, device=device)
```
Remove comments
scripts/zero_agent.py (Outdated)
```python
# compute zero actions
actions = torch.zeros(env.action_space.shape, device=env.unwrapped.device)

actions[:, 6] = 0
```
Remove all changes in this file
```python
"""
# process actions
self.action_manager.process_action(action.to(self.device))
```
Undo the whitespace changes; use the formatting tool `pre-commit run --all-files` to format.
* Enhance BPP agent with seed initialization and modify packing parameters
  - Refactor bpp_utils
* Add rsl_rl submodule with initial commit reference
* Introduce new GCU Lab RL extension for enhanced reinforcement learning utilities. Remove deprecated RL training and playing scripts for various frameworks (RL-Games, Stable-Baselines3, skrl) to streamline the codebase.
* Formatting
* feat: Add NoArmPack environment with camera support
  - Use depth images instead of container voxelized heightmaps for speedup
  - Not moving the objects in reserve to source totes, to speed up; implementation for the sped-up batched version is also there for the future
  - Logging is now on its own thread and won't block simulation steps
  - Tested on cracker box placement and achieves 0.7 GCU consistently
* fix: Adjust tote dimensions and decimation settings for improved packing behavior
* feat: Enhance packing functionality and introduce BPP utility
  - Added BPP utility class for managing 3D packing operations with multiprocessing support.
  - Integrated BPP into ManagerBasedRLGCUEnv for improved object packing.
  - Updated RslRlGCUVecEnvWrapper to accommodate new packing logic and object handling.
  - Introduced refill_source_totes function to manage object replenishment in totes.
  - Enhanced object property retrieval with obs_dims function for better observation handling.
  - Adjusted environment configuration to increase object capacity and optimize packing processes.
  - Refactored event handling to streamline object detection and randomization processes.
  - Improved performance profiling in ToteManager for better debugging and optimization.
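The `refill_source_totes` function named in these commits can be sketched roughly as follows. This is a hypothetical, simplified version for illustration only: it models totes as plain lists and the reserve as a deque of object ids, whereas the real function in bpp_utils presumably operates on simulator object handles.

```python
from collections import deque

def refill_source_totes(source_totes, reserve, capacity):
    """Top up each source tote from a shared reserve of object ids.

    Hypothetical sketch: totes are plain lists, the reserve is a deque,
    and `capacity` is the maximum number of objects per tote.
    """
    for tote in source_totes:
        # draw from the reserve until the tote is full or the reserve runs out
        while len(tote) < capacity and reserve:
            tote.append(reserve.popleft())
    return source_totes
```

For example, `refill_source_totes([[0], []], deque([1, 2, 3, 4, 5]), capacity=2)` fills the totes to `[[0, 1], [2, 3]]` and leaves `[4, 5]` in reserve.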
* Record episode rewards
* feat: Add inverse wasted volume calculation and improve GCU computation in ToteManager
* fix: Fan Wang: Disable subset sum packing method and pack only in first tote
* Update gcu_objects submodule - fixes absolute paths in textures to relative paths
* feat: Add new script for printing placed objects in a container and enhance view_container_gif.py with truncation and padding options; add seeding and random actions in play.py
* 4 90-deg rotation configurations; fix: Update submodules and adjust configurations for packing environment
* Refactor and clean up codebase
  - Removed the `random_agent.py` script as it is now replaced by `play.py --random`
  - Minor formatting adjustments in `play.py` to improve readability.
  - Updated `manager_based_rl_gcu_env.py` to enable multiprocessing for packing tasks and removed unnecessary timing logs.
  - Cleaned up `gcu_vecenv_wrapper.py` by removing unused imports and improving code structure.
  - Fixed formatting issues in `events.py` and ensured consistent return statements.
  - Refactored `bpp_utils.py` for better readability and updated type hints for consistency.
  - Cleaned up `tote_manager.py` by removing commented-out profiling code and ensuring clarity in object placement logic.
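Conceptually, the GCU and inverse-wasted-volume terms referenced here reduce to simple volume ratios. A minimal sketch, with the caveat that function names and the exact computation in ToteManager are assumptions (the real code may use voxelized occupancy rather than nominal object volumes):

```python
def gcu(packed_volumes, tote_volume):
    # Gross Capacity Utilization: fraction of the tote volume occupied
    # by packed objects (assumed definition for illustration).
    return sum(packed_volumes) / tote_volume

def inverse_wasted_volume(packed_volumes, tote_volume):
    # Reward term that grows as empty space shrinks; unbounded as the
    # tote approaches full, so a real implementation would likely clamp
    # or rescale it.
    wasted = tote_volume - sum(packed_volumes)
    return 1.0 / wasted if wasted > 0 else float("inf")
```

With objects of volume 1.0 and 2.0 in a 10.0-unit tote, `gcu` gives 0.3 and `inverse_wasted_volume` gives 1/7.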
* feat: Add disable_logging option to ToteManagerCfg and update logging initialization in ToteManager
* Update source/tote_consolidation/tote_consolidation/tasks/manager_based/pack/utils/tote_statistics.py (Co-authored-by: Copilot <[email protected]>)
* Update source/tote_consolidation/tote_consolidation/tasks/manager_based/pack/utils/bpp_utils.py (Co-authored-by: Copilot <[email protected]>)
* Update source/tote_consolidation/tote_consolidation/tasks/manager_based/pack/mdp/events.py (Co-authored-by: Copilot <[email protected]>)
* Update scripts/rsl_rl/play.py (Co-authored-by: Copilot <[email protected]>)
* fix: Update depth image reshaping logic and clean up hardcoded values in RslRlGCUVecEnvWrapper
- Updated gcu_objects submodule to the latest commit for improved integration.
- Modified rsl_rl submodule to indicate a dirty state.
- Added matplotlib import in play.py for image observation visualization.
- Commented out unused logging and stats code in play.py to streamline execution.
- Adjusted environment configuration in manager_based_rl_gcu_env.py for improved packing behavior.
- Updated setup.py dependencies for compatibility with newer versions of PyTorch and Protobuf.
- Enhanced BPP utility in bpp_utils.py with caching and multiprocessing support for packing components.
- Cleaned up ToteManager logic to improve object reset handling in overfilled environments.
* Updated bowl collisions to lightwheel
* Change to MRCNet SO(3) discretization
* Add observation normalization (heightmap sensor)
* Deleted record_demos
* Linted files
* Undid zero_agent changes
* Deleted broken commented-out code
Remove commented-out code related to event and reward overrides.
Removed commented-out code related to robot joint configurations and event overrides.
Added comments for clarity on gripper command handling.
This update primarily adds the teleoperation agent Mello and analog gripper command support. A "Teleoperation and Imitation Learning" section has been added to the README for Mello, but imitation learning content has not yet been included, as data collection for it has not been implemented.
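The analog gripper command support described above, combined with the earlier switch to a binary joint for the claw, implies a thresholding step somewhere between the Mello device input and the joint target. A hedged sketch of that mapping; the function name, the 0.5 threshold, and the ±1 close/open convention are assumptions, not taken from the PR:

```python
def gripper_command(analog, close_threshold=0.5):
    """Map an analog trigger value in [0, 1] to a binary claw command.

    Hypothetical sketch: returns +1.0 (close) when the analog input
    crosses the threshold, otherwise -1.0 (open).
    """
    analog = min(max(analog, 0.0), 1.0)  # clamp noisy device readings
    return 1.0 if analog >= close_threshold else -1.0
```

For example, a trigger reading of 0.8 closes the claw (`1.0`) while 0.2 opens it (`-1.0`); out-of-range readings are clamped first.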