Data storage format #67
Hi @COST-97, I assume you had a look at this README and know the structure of the data for each frame. If something is unclear, don't hesitate to ask. Apart from the frames, there is:
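For the per-frame structure, a rough loading sketch (the filename is illustrative and the key names are taken from the README; verify them against your own download):

```python
import numpy as np

# illustrative filename; frames in the CALVIN dataset are stored as
# individual episode_XXXXXXX.npz files (see the README for the exact layout)
frame = np.load("training/episode_0000000.npz")

# key names as documented in the README
print(frame["actions"].shape)      # absolute action: tcp pos (3), orn (3), gripper (1)
print(frame["rel_actions"].shape)  # relative action, same layout
print(frame["robot_obs"].shape)    # proprioceptive robot state
print(frame["scene_obs"].shape)    # state of the objects in the scene
print(frame["rgb_static"].shape)   # static camera RGB image
```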
Are you planning to record data with teleoperation using a VR headset?
Glad to hear from you!
In principle, it's not too difficult to replace the robot manipulator with a different one in our simulation (it would be more straightforward if it also had 7 DOF). Due to the different kinematics, you might have to play around with the placement of the robot in the environment to ensure all positions are reachable without driving the robot into singularities. Recording data without VR is a bit more challenging; another alternative we have tested is using a 3D mouse. Scripting the policy would be a departure from play data, and it is also quite hard to script a policy for complex contact-rich manipulations.
Hello:
You are probably referring to the language annotations, for which we use our automatic labeling tool with a sequence length of 64. That means we sample random sequences of length 64 and check whether any task was solved in that interval, in which case that sequence gets a language label (see the sketch below). Since most tasks need fewer than 64 frames to be solved, there are usually some frames at the beginning or end of the sequence that are not strictly task-related (for example, the motion of the arm towards the handles or switches).
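A minimal sketch of that sampling loop, assuming a hypothetical `task_oracle` interface with a `get_completed_tasks` method (the actual annotation tool in the repo works differently in its details):

```python
import random

SEQ_LEN = 64  # window length used by the automatic labeling tool

def sample_language_annotations(episode, task_oracle, num_samples):
    """Sample random windows of 64 frames and keep those in which
    a task was completed. `episode` is a list of frames and
    `task_oracle` is an assumed interface, not the calvin_env API."""
    annotations = []
    for _ in range(num_samples):
        start = random.randint(0, len(episode) - SEQ_LEN)
        end = start + SEQ_LEN
        # the oracle compares the scene state at the start and end of the
        # window and returns the names of any tasks completed in between
        solved = task_oracle.get_completed_tasks(episode[start], episode[end - 1])
        if solved:
            # the whole 64-frame window gets the label, so frames before
            # and after the actual task (e.g. reaching) are included
            annotations.append((start, end, solved[0]))
    return annotations
```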
Hello:
For the conversion from absolute metric space to relative normalized actions, the position component (x, y, z) is clipped to the interval [-0.02, 0.02] and the orientation component (Euler angles) to [-0.05, 0.05]. After the clipping, we normalize to the range [-1, 1]. This happens in calvin_env at the time of rendering. To convert them back to metric space, the position component (x, y, z) is multiplied by 0.02 and the orientation component (Euler angles) by 0.05. This happens in calvin_env here, when we do a rollout and want to control the robot with relative actions.
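A minimal sketch of both directions of the conversion (the function names are illustrative; the actual code lives in calvin_env):

```python
import numpy as np

# clipping bounds used for relative actions
POS_BOUND = 0.02   # meters, per axis
ORN_BOUND = 0.05   # radians (Euler angles), per axis

def to_relative_normalized(delta_pos, delta_orn):
    """Absolute metric deltas -> normalized relative actions in [-1, 1]."""
    pos = np.clip(delta_pos, -POS_BOUND, POS_BOUND) / POS_BOUND
    orn = np.clip(delta_orn, -ORN_BOUND, ORN_BOUND) / ORN_BOUND
    return pos, orn

def to_metric(rel_pos, rel_orn):
    """Normalized relative actions -> metric deltas, as done at rollout time."""
    return rel_pos * POS_BOUND, rel_orn * ORN_BOUND
```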
Hello:
I don't know the min and max values for the state observation; you would have to go through the dataset and check.
If you want to normalize to [-1, 1], just modify the script I linked, but it will take some time to run through the whole dataset.
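A rough sketch of such a scan, assuming the episode_*.npz layout of the dataset (`scan_min_max` and `normalize` are illustrative names, not part of the repo):

```python
import glob
import numpy as np

def scan_min_max(dataset_dir, key="robot_obs"):
    """Scan all frames once to find the per-dimension min/max
    of a state observation."""
    lo, hi = None, None
    for path in sorted(glob.glob(f"{dataset_dir}/episode_*.npz")):
        obs = np.load(path)[key]
        lo = obs if lo is None else np.minimum(lo, obs)
        hi = obs if hi is None else np.maximum(hi, obs)
    return lo, hi

def normalize(obs, lo, hi):
    # map [lo, hi] -> [-1, 1] per dimension
    # (dimensions with lo == hi would need special handling)
    return 2.0 * (obs - lo) / (hi - lo) - 1.0
```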
Hello:
If not, could you share your code for recording data?
Hello:
We want to collect more data based on this dataset.
Could you elaborate on the role of each file in the training and validation splits, and point us to a tutorial on data collection?
Thank you so much!