Commit 0b47857
Isaac ROS 0.30.0 (DP3)
1 parent 2d73f40 commit 0b47857

30 files changed (+3398, -100 lines)

.gitattributes

Lines changed: 3 additions & 0 deletions
@@ -1,3 +1,6 @@
+# Ignore Python files in linguist
+*.py linguist-detectable=false
+
 # Images
 *.gif filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text

README.md

Lines changed: 38 additions & 16 deletions
@@ -1,10 +1,26 @@
 # Isaac ROS Object Detection
 
-<div align="center"><img alt="Isaac ROS DetectNet Sample Output" src="resources/header-image.png" width="600px"/></div>
+<div align="center"><img alt="original image" src="resources/isaac_ros_object_detection_example.png" width="300px"/> <img alt="bounding box predictions using DetectNet" src="resources/isaac_ros_object_detection_example_bbox.png" width="300px"/></div>
 
 ## Overview
 
-This repository provides a GPU-accelerated package for object detection based on [DetectNet](https://developer.nvidia.com/blog/detectnet-deep-neural-network-object-detection-digits/). Using a trained deep-learning model and a monocular camera, the `isaac_ros_detectnet` package can detect objects of interest in an image and provide bounding boxes. DetectNet is similar to other popular object detection models such as YOLOV3, FasterRCNN, SSD, and others while being efficient with multiple object classes in large images.
+Isaac ROS Object Detection contains a ROS 2 package for object detection. `isaac_ros_detectnet` provides a method for spatial classification using bounding boxes with an input image. Classification is performed by a GPU-accelerated [DetectNet](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/pretrained_detectnet_v2) model. The output prediction can be used by perception functions to understand the presence and spatial location of an object in an image.
+
+<div align="center"><img alt="graph of nodes using DetectNet" src="resources/isaac_ros_object_detection_nodegraph.png" width="500px"/></div>
+
+`isaac_ros_detectnet` is used in a graph of nodes to provide a bounding box detection array with object classes from an input image. A [DetectNet](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/pretrained_detectnet_v2) model is required to produce the detection array. Input images may need to be cropped and resized to maintain the aspect ratio and match the input resolution of DetectNet; image resolution may be reduced to improve DNN inference performance, which typically scales directly with the number of pixels in the image. `isaac_ros_dnn_image_encoder` provides a DNN encoder to process the input image into Tensors for the DetectNet model. Prediction results are clustered in the DNN decoder to group multiple detections of the same object. Output is provided as a detection array with object classes.
+
+DNNs have a minimum number of pixels that need to be visible on the object to provide a classification prediction. If a person cannot see the object in the image, it is unlikely the DNN will either. Reducing input resolution to reduce compute may therefore reduce what is detected in the image. For example, a 1920x1080 image containing a distant person occupying 1K pixels (64x16) retains only 0.25K pixels (32x8) when downscaled by 1/2 in both X and Y. The DNN may detect the person in the original input image, which provides 1K pixels for the person, yet fail to detect the same person at the downscaled resolution, which provides only 0.25K pixels.
+
+> **Note**: DetectNet is similar to other popular object detection models such as YOLOV3, FasterRCNN, and SSD, while being efficient at detecting multiple object classes in large images.
+
+<div align="center"><img alt="comparison of bounding box detection to segmentation" src="resources/isaac_ros_object_detection_example_bboxseg.png" width="300px"/></div>
+
+Object detection classifies a rectangle of pixels as containing an object, whereas image segmentation provides more information and uses more compute to produce a classification per pixel. Object detection is used to know whether, and where in a 2D image, an object exists. If a 3D spatial understanding or the size of an object in pixels is required, use image segmentation.
+
+### DNN Models
+
+To perform DNN inferencing, a DNN model is required. NGC provides [DetectNet pre-trained models](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/pretrained_detectnet_v2) for use in your robotics application. Using [TAO](https://developer.nvidia.com/tao-toolkit), these pre-trained models can be fine-tuned for your application.
 
 ### Isaac ROS NITROS Acceleration
 
@@ -18,20 +34,20 @@ The performance results of benchmarking the prepared pipelines in this package o
 | -------------------------- | ------------------ | ---------------- | --------------------- |
 | Isaac ROS Detectnet (544p) | 225 fps <br> 7.7ms | 72 fps <br> 18ms | 450 fps <br> 3.2ms |
 
-> **Note:** These numbers are reported with default parameter values found in [params.yaml](./isaac_ros_detectnet/config/params.yaml).
+> **Note**: These numbers are reported with default parameter values found in [params.yaml](./isaac_ros_detectnet/config/params.yaml).
 
 These data have been collected per the methodology described [here](https://github.com/NVIDIA-ISAAC-ROS/.github/blob/main/profile/performance-summary.md#methodology).
 
-### ROS2 Graph Configuration
+### ROS 2 Graph Configuration
 
-To run the DetectNet object detection inference, the following ROS2 nodes should be set up and running:
+To run the DetectNet object detection inference, the following ROS 2 nodes should be set up and running:
 
 ![DetectNet output image showing 2 tennis balls correctly identified](resources/ros2_detectnet_node_setup.svg "Tennis balls detected in image using DetectNet")
 
 1. **Isaac ROS DNN Image Encoder**: This will take an image message and convert it to a tensor ([`TensorList`](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/isaac_ros_tensor_list_interfaces/msg/TensorList.msg)) that can be
 processed by the network.
 2. **Isaac ROS DNN Inference - Triton**: This will execute the DetectNet network and take as input the tensor from the DNN Image Encoder.
-> **Note:** The [Isaac ROS TensorRT](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/tree/main/isaac_ros_tensor_rt) package is not able to perform inference with DetectNet models at this time.
+> **Note**: The [Isaac ROS TensorRT](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference/tree/main/isaac_ros_tensor_rt) package is not able to perform inference with DetectNet models at this time.
 
 The output will be a TensorList message containing the encoded detections. Use the parameters `model_name` and `model_repository_paths` to point to the model folder and set the model name. The `.plan` file should be located at `$model_repository_path/$model_name/1/model.plan`.
 3. **Isaac ROS Detectnet Decoder**: This node will take the TensorList with encoded detections as input, and output `Detection2DArray` messages for each frame. See the following section for the parameters.
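The three-node graph in this hunk (image encoder → Triton → DetectNet decoder) is normally assembled by the launch files shipped with `isaac_ros_detectnet`; the Python launch sketch below is only meant to illustrate that composition and is not a file from this commit. The decoder plugin name matches the component registered in this commit's `CMakeLists.txt`, and `model_name`/`model_repository_paths` are the Triton parameters named above; the encoder and Triton package/plugin names, the encoder resolution parameters, and the `/tmp/models` path are illustrative assumptions.

```python
# Illustrative only -- NOT a file from this commit.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='detectnet_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[
            # 1. DNN image encoder: image -> TensorList
            ComposableNode(
                package='isaac_ros_dnn_encoders',  # assumed package name
                plugin='nvidia::isaac_ros::dnn_inference::DnnImageEncoderNode',  # assumed plugin name
                name='dnn_image_encoder',
                parameters=[{'network_image_width': 960,    # assumed parameter names/values
                             'network_image_height': 544}]),
            # 2. Triton inference: TensorList -> TensorList with encoded detections
            ComposableNode(
                package='isaac_ros_triton',  # assumed package name
                plugin='nvidia::isaac_ros::dnn_inference::TritonNode',  # assumed plugin name
                name='triton_node',
                parameters=[{'model_name': 'detectnet',
                             # Triton expects $model_repository_path/$model_name/1/model.plan,
                             # i.e. /tmp/models/detectnet/1/model.plan in this sketch.
                             'model_repository_paths': ['/tmp/models']}]),
            # 3. DetectNet decoder: encoded detections -> Detection2DArray
            ComposableNode(
                package='isaac_ros_detectnet',
                plugin='nvidia::isaac_ros::detectnet::DetectNetDecoderNode',
                name='detectnet_decoder_node'),
        ],
        output='screen')
    return LaunchDescription([container])
```

Topic remappings between the three components and the remaining decoder parameters (see `params.yaml`) are omitted here; the launch files installed by the package are responsible for wiring those up.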
@@ -40,9 +56,10 @@ To run the DetectNet object detection inference, the following ROS2 nodes should
 
 - [Isaac ROS Object Detection](#isaac-ros-object-detection)
 - [Overview](#overview)
+- [DNN Models](#dnn-models)
 - [Isaac ROS NITROS Acceleration](#isaac-ros-nitros-acceleration)
 - [Performance](#performance)
-- [ROS2 Graph Configuration](#ros2-graph-configuration)
+- [ROS 2 Graph Configuration](#ros-2-graph-configuration)
 - [Table of Contents](#table-of-contents)
 - [Latest Update](#latest-update)
 - [Supported Platforms](#supported-platforms)
@@ -64,24 +81,24 @@ To run the DetectNet object detection inference, the following ROS2 nodes should
 
 ## Latest Update
 
-Update 2022-10-19: Updated OSS licensing
+Update 2023-04-05: Source available GXF extensions
 
 ## Supported Platforms
 
-This package is designed and tested to be compatible with ROS2 Humble running on [Jetson](https://developer.nvidia.com/embedded-computing) or an x86_64 system with an NVIDIA GPU.
+This package is designed and tested to be compatible with ROS 2 Humble running on [Jetson](https://developer.nvidia.com/embedded-computing) or an x86_64 system with an NVIDIA GPU.
 
-> **Note**: Versions of ROS2 earlier than Humble are **not** supported. This package depends on specific ROS2 implementation features that were only introduced beginning with the Humble release.
+> **Note**: Versions of ROS 2 earlier than Humble are **not** supported. This package depends on specific ROS 2 implementation features that were only introduced beginning with the Humble release.
 
-| Platform | Hardware | Software | Notes |
-| -------- | -------- | -------- | ----- |
-| Jetson | [Jetson Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/) <br> [Jetson Xavier](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/) | [JetPack 5.0.2](https://developer.nvidia.com/embedded/jetpack) | For best performance, ensure that [power settings](https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/PlatformPowerAndPerformance.html) are configured appropriately. |
-| x86_64 | NVIDIA GPU | [Ubuntu 20.04+](https://releases.ubuntu.com/20.04/) <br> [CUDA 11.6.1+](https://developer.nvidia.com/cuda-downloads) |
+| Platform | Hardware | Software | Notes |
+| -------- | -------- | -------- | ----- |
+| Jetson | [Jetson Orin](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/) <br> [Jetson Xavier](https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/) | [JetPack 5.1.1](https://developer.nvidia.com/embedded/jetpack) | For best performance, ensure that [power settings](https://docs.nvidia.com/jetson/archives/r34.1/DeveloperGuide/text/SD/PlatformPowerAndPerformance.html) are configured appropriately. |
+| x86_64 | NVIDIA GPU | [Ubuntu 20.04+](https://releases.ubuntu.com/20.04/) <br> [CUDA 11.8](https://developer.nvidia.com/cuda-downloads) |
 
 ### Docker
 
 To simplify development, we strongly recommend leveraging the Isaac ROS Dev Docker images by following [these steps](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/dev-env-setup.md). This will streamline your development environment setup with the correct versions of dependencies on both Jetson and x86_64 platforms.
 
-> **Note:** All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.
+> **Note**: All Isaac ROS Quickstarts, tutorials, and examples have been designed with the Isaac ROS Docker images as a prerequisite.
 
 ## Quickstart
 
@@ -100,6 +117,10 @@ To simplify development, we strongly recommend leveraging the Isaac ROS Dev Dock
 git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
 ```
 
+```bash
+git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline
+```
+
 ```bash
 git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros
 ```
@@ -228,7 +249,8 @@ For solutions to problems with using DNN models, please check [here](https://git
 
 | Date | Changes |
 | ---------- | ------------------------------------------------------------------------------------- |
+| 2023-04-05 | Source available GXF extensions |
 | 2022-10-19 | Updated OSS licensing |
 | 2022-08-31 | Update to use NITROS for improved performance and to be compatible with JetPack 5.0.2 |
-| 2022-06-30 | Support for ROS2 Humble and miscellaneous bug fixes |
+| 2022-06-30 | Support for ROS 2 Humble and miscellaneous bug fixes |
 | 2022-03-21 | Initial release |
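As a side note to the overview text added earlier in this README diff, the downscaling example is easy to sanity-check numerically. The short Python sketch below is not part of the commit; it only reproduces the 1920x1080 arithmetic to show how quickly the pixel budget of a small, distant object collapses when the input is downscaled.

```python
# Reproduces the README's downscaling example -- illustrative only, not part of this commit.

def pixels_after_downscale(obj_w: int, obj_h: int, scale: float) -> int:
    """Pixels an object covers after scaling both image axes by `scale`."""
    return int(obj_w * scale) * int(obj_h * scale)

# A distant person in a 1920x1080 frame covering roughly 64x16 pixels (~1K pixels).
print(pixels_after_downscale(64, 16, 1.0))   # 1024 -> about 1K pixels at full resolution
print(pixels_after_downscale(64, 16, 0.5))   # 256  -> about 0.25K pixels at half resolution
```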

docs/tutorial-isaac-sim.md

Lines changed: 10 additions & 11 deletions
@@ -6,6 +6,8 @@
 
 This tutorial walks you through a pipeline for object (people) detection using [DetectNet](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_object_detection) consuming images from Isaac Sim.
 
+Last validated with [Isaac Sim 2022.2.1](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/release_notes.html#id1)
+
 ## Tutorial Walkthrough
 
 1. Complete the [Quickstart section](../README.md#quickstart) in the main README.
@@ -39,16 +41,13 @@ This tutorial walks you through a pipeline for object (people) detection using [D
 ```
 
 6. Install and launch Isaac Sim following the steps in the [Isaac ROS Isaac Sim Setup Guide](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common/blob/main/docs/isaac-sim-sil-setup.md)
-7. Open up the Isaac ROS Common USD scene (using the "content" window) located at:
-
-`omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Samples/ROS2/Scenario/carter_warehouse_apriltags_worker.usd`
-
-Wait for it to load completely.
-> **Note:** To use a different server, replace `localhost` with `<your_nucleus_server>`
-
-8. Change the Translate values for the Transform box inside the Property section of the Carter_ROS object to
-`X=0.0 , Y=0.0`
-<div align="center"><img src="../resources/change_translate.png" width="400px"/></div>
+7. Open up the Isaac ROS Common USD scene (using the *Content* tab) located at:
+```text
+http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/2022.2.1/Isaac/Samples/ROS2/Scenario/carter_warehouse_apriltags_worker.usd
+```
+And wait for it to load completely.
+8. Go to the *Stage* tab and select `/World/Carter_ROS`, then in *Property* tab *-> Transform -> Translate* set *X* and *Y* both to `0.0`.
+<div align="center"><img src="../resources/Isaac_sim_change_translate.png" width="400px"/></div>
 9. Press **Play** to start publishing data from Isaac Sim.
-<div align="center"><img src="../resources/isaac_sim_play.png" width="600px"/></div>
+<div align="center"><img src="../resources/Isaac_sim_play.png" width="600px"/></div>
 10. You should see the image from Isaac Sim with the rectangles overlaid over detected people in the frame.
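Step 10 of the tutorial expects rectangles overlaid on detected people. A quick way to confirm that detections are actually flowing, independent of the visualization, is to listen to the decoder's `Detection2DArray` output. The sketch below is not part of this commit: the topic name `detectnet/detections_output` is an assumption to verify against the package's launch files, and the field access follows the ROS 2 Humble `vision_msgs` definitions.

```python
# Minimal listener for the decoder's Detection2DArray output.
# Not part of this commit; topic name and field layout are assumptions to verify.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray


class DetectionListener(Node):
    def __init__(self):
        super().__init__('detection_listener')
        # Assumed topic name; check the package's launch files for the real one.
        self._sub = self.create_subscription(
            Detection2DArray, 'detectnet/detections_output', self.on_detections, 10)

    def on_detections(self, msg: Detection2DArray):
        self.get_logger().info(f'{len(msg.detections)} detection(s) in frame')
        for det in msg.detections:
            for hyp in det.results:
                self.get_logger().info(
                    f'  class={hyp.hypothesis.class_id} score={hyp.hypothesis.score:.2f} '
                    f'bbox={det.bbox.size_x:.0f}x{det.bbox.size_y:.0f}')


def main():
    rclpy.init()
    rclpy.spin(DetectionListener())


if __name__ == '__main__':
    main()
```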

isaac_ros_detectnet/CMakeLists.txt

Lines changed: 12 additions & 34 deletions
@@ -15,59 +15,39 @@
 #
 # SPDX-License-Identifier: Apache-2.0
 
-cmake_minimum_required(VERSION 3.5)
+cmake_minimum_required(VERSION 3.23.2)
 project(isaac_ros_detectnet LANGUAGES C CXX)
 
-# Default to C++17
-if(NOT CMAKE_CXX_STANDARD)
-  set(CMAKE_CXX_STANDARD 17)
-endif()
-
 if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
   add_compile_options(-Wall -Wextra -Wpedantic)
 endif()
 
 find_package(ament_cmake_auto REQUIRED)
-find_package(ament_cmake_python REQUIRED)
-
 ament_auto_find_build_dependencies()
+find_package(ament_cmake_python REQUIRED)
 
-execute_process(COMMAND uname -m COMMAND tr -d '\n'
-  OUTPUT_VARIABLE ARCHITECTURE
-)
-message( STATUS "Architecture: ${ARCHITECTURE}" )
-
-# Install config directory
-install(
-  DIRECTORY config
-  DESTINATION share/${PROJECT_NAME}
-)
-
-# Decoder node
+# DetectNetDecoderNode
 ament_auto_add_library(detectnet_decoder_node SHARED src/detectnet_decoder_node.cpp)
-target_compile_definitions(detectnet_decoder_node
-  PRIVATE "COMPOSITION_BUILDING_DLL"
-)
-
 rclcpp_components_register_nodes(detectnet_decoder_node "nvidia::isaac_ros::detectnet::DetectNetDecoderNode")
 set(node_plugins "${node_plugins}nvidia::isaac_ros::detectnet::DetectNetDecoderNode;$<TARGET_FILE:detectnet_decoder_node>\n")
 
-install(TARGETS detectnet_decoder_node
-  ARCHIVE DESTINATION lib
-  LIBRARY DESTINATION lib
-  RUNTIME DESTINATION bin
-)
+### Install extensions built from source
+
+# DetectNet
+add_subdirectory(gxf/detectnet)
+install(TARGETS gxf_detectnet DESTINATION share/${PROJECT_NAME}/gxf/lib/detectnet)
+
+### End extensions
 
 if(BUILD_TESTING)
   find_package(ament_lint_auto REQUIRED)
   ament_lint_auto_find_test_dependencies()
 
   find_package(launch_testing_ament_cmake REQUIRED)
-  add_launch_test(test/isaac_ros_detectnet_pol_test.py TIMEOUT "400")
+  add_launch_test(test/isaac_ros_detectnet_pol_test.py TIMEOUT "600")
 endif()
 
 # Visualizer python scripts
-
 ament_python_install_package(${PROJECT_NAME})
 
 install(PROGRAMS
@@ -80,6 +60,4 @@ install(DIRECTORY
   DESTINATION share/${PROJECT_NAME}
 )
 
-ament_auto_package(INSTALL_TO_SHARE launch config)
-
-find_package(vision_msgs REQUIRED)
+ament_auto_package(INSTALL_TO_SHARE config launch)

isaac_ros_detectnet/gxf/AMENT_IGNORE

Whitespace-only changes.
