Aditya Nisal
Email : anisal@wpi.edu
About Me
I am a Robotics Engineer with experience in AI, computer vision, and deep learning. My work focuses on building robotic systems that integrate advanced perception and real-time multi-object tracking. With a strong foundation in kinematics and dynamics, plus hands-on experience from internships and projects, I am passionate about applying cutting-edge technology to complex problems. Currently at Dexmate, I am eager to learn, grow, and contribute to impactful projects in robotics and AI.
My Experience
June 2024 -- October 2024
Onki.AI
Robotics Software Intern
• Developed and deployed a multi-object tracking algorithm integrating models for emotion recognition (FER), demographic attributes (FairFace), pose estimation, hand-wave detection (MediaPipe), gaze direction, and motion (optical flow), optimized on NVIDIA Jetson with TensorRT for faster inference.
• Implemented real-time image stitching for dual camera feeds in ROS2, using SIFT feature detection, FLANN matching, and RANSAC-based homography estimation, achieving roughly 5 FPS and improving real-time monitoring.
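The feature detection and matching stages above come from OpenCV; the RANSAC homography-estimation step can be sketched in pure NumPy. This is an illustrative sketch with hypothetical function names, not the production code:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: solve for H mapping src -> dst (N x 2 arrays)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of this system (last right singular vector) is H.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                      # normalize so H[2,2] == 1

def project(H, pts):
    """Apply homography H to N x 2 points (homogeneous divide at the end)."""
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = hom @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=3.0, seed=0):
    """Estimate H from matches containing outliers: repeatedly fit a minimal
    4-point sample, keep the model with the most inliers, then refit on them."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        with np.errstate(all="ignore"):     # degenerate samples may overflow
            err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

With the homography in hand, the second view is warped into the first view's frame and the two are blended along the seam.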
June 2023 -- December 2023
Monarch Tractor
Perception Intern
• Developed a CUDA script for efficient conversion of RGB images to YUV2 format and back, streamlining image-processing workflows.
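The per-pixel color-space math underlying such a conversion can be sketched with NumPy. This shows only the BT.601 RGB↔YUV transform itself; the packed byte layout and the CUDA parallelization (one thread per pixel) are not shown:

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix; its inverse gives the return path.
RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114  ],   # Y (luma)
    [-0.14713, -0.28886,  0.436  ],   # U (blue-difference chroma)
    [ 0.615,   -0.51499, -0.10001],   # V (red-difference chroma)
])
YUV_TO_RGB = np.linalg.inv(RGB_TO_YUV)

def rgb_to_yuv(img):
    """img: H x W x 3 float array in [0, 1]; returns the YUV representation."""
    return img @ RGB_TO_YUV.T

def yuv_to_rgb(img):
    """Inverse transform, recovering RGB from YUV."""
    return img @ YUV_TO_RGB.T
```

Because the inverse matrix is computed from the forward one, the round trip is lossless up to floating-point precision; a fixed-point CUDA kernel would additionally handle quantization and chroma subsampling.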
• Developed a vision- and large-language-model-based image retrieval system using RAG (Retrieval-Augmented Generation) on AWS, improving image retrieval and generation from text and image prompts; research on this work is being prepared for publication.
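The retrieval half of a RAG pipeline reduces to nearest-neighbor search over embeddings. A minimal sketch, with toy vectors standing in for real vision-language-model embeddings:

```python
import numpy as np

def cosine_top_k(query, corpus, k=3):
    """Return indices of the k corpus embeddings most similar to the query.

    query: (d,) vector; corpus: (n, d) matrix. In a real system the
    embeddings would come from a vision-language model; here they are
    arbitrary vectors for illustration.
    """
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q                        # cosine similarity against every item
    return np.argsort(-sims)[:k], sims

# The retrieved items (captions, image metadata) are then injected into the
# LLM prompt as context -- the "augmented generation" half of RAG.
```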
• Developed a multi-object tracking and person re-identification system using deep learning models for object detection and Transformer-based ReID. Implemented feature extraction, optical flow, and a Kalman filter for tracking and state prediction, optimized with TensorRT on the NVIDIA Jetson Orin.
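The Kalman-filter portion of such a tracker is typically a constant-velocity model over detection positions. A minimal NumPy sketch under that assumption (function names are illustrative):

```python
import numpy as np

def make_cv_model(dt=1.0):
    """Constant-velocity model for a 2-D track: state = [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],        # position advances by velocity * dt
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],         # velocity is assumed constant
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],         # we only measure position
                  [0, 1, 0, 0]], dtype=float)
    return F, H

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update cycle; z is the measured [x, y] position."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Between detections, the predict step alone propagates each track, which is what lets the tracker bridge short occlusions before re-associating via ReID features.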
• Developed a ROS-based script for simultaneous data collection and synchronization across sensors including LiDAR, a ZED camera, and monocular cameras, supporting the Autonomy team's research and development initiatives.
• Spearheaded the migration of the perception pipeline from ROS to ROS2, automating the process with custom scripts and enhancing functionality, which streamlined the team's R&D efforts.
• Implemented image stitching using ORB detection, BFMatcher, and RANSAC-based homography with perspective warping to seamlessly integrate multiple camera views on the tractor interface, improving real-time monitoring.
• Engineered and implemented GStreamer pipelines for video processing, slashing Jetson CPU