
Yikuang Yang
1669 Belleville Way, Apt E, Sunnyvale, CA 94087 | ------------ | ------------

EDUCATION
Johns Hopkins University, Baltimore, MD 2017 – 2019
M.S.E. candidate, Robotics GPA: 3.95/4.0
Coursework: Computer Vision; Computer Graphics; Augmented Reality; Machine Learning; Deep Learning; Robot System Programming; Sensor-Based Robotics; Robot Kinematics, Dynamics and Control
University of Southern California, Los Angeles, CA 2013 – 2017
B.S., Mechanical Engineering GPA: 3.55/4.0

TECHNICAL SKILLS
• Programming: C++, Python (OpenCV, PyTorch), Java, ROS, MATLAB/Simulink
• Utilities: Arduino, LabVIEW, Git, CAD (SolidWorks, NX), 3D printing, laser cutting

WORK EXPERIENCE
Robotics Manipulation Intern, BSH by Bosch LLC, Sunnyvale, CA 05/2018 – 08/2018
• Utilized the move group interface in MoveIt! to build a sampling-based path planner that generated collision-free paths from the outside to the inside of home appliances (e.g., a dishwasher); see the sketch after this list
• Controlled a Baxter robot to flip plastic bowls by implementing control algorithms (inverse kinematics, resolved-rate, gradient-based) in ROS
• Implemented a tracker in ROS to record the Baxter's pose with respect to home appliances using DART, a 3D tracking library
• Integrated depth information from a Kinect v2 with the Baxter poses to update the planned path
• Visualized the tracked model in RViz by adding URDF file support to DART
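
A minimal sketch of the planning step above, assuming MoveIt!'s Python commander (moveit_commander); the planning group name, planner id, and target pose are illustrative, not the exact values used at BSH:

    #!/usr/bin/env python
    # Sketch: plan a collision-free reach into an appliance with MoveIt!
    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("appliance_reach_planner")

    arm = moveit_commander.MoveGroupCommander("right_arm")  # assumed planning group
    arm.set_planner_id("RRTConnectkConfigDefault")          # sampling-based OMPL planner
    arm.set_planning_time(5.0)

    goal = Pose()                                           # illustrative pose inside the appliance
    goal.position.x, goal.position.y, goal.position.z = 0.8, -0.2, 0.3
    goal.orientation.w = 1.0

    arm.set_pose_target(goal)
    success = arm.go(wait=True)                             # plan and execute a collision-free path
    arm.stop()
    arm.clear_pose_targets()
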
Robotics Researcher, CAM Lab, USC, Los Angeles, CA 08/2016 – 05/2017
• Constructed and programmed a four-legged robot platform to traverse complex terrain and avoid obstacles
• Designed and optimized the robot's gait pattern and structure in SolidWorks, modeled after an alligator
• Created parts from Delrin/acetal and ABS using a laser cutter, 3D printer, hand drill, and Dremel tool
• Built a closed-loop control system with an Arduino controller, servo motors, and IR sensors
• Presented the design process to audiences during the Center for Advanced Manufacturing showcase

PROJECTS
Full Pelvis Segmentation in 2D X-Ray Images [08/2018 – 12/2018]
• Trained a deep learning model with a U-Net architecture to segment the pelvis bones in 2D X-ray images
• Built the training set from digitally reconstructed radiographs of the pelvis using DeepDRR and extended it with data augmentation
• Reduced outliers and noise by replacing the binary cross-entropy loss with a confidence-mask loss (see the sketch after this list)
• Trained for 12 epochs on an NVIDIA P6000 and achieved 97.6% validation accuracy
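
A minimal PyTorch sketch of the confidence-mask loss referenced above: binary cross-entropy is computed per pixel and weighted by a confidence mask so low-confidence regions contribute less; tensor names and shapes are illustrative:

    # Sketch: per-pixel confidence-weighted BCE for segmentation
    import torch
    import torch.nn.functional as F

    def masked_bce_loss(logits, target, confidence):
        """BCE computed per pixel, then weighted by a confidence mask."""
        loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
        return (loss * confidence).sum() / confidence.sum().clamp(min=1.0)

    # inside a training step (shapes: N x 1 x H x W); unet, optimizer, and the
    # batch tensors are hypothetical names
    # logits = unet(xray_batch)
    # loss = masked_bce_loss(logits, pelvis_mask, confidence_mask)
    # loss.backward(); optimizer.step()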

Turtlebot SLAM with Visual Landmarks [01/2018 – 05/2018]
• Implemented a ROS node to receive images from a Kinect and publish a map with scene labels (see the sketch after this list)
• Built a label publisher in Python that classifies scenes from input images using a pre-trained Caffe model
• Synchronized the point cloud from the Kinect and generated an OctoMap for map visualization and navigation
• Generated a scene-labeled floor plan of the Wyman building with correct labels for the lobby, corridors, and labs
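
A minimal rospy sketch of the scene-label publisher described above; topic names are illustrative and classify_scene() is a hypothetical stand-in for the pre-trained Caffe classifier:

    #!/usr/bin/env python
    # Sketch: subscribe to Kinect RGB images, classify the scene, publish the label
    import rospy
    from sensor_msgs.msg import Image
    from std_msgs.msg import String
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def classify_scene(cv_image):
        # placeholder for a forward pass through the pre-trained Caffe model
        return "corridor"

    def image_callback(msg, pub):
        cv_image = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        pub.publish(String(data=classify_scene(cv_image)))

    if __name__ == "__main__":
        rospy.init_node("scene_label_publisher")
        pub = rospy.Publisher("/scene_label", String, queue_size=1)
        rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback, callback_args=pub)
        rospy.spin()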

Husky Robot Localization and Planning [10/2017 – 12/2017]
• Implemented a sampling-based path planner to generate collision-free paths and adopted the A* algorithm to accelerate the search
• Implemented an extended Kalman filter and a particle filter to estimate the pose of a Husky robot in Gazebo (see the sketch below)
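
A minimal NumPy sketch of the extended Kalman filter predict/update cycle used for the localization above, with a unicycle motion model for the state [x, y, theta]; the noise matrices and measurement model are illustrative:

    # Sketch: EKF predict/update for 2D robot localization
    import numpy as np

    def ekf_predict(x, P, v, w, dt, Q):
        """Propagate the pose with a unicycle motion model and its Jacobian."""
        theta = x[2]
        x_pred = x + np.array([v * np.cos(theta) * dt,
                               v * np.sin(theta) * dt,
                               w * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                      [0.0, 1.0,  v * np.cos(theta) * dt],
                      [0.0, 0.0, 1.0]])
        return x_pred, F @ P @ F.T + Q

    def ekf_update(x, P, z, H, R):
        """Correct the pose with a linearized measurement z = H x + noise."""
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P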