About

  • Chevening Scholar 2016.

  • Five years of experience developing flight control software for fixed-wing drones.

  • Travelled to 12 countries.

  • Passionate about robots, drones and machine learning.

  • [My CV]

Publications

  1. S. Prior, Y. Guo and H. Nguyen. "Aerial Docking between Remotely Piloted Aircraft". Defence Global Publication, Aviation, pp. 16-17, 2017.

  2. H. Nguyen, Hung Manh La, and Matthew Deans. "Deep Learning with Experience Ranking Convolutional Neural Network for Robot Manipulator." arXiv preprint arXiv:1809.05819 (2018). Submitted to ICRA 2019 [link].

  3. H. Nguyen and H. M. La, "Review of deep reinforcement learning methods for robot manipulation", in 2019 Third IEEE International Conference on Robotic Computing (IRC).

  4. A. Sehgal, H. M. La, S. J. Louis, H. Nguyen, "Deep Reinforcement Learning using Genetic Algorithm for Parameter Optimization", in 2019 Third IEEE International Conference on Robotic Computing (IRC).

Contact

Hoi An, Vietnam

July, 2018

Past Projects

I had five years of experience developing flight control software for fixed-wing Unmanned Aerial Vehicles (UAVs) before pursuing a master's degree in Unmanned Aircraft Systems Design at the University of Southampton, UK, as a Chevening Scholar. I have strong knowledge of the areas involved in developing UAV control software, including control theory, embedded programming in C/C++, higher-level languages such as MATLAB and C#, and data fusion (Kalman filtering). Most of my past projects are drone-related, focused on applying control theory so that these flying machines can perform autonomous flights. Recently I have become deeply interested in reinforcement learning and computer vision, with several recent projects in these areas. I firmly believe that truly intelligent robots will be built on these technologies.
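To illustrate the data-fusion side, a scalar Kalman filter fits in a few lines of Python. This is a toy sketch, not code from any project below; the constant-state model and noise values are assumptions for illustration:

```python
class Kalman1D:
    """Minimal scalar Kalman filter: fuses noisy readings of a
    roughly constant quantity (e.g. altitude from a barometer)."""

    def __init__(self, x0, p0, q, r):
        self.x = x0   # state estimate
        self.p = p0   # estimate variance
        self.q = q    # process noise variance
        self.r = r    # measurement noise variance

    def predict(self):
        # The state is modelled as constant; only uncertainty grows.
        self.p += self.q
        return self.x

    def update(self, z):
        # The Kalman gain weights the measurement against the prediction.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

# Fuse a few noisy readings around a true value of 1.0.
kf = Kalman1D(x0=0.0, p0=1.0, q=0.01, r=0.5)
for z in [1.2, 0.9, 1.1, 1.0]:
    kf.predict()
    kf.update(z)
```

After a handful of updates the estimate converges toward the measurements while its variance shrinks, which is exactly the behaviour a sensor-fusion loop relies on.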

Autopilot Software Development

April, 2013

Responsibilities:

  • Developing control and guidance algorithms for a fixed-wing UAV (26 kg MTOM).

  • Developing and analyzing control performance in MATLAB/Simulink using control stability criteria.

  • Performing model identification of the airplane by injecting predefined input signals and analyzing the outputs to estimate a model. Based on this model, PID controller parameters were estimated and then fine-tuned in real flight tests.

  • Developing control algorithms for autonomous take-off from the runway.
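The PID stage of the loop above can be sketched in a few lines. This is a generic textbook controller run against a toy single-integrator plant; the gains and sample time are made up for illustration, not the values tuned on the real aircraft:

```python
class PID:
    """Textbook PID controller with a fixed sample time."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Toy plant: a pure integrator (think heading rate -> heading).
pid = PID(kp=2.0, ki=0.2, kd=0.05, dt=0.02)
x = 0.0
for _ in range(3000):
    u = pid.step(1.0, x)   # drive the state toward a setpoint of 1.0
    x += u * pid.dt
```

On the real aircraft the plant model comes from the identification step, and the gains are refined in flight tests rather than picked by hand as here.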

Results:

  • Successfully deploying the autopilot software on FPGA-based hardware (Altera Cyclone FPGA).

  • Satisfactory control performance verified by numerous autonomous flights.

Autopilot Software Development

April, 2014

Responsibilities:

Developing control algorithms for the drone to autonomously take off from a pneumatic launcher.

Results:

The drone can perform autonomous flights through GPS waypoints and take off autonomously from the pneumatic launcher.

Autopilot Software Development

May, 2015

Objectives:

Adapting the existing autopilot to a MALE drone (700 kg MTOM).

Responsibilities:

Developing a simulation model for testing the control algorithms.

Results:

The drone was able to perform autonomous flights through predefined waypoints.

IMechE UAS Challenge 2017, Llanbedr, Wales

June, 2017

Representing the University of Southampton (two teams, Valkyrie and Olympus) in the IMechE UAS Challenge 2017.

(From left to right: Micheal, Abhi, Raam, Colin, Dr. Stephen Prior, Jamies, Franklin, me, Pratik)

Responsibilities:

Modifying the open-source firmware of the Pixhawk autopilot to integrate a new feature (with an added camera and onboard computer): dropping a payload on a visually recognized target, programmed in the Robot Operating System (ROS). Target recognition uses OpenCV and Hu moments.
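The shape-matching idea behind Hu moments can be shown without OpenCV: the invariants are built from normalized central image moments and are unchanged by translation, scale, and rotation. Below is a toy reimplementation of the first two invariants on a binary mask, not the competition code:

```python
def moments(img):
    """Zeroth moment and central moments of a binary image
    given as a list of rows of 0/1 values."""
    pts = [(x, y) for y, row in enumerate(img)
                  for x, v in enumerate(row) if v]
    m00 = len(pts)
    cx = sum(x for x, _ in pts) / m00
    cy = sum(y for _, y in pts) / m00
    def mu(p, q):
        return sum((x - cx) ** p * (y - cy) ** q for x, y in pts)
    return m00, mu

def hu12(img):
    """First two Hu invariants from scale-normalized central moments."""
    m00, mu = moments(img)
    def n(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = n(2, 0) + n(0, 2)
    h2 = (n(2, 0) - n(0, 2)) ** 2 + 4 * n(1, 1) ** 2
    return h1, h2

# An L-shaped blob and its 90-degree rotation give the same signature.
shape = [[1, 0, 0],
         [1, 0, 0],
         [1, 1, 1]]
rotated = [list(row) for row in zip(*shape[::-1])]
```

In practice OpenCV's moment and Hu-moment routines do this over contours extracted from the camera image; the invariance is what lets the target be recognized regardless of the drone's heading and altitude.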

Results:

  • Runner-up in the IMechE UAS Challenge 2017 (the winner, Bath University, fielded a crew of more than 20 people compared to our 4!).

  • Navigation Accuracy Award.

Autopilot Software Development

April, 2018

Objectives:

Developing autonomous vertical take-off and landing for a quad-plane (22 kg MTOM).

Responsibilities:

Implementing and testing control algorithms to take off vertically as a multirotor, transition to fixed-wing flight, and land autonomously at a predefined GPS coordinate as a multirotor.
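The take-off / transition / landing sequence is essentially a flight-mode state machine. A simplified sketch follows; the mode names, thresholds, and trigger conditions are illustrative assumptions, not the actual flight code:

```python
from enum import Enum, auto

class Mode(Enum):
    VTOL_TAKEOFF = auto()   # climb vertically as a multirotor
    TRANSITION = auto()     # accelerate forward until the wing carries the load
    FIXED_WING = auto()     # normal cruise
    VTOL_LAND = auto()      # descend as a multirotor at the landing point

def next_mode(mode, alt_m, airspeed_ms, dist_to_land_m,
              takeoff_alt=30.0, cruise_speed=18.0, land_radius=40.0):
    """One step of the mode logic; all thresholds are illustrative."""
    if mode is Mode.VTOL_TAKEOFF and alt_m >= takeoff_alt:
        return Mode.TRANSITION
    if mode is Mode.TRANSITION and airspeed_ms >= cruise_speed:
        return Mode.FIXED_WING
    if mode is Mode.FIXED_WING and dist_to_land_m <= land_radius:
        return Mode.VTOL_LAND
    return mode
```

A real quad-plane controller adds guards for failed transitions (airspeed not building, altitude loss) and blends the multirotor and fixed-wing control outputs during the transition rather than switching abruptly.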

Results:

The quad-plane can perform fully autonomous flights from take-off to landing.

MSc Thesis: Aerial Docking between multi-rotor RPAS

July, 2017

Developing control algorithms for autonomous aerial docking between two drones using ROS and augmented-reality tags (ArUco). The mother drone, with the tags attached, is a quadcopter controlled by a Pixhawk autopilot, while the child drone is a commercial Parrot AR.Drone 2.0 quadcopter. Image processing is performed on a desktop computer; movement commands are then computed and sent back wirelessly so the child drone follows the tags (the mother drone) and docks on it.
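The follow-and-dock loop reduces to: detect the tag, estimate its position in the camera frame, and convert the position error into velocity commands. A minimal proportional version of that last step is sketched below; the gain, stand-off distance, and velocity limit are made-up illustrative values, not the thesis parameters:

```python
def clamp(v, limit):
    """Limit a command to [-limit, limit]."""
    return max(-limit, min(limit, v))

def follow_command(tag_xyz, standoff_m=0.5, gain=0.8, vmax=0.4):
    """Map the tag position in the camera frame (x right, y down,
    z forward, in metres) to child-drone velocity commands that
    hold a stand-off distance behind the tag."""
    x, y, z = tag_xyz
    return {
        "vx": clamp(gain * (z - standoff_m), vmax),  # close the range
        "vy": clamp(gain * x, vmax),                 # center horizontally
        "vz": clamp(-gain * y, vmax),                # center vertically
    }
```

Once the child drone holds station at the stand-off point, docking is a slow, monitored reduction of that distance to zero.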


Publication:

S. Prior, Y. Guo and H. H. Nguyen. "Aerial Docking between Remotely Piloted Aircraft". Defence Global Publication, Aviation, pp. 16-17, 2017.

Reinforcement Learning for Manipulator

June, 2018

Responsibilities:

  • Creating a robot URDF model in a simulator (MuJoCo) and training it using Deep Deterministic Policy Gradient (DDPG) and Hindsight Experience Replay (HER).

  • Using a mechanism called Experience Ranking to rank training experiences so that high-scoring experiences are used for training the control policy, improving HER's learning performance.

  • Implementing the training on the real robot using ROS and Python.
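The core of HER is goal relabeling: transitions from failed attempts are reused by pretending the goal was a state the arm actually reached later in the episode. A minimal sketch of the "future" strategy follows; the transition layout and the sparse reward are simplified assumptions, not the paper's implementation:

```python
import random

def sparse_reward(achieved, goal):
    """0 on success, -1 otherwise: the sparse reward HER is designed for."""
    return 0.0 if achieved == goal else -1.0

def her_relabel(episode, k=4):
    """episode: list of (obs, action, goal, achieved_goal) transitions.
    For each transition, add k copies whose goal is an achieved goal
    sampled from the remainder of the episode ('future' strategy)."""
    relabeled = []
    for t, (obs, action, goal, achieved) in enumerate(episode):
        for _ in range(k):
            _, _, _, future_goal = random.choice(episode[t:])
            relabeled.append(
                (obs, action, future_goal,
                 sparse_reward(achieved, future_goal)))
    return relabeled
```

With a sparse reward almost every original transition scores -1, so the replay buffer carries no learning signal; relabeling guarantees some reward-0 transitions, which is what lets DDPG make progress. Experience Ranking then filters which of these relabeled experiences are worth training on.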

Results:

  • Successfully training the robot in four different robotic environments in OpenAI Gym & MuJoCo.

  • Implementing the training on the real robot.

Publication:

H. Nguyen, Hung Manh La, and Matthew Deans. "Deep Learning with Experience Ranking Convolutional Neural Network for Robot Manipulator." arXiv preprint arXiv:1809.05819 (2018). Submitted to ICRA 2019.


Computer vision project

Responsibilities:

  • Training YOLOv3 and SSD MobileNet on thermal images (the FLIR thermal dataset)

  • Deploying the trained networks on Intel's Movidius Neural Compute Stick

Results:

  • 80% mAP with YOLOv3

  • 4 fps when running TinyYOLOv2 on Movidius

  • 9 fps when running SSD
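The mAP figure above rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes: a detection counts as correct only when its IoU with a ground-truth box clears a threshold. A small generic helper, not tied to the YOLO/SSD tooling used here:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2),
    with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Overlap is zero when the boxes do not intersect.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Sweeping a confidence threshold over detections scored this way yields the precision-recall curve whose area is the per-class AP, averaged into mAP.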



Contact Me

Ke Jie after being defeated by AlphaGo

AI will rise!

June, 2018
