<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Autonomous Robots Lab</title>
    <link>/</link>
      <atom:link href="index.xml" rel="self" type="application/rss+xml" />
    <description>Autonomous Robots Lab</description>
    <generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2022 Autonomous Robots Lab</copyright><lastBuildDate>Sat, 01 Jun 2030 13:00:00 +0000</lastBuildDate>
    <image>
      <url>/media/icon_hua2ec155b4296a9c9791d015323e16eb5_11927_512x512_fill_lanczos_center_3.png</url>
      <title>Autonomous Robots Lab</title>
      <link>/</link>
    </image>
    
    <item>
      <title>Motion Planning</title>
      <link>/project/motion-planning/</link>
      <pubDate>Wed, 06 Jul 2022 00:00:00 +0000</pubDate>
      <guid>/project/motion-planning/</guid>
<description>&lt;p&gt;Applications of autonomous robots in dynamic, uncertain environments, such as crowded traffic, congested workspaces, and unexplored wilderness, pose significant challenges to the design of &lt;strong&gt;motion planning&lt;/strong&gt; algorithms. We develop efficient trajectory optimization approaches that enable autonomous vehicles to drive smoothly and safely in structured traffic environments, and robot swarms to coordinate efficiently and navigate safely in unknown environments.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Active Sensing</title>
      <link>/project/active-sensing/</link>
      <pubDate>Wed, 06 Jul 2022 00:00:00 +0000</pubDate>
      <guid>/project/active-sensing/</guid>
<description>&lt;p&gt;&lt;strong&gt;Active sensing&lt;/strong&gt; refers to using autonomous robots to proactively explore an unknown environment or gather information about object states. It holds great potential for both civil and military applications, such as search and rescue, environmental monitoring, surveillance, and reconnaissance. In active sensing, motion planning and perception are inherently coupled: trajectories determine the informativeness of observations, and the observations in turn guide trajectory generation. The ability to generate informative paths and trajectories is therefore the key to successful active sensing.&lt;/p&gt;
&lt;p&gt;Among the various active sensing tasks, we focus on target search and tracking using autonomous robots. We develop information-theoretic trajectory planning approaches that enable mobile robots to actively search for and track moving targets whose motion models may be linear, nonlinear, or even unknown a priori.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Distributed Filtering</title>
      <link>/project/optimal-filtering/</link>
      <pubDate>Wed, 06 Jul 2022 00:00:00 +0000</pubDate>
      <guid>/project/optimal-filtering/</guid>
<description>&lt;p&gt;&lt;strong&gt;Distributed filtering&lt;/strong&gt; using multiple mobile robots has many important applications, such as environmental monitoring, SLAM, and target localization. While significant progress has been made in distributed linear filtering, distributed nonlinear filtering techniques still lag behind.&lt;/p&gt;
&lt;p&gt;We propose a measurement dissemination-based distributed Bayesian filter for nonlinear estimation of target position under both fixed and time-varying communication topologies. This method has been shown to produce consistent estimates with fast convergence.&lt;/p&gt;
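&lt;p&gt;As a minimal sketch of the underlying idea (a hypothetical discrete grid over candidate target positions; not our actual implementation), disseminating raw measurement likelihoods lets every robot run the same exact Bayes update:&lt;/p&gt;

```python
import numpy as np

def fuse(prior, likelihoods):
    # Every robot multiplies the disseminated measurement likelihoods
    # into the same prior and normalizes; because all nodes apply the
    # identical update, their posteriors agree (consistent estimates).
    posterior = prior.copy()
    for lik in likelihoods:
        posterior = posterior * lik
    return posterior / posterior.sum()

# Toy example: a 4-cell grid of candidate target positions and two
# robots' measurement likelihoods received over the network.
prior = np.full(4, 0.25)
likelihoods = [np.array([0.9, 0.05, 0.03, 0.02]),
               np.array([0.8, 0.10, 0.05, 0.05])]
posterior = fuse(prior, likelihoods)
```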
</description>
    </item>
    
    <item>
<title>Human-Robot Collaboration</title>
      <link>/project/hri/</link>
      <pubDate>Wed, 06 Jul 2022 00:00:00 +0000</pubDate>
      <guid>/project/hri/</guid>
<description>&lt;p&gt;&lt;strong&gt;Intention-Aware Robots&lt;/strong&gt;
Human-Robot Interaction (HRI) has become an increasingly popular research area due to the boom in personal and industrial robots. We are especially interested in enabling robots to collaborate with humans as peers. On one hand, we use a Bayesian inference approach to help robots identify a human&amp;rsquo;s intention and provide assistance accordingly. On the other hand, we model how a human anticipates a collaborative robot&amp;rsquo;s plan from the partial actions the robot has taken. Together, these two aspects close the interaction loop between human and robot and support the development of algorithms that improve human-robot collaboration.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Companion Robot&lt;/strong&gt;
As robots step into humans&amp;rsquo; daily lives, various applications for improving quality of life have been envisioned. We are interested in enabling robots to act as companions, walking alongside a target person while carrying items for them. In particular, we have developed a trajectory planning algorithm that lets a robot autonomously follow a target human using onboard cameras. To handle the uncertainty in human motion, we proposed a Parallel Interacting Multiple Model-Unscented Kalman Filter (PIMM-UKF) approach for human motion estimation and prediction. Based on the predicted human states, an MPC path planner produces safe and comfortable following trajectories for the robot.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Example Event</title>
      <link>/event/example/</link>
      <pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate>
      <guid>/event/example/</guid>
      <description>&lt;p&gt;Slides can be added in a few ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Create&lt;/strong&gt; slides using Wowchemy&amp;rsquo;s &lt;a href=&#34;https://wowchemy.com/docs/managing-content/#create-slides&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;&lt;em&gt;Slides&lt;/em&gt;&lt;/a&gt; feature and link using &lt;code&gt;slides&lt;/code&gt; parameter in the front matter of the talk file&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Upload&lt;/strong&gt; an existing slide deck to &lt;code&gt;static/&lt;/code&gt; and link using &lt;code&gt;url_slides&lt;/code&gt; parameter in the front matter of the talk file&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Embed&lt;/strong&gt; your slides (e.g. Google Slides) or presentation video on this page using &lt;a href=&#34;https://wowchemy.com/docs/writing-markdown-latex/&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;shortcodes&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
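&lt;p&gt;For example, the front matter of a talk file might link slides in either way (the title and filename below are placeholders):&lt;/p&gt;

```yaml
title: "Example Talk"
# Option 1: a deck created with Wowchemy's Slides feature
slides: example
# Option 2: a deck uploaded to static/uploads/
url_slides: "uploads/example-deck.pdf"
```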
&lt;p&gt;Further event details, including page elements such as image galleries, can be added to the body of this page.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Online Action Change Detection for Automatic Vision-based Ground Control of Aircraft</title>
      <link>/publication/huo-2022-online/</link>
      <pubDate>Sun, 23 Jan 2022 22:54:47 +0800</pubDate>
      <guid>/publication/huo-2022-online/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Adaptive online distributed optimal control of very-large-scale robotic systems</title>
      <link>/publication/zhu-2021-adaptive/</link>
      <pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate>
      <guid>/publication/zhu-2021-adaptive/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Jian Yang and Monica Hall Win the Best Paper Award at Wowchemy 2020</title>
      <link>/post/20-12-02-icml-best-paper/</link>
      <pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate>
      <guid>/post/20-12-02-icml-best-paper/</guid>
      <description>&lt;p&gt;Congratulations to Jian Yang and Monica Hall for winning the Best Paper Award at the 2020 Conference on Wowchemy for their paper “Learning Wowchemy”.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Richard Hendricks Wins First Place in the Wowchemy Prize</title>
      <link>/post/20-12-01-wowchemy-prize/</link>
      <pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate>
      <guid>/post/20-12-01-wowchemy-prize/</guid>
      <description>&lt;p&gt;Congratulations to Richard Hendricks for winning first place in the Wowchemy Prize.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Mixed reinforcement learning for efficient policy optimization in stochastic environments</title>
      <link>/publication/mu-2020-mixed/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>/publication/mu-2020-mixed/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Rumor-robust Decentralized Gaussian Process Learning, Fusion, and Planning for Modeling Multiple Moving Targets</title>
      <link>/publication/liu-2020-rumor/</link>
      <pubDate>Wed, 01 Jan 2020 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2020-rumor/</guid>
      <description></description>
    </item>
    
    <item>
<title>Learning recursive Bayesian nonparametric modeling of moving targets via mobile decentralized sensors</title>
      <link>/publication/liu-2019-learning/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2019-learning/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Vision-guided planning and control for autonomous taxiing via convolutional neural networks</title>
      <link>/publication/liu-2019-vision/</link>
      <pubDate>Tue, 01 Jan 2019 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2019-vision/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Distributed Bayesian Filter Using Measurement Dissemination for Multiple Unmanned Ground Vehicles With Dynamically Changing Interaction Topologies</title>
      <link>/publication/liu-2018-distributed/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2018-distributed/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles</title>
      <link>/publication/li-2018-kalman/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>/publication/li-2018-kalman/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Scene understanding in deep learning-based end-to-end controllers for autonomous vehicles</title>
      <link>/publication/yang-2018-scene/</link>
      <pubDate>Mon, 01 Jan 2018 00:00:00 +0000</pubDate>
      <guid>/publication/yang-2018-scene/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Distributed Bayesian filters for multi-vehicle network by using Latest-In-and-Full-Out exchange protocol of measurements</title>
      <link>/publication/liu-2017-distributed/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2017-distributed/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Feature analysis and selection for training an end-to-end autonomous vehicle controller using deep learning approach</title>
      <link>/publication/yang-2017-feature/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/yang-2017-feature/</guid>
      <description></description>
    </item>
    
    <item>
      <title>How much data are enough? A statistical approach with case study on longitudinal driving behavior</title>
      <link>/publication/wang-2017-much/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/wang-2017-much/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Learning a deep neural net policy for end-to-end control of autonomous vehicles</title>
      <link>/publication/rausch-2017-learning/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/rausch-2017-learning/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Measurement dissemination-based distributed Bayesian filter using the latest-in-and-full-out exchange protocol for networked unmanned vehicles</title>
      <link>/publication/liu-2017-measurement/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2017-measurement/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Model predictive control-based target search and tracking using autonomous mobile robot with limited sensing domain</title>
      <link>/publication/liu-2017-model/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2017-model/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Path planning for autonomous vehicles using model predictive control</title>
      <link>/publication/liu-2017-path/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2017-path/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Pragmatic-pedagogic value alignment</title>
      <link>/publication/fisac-2017-pragmatic/</link>
      <pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
      <guid>/publication/fisac-2017-pragmatic/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Cooperative search using human-UAV teams</title>
      <link>/publication/liu-2016-cooperative/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2016-cooperative/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Distributed target localization using a group of UGVs under dynamically changing interaction topologies</title>
      <link>/publication/liu-2016-distributed/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2016-distributed/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Dynamical tracking of surrounding objects for road vehicles using linearly-arrayed ultrasonic sensors</title>
      <link>/publication/yu-2016-dynamical/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/yu-2016-dynamical/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Generating plans that predict themselves</title>
      <link>/publication/fisac-2016-generating/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/fisac-2016-generating/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Goal Inference Improves Objective and Perceived Performance in Human-Robot Collaboration</title>
      <link>/publication/liu-2016-goal/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2016-goal/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Human-centered feed-forward control of a vehicle steering system based on a driver&#39;s path-following characteristics</title>
      <link>/publication/wang-2016-human/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/wang-2016-human/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Parallel interacting multiple model-based human motion prediction for motion planning of companion robots</title>
      <link>/publication/lee-2016-parallel/</link>
      <pubDate>Fri, 01 Jan 2016 00:00:00 +0000</pubDate>
      <guid>/publication/lee-2016-parallel/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Interacting multiple model-based human motion prediction for motion planning of companion robots</title>
      <link>/publication/lee-2015-interacting/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>/publication/lee-2015-interacting/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Model predictive control-based probabilistic search method for autonomous ground robot in a dynamic environment</title>
      <link>/publication/liu-2015-model/</link>
      <pubDate>Thu, 01 Jan 2015 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2015-model/</guid>
      <description></description>
    </item>
    
    <item>
      <title>A framework for autonomous vehicles with goal inference and task allocation capabilities to support peer collaboration with human agents</title>
      <link>/publication/liu-2014-framework/</link>
      <pubDate>Wed, 01 Jan 2014 00:00:00 +0000</pubDate>
      <guid>/publication/liu-2014-framework/</guid>
      <description></description>
    </item>
    
    <item>
      <title></title>
      <link>/contact/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/contact/</guid>
      <description></description>
    </item>
    
    <item>
      <title></title>
      <link>/people/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>/people/</guid>
      <description></description>
    </item>
    
  </channel>
</rss>
