Title page for ETD etd-04032012-151016

Type of Document Dissertation
Author Kirchner, William
Author's Email Address william.kirchner@gmail.com
URN etd-04032012-151016
Title Anthropomimetic Control Synthesis: Adaptive Vehicle Traction Control
Degree PhD
Department Mechanical Engineering
Advisory Committee
Name | Role
Southward, Steve C. | Committee Chair
Ahmadian, Mehdi | Committee Member
Sandu, Corina | Committee Member
Wicks, Alfred L. | Committee Member
Woolsey, Craig A. | Committee Member
Keywords
  • Anthropomimetic
  • Traction Control
  • Discrete Adaptive Filter
  • Vehicle Dynamics
  • Adaptive Control
  • Human in the Loop
  • Gradient Estimation
Date of Defense 2012-03-22
Availability unrestricted
Human expert drivers have the unique ability to build complex perceptive models using correlated sensory inputs and outputs. In the case of longitudinal vehicle traction, this work shows a direct correlation between longitudinal acceleration and throttle input in a controlled laboratory environment. In fact, human experts can control a vehicle at or near its traction performance limits without direct knowledge of the vehicle states: speed, slip, or tractive force. Traditional algorithms such as PID, full state feedback, and even sliding mode control have been very successful at handling low-level tasks where the physics of the dynamic system are known and stationary. The ability to learn and adapt to changing environmental conditions, as well as to develop perceptive models based on stimulus-response data, gives expert human drivers significant advantages. When it comes to bandwidth, accuracy, and repeatability, automatic control systems have clear advantages over humans; however, most high-performance control systems lack many of the unique abilities of a human expert. The underlying motivation for this work is that there are advantages to framing the traction control problem in a manner that more closely resembles how a human expert drives a vehicle. The fundamental premise is that humans have a unique ability to adapt to uncertain environments that are both temporally and spatially varying. In this work, a novel approach to traction control is developed using an anthropomimetic control synthesis strategy. The proposed anthropomimetic traction control algorithm operates on the same correlated input signals that a human expert driver would use in order to maximize traction. A gradient ascent approach is at the heart of the proposed anthropomimetic control algorithm, and a real-time implementation is described using linear operator techniques, even though the tire-ground interface is highly nonlinear.
Performance of the proposed anthropomimetic traction control algorithm is demonstrated using both a longitudinal traction case study and a combined-mode traction case study, in which longitudinal and lateral accelerations are maximized simultaneously. The approach presented in this research should be considered a first step toward a truly anthropomimetic solution, in which an advanced control algorithm is designed to respond to the same limited input signals that a human expert would rely on, with the objective of maximizing traction. This work establishes the foundation for a general framework for an anthropomimetic control algorithm that is capable of learning and adapting to an uncertain, time-varying environment. The algorithms developed in this work are well suited for efficient real-time control in ground vehicles across a variety of applications, ranging from driver-assist technology to fully autonomous operation.
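The gradient-ascent idea at the core of the abstract can be illustrated with a minimal sketch. The plant model, parameter values, and function names below are illustrative assumptions, not the dissertation's implementation: a simplified Pacejka-style traction curve stands in for the tire-ground interface, and the controller climbs toward peak longitudinal acceleration using only probed input/output samples — analogous to the driver relying on correlated throttle and acceleration signals rather than on slip or tractive-force state knowledge.

```python
import math

def longitudinal_accel(slip, B=10.0, C=1.9, D=9.0):
    """Hypothetical stand-in plant: simplified Pacejka-style traction curve.
    Peak acceleration occurs at an interior slip value, so a gradient
    climber can locate it without knowing the curve analytically."""
    return D * math.sin(C * math.atan(B * slip))

def gradient_ascent_traction(u0=0.02, probe=0.002, lr=1e-4, iters=200):
    """Climb the acceleration-vs-command curve using only measured samples,
    in the spirit of the anthropomimetic approach (parameters assumed)."""
    u = u0
    for _ in range(iters):
        # Finite-difference gradient estimate from two probe measurements,
        # i.e., correlating a small command perturbation with its response.
        grad = (longitudinal_accel(u + probe)
                - longitudinal_accel(u - probe)) / (2 * probe)
        u += lr * grad  # step the command in the uphill direction
    return u, longitudinal_accel(u)

u_star, a_star = gradient_ascent_traction()
```

For this assumed curve, the iteration settles near the interior slip value that maximizes acceleration, even though the controller never evaluates the curve's analytic form; the dissertation's real-time formulation replaces the finite-difference probe with linear operator techniques.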
Filename: Kirchner_WT_D_2012.pdf    Size: 17.83 Mb
Approximate Download Time (Hours:Minutes:Seconds)
  28.8 Modem: 01:22:32 | 56K Modem: 00:42:27 | ISDN (64 Kb): 00:37:08 | ISDN (128 Kb): 00:18:34 | Higher-speed Access: 00:01:35
