About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Evolutionary design of robust flight control for a hypersonic aircraft

Austin, K. J. Unknown Date
No description available.
12

Nonlinear and discrete control laws for spacecraft formation flying.

Pluym, Jeremy P. January 2006
Thesis (M.A. Sc.)--University of Toronto, 2006. / Source: Masters Abstracts International, Volume: 44-06, page: 2869. Includes bibliographical references.
13

Pilot in loop assessment of fault tolerant flight control schemes in a motion flight simulator

Sagoo, Girish Kumar. January 2008
Thesis (M.S.)--West Virginia University, 2008. / Title from document title page. Document formatted into pages; contains xiv, 121 p. : ill. (some col.), col. map. Includes abstract. Includes bibliographical references (p. 116-121).
14

The use of neural networks in adaptive control

Nedresky, Donald L. January 1990 (PDF)
Thesis (M.S. in Aeronautical Engineering)--Naval Postgraduate School, September 1990. / Thesis Advisor(s): Collins, Daniel J. Second Reader: Schmidt, Louis V. "September 1990." DTIC Identifier(s): Neural Nets, Flight Control Systems, Adaptive Control Systems, Computer Programs, Parallel Processing, Distributed Data Processing, Theses, Attack Aircraft, Equations of Motion, Control Theory, A-4 Aircraft. Author(s) subject terms: Neural Networks, Adaptive Control, Backpropagation, Parameter Estimation, Parallel Distributed Processing. Includes bibliographical references (p. 41). Also available in print.
15

Reversible flight control identification

Best, Scott January 1900
Thesis (M.App.Sc.) - Carleton University, 2007. / Includes bibliographical references (p. 149-152). Also available in electronic format on the Internet.
16

Acceleration based manoeuvre flight control system for Unmanned Aerial Vehicles

Peddle, Iain Kenneth. January 2008
Dissertation (PhD)--University of Stellenbosch, 2008. / Bibliography. Also available via the Internet.
17

On-line identification investigation

Ture, M. January 1992
No description available.
18

Neural control of a sea skimming missile

Jones, Campbell Llyr January 1996
No description available.
19

Development of a fault tolerant flight control system

Feldstein, Cary Benjamin. 10 April 2008
No description available.
20

Using machine learning to learn from demonstration: application to the AR.Drone quadrotor control

Fu, Kuan-Hsiang 10 May 2016
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. December 14, 2015 / Developing a robot that can operate autonomously is an active area of robotics research. An autonomously operating robot has a tremendous number of applications, such as surveillance and inspection, search and rescue, and operation in hazardous environments. Reinforcement learning, a branch of machine learning, provides an attractive framework for developing robust control algorithms, since it is less demanding in terms of both knowledge and programming effort. Given a reward function, reinforcement learning lets an agent learn by trial and error. Learning entirely "de novo" is computationally intractable in practice, so it is important to provide the learning system with "a priori" knowledge, typically in the form of demonstrations performed by a teacher. Prior knowledge alone, however, does not guarantee that the agent will perform well: the agent's performance usually depends on the reward function, since the reward function is the formal specification of the control task, and complex reward functions are difficult to specify manually. To address this problem, apprenticeship learning via inverse reinforcement learning is used to extract a reward function from the set of demonstrations, so that the agent can optimise its performance with respect to that reward function. In this research, a flight controller for the AR.Drone quadrotor was created using a reinforcement learning algorithm and function approximators with some prior knowledge. The agent was able to perform a manoeuvre similar to the one demonstrated by the teacher.
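The pipeline this abstract describes, seeding a reinforcement learner with teacher demonstrations and then refining by trial and error against a reward function, can be sketched in miniature. The toy environment, reward values, and hyperparameters below are invented purely for illustration and bear no relation to the thesis's quadrotor controller or its function approximators:

```python
import random

# Illustrative sketch only: a tiny tabular Q-learning agent whose value table
# is seeded from a teacher demonstration, in the spirit of the
# learning-from-demonstration approach the abstract describes. The
# environment, reward values, and hyperparameters are all assumptions.
# Environment: a 1-D track of 5 states; start at 0, goal at 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)              # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Apply an action; small step cost, +1 reward on reaching the goal."""
    ns = max(0, min(N_STATES - 1, state + action))
    return ns, (1.0 if ns == GOAL else -0.01), ns == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def td_update(s, a, r, ns, done):
    """Standard one-step Q-learning update."""
    target = r + (0.0 if done else GAMMA * max(Q[(ns, b)] for b in ACTIONS))
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

# "A priori" knowledge: replay a teacher demonstration (always step right),
# backwards so the goal reward propagates along the demonstrated path.
for s in reversed(range(GOAL)):
    ns, r, done = step(s, +1)
    td_update(s, +1, r, ns, done)

# Trial-and-error refinement with epsilon-greedy Q-learning.
random.seed(0)
for _ in range(200):
    s, done = 0, False
    while not done:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        ns, r, done = step(s, a)
        td_update(s, a, r, ns, done)
        s = ns

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy[:GOAL])  # greedy action in each non-goal state
```

Replaying the demonstration in reverse order lets the terminal reward back up along the demonstrated trajectory in a single pass, so the agent starts its own exploration with a useful value estimate rather than from scratch, which is the practical point the abstract makes about "a priori" knowledge.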
