331

Establishment of an experimental method for a grooved composite subjected to out-of-plane contact loading

Kobayashi, Yusuke, S. M. Massachusetts Institute of Technology January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 147-149, 2nd group). / A specimen and an experimental method to observe the behavior of a grooved composite subjected to out-of-plane contact loading are established and verified, and the specimen's response is examined. The specimen is designed so that the variability of the stress-strain state is negligible across the width of the specimen. The dominant concept of the design is to isolate the response of the specimen around the groove from any other effects. Geometric parameters, stacking sequence (layup), and boundary conditions are determined for the specimen. With simply-supported boundary conditions, specimens fail in a simple beam shear mode as determined from the overall structural response of the specimen, thereby indicating that this configuration is not appropriate for the primary design goal. Thus, the rigid backface boundary condition is chosen and verified as the appropriate configuration. Contact, load transfer, and alignment issues arose in the first set of rigid backface tests and were solved by introducing finer machining, a harder material for the indenter, and more accurate overall alignment. This resulted in the final test specimen configuration and associated test method, consisting of a specimen with a length of 56.00 mm, a width of 25.00 mm, an approximate thickness of 12.5 mm, and a maximum groove depth of 3.48 mm. The standard layup used for the tests is [±45/0/90]10S, while an alternate layup of [±30/0]13S was also used. In these tests, a number of key behaviors were observed: mode of failure, load-stroke slope, and "knee load". / (cont.) Specimens failed in two different modes: a delamination near the bottom of the groove (Mode A), and a crack under the groove propagating to a delamination near the midplane (Mode B). 
From these observations, it is concluded that damage is generated at the bottom of the groove and then propagates in the longitudinal or the thickness direction, resulting in Mode A or Mode B, respectively. A "knee load" is defined as the point where the load-stroke slope deviates from linear behavior. Failure Mode B and the presence of the "knee load" are observed in the standard layup, but not in the alternate layup. The presence of 90° plies is indicated as the main cause of the observed differences. The test results clearly show that a specimen and a test method are established and verified for the objectives of the current work, and furthermore are valid for tests with different test parameters. Recommendations are made with regard to extension of the basic testing established herein. / by Yusuke Kobayashi. / S.M.
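The "knee load" defined in this abstract — the point where the load-stroke curve first departs from its initial linear slope — can be picked out numerically from test data. A minimal sketch on synthetic data (the function name, tolerances, and curve shape are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def knee_load(stroke, load, fit_frac=0.3, tol=0.05):
    """Return the load at which the load-stroke curve first deviates
    from its initial linear slope by more than a fractional tolerance,
    or None if it never does."""
    n = max(2, int(len(stroke) * fit_frac))
    slope, intercept = np.polyfit(stroke[:n], load[:n], 1)
    predicted = slope * stroke + intercept
    deviation = np.abs(load - predicted) / np.maximum(np.abs(predicted), 1e-9)
    idx = int(np.argmax(deviation > tol))   # first index over tolerance
    return float(load[idx]) if deviation[idx] > tol else None

# synthetic load-stroke curve: linear up to 4 mm of stroke, then softening
stroke = np.linspace(0.0, 8.0, 200)
load = np.where(stroke < 4.0, 10.0 * stroke, 40.0 + 5.0 * (stroke - 4.0))
knee = knee_load(stroke, load)
```

In practice the linear-fit fraction and tolerance would be chosen from the noise level of the load cell and stroke sensor.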
332

Modeling, system identification, and control for dynamic locomotion of the LittleDog robot on rough terrain

Levashov, Michael Yurievich January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 76-80). / In this thesis, I present a framework for achieving a stable bounding gait on the LittleDog robot over rough terrain. The framework relies on an accurate planar model of the dynamics, which I assembled from a model of the motors, a rigid body model, and a novel physically-inspired ground interaction model, and then identified using a series of physical measurements and experiments. I then used the RG-RRT algorithm on the model to generate bounding trajectories of LittleDog over a number of sets of rough terrain in simulation. Despite significant research in the field, there has been little success in combining motion planning and feedback control for a problem that is as kinematically and dynamically challenging as LittleDog. I have constructed a controller based on transverse linearization and used it to stabilize the planned LittleDog trajectories in simulation. The resulting controller reliably stabilized the planned bounding motions and was relatively robust to significant amounts of time delays in estimation, process and estimation noise, as well as small model errors. In order to estimate the state of the system in real time, I modified the EKF algorithm to compensate for varying delays between the sensors. The EKF-based filter works reasonably well, but when combined with feedback control, simulated delays, and the model it produces unstable behavior, which I was not able to correct. 
However, the closed-loop simulation closely resembles the behavior of the control and estimation on the real robot, including the failure modes, which suggests that improving the feedback loop might result in bounding on the real LittleDog. The control framework and many of the methods developed in this thesis are applicable to other walking systems, particularly when operating in the underactuated regime. / by Michael Yurievich Levashov. / S.M.
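The delay-compensating filter this abstract describes can be illustrated in miniature with a rollback scheme: buffer past filter states and, when a late measurement arrives, re-apply the update at the buffered state and propagate back to the present. A sketch for a linear 1-D constant-velocity model (the class name, noise values, and dropped-intermediate-measurement simplification are illustrative; the thesis filter is a full EKF on a nonlinear robot model):

```python
import numpy as np

class DelayedKF:
    """Minimal 1-D constant-velocity Kalman filter that handles a
    late-arriving measurement by rolling back to a buffered past state,
    applying the update there, and re-propagating to the present.
    (Intermediate measurements are simply dropped in this sketch.)"""
    def __init__(self, dt, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.H = np.array([[1.0, 0.0]])              # measure position only
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])
        self.x = np.zeros(2)
        self.P = np.eye(2)
        self.history = []                            # buffered (x, P) pairs

    def predict(self):
        self.history.append((self.x.copy(), self.P.copy()))
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, delay_steps=0):
        if delay_steps:                              # roll back in time
            self.x, self.P = self.history[-delay_steps]
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        for _ in range(delay_steps):                 # re-propagate forward
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
```

A production filter would also replay the measurements that arrived during the delay interval instead of discarding them.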
333

Grid adaptation for functional outputs of compressible flow simulations

Venditti, David Anthony, 1973- January 2002 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2002. / Includes bibliographical references (p. 143-150). / An error correction and grid adaptive method is presented for improving the accuracy of functional outputs of compressible flow simulations. The procedure is based on an adjoint formulation in which the estimated error in the functional can be directly related to the local residual errors of both the primal and adjoint solutions. This relationship allows local error contributions to be used as indicators in a grid adaptive method designed to produce specially tuned grids for accurately estimating the chosen functional. The method is applied to two-dimensional inviscid and viscous (laminar) flows using standard finite volume discretizations, and to scalar convection-diffusion using a Galerkin finite element discretization. Isotropic h-refinement is used to iteratively improve the grids in a series of subsonic, transonic, and supersonic inviscid test cases. A commonly-used adaptive method that employs a curvature sensor based on measures of the local interpolation error in the solution is implemented to comparatively assess the performance of the proposed output-based procedure. In many cases, the curvature-based method fails to terminate or produces erroneous values for the functional at termination. In all test cases, the proposed output-based method succeeds in terminating once the prescribed accuracy level has been achieved for the chosen functional. / (cont.) Output-based adaptive criteria are incorporated into an anisotropic grid-adaptive procedure for laminar Navier-Stokes simulations. The proposed method can be viewed as a merging of Hessian-based adaptation with output error control. A series of airfoil test cases are presented for Reynolds numbers ranging from 5,000 to 100,000. 
The proposed adaptive method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to pure Hessian-based adaptation. / by David Anthony Venditti. / Ph.D.
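The adjoint relationship this abstract builds on — that the error in an output functional equals an adjoint-weighted residual — can be demonstrated on a generic linear system, where the identity holds exactly: if A u = f, J(u) = gᵀu, and Aᵀψ = g, then J(u) - J(u_h) = ψᵀ(f - A u_h). A sketch with a random well-conditioned matrix standing in for a flow discretization (nothing here is from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = 4.0 * np.eye(n) + rng.normal(scale=0.1, size=(n, n))  # stand-in discretization
f = rng.normal(size=n)                                    # right-hand side
g = rng.normal(size=n)                                    # functional weights

u = np.linalg.solve(A, f)                    # "exact" discrete solution
u_h = u + rng.normal(scale=1e-2, size=n)     # perturbed approximate solution

psi = np.linalg.solve(A.T, g)                # discrete adjoint solution
residual = f - A @ u_h                       # primal residual of u_h
estimate = psi @ residual                    # adjoint-weighted residual
true_err = g @ u - g @ u_h                   # actual functional error
```

In the thesis setting the discretization is nonlinear, so the identity becomes a leading-order estimate, and the local contributions of `psi * residual` drive the adaptation indicator.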
334

Modeling the air traffic controller's cognitive projection process

Reynolds, Hayley J. Davison (Hayley Jaye Davison) January 2006 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2006. / Includes bibliographical references (p. 117-123). / Cognitive projection enables the operator of a supervisory control system, such as air traffic control, to use predicted future behavior of the system to make decisions about if and how to control the system. New procedures and technologies being implemented in the air traffic control system innately affect the information used for projection and the type of projection required from the controller. Because cognitive projection is not well understood, introducing these projection-impacting technologies and procedures could make air traffic controllers reluctant to accept the advancements or could limit system performance. A Projection Process Model and a Projection Error Concept were proposed to describe the controller's projection process and the contextual system influences on the projection process. The two primary influences on the projection process were the information/display system and task-based projection requirements. A mismatch between the information/display system states and the task-based projection requirements was described through a cognitive transform concept. The projection process itself is composed of the state mental model and the time into the future over which the projection is made. / (cont.) Hypotheses based on the assumptions of the Projection Process Model and Projection Error Concept were probed through an experiment using an ATC task paradigm. Results were consistent with the proposed models. They suggested that the controllers were able to incorporate higher-level dynamics into the state mental models used for projection and that the quality of the state mental model used was marginally influenced by the error tolerance required in the task. 
The application of the Projection Process Model and Projection Error Concept was then illustrated through the analysis of the impact on projection from two ATC domain examples of technology and procedure implementation. The Constant Descent Approach Procedure in the TRACON impacted the intent, projection timespan, and abstractions used in the mental model of the controllers. The Oceanic ATC surveillance, communication and workstation improvements resulted in an impact on the states to be projected, intent, projection timespan, and human/automation projection responsibility. Suggestions for improved transition for the projection process were then provided based on the analysis. / by Hayley J. Davison Reynolds. / Ph.D.
335

Path calibration algorithms for many-aperture fiber-linked broadband hypertelescopes

Fitzgerald, Riley McCrea January 2018 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 131-133). / The quest for increased resolution pushes telescope designs toward larger and larger apertures. This has motivated the development and expansion of distributed-aperture imaging. In radio wavelengths, distributed aperture signals can be recorded separately and correlated in software, but optical and infrared interferometry cannot currently use this approach because absolute phase must be captured. Instead, the beams from each sub-telescope must be brought together and interfered pairwise in order to measure relative phases and coherences, or "visibilities," by observing interference fringes. As the number of apertures increases, the number of pairs increases quadratically and pairwise fringe measurement becomes impractical. Direct-imaging interferometers instead combine all beams simultaneously by imaging the outputs onto a detector. Each aperture pair contributes a spatial frequency to the output, and the result is an image of the source instead of pairwise visibility information. The densified-pupil direct-imaging interferometer, or hypertelescope, is a promising concept for future high-resolution imaging, both in space and on the ground; it offers the sharp resolution and efficient beam-combination of similar interferometric methods, but increases power in the PSF core through pupil densification. When implemented with single-mode fibers, the hypertelescope offers simplicity and the ability to expand to many-aperture configurations. However, broadband imaging requires that the optical path lengths all be matched to within a fraction of the wavelength. 
Telescopes of this type have been demonstrated, but generally rely on manual tuning of delay lines and air gaps in order to match the optical path lengths. This is not practically extensible to many-aperture configurations with hundreds or thousands of baselines, and is also difficult to implement in space, where a more automated procedure is required. Many methods for interferometer fiber length measurement have been developed, but most rely on extensive internal metrology or a specific calibration source, such as a polarized laser reference. Starting from the known concept of visibility phase sampling from the frequency-domain representation of a direct image, this work develops and characterizes a set of algorithms for the calibration of absolute path lengths in many-aperture fiber-linked hypertelescopes using only miscalibrated images taken at a few wavelengths, enabling path-matched broadband imaging at high resolution. The unique baselines, fiber spatial filtering, and densified-pupil architecture of these telescopes enables these methods to be particularly effective, and a many-aperture configuration supports the inference of missing quantities from the statistical properties of the apertures. An optimized frequency-sampling method extracts baseline phases from the miscalibrated image of a known source, and an efficient method for solving for aperture phases is presented. Observing and solving for the aperture phases at multiple wavelengths extends phase information into path length information using an expanded robust Chinese Remainder Theorem algorithm, and then the absolute offset and telescope pointing errors can be inferred from the statistics of the aperture path length errors. A simulation framework for fiber-linked densified-pupil direct-imaging interferometers is developed in order to test these algorithms and characterize the performance. 
A Fourier-domain signal-to-noise metric is derived, and ideal performance models for these algorithms are presented. The limits of performance are shown to be well-predicted by this metric and known properties of the telescope configuration. Finally, the simulation is used to characterize the effects of finite bandwidths, extended sources, optical aberrations, and pointing errors on the performance and robustness of these algorithms. Path lengths are shown to be measurable, inside a range determined by the calibration wavelengths, to within the required λ/10 even in the presence of these non-idealities. / by Riley McCrea Fitzgerald. / S.M.
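The Chinese-Remainder step this abstract describes — turning per-wavelength phase residues into an absolute path length — can be illustrated with a brute-force search in place of the robust CRT algorithm. A sketch with hypothetical wavelengths and units (nothing here is the thesis's algorithm; a grid search stands in for the CRT solve):

```python
import numpy as np

def recover_length(residues, wavelengths, max_len, step=1e-3):
    """Brute-force stand-in for the robust Chinese Remainder Theorem
    step: find the path length L in [0, max_len) whose remainder
    modulo each calibration wavelength best matches the measured
    phase residues (all lengths in the same arbitrary units)."""
    candidates = np.arange(0.0, max_len, step)
    cost = np.zeros_like(candidates)
    for lam, r in zip(wavelengths, residues):
        # wrapped distance between each candidate's remainder and the residue
        d = np.abs((candidates - r + lam / 2.0) % lam - lam / 2.0)
        cost += d ** 2
    return float(candidates[np.argmin(cost)])

wavelengths = [1.50, 1.55, 1.62]   # hypothetical calibration wavelengths
L_true = 37.314                    # unknown path length to recover
residues = [L_true % lam for lam in wavelengths]
L_est = recover_length(residues, wavelengths, max_len=100.0)
```

The search range plays the role of the ambiguity interval set by the choice of calibration wavelengths; a true robust CRT solve avoids the grid search and tolerates residue noise explicitly.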
336

Coordinated control of a free-flying teleoperator

Spofford, John Rawson January 1988 (has links)
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1988. / Includes bibliographical references. / by John Rawson Spofford. / Sc.D.
337

A systems analysis of humans and machines in space activities

Stuart, David Gordon January 1986 (has links)
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1986. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND AERO / Bibliography: leaves [161]-[168]. / by David Gordon Stuart. / Sc.D.
338

Heat transfer measurements on surfaces with natural and simulated ice accretion roughness

Torres, Benjamin E. (Benjamin Ernesto) January 1997 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1997. / Includes bibliographical references (p. 85-86). / by Benjamin E. Torres. / M.S.
339

Adaptive finite element solutions of the steady Euler equations using a sensitivity approach

Haq, Imran, 1971- January 1997 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1997. / Includes bibliographical references (p. [95]-97). / submitted by Imran Haq. / M.S.
340

Incremental sampling based algorithms for state estimation

Chaudhari, Pratik (Pratik Anil) January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. / Cataloged from department-submitted PDF version of thesis. This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 95-98). / Perception is a crucial aspect of the operation of autonomous vehicles. With a multitude of different sources of sensor data, it becomes important to have algorithms which can process the available information quickly and provide a timely solution. Also, an inherently continuous world is sensed by robot sensors and converted into discrete packets of information. Algorithms that can take advantage of this setup, i.e., which have a sound foundation in continuous time formulations but which can effectively discretize the available information in an incremental manner according to different requirements, can potentially outperform conventional perception frameworks. Inspired by recent results in motion planning algorithms, this thesis aims to address these two aspects of the problem of robot perception, through novel incremental and anytime algorithms. The first part of the thesis deals with algorithms for different estimation problems, such as filtering, smoothing, and trajectory decoding. They share the basic idea that a general continuous-time system can be approximated by a sequence of discrete Markov chains that converge in a suitable sense to the original continuous time stochastic system. This discretization is obtained through intuitive rules motivated by physics and is very easy to implement in practice. Incremental algorithms for the above problems can then be formulated on these discrete systems whose solutions converge to the solution of the original problem. 
A similar construction is used to explore control of partially observable processes in the latter part of the thesis. A general continuous time control problem in this case is approximated by a sequence of discrete partially observable Markov decision processes (POMDPs), in such a way that the trajectories of the POMDPs -- i.e., the trajectories of beliefs -- converge to the trajectories of the original continuous problem. Modern point-based solvers are used to approximate control policies for each of these discrete problems and it is shown that these control policies converge to the optimal control policy of the original problem in an appropriate space. This approach is promising because instead of solving a large POMDP problem from scratch, which is PSPACE-hard, approximate solutions of smaller problems can be used to guide the search for the optimal control policy. / by Pratik Chaudhari. / S.M.
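The core approximation idea in this abstract — replacing a continuous-time stochastic system with a discrete Markov chain whose one-step mean and variance match the diffusion's drift and volatility — can be made concrete in one dimension. A sketch of a generic locally consistent chain in the Kushner-Dupuis style (a textbook construction offered as illustration, not necessarily the thesis's own discretization rule):

```python
def chain_step(b, sigma, h):
    """One-step transition probabilities and time step for a Markov
    chain on a grid of spacing h that locally approximates the
    diffusion dX = b dt + sigma dW.  The chain moves up or down by h;
    its one-step mean equals b*dt exactly and its second moment
    matches sigma^2*dt to first order in h."""
    Q = sigma**2 + h * abs(b)                     # normalizer for the chain
    p_up = (sigma**2 / 2.0 + h * max(b, 0.0)) / Q
    p_down = (sigma**2 / 2.0 + h * max(-b, 0.0)) / Q
    dt = h**2 / Q                                 # interpolation time step
    return p_up, p_down, dt

p_up, p_down, dt = chain_step(b=1.0, sigma=0.5, h=0.01)
```

Refining the grid (h → 0) drives dt → 0 as well, which is the sense in which the chain's trajectories converge to those of the continuous system.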
