41

Adaptive evolution in static and dynamic environments

Hirst, Anthony John January 1998 (has links)
This thesis provides a framework for describing a canonical evolutionary system. Populations of individuals are envisaged as traversing a search space structured by genetic and developmental operators under the influence of selection. Selection acts on individuals' phenotypic expressions, guiding the population over an evaluation landscape, which describes an idealised evaluation surface over the phenotypic space. The corresponding valuation landscape describes evaluations over the genotypic space and may be transformed by within generation adaptive (learning) or maladaptive (fault induction) local search. Populations subjected to particular genetic and selection operators are claimed to evolve towards a region of the valuation landscape with a characteristic local ruggedness, as given by the runtime operator correlation coefficient. This corresponds to the view of evolution discovering an evolutionarily stable population, or quasi-species, held in a state of dynamic equilibrium by the operator set and evaluation function. This is demonstrated by genetic algorithm experiments using the NK landscapes and a novel, evolvable evaluation function, The Tower of Babel. In fluctuating environments of varying temporal ruggedness, different operator sets are correspondingly more or less adapted. Quantitative genetics analyses of populations in sinusoidally fluctuating conditions are shown to describe certain well known electronic filters. This observation suggests the notion of Evolutionary Signal Processing. Genetic algorithm experiments in which a population tracks a sinusoidally fluctuating optimum support this view. Using a self-adaptive mutation rate, it is possible to tune the evolutionary filter to the environmental frequency. For a time varying frequency, the mutation rate reacts accordingly. With local search, the valuation landscape is transformed through temporal smoothing. By coevolving modifier genes for individual learning and the rate at which the benefits may be directly transmitted to the next generation, the relative adaptedness of individual learning and cultural inheritance according to the rate of environmental change is demonstrated.
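
The tracking scenario described above can be illustrated with a minimal genetic-algorithm sketch in Python (population size, target function, and adaptation constants are illustrative assumptions, not the thesis's experimental setup): each individual carries its own mutation rate, which self-adapts as the optimum drifts sinusoidally.

    import math
    import random

    GENES, POP, GENERATIONS = 20, 50, 400

    def fitness(genome, t):
        # Hypothetical moving target: the optimal proportion of 1-bits oscillates sinusoidally in time.
        target = 0.5 + 0.4 * math.sin(2 * math.pi * t / 50.0)
        return -abs(sum(genome) / GENES - target)

    def mutate(genome, rate):
        return [1 - g if random.random() < rate else g for g in genome]

    # Each individual carries its genome plus its own mutation rate (self-adaptation).
    population = [([random.randint(0, 1) for _ in range(GENES)], 0.05) for _ in range(POP)]

    for t in range(GENERATIONS):
        ranked = sorted(population, key=lambda ind: fitness(ind[0], t), reverse=True)
        population = []
        for genome, rate in ranked[: POP // 2]:                    # truncation selection
            for _ in range(2):
                # Log-normal self-adaptation of the mutation rate, then mutation of the genome.
                new_rate = min(0.5, max(0.001, rate * math.exp(0.2 * random.gauss(0, 1))))
                population.append((mutate(genome, new_rate), new_rate))

    print("mean mutation rate after tracking:", sum(r for _, r in population) / len(population))
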
42

Neural computation of depth from binocular disparity

Reis Goncalves, Nuno January 2018 (has links)
Stereopsis is a par excellence demonstration of the computational power that neural systems can encapsulate. How is the brain capable of swiftly transforming a stream of binocular two-dimensional signals into a cohesive three-dimensional percept? Many brain regions have been implicated in stereoscopic processing, but their roles remain poorly understood. This dissertation focuses on the contributions of primary and dorsomedial visual cortex. Using convolutional neural networks, we found that disparity encoding in primary visual cortex can be explained by shallow, feed-forward networks optimized to extract absolute depth from naturalistic images. These networks develop physiologically plausible receptive fields, and predict neural responses to highly unnatural stimuli commonly used in the laboratory. They do not necessarily relate to our experience of depth, but seem to act as a bottleneck for depth perception. Conversely, neural activity in downstream specialized areas is likely to be a more faithful correlate of depth perception. Using ultra-high field functional magnetic resonance imaging in humans, we revealed systematic and reproducible cortical organization for stereoscopic depth in dorsal visual areas V3A and V3B/KO. Within these regions, depth selectivity was inversely related to depth magnitude — a key characteristic of stereoscopic perception. Finally, we report evidence for a differential contribution of cortical layers in stereoscopic depth perception.
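
For orientation, the computational problem the abstract describes (recovering depth from two-dimensional binocular signals) can be sketched with a toy winner-take-all disparity estimator. This is a hedged illustration only, not the convolutional networks analysed in the dissertation, and the window size and disparity range are arbitrary.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def disparity_map(left, right, max_disp=16, patch=7):
        # Winner-take-all: for each pixel, pick the horizontal shift of the right image that
        # maximises the local (patch-averaged) correlation with the left image.
        best_d = np.zeros(left.shape, dtype=int)
        best_score = np.full(left.shape, -np.inf)
        for d in range(max_disp + 1):
            shifted = np.roll(right, d, axis=1)
            score = uniform_filter(left * shifted, size=patch)
            better = score > best_score
            best_d[better] = d
            best_score[better] = score[better]
        return best_d

    # Hypothetical stereo pair: the right image is the left image shifted by 5 pixels.
    rng = np.random.default_rng(0)
    left = rng.random((64, 64))
    right = np.roll(left, -5, axis=1)
    print("modal disparity:", np.bincount(disparity_map(left, right).ravel()).argmax())
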
43

Parameter and state model reduction for Bayesian statistical inverse problems

Lieberman, Chad Eric January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 113-118). / Decisions based on single-point estimates of uncertain parameters neglect regions of significant probability. We consider a paradigm based on decision-making under uncertainty including three steps: identification of parametric probability by solution of the statistical inverse problem, propagation of that uncertainty through complex models, and solution of the resulting stochastic or robust mathematical programs. In this thesis we consider the first of these steps, solution of the statistical inverse problem, for partial differential equations (PDEs) parameterized by field quantities. When these field variables and forward models are discretized, the resulting system is high-dimensional in both parameter and state space. The system is therefore expensive to solve. The statistical inverse problem is one of Bayesian inference. With assumptions on prior belief about the form of the parameter and an assignment of normal error in sensor measurements, we derive the solution to the statistical inverse problem analytically, up to a constant of proportionality. The parametric probability density, or posterior, depends implicitly on the parameter through the forward model. In order to understand the distribution in parameter space, we must sample. Markov chain Monte Carlo (MCMC) sampling provides a method by which a random walk is constructed through parameter space. By following a few simple rules, the random walk converges to the posterior distribution and the resulting samples represent draws from that distribution. This set of samples from the posterior can be used to approximate its moments. / (cont.) In the multi-query setting, it is computationally intractable to utilize the full-order forward model to perform the posterior evaluations required in the MCMC sampling process. Instead, we implement a novel reduced-order model which reduces in parameter and state. The reduced bases are generated by greedy sampling. We iteratively sample the field in parameter space which maximizes the error between full-order and current reduced-order model outputs. The parameter is added to its basis and then a high-fidelity forward model is solved for the state, which is then added to the state basis. The reduction in state accelerates posterior evaluation while the reduction in parameter allows the MCMC sampling to be conducted with a simpler, non-adaptive Metropolis-Hastings algorithm. In contrast, the full-order parameter space is high-dimensional and requires more expensive adaptive methods. We demonstrate for the groundwater inverse problem in 1-D and 2-D that the reduced-order implementation produces accurate results with a factor of three speed-up even for model problems of dimension N ≈ 500. Our complexity analysis demonstrates that the same approach applied to the large-scale models of interest (e.g. N > 10⁴) results in a speed-up of three orders of magnitude. / by Chad Eric Lieberman. / S.M.
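
A minimal sketch of the non-adaptive random-walk Metropolis-Hastings sampler that the parameter reduction makes viable; the Gaussian log-posterior below is a hypothetical stand-in for the PDE-constrained posterior, and the step size and chain length are arbitrary.

    import numpy as np

    def metropolis_hastings(log_posterior, x0, n_steps=5000, step=0.1, seed=0):
        # Non-adaptive random-walk Metropolis: propose x + step*N(0, I), accept with prob min(1, ratio).
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        logp = log_posterior(x)
        samples = []
        for _ in range(n_steps):
            proposal = x + step * rng.standard_normal(x.shape)
            logp_prop = log_posterior(proposal)
            if np.log(rng.random()) < logp_prop - logp:
                x, logp = proposal, logp_prop
            samples.append(x.copy())
        return np.array(samples)

    # Hypothetical stand-in posterior: in the thesis setting, log p(x | d) would involve a
    # (reduced-order) forward model F via -||F(x) - d||^2 / (2 sigma^2) plus a log-prior term.
    log_post = lambda x: -0.5 * np.sum((x - 1.0) ** 2)
    chain = metropolis_hastings(log_post, np.zeros(3))
    print("posterior mean estimate:", chain[1000:].mean(axis=0).round(2))
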
44

Data-driven models for reliability prognostics of gas turbines

Kumar, Gaurev January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 69-70). / This thesis develops three data-driven models of a commercially operating gas turbine, and applies inference techniques for reliability prognostics. The models focus on capturing feature signals (continuous state) and operating modes (discrete state) that are representative of the remaining useful life of the solid welded rotor. The first model derives its structure from a non-Bayesian parametric hidden Markov model. The second and third models are based on Bayesian nonparametric methods, namely the hierarchical Dirichlet process, and can be viewed as extensions of the first model. For all three approaches, the model structure is first prescribed, parameter estimation procedures are then discussed, and lastly validation and prediction results are presented, using proposed degradation metrics. All three models are trained using five years of data, and prediction algorithms are tested on a sixth year of data. Results indicate that model 3 is superior, since it is able to detect new operating modes, which the other models fail to do. The turbine is based on a sequential combustion design and operates in the 50 Hz wholesale electricity market. The rotor is the most critical asset of the machine and is subject to nonlinear loadings induced from three sources: i) day-to-day variations in total power generated by the turbine; ii) machine trips in high and low loading conditions; iii) downtimes due to scheduled maintenance and inspection events. These sources naturally lead to dynamics, where random (resp. forced) transitions occur due to switching in the operating mode (resp. trip and/or maintenance events). The degradation of the rotor is modeled by measuring the abnormality witnessed by the cooling air temperature within different modes. Generation companies can utilize these indicators for making strategic decisions such as maintenance scheduling and generation planning. / by Gaurev Kumar. / S.M.
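
As a concrete reference point for the hidden-Markov-model structure of model 1, here is a hedged sketch of decoding operating modes from a feature signal with known Gaussian emission parameters (the two modes, temperatures, and transition probabilities are invented for illustration; the thesis estimates parameters from data and, in the nonparametric models, allows new modes to appear).

    import numpy as np

    def viterbi_gaussian(obs, trans, means, stds, init):
        # Most likely hidden-mode sequence under a Gaussian-emission HMM with known parameters.
        K, T = len(means), len(obs)
        log_emit = np.array([-0.5 * ((obs - means[k]) / stds[k]) ** 2 - np.log(stds[k])
                             for k in range(K)])              # shape (K, T)
        delta = np.log(init) + log_emit[:, 0]
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + np.log(trans)           # scores[i, j]: state i at t-1, j at t
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_emit[:, t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Hypothetical two-mode example: "base load" vs. "high load" cooling-air temperature.
    rng = np.random.default_rng(0)
    obs = np.concatenate([rng.normal(40, 2, 50), rng.normal(55, 2, 50)])
    modes = viterbi_gaussian(obs, trans=np.array([[0.95, 0.05], [0.05, 0.95]]),
                             means=[40, 55], stds=[2, 2], init=np.array([0.5, 0.5]))
    print("decoded mode at start and end:", modes[0], modes[-1])
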
45

Secure electric power grid operation

Foo, Ming Qing January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 87-91). / This thesis examines two problems concerning the secure and reliable operation of the electric power grid. The first part studies the distributed operation of the electric power grid using the power flow problem, which is vital to the operation of the grid. The power flow problem is a feasibility problem for finding an assignment of complex bus voltages that satisfies the power flow equations and is within operational and safety limits. For reliability and privacy reasons, it is desirable to solve the power flow problem in a distributed manner. Two novel distributed algorithms, based on the Method of Alternating Projections (MAP) and the Projected Consensus algorithm, are presented for solving convex feasibility problems over networks. These algorithms distribute computation among the nodes of the network and do not require any form of central coordination. The original problem is equivalently split into small local sub-problems, which are coordinated locally via a thin communication protocol. Although the power flow problem is non-convex, the new algorithms are demonstrated to be powerful heuristics using IEEE test beds. Quadratically Constrained Quadratic Programs (QCQP), which occur in the projection sub-problems, are studied and methods for solving them efficiently are developed. The second part addresses the robustness and resiliency of state estimation algorithms for cyber-physical systems. The operation of the electric power grid is modeled as a dynamical system that is supported by numerous feedback control mechanisms, which depend heavily on state estimation algorithms. The electric power grid is constantly under attack and, if left unchecked, these attacks may corrupt state estimates and lead to severe consequences. This thesis proposes a novel dynamic state estimator that is resilient against data injection attacks and robust to modeling errors and additive noise signals. By leveraging principles of robust optimization, the estimator can be formulated as a convex optimization problem and its effectiveness is demonstrated in simulations of an IEEE 14-bus system. / by Ming Qing Foo. / S.M.
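
A minimal sketch of the Method of Alternating Projections on a toy convex feasibility problem, with a single hyperplane standing in for a power-balance constraint and a box standing in for operating limits (the constraint values are made up); the thesis distributes projections of this kind across network nodes and applies the idea, as a heuristic, to the non-convex power flow equations.

    import numpy as np

    def project_box(x, lo, hi):
        # Projection onto the box {lo <= x <= hi} (element-wise operating limits).
        return np.clip(x, lo, hi)

    def project_hyperplane(x, a, b):
        # Projection onto the hyperplane {x : a.x = b} (a toy "power balance" constraint).
        return x - (a @ x - b) / (a @ a) * a

    def alternating_projections(x0, n_iter=200):
        a, b = np.array([1.0, 1.0, 1.0]), 3.0
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            x = project_hyperplane(x, a, b)
            x = project_box(x, 0.5, 2.0)
        return x

    print("feasible point:", alternating_projections([5.0, -1.0, 0.0]))
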
46

Blast overpressure relief using air vacated buffer medium

Avasarala, Srikanti Rupa January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 85-88). / Blast waves generated by intense explosions cause damage to structures and human injury. In this thesis, a strategy is investigated for relief of blast overpressure resulting from explosions in air. The strategy is based on incorporating a layer of low-pressure, low-density air in between the blast wave and the target structure. Simulations of blast waves interacting with this air-vacated layer prior to arrival at a fixed wall are conducted using a Computational Fluid Dynamics (CFD) framework. Pressure histories on the wall are recorded from the simulations and used to investigate the potential benefits of vacated air layers in mitigating blast metrics such as peak reflected pressure from the wall and maximum transmitted impulse to the wall. It is observed that these metrics can be reduced by a significant amount by introducing the air-vacated buffer, especially for incident overpressures of the order of a few atmospheres. This range of overpressures could be fatal to the human body, which makes the concept very relevant for mitigation of human blast injuries. We establish a functional dependence of the mitigation metrics on the blast intensity, the buffer pressure and the buffer length. In addition, Riemann solutions are utilized to analyze the wave structure arising from the interaction of a uniform wave with an air-depleted buffer. Exact analytical expressions are obtained for the mitigation obtained in the incident wave momentum in terms of the incident shock pressure and the characteristics of the depleted buffer. The results obtained are verified through numerical simulations. / (cont.) It is found that the numerical results are in excellent agreement with the theory. The work presented could help in the design of effective blast protective materials and systems, for example in the construction of air-vacated sandwich panels. Keywords: Blast Mitigation, Air-depleted Buffer, Low Pressure, Blast Waves, Sandwich Plates, Numerical Simulations / by Srikanti Rupa Avasarala. / S.M.
47

Data-driven revenue management

Uichanco, Joline Ann Villaranda January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 125-127). / In this thesis, we consider the classical newsvendor model and various important extensions. We do not assume that the demand distribution is known; rather, the only information available is a set of independent samples drawn from the demand distribution. In particular, the variants of the model we consider are: the classical profit-maximization newsvendor model, the risk-averse newsvendor model and the price-setting newsvendor model. If the explicit demand distribution is known, then the exact solutions to these models can be found either analytically or numerically via simulation methods. However, in most real-life settings, the demand distribution is not available, and usually there is only historical demand data from past periods. Thus, data-driven approaches are appealing in solving these problems. In this thesis, we evaluate the theoretical and empirical performance of non-parametric and parametric approaches for solving the variants of the newsvendor model assuming partial information on the distribution. For the classical profit-maximization newsvendor model and the risk-averse newsvendor model, we describe general non-parametric approaches that do not make any prior assumption on the true demand distribution. We extend and significantly improve previous theoretical bounds on the number of samples required to guarantee with high probability that the data-driven approach provides a near-optimal solution. By near-optimal we mean that the approximate solution performs arbitrarily close to the optimal solution that is computed with respect to the true demand distributions. / (cont.) For the price-setting newsvendor problem, we analyze a previously proposed simulation-based approach for a linear-additive demand model, and again derive bounds on the number of samples required to ensure that the simulation-based approach provides a near-optimal solution. We also perform computational experiments to analyze the empirical performance of these data-driven approaches. / by Joline Ann Villaranda Uichanco. / S.M.
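
A minimal sketch of the non-parametric, data-driven idea for the classical profit-maximization variant: order the empirical critical quantile of the observed demand samples (the price, cost, and demand sample below are hypothetical).

    import numpy as np

    def newsvendor_order_quantity(demand_samples, price, cost):
        # Data-driven rule: order the empirical q-quantile, with critical ratio q = (price - cost) / price.
        q = (price - cost) / price
        return float(np.quantile(np.asarray(demand_samples, dtype=float), q))

    # Hypothetical example: 200 past demand observations, unit revenue 10, unit cost 6.
    rng = np.random.default_rng(1)
    samples = rng.normal(100, 20, size=200)
    print("data-driven order quantity:", round(newsvendor_order_quantity(samples, price=10, cost=6), 1))
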
48

Approximation of the transient joint queue-length distribution in tandem networks

Yamani, Jana H. (Jana Hashim) January 2013 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 95-97). / This work considers an urban traffic network, and represents it as a Markovian queueing network. It proposes an analytical approximation of the time-dependent joint queue-length distribution of the network. The challenge is to provide an accurate analytical description of between- and within-queue (i.e. link) dynamics, while deriving a tractable approach. In order to achieve this, we use an aggregate description of queue states (i.e. state space reduction). These are referred to as aggregate (queue-length) distributions. This reduces the dimensionality of the joint distribution. The proposed method is formulated over three different stages: we approximate the time-dependent aggregate distribution of 1) a single queue, 2) a tandem 3-queue network, 3) a tandem network of arbitrary size. The third stage decomposes the network into overlapping 3-queue sub-networks. The methods are validated against simulation results. We then use the proposed tandem network model to solve an urban traffic signal control problem, and analyze the added value of accounting for time-dependent between-queue dependency in traffic management problems for congested urban networks. / by Jana H. Yamani. / S.M.
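
As a reference point for stage 1, a hedged sketch of the time-dependent queue-length distribution of a single finite-capacity M/M/1/K queue, obtained by solving the Kolmogorov forward equations with a matrix exponential (rates and capacity are illustrative); the thesis develops aggregate approximations precisely because this direct approach does not scale to joint distributions over networks.

    import numpy as np
    from scipy.linalg import expm

    def transient_queue_distribution(lam, mu, K, t):
        # Generator of an M/M/1/K birth-death chain; p(t) = p(0) expm(Q t) solves dp/dt = p Q.
        Q = np.zeros((K + 1, K + 1))
        for n in range(K + 1):
            if n < K:
                Q[n, n + 1] = lam        # arrival
            if n > 0:
                Q[n, n - 1] = mu         # service completion
            Q[n, n] = -Q[n].sum()
        p0 = np.zeros(K + 1)
        p0[0] = 1.0                      # start with an empty queue
        return p0 @ expm(Q * t)

    print("P(queue length = n) at t = 2:", transient_queue_distribution(lam=0.8, mu=1.0, K=4, t=2.0).round(3))
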
49

Design of a composite combat helmet liner for prevention of blast-induced traumatic brain injury

Vechart, Andrew (Andrew Peter) January 2011 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 121-128). / Air blast-induced traumatic brain injuries (TBIs) represent a significant percentage of military personnel injuries observed in Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF). Prevalence of blast-induced TBIs is attributed to several factors, including improved body armor, improved diagnostic techniques, greater awareness, and the increased threat of attack by improvised explosive devices (IEDs). Though the mechanisms of blast-induced TBIs are not fully understood, this is a serious problem that needs to be addressed. The overall goal of the work presented in this report is to explore a possible improvement to the Advanced Combat Helmet (ACH) liner to increase the protection against blast-induced TBIs. The essential new element is the inclusion of moveable or deformable materials sandwiched within foam to dissipate the blast energy, reduce the peak transmitted pressure, and stretch the blast waveform before it reaches the brain. Filler materials explored in this work include glass beads, aerogel, glycerin, and water. To contribute to this goal, the description and validation of a model of the dynamic response of a (modified) helmet and head surrogate to an air blast event is presented in this report. An initial prototype for a liner incorporating the filler material technology is designed and manufactured. The response characteristics of this prototype are then assessed experimentally by collecting pressure data during air blast loading provided by an explosively driven shock tube. Experimental work is carried out at Purdue University. A nonlinear finite element model is then developed using the commercial code ABAQUS to describe the response observed experimentally. Consistency between results obtained numerically and results obtained experimentally indicates the model accurately describes the physics of a blast event impinging on a helmet and head. Several suggestions are then provided for how the model may be used to optimize the design of a helmet liner providing the maximum protection against air blasts. / by Andrew Vechart. / S.M.
50

Untangling the effects of residential segregation on individual mobility / Quantifying the effects of residential segregation on individual mobility

Desu, Suma January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Title as it appears in MIT Commencement Exercises program, June 5, 2015: Quantifying the effects of residential segregation on individual mobility / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 60-64). / More than half of today's world population lives in cities and that fraction is steadily growing. Models that accurately capture all segments of the population are necessary in order to design effective policies and new technologies to ensure efficient and stable operations of cities. The current sociology literature has a rich foundation in characterizing the demographics of static population distributions; however, these characterizations fail to account for the reality of dynamic movement. Though there has been recent work in developing models of human mobility, these models in turn do not capture demographic differences in the populations of cities. In this work we present a computational approach to reformulating segregation metrics to incorporate dynamic movement patterns and also quantify the effects of introducing demographics into a mobility model. In coupling two fields that are inherently connected but not yet treated as such, we must carefully consider our experimental setup. The first part of this work deals with understanding our data and its limitations at fine granularities and explicitly measuring segregation metrics at various scales, in order to design a study that will elucidate meaningful aspects of segregation. In the second part of this work we reformulate traditional segregation metrics using topological properties of origin-destination networks as input. These measures are flexible in considering the many locations that individuals visit and therefore more accurately capture the environments of individuals that traditional segregation literature seeks to characterize. We utilize two rank-based mobility models that implicitly incorporate geographic properties of population distributions to understand the effects of residential segregation on mobility patterns, and examine the effect of demographic considerations on model accuracy. In summary, this thesis focuses on synthesizing the rich body of work on static characterizations of socioeconomic structure in cities with dynamic models to better understand different racial segmentations of Boston's population. This work is both an extension to static segregation literature and a refinement of current mobility models. / by Suma Desu. / S.M.
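
A hedged sketch of the kind of metric being reformulated: the classic dissimilarity index computed from counts, which yields the static residential measure when fed residential counts and a mobility-aware variant when fed visit counts from an origin-destination network (the counts below are invented, and the thesis's exact reformulation may differ).

    import numpy as np

    def dissimilarity_index(counts_a, counts_b):
        # D = 0.5 * sum_i |a_i / A - b_i / B|, ranging from 0 (even mixing) to 1 (complete separation).
        a = np.asarray(counts_a, dtype=float)
        b = np.asarray(counts_b, dtype=float)
        return 0.5 * np.abs(a / a.sum() - b / b.sum()).sum()

    # Hypothetical counts for two groups across four zones: residential counts give the static
    # index, visit counts from an origin-destination network give a mobility-aware variant.
    print("residential D:", round(dissimilarity_index([120, 30, 10, 40], [20, 90, 60, 30]), 3))
    print("visit-based D:", round(dissimilarity_index([80, 60, 30, 30], [40, 70, 50, 40]), 3))
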
