  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
421

Blurring the line: Durban Mental Health Support and Training Centre

Patel, Rashma Vinod 20 April 2011 (has links)
MArch (Professional), Faculty of Engineering and the Built Environment, School of Architecture and Planning, University of the Witwatersrand
422

Influence of Subject Taught (STEM), Title I, and Grade Level of Instruction for Components in an Effective Professional Development Design

Unknown Date (has links)
Professional development has been deemed ineffective for several decades. This ineffectiveness could stem from one-size-fits-all professional development designs and from the inconsistencies and contradictions pointed out in professional development research (which is used to create these designs). Investigating how subject taught (STEM and non-STEM), Title I status of the school (Title I and non-Title I), and grade level of instruction (elementary, middle, and high) could influence teachers’ preferences regarding components included in an effective design is a step toward resolving some of these inconsistencies. The research design was an embedded mixed method – an overall causal-comparative design embedded with interviews. Interviews determined teachers’ perceptions of an effective professional development design. The survey investigated preferences for nine components: content knowledge, pedagogical knowledge, active learning, duration, alignment with goals and policies, follow-up, collaboration, support, and resources (tangible and intangible). In the interviews, teachers communicated a need for differentiation based on grade level of instruction, Title I status of the school, and subject taught, with high percentages of agreement on the final questions of the survey. The ordinal logistic regression indicated that subject taught and Title I status of the school did not have a statistically significant effect on the dependent variable. Grouping participants according to grade level of instruction (elementary versus secondary) had a statistically significant effect on teachers’ preferences regarding the components included in an effective professional development design. This indicated that professional development should be differentiated between elementary and secondary instruction. 
When the researcher reviewed the components individually, some showed a statistically significant effect of the independent variables Title I status of the school and grade level of instruction. Although the overall ordinal logistic regression revealed a lack of statistical significance, percent differences indicated that factors such as subject taught, Title I status of the school, and grade level of instruction influenced teachers’ preferences regarding specific components in an effective professional development design. These findings suggest that a larger study might find statistical significance. Thus, professional development should be differentiated based on subject taught, Title I status of the school, and grade level of instruction. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
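The study above analyzes survey responses with ordinal logistic regression (a cumulative-logit, proportional-odds model). As a rough illustration of how such a model assigns probabilities to ordered response categories — not the study's actual model; the thresholds and predictor score below are made-up values — a minimal sketch:

```python
import math

def cumulative_probs(thresholds, score):
    """P(Y <= k) for each ordered category k under a proportional-odds model."""
    return [1.0 / (1.0 + math.exp(-(t - score))) for t in thresholds]

def category_probs(thresholds, score):
    """Per-category probabilities from adjacent cumulative differences."""
    cum = cumulative_probs(thresholds, score) + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical 4-point agreement scale; illustrative threshold values.
thresholds = [-1.0, 0.5, 2.0]
p = category_probs(thresholds, score=0.3)
print([round(x, 3) for x in p])  # four probabilities summing to 1
```

Fitting the thresholds and coefficients to data is what the regression itself does; this sketch only shows the probability structure being estimated.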
423

The solvent resistance of aromatic polymer composites

Randles, Steven James January 1990 (has links)
The diffusion rate of a large range of solvents into carbon fibre reinforced PEEK (APC-2) has been measured to discover the effect of the physical characteristics of the solvent. Three dimensional graphs have been plotted which correlate four parameters (solvent size, shape, hydrogen bonding capacity and solubility parameter) to solvent uptake. In the composite the effects of: thickness, lay-up, background water content and strain level on solvent diffusion have been assessed. The effect of composite thickness can be predicted using the film thickness scaling law provided the diffusion is Fickian. The effect of background water content is small, tending to make the diffusion profile two-stage. Lay-up has been shown to have a major effect on diffusion rate, unidirectional lay-ups having a much slower diffusion rate. Several theories have been postulated to explain this behaviour. The effects of stress on diffusion rate can be predicted by free volume models, provided that the stress/strain is kept below a certain critical level. It has been shown that the damage caused by a solvent, provided the stress does not exceed a critical value, is dependent on the amount of solvent in the matrix. This is due to plasticisation effects. Attempts to model this behaviour using free volume models have proved successful. Stress has been shown to enhance environmental attack. With certain solvents, above a critical stress or strain, environmental stress cracking occurs, leading to a considerable reduction in mechanical properties. Photographic evidence shows that cracking is initiated at stress concentrators within the matrix. Crack propagation is entirely matrix related and independent of spherulite boundaries. Overall, APC-2 has been shown to possess excellent environmental resistance when used in aerospace applications.
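The film-thickness scaling law mentioned above follows from short-time Fickian sorption, where fractional uptake grows as sqrt(t)/h, so the time to reach a given uptake scales with thickness squared. A small sketch of that textbook relation (the diffusivity value is illustrative, not data from the thesis):

```python
import math

def fractional_uptake(D, h, t):
    """Short-time Fickian sorption: M_t / M_inf = (4/h) * sqrt(D*t/pi).
    Valid roughly while M_t / M_inf < 0.5."""
    return (4.0 / h) * math.sqrt(D * t / math.pi)

def time_to_uptake(D, h, f):
    """Invert for the time needed to reach fractional uptake f."""
    return math.pi * (f * h / 4.0) ** 2 / D

D = 1e-13                                 # illustrative diffusivity, m^2/s
t_thin = time_to_uptake(D, h=1e-3, f=0.2)
t_thick = time_to_uptake(D, h=2e-3, f=0.2)
print(t_thick / t_thin)  # doubling the thickness quadruples the time
```

This quadratic scaling is what lets thin-specimen measurements be extrapolated to thicker laminates, as long as the diffusion stays Fickian.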
424

Computational experiments for local search algorithms for binary and mixed integer optimization

Zhou, Jingting, S.M. Massachusetts Institute of Technology January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 53). / In this thesis, we implement and test two algorithms for binary optimization and mixed integer optimization, respectively. We fine-tune the parameters of these two algorithms and achieve satisfactory performance. We also compare our algorithms with CPLEX on a large number of fairly large instances. Based on the experimental results, our binary optimization algorithm delivers performance that is strictly better than CPLEX on instances with moderately dense constraint matrices, while for sparse instances, our algorithm delivers performance that is comparable to CPLEX. Our mixed integer optimization algorithm outperforms CPLEX most of the time when the constraint matrices are moderately dense, while for sparse instances, it yields results that are close to CPLEX, and the largest gap relative to the result given by CPLEX is around 5%. Our findings show that these two algorithms, especially the binary optimization algorithm, have practical promise in solving large, dense instances of both set covering and set packing problems. / by Jingting Zhou. / S.M.
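The abstract does not spell out the algorithms, but the general flavour of local search for set covering — the problem class it names — can be sketched as follows. The instance, costs, and penalty weight are all invented for illustration; this is not the thesis's method:

```python
import random

def cover_cost(sets, costs, chosen, universe, penalty=100.0):
    """Cost of the chosen sets plus a penalty for each uncovered element."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return sum(costs[i] for i in chosen) + penalty * len(universe - covered)

def local_search(sets, costs, universe, iters=2000, seed=0):
    """1-flip local search: toggle one set's membership if that lowers cost."""
    rng = random.Random(seed)
    chosen = set(range(len(sets)))            # start from the full (feasible) cover
    best = cover_cost(sets, costs, chosen, universe)
    for _ in range(iters):
        i = rng.randrange(len(sets))
        trial = chosen ^ {i}                  # flip set i in or out
        c = cover_cost(sets, costs, trial, universe)
        if c < best:
            chosen, best = trial, c
    return chosen, best

universe = set(range(6))
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 4}, {1, 5}]
costs = [3, 2, 3, 2, 2]
sol, val = local_search(sets, costs, universe)
print(sol, val)
```

Because the penalty exceeds any set cost, moves that break feasibility are never accepted from a feasible start, so the search only discards redundant sets.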
425

The design and testing of magnets for nuclear magnetic resonance imaging

Evans, P. R. January 1984 (has links)
Recently, images of the inside of the human body have been produced non-invasively using nuclear magnetic resonance (nmr). The technique involves placing the patient in a strong, homogeneous magnetic field. The heart of any nmr imaging system is the magnet that produces this field and this thesis is concerned with the design and testing of such magnets. Various computer programs have been written that allow the designer to model a magnet either in terms of axisymmetric coils, or in terms of the discrete conductors that simulate the actual form of the winding. The axisymmetric program automatically optimises the design so as to produce a uniform field, and the data from this program may be used directly to generate an appropriate helical or spiral winding. These programs not only allow the designer to produce a suitable design, but also to put tolerances on the dimensions of the conductors and formers that support the winding. The problem of removing inhomogeneities produced by dimensional inaccuracies and surrounding ferromagnetic materials is also considered. An nmr probe system has been developed that allows the homogeneity of a magnet to be assessed independently of the stability of its power supply. The probe has been used for field measurements in a magnet designed using the above techniques, and the results are presented.
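The on-axis field of the axisymmetric coils described above follows from the standard circular-loop formula, and a Helmholtz pair (loop separation equal to the radius) is the textbook starting point for a homogeneous design. A sketch, with illustrative current and radius rather than values from the thesis:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def loop_axis_field(I, a, z):
    """On-axis field of a circular current loop of radius a at axial distance z."""
    return MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

def helmholtz_field(I, a, z):
    """Two coaxial loops at z = +/- a/2 (Helmholtz condition): the first and
    second axial derivatives of the field vanish at the centre."""
    return loop_axis_field(I, a, z - a / 2) + loop_axis_field(I, a, z + a / 2)

I, a = 100.0, 0.5                        # illustrative current (A) and radius (m)
b0 = helmholtz_field(I, a, 0.0)
b_off = helmholtz_field(I, a, 0.05 * a)  # 5% of the radius off centre
print(b0, abs(b_off - b0) / b0)          # field and its relative inhomogeneity
```

Real imaging magnets need far better homogeneity than a single pair provides, which is where the optimisation and shimming the abstract describes come in; the leading error term here is already fourth order in z/a.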
426

Parameter and state model reduction for Bayesian statistical inverse problems

Lieberman, Chad Eric January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 113-118). / Decisions based on single-point estimates of uncertain parameters neglect regions of significant probability. We consider a paradigm based on decision-making under uncertainty including three steps: identification of parametric probability by solution of the statistical inverse problem, propagation of that uncertainty through complex models, and solution of the resulting stochastic or robust mathematical programs. In this thesis we consider the first of these steps, solution of the statistical inverse problem, for partial differential equations (PDEs) parameterized by field quantities. When these field variables and forward models are discretized, the resulting system is high-dimensional in both parameter and state space. The system is therefore expensive to solve. The statistical inverse problem is one of Bayesian inference. With assumption on prior belief about the form of the parameter and an assignment of normal error in sensor measurements, we derive the solution to the statistical inverse problem analytically, up to a constant of proportionality. The parametric probability density, or posterior, depends implicitly on the parameter through the forward model. In order to understand the distribution in parameter space, we must sample. Markov chain Monte Carlo (MCMC) sampling provides a method by which a random walk is constructed through parameter space. By following a few simple rules, the random walk converges to the posterior distribution and the resulting samples represent draws from that distribution. This set of samples from the posterior can be used to approximate its moments. / (cont.) 
In the multi-query setting, it is computationally intractable to utilize the full-order forward model to perform the posterior evaluations required in the MCMC sampling process. Instead, we implement a novel reduced-order model which reduces in parameter and state. The reduced bases are generated by greedy sampling. We iteratively sample the field in parameter space which maximizes the error in full-order and current reduced-order model outputs. The parameter is added to its basis and then a high-fidelity forward model is solved for the state, which is then added to the state basis. The reduction in state accelerates posterior evaluation while the reduction in parameter allows the MCMC sampling to be conducted with a simpler, non-adaptive Metropolis-Hastings algorithm. In contrast, the full-order parameter space is high-dimensional and requires more expensive adaptive methods. We demonstrate for the groundwater inverse problem in 1-D and 2-D that the reduced-order implementation produces accurate results with a factor-of-three speed-up even for the model problems of dimension N ≈ 500. Our complexity analysis demonstrates that the same approach applied to the large-scale models of interest (e.g. N > 10⁴) results in a speed-up of three orders of magnitude. / by Chad Eric Lieberman. / S.M.
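The sampler the abstract refers to can be illustrated with a generic random-walk Metropolis-Hastings loop. The one-dimensional Gaussian below is a stand-in for a cheap (reduced-order) posterior evaluation, not the thesis's groundwater model:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)
        lp_y = log_post(y)
        # Accept with probability min(1, posterior(y) / posterior(x)).
        if lp_y >= lp or rng.random() < math.exp(lp_y - lp):
            x, lp = y, lp_y
        samples.append(x)
    return samples

# Illustrative stand-in posterior: N(2, 1), known only up to a constant.
log_post = lambda x: -0.5 * (x - 2.0) ** 2
draws = metropolis_hastings(log_post, x0=0.0, n_samples=20000)
print(sum(draws) / len(draws))  # sample mean, close to 2
```

Each iteration costs one posterior evaluation, which is exactly why replacing the full-order forward model with a reduced one pays off in the multi-query setting.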
427

Data-driven models for reliability prognostics of gas turbines

Kumar, Gaurev January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 69-70). / This thesis develops three data-driven models of a commercially operating gas turbine, and applies inference techniques for reliability prognostics. The models focus on capturing feature signals (continuous state) and operating modes (discrete state) that are representative of the remaining useful life of the solid welded rotor. The first model derives its structure from a non-Bayesian parametric hidden Markov model. The second and third models are based on Bayesian nonparametric methods, namely the hierarchical Dirichlet process, and can be viewed as extensions of the first model. For all three approaches, the model structure is first prescribed, parameter estimation procedures are then discussed, and lastly validation and prediction results are presented, using proposed degradation metrics. All three models are trained using five years of data, and prediction algorithms are tested on a sixth year of data. Results indicate that model 3 is superior, since it is able to detect new operating modes, which the other models fail to do. The turbine is based on a sequential combustion design and operates in the 50Hz wholesale electricity market. The rotor is the most critical asset of the machine and is subject to nonlinear loadings induced from three sources: i) day-to-day variations in total power generated by the turbine; ii) machine trips in high and low loading conditions; iii) downtimes due to scheduled maintenance and inspection events. These sources naturally lead to dynamics, where random (resp. forced) transitions occur due to switching in the operating mode (resp. trip and/or maintenance events). 
The degradation of the rotor is modeled by measuring the abnormality witnessed by the cooling air temperature within different modes. Generation companies can utilize these indicators for making strategic decisions such as maintenance scheduling and generation planning. / by Gaurev Kumar. / S.M.
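Hidden Markov models of the kind described above are evaluated with the standard (scaled) forward recursion. A minimal sketch with two hypothetical operating modes and invented transition and emission probabilities — purely illustrative, not the turbine models themselves:

```python
import math

def hmm_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs) for a discrete-output HMM."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    s = sum(alpha)                # rescale to avoid underflow on long sequences
    log_lik = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][o]
                 for i in range(n)]
        s = sum(alpha)
        log_lik += math.log(s)
        alpha = [a / s for a in alpha]
    return log_lik

# Two hypothetical operating modes, two discrete feature symbols.
pi = [0.6, 0.4]                  # initial mode probabilities
A = [[0.9, 0.1], [0.2, 0.8]]     # mode persistence / switching
B = [[0.8, 0.2], [0.3, 0.7]]     # symbol emission per mode
print(hmm_loglik(pi, A, B, obs=[0, 0, 1, 1, 1]))
```

The nonparametric (hierarchical Dirichlet process) extensions differ in how the number of modes is inferred, which is what lets model 3 detect operating modes unseen in training.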
428

Secure electric power grid operation

Foo, Ming Qing January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 87-91). / This thesis examines two problems concerning the secure and reliable operation of the electric power grid. The first part studies the distributed operation of the electric power grid using the power flow problem, which is vital to the operation of the grid. The power flow problem is a feasibility problem for finding an assignment of complex bus voltages that satisfies the power flow equations and is within operational and safety limits. For reliability and privacy reasons, it is desirable to solve the power flow problem in a distributed manner. Two novel distributed algorithms are presented for solving convex feasibility problems for networks based on the Method of Alternating Projections (MAP) and the Projected Consensus algorithm. These algorithms distribute computation among the nodes of the network and do not require any form of central coordination. The original problem is equivalently split into small local sub-problems, which are coordinated locally via a thin communication protocol. Although the power flow problem is non-convex, the new algorithms are demonstrated to be powerful heuristics using IEEE test beds. Quadratically Constrained Quadratic Programs (QCQP), which occur in the projection sub-problems, are studied and methods for solving them efficiently are developed. The second part addresses the robustness and resiliency of state estimation algorithms for cyber-physical systems. The operation of the electric power grid is modeled as a dynamical system that is supported by numerous feedback control mechanisms, which depend heavily on state estimation algorithms. 
The electric power grid is constantly under attack and, if left unchecked, these attacks may corrupt state estimates and lead to severe consequences. This thesis proposes a novel dynamic state estimator that is resilient against data injection attacks and robust to modeling errors and additive noise signals. By leveraging principles of robust optimization, the estimator can be formulated as a convex optimization problem and its effectiveness is demonstrated in simulations of an IEEE 14-bus system. / by Ming Qing Foo. / S.M.
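The Method of Alternating Projections named in the first part cycles through the constraint sets, projecting onto each in turn; for convex sets it converges to a point in their intersection. A toy sketch on half-plane constraints — a stand-in for operational and safety limits, not the (non-convex) power flow equations themselves:

```python
def project_halfplane(p, a, b, c):
    """Project point p = (x, y) onto the half-plane a*x + b*y <= c."""
    x, y = p
    viol = a * x + b * y - c
    if viol <= 0:
        return p                          # already feasible for this set
    n2 = a * a + b * b
    return (x - viol * a / n2, y - viol * b / n2)

def alternating_projections(p, constraints, iters=200):
    """MAP: repeatedly project onto each constraint set in turn."""
    for _ in range(iters):
        for (a, b, c) in constraints:
            p = project_halfplane(p, a, b, c)
    return p

# Feasibility problem: 0.5 <= x <= 1 and -0.2 <= y <= 0.2.
cons = [(1, 0, 1), (-1, 0, -0.5), (0, 1, 0.2), (0, -1, 0.2)]
p = alternating_projections((5.0, -3.0), cons)
print(p)  # a point satisfying every constraint
```

In the distributed setting the thesis describes, each node would perform only its local projections and coordinate with neighbours through a thin communication protocol.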
429

Blast overpressure relief using air vacated buffer medium

Avasarala, Srikanti Rupa January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 85-88). / Blast waves generated by intense explosions cause damage to structures and human injury. In this thesis, a strategy is investigated for relief of blast overpressure resulting from explosions in air. The strategy is based on incorporating a layer of low pressure-low density air in between the blast wave and the target structure. Simulations of blast waves interacting with this air-vacated layer prior to arrival at a fixed wall are conducted using a Computational Fluid Dynamics (CFD) framework. Pressure histories on the wall are recorded from the simulations and used to investigate the potential benefits of vacated air layers in mitigating blast metrics such as peak reflected pressure from the wall and maximum transmitted impulse to the wall. It is observed that these metrics can be reduced by a significant amount by introducing the air-vacated buffer especially for incident overpressures of the order of a few atmospheres. This range of overpressures could be fatal to the human body which makes the concept very relevant for mitigation of human blast injuries. We establish a functional dependence of the mitigation metrics on the blast intensity, the buffer pressure and the buffer length. In addition, Riemann solutions are utilized to analyze the wave structure resulting from the interaction of a uniform wave with an air-depleted buffer. Exact analytical expressions are obtained for the mitigation obtained in the incident wave momentum in terms of the incident shock pressure and the characteristics of the depleted buffer. 
The results obtained are verified through numerical simulations. / (cont.) It is found that the numerical results are in excellent agreement with the theory. The work presented could help in the design of effective blast protective materials and systems, for example in the construction of air-vacated sandwich panels. Keywords: Blast Mitigation, Air-depleted Buffer, Low Pressure, Blast Waves, Sandwich Plates, Numerical Simulations / by Srikanti Rupa Avasarala. / S.M.
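For normal reflection of an ideal-gas shock at a rigid wall there is a classical closed-form overpressure relation, which gives a feel for the "peak reflected pressure" metric discussed above. This is the unbuffered textbook case, not the thesis's vacated-layer configuration:

```python
def reflected_overpressure(dps, p0=101325.0, gamma=1.4):
    """Normally reflected overpressure of an ideal-gas shock at a rigid wall.
    For gamma = 1.4 this reduces to the classic form
        dPr = 2*dPs*(7*p0 + 4*dPs) / (7*p0 + dPs).
    """
    return 2.0 * dps + (gamma + 1.0) * dps**2 / ((gamma - 1.0) * dps + 2.0 * gamma * p0)

p0 = 101325.0
weak = reflected_overpressure(0.01 * p0)   # acoustic limit: roughly doubles
strong = reflected_overpressure(100 * p0)  # approaches 8x for gamma = 1.4
print(weak / (0.01 * p0), strong / (100 * p0))
```

The jump from a factor of 2 in the weak limit toward 8 for strong shocks is why incident overpressures of only a few atmospheres are already so dangerous at a wall, and why reducing the reflected peak matters.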
430

Data-driven revenue management

Uichanco, Joline Ann Villaranda January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 125-127). / In this thesis, we consider the classical newsvendor model and various important extensions. We do not assume that the demand distribution is known, rather the only information available is a set of independent samples drawn from the demand distribution. In particular, the variants of the model we consider are: the classical profit-maximization newsvendor model, the risk-averse newsvendor model and the price-setting newsvendor model. If the explicit demand distribution is known, then the exact solutions to these models can be found either analytically or numerically via simulation methods. However, in most real-life settings, the demand distribution is not available, and usually there is only historical demand data from past periods. Thus, data-driven approaches are appealing in solving these problems. In this thesis, we evaluate the theoretical and empirical performance of nonparametric and parametric approaches for solving the variants of the newsvendor model assuming partial information on the distribution. For the classical profit-maximization newsvendor model and the risk-averse newsvendor model we describe general non-parametric approaches that do not make any prior assumption on the true demand distribution. We extend and significantly improve previous theoretical bounds on the number of samples required to guarantee with high probability that the data-driven approach provides a near-optimal solution. By near-optimal we mean that the approximate solution performs arbitrarily close to the optimal solution that is computed with respect to the true demand distributions. / (cont.) 
For the price-setting newsvendor problem, we analyze a previously proposed simulation-based approach for a linear-additive demand model, and again derive bounds on the number of samples required to ensure that the simulation-based approach provides a near-optimal solution. We also perform computational experiments to analyze the empirical performance of these data-driven approaches. / by Joline Ann Villaranda Uichanco. / S.M.
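For the classical profit-maximizing newsvendor, the non-parametric data-driven approach described above amounts to ordering an empirical quantile of the demand samples at the critical ratio. A minimal sketch with invented demand history (not data from the thesis):

```python
def newsvendor_order(samples, price, cost):
    """Sample-based newsvendor: order the empirical demand quantile at the
    critical ratio (price - cost) / price."""
    ratio = (price - cost) / price
    demand = sorted(samples)
    # Smallest order quantity whose empirical CDF reaches the critical ratio.
    k = min(int(ratio * len(demand)), len(demand) - 1)
    return demand[k]

# Hypothetical historical demand observations.
demand_samples = [80, 95, 100, 110, 120, 125, 130, 140, 150, 160]
q = newsvendor_order(demand_samples, price=10.0, cost=3.0)
print(q)  # critical ratio 0.7 -> roughly the 70th percentile of the sample
```

The sample-complexity bounds the thesis derives quantify how many such observations are needed before this empirical-quantile order is provably near-optimal against the true demand distribution.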
