401

Methodology for rapid static and dynamic model-based engine calibration and optimization

Lee, Byungho 04 August 2005 (has links)
No description available.
402

Simulation and analysis of the triangular Ising model

Metcalf, Bruce Dale January 1974 (has links)
No description available.
403

Continuous Model Theory and Finite-Representability Between Banach Spaces

Conley, Sean 05 1900 (has links)
In this thesis, we consider the problem of capturing finite-representability between Banach spaces using the tools of continuous model theory. We introduce predicates and additional sorts to capture finite-representability and show that these can be used to expand the language of Banach spaces. We then show that the class of infinite-dimensional Banach spaces expanded with this additional structure forms an elementary class K_G, and conclude that the theory T_G of K_G is interpretable in T^{eq}, where T is the theory of infinite-dimensional Banach spaces. Finally, we show that existential equivalence in a reduct of the language implies finite-representability. Relevant background on continuous model theory and Banach space theory is provided. / Thesis / Master of Science (MSc)
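For reference, the standard Banach-space notion being captured here can be stated as follows (textbook terminology, not a quotation from the thesis): a Banach space X is finitely representable in a Banach space Y if every finite-dimensional subspace of X embeds into Y almost isometrically, i.e.

\[
\forall\, E \subseteq X \ \text{finite-dimensional},\ \forall\, \varepsilon > 0,\ \exists\, F \subseteq Y \ \text{and a linear isomorphism } T : E \to F \ \text{with}\ \|T\|\,\|T^{-1}\| \le 1 + \varepsilon .
\]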
404

A Mechanistic Model to Examine Mercury in Aquatic Systems

Harris, Reed 03 1900 (has links)
Elevated mercury levels have been observed in a wide variety of aquatic systems. A non-steady-state mass balance model was developed to examine mercury cycling in lakes and reservoirs. Hg(II), methylmercury, Hg°, dimethylmercury and solid-phase HgS cycles were interconnected. Compartments included air, water, sediment, suspended solids, plankton, benthos, and two generic fish categories based on diet. Bioenergetics equations for individual fish were extended to consider mercury dynamics for entire fish populations. Biota represented large methylmercury fluxes in the water column and were found to be important methylmercury repositories. In a simulation of a generic well-mixed shield lake in Ontario, the fish population contained about 4 times as much methylmercury as water. Uptake of methylmercury by individual walleye and yellow perch was predicted to be dominated by the food pathway (e.g., 99% of total uptake). Based on simulations for the generic shield lake, the watershed has the potential to be an important source of methylmercury in some shield lakes (exceeding in-situ methylation in the generic simulation). Methylation in both the water column and the sediments was simulated to be significant. Simulated net production of methylmercury in the generic shield lake was on the order of 0.05 to 0.15 µg methylmercury m⁻² year⁻¹ in the water column, with similar rates in sediments. Simulated rates of net methylation in polluted systems were higher. Fractions of total dissolved Hg(II) or methylmercury available for methylation and demethylation in aerobic waters were thermodynamically predicted to be small (e.g. <1%). Dissolved organic carbon and sulphides (if present) were thermodynamically predicted to dominate Hg(II) and methylmercury complexation in freshwaters. Hg(II) burial and outflows represented about 85-90% of total mercury losses for the generic shield lake (2-year hydraulic retention time). Volatilization of Hg°, produced by demethylation and Hg(II) reduction, represented the remaining 10-15% of losses. Considerable system-to-system variability is expected for sources and sinks of total mercury and methylmercury in shield lakes. In simulations of two mercury-contaminated environments, Lake St. Clair and Clay Lake, Ontario, sediment return of Hg(II) caused the lakes to be net sources of mercury to downstream areas. Sediment return of mercury could partially explain observed two-phase recoveries of fish methylmercury levels in some polluted systems. The time required for Hg(II) and methylmercury concentrations in various compartments to respond to changes in loads was simulated. There was a tendency towards relatively rapid internal cycling of Hg(II) and methylmercury, but slower overall system response times (e.g., years to decades to recover from flooding or pollution episodes). / Thesis / Master of Engineering (ME)
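As a rough illustration of the non-steady-state mass-balance approach described above, the sketch below integrates a two-compartment (water column and sediment) methylmercury budget with first-order methylation, demethylation, settling, sediment return, burial, and outflow terms. All rate constants, loads, and pool sizes are hypothetical placeholders chosen only to make the example run; they are not parameters from the thesis.

```python
# Illustrative two-compartment (water column + sediment) methylmercury budget for a
# well-mixed lake, integrated as a non-steady-state mass balance. All rate constants,
# loads, and pool sizes below are hypothetical placeholders, not values from the thesis.
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/yr)
k_methylation = 0.02    # net methylation of the Hg(II) pool in the water column
k_demethylation = 0.10  # demethylation loss of MeHg in the water column
k_outflow = 0.50        # hydraulic flushing (roughly a 2-year retention time)
k_settling = 0.30       # MeHg settling from water to sediment
k_return = 0.05         # MeHg return from sediment to water
k_burial = 0.02         # permanent burial in deep sediment

hg2_water = 10.0        # Hg(II) pool in the water column (arbitrary mass units)
mehg_load = 0.05        # external MeHg load, e.g. from the watershed (mass/yr)

def budget(t, y):
    """Rates of change of the water-column and sediment MeHg pools."""
    mehg_water, mehg_sed = y
    d_water = (mehg_load
               + k_methylation * hg2_water
               + k_return * mehg_sed
               - (k_demethylation + k_outflow + k_settling) * mehg_water)
    d_sed = k_settling * mehg_water - (k_return + k_burial) * mehg_sed
    return [d_water, d_sed]

sol = solve_ivp(budget, (0.0, 50.0), [0.0, 0.0])  # integrate 50 years from a clean start
print("MeHg in water after 50 yr:", sol.y[0, -1])
print("MeHg in sediment after 50 yr:", sol.y[1, -1])
```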
405

Thermal Modeling and System Identification of In-Situ, Through-Ventilated Industrial DC Machines

Jackiw, Isaac January 2018 (has links)
Concerns about the impact of greenhouse gases (GHG) are leading heavy-industry users to explore energy-reduction strategies, such as conserving electricity in ventilated machines through the use of variable-cooling systems. For these strategies to be implemented, a thermal model of the system is required. This study focuses on the thermal modelling of through-ventilated, industrial, electric machines that employ a variable-cooling strategy, using only on-line data collected during regular machine operation. Two empirical thermal models were developed: a first-order model and a second-order model extended from the first-order model based on its performance. By means of an energy balance, the first-order model provided an estimate of the motor temperature based on only a single variable, and could therefore be fit directly to complete process-cycle data to determine the parameter. Over the 18 process-cycle samples, this parameter was found to vary by as much as ±10%; when a generalized model was proposed using the median value of the parameter, the maximum error seen over the process cycles was 9.0 °C, with a maximum average error over a process cycle of 4.2 °C. An effort was made to determine the effects of reduced cooling on the model by performing reduced-cooling experiments during machine cool-downs; however, the thermal time constant, which directly relates the heat-transfer rate to the system capacitance, was found to vary by as much as 47%, suggesting that the system's capacitance was changing and that the first-order model was not accurate enough to distil these effects. A key observation of the first-order model's performance was that it would under-predict the machine temperature in heating and over-predict it in cooling, suggesting that an additional heat-transfer path existed to the cooling air through some additional thermal capacitance. In an effort to include higher-order effects so that reduced-cooling effects could be established, a second-order model was developed by adding a lumped node to the system, introducing the supposed additional conduction/capacitive path, where the heat-generating node was considered analogous to the motor's armature and the additional node was treated as a thermal sink. This model was then numerically fit to the cool-down data for both maximum and reduced flow-rate cases in order to identify the system's main heat-transfer parameters; however, once again, a large variance in the parameters was found. Through model simulation, this was determined to be the result of the system not starting at a steady-state temperature distribution, which resulted in the parameter estimation under-predicting the true values. As such, the upper limits of the parameter spreads were used to identify the model. Assuming the system's heat generation was due to Joule losses only, the second-order model was found to perform marginally better than the first-order model, with a maximum error of 8.6 °C and a maximum average error of 3.3 °C over the process cycles. Though the second-order model typically performed better than the first-order model in cooling, it was found to vary between over-predicting and under-predicting the machine temperature, indicating that additional, higher-order core losses may play a role in the heating of the machine.
Although the first-order model was found to be slightly less accurate than the second-order model, it has a much simpler and far less intrusive identification scheme, with only a relatively small loss in accuracy. As a result, it would be possible to use the first-order model for on-line temperature monitoring of the machine by performing tests during operation in which the cooling rate is reduced to identify the change in the model parameter. However, a sufficient factor of safety (≈10 °C) would be required to account for the under-estimation that occurs in heating. For the second-order model to be implemented, more controlled testing is required in order to properly discern the effects of reduced cooling from the effects of the initial temperature distribution. Additionally, the inclusion of core losses in the machine heat-generation term should be investigated to improve model performance. / Thesis / Master of Applied Science (MASc)
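A minimal sketch of the kind of single-parameter, energy-balance model described above is given below: a lumped thermal mass heated by Joule losses and cooled by ventilation air. The thermal resistance, time constant, and load profile are hypothetical and are not the values identified in the thesis.

```python
# Minimal first-order lumped thermal model sketch: C*dT/dt = P_loss - (T - T_air)/R_th,
# rewritten with the thermal time constant tau = R_th * C. Parameter values are
# hypothetical placeholders, not quantities identified from the machine data.
import numpy as np

tau = 1800.0    # thermal time constant R_th * C (s), hypothetical
R_th = 0.05     # thermal resistance to the cooling air (K/W), hypothetical
T_air = 25.0    # cooling-air temperature (degC)
dt = 1.0        # integration time step (s)

def simulate(power_profile, T0=T_air):
    """Forward-Euler integration of dT/dt = (R_th * P_loss - (T - T_air)) / tau."""
    T = T0
    history = []
    for P in power_profile:
        T += dt * (R_th * P - (T - T_air)) / tau
        history.append(T)
    return np.array(history)

# One hour under load (2 kW of losses) followed by one hour of cool-down
profile = np.concatenate([np.full(3600, 2000.0), np.zeros(3600)])
temps = simulate(profile)
print("Peak temperature estimate: %.1f degC" % temps.max())
```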
406

On generalized Jónsson classes.

Sevee, Denis Edward January 1972 (has links)
No description available.
407

The Aggregated Spatial Logit Model: Theory, Estimation And Application

Ferguson, Richard Mark January 1995 (has links)
In problems of spatial choice, the choice set is often more aggregated than the one considered by decision-makers, typically because choice data are available only at the aggregate level. These aggregate choice units will exhibit heterogeneity in utility and in size. To be consistent with utility maximization, a choice model must estimate choice probabilities on the basis of the maximum utility within heterogeneous aggregates. The ordinary multinomial logit model (OMNL) applied to aggregate choice units fails this criterion as it is estimated on the basis of average utility. In this thesis, the aggregated spatial logit model, which utilizes the theory underlying the nested logit model to estimate the appropriate maximum utilities of aggregates, is derived and discussed. Initially, the theoretical basis for the model is made clear and an asymptotic version of the model is derived. Secondly, the model is tested in a simulated environment to demonstrate that the OMNL model lacks the generality of the aggregated model in the presence of heterogeneous aggregates. Thirdly, full endogenous estimation of the aggregated model is studied with a view toward finding the best optimization algorithm. Finally, with all the elements in place, the model is tested in an application of migration from the Canadian Atlantic Provinces. / Doctor of Philosophy (PhD)
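In textbook nested-logit notation (given here for orientation; it is not necessarily the exact specification estimated in the thesis), the key quantity is the "log-sum" or inclusive value, which represents an aggregate J by its expected maximum utility (up to an additive constant) rather than by its average utility:

\[
\tilde V_J = \frac{1}{\mu} \ln \sum_{j \in J} e^{\mu V_j}, \qquad
P(J) = \frac{e^{\tilde V_J}}{\sum_{K} e^{\tilde V_K}},
\]

where V_j is the systematic utility of elemental alternative j and \mu is the scale of the Gumbel error terms. If all V_j within J are equal, \tilde V_J reduces to V_j + (1/\mu)\ln n_J, the familiar aggregate-size correction; an OMNL model fit directly to aggregates discards the within-aggregate heterogeneity that \tilde V_J retains.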
408

Bayesian Multilevel-multiclass Graphical Model

Lin, Jiali 21 June 2019 (has links)
The Gaussian graphical model has been a popular tool for investigating conditional dependency between random variables by estimating sparse precision matrices. Two problems are discussed. One is to learn multiple multilevel Gaussian graphical models from unknown classes. The other is to select Gaussian processes in semiparametric multi-kernel machine regression. The first problem is approached with Gaussian graphical models. In this project, I consider learning multiple connected graphs among multilevel variables from unknown classes. I estimate the classes of the observations from the mixture distributions by evaluating the Bayes factor and learn the network structures by fitting a novel neighborhood selection algorithm. This approach is able to identify the class membership and to reveal network structures for multilevel variables simultaneously. Unlike most existing methods, which solve this problem with frequentist approaches, I assess an alternative: a novel hierarchical Bayesian approach that incorporates prior knowledge. The second problem focuses on the analysis of correlated high-dimensional data, which has been useful in many applications. In this work, I consider the problem of detecting signals with a semiparametric regression model that can study the effects of fixed covariates (e.g., clinical variables) and sets of elements (e.g., pathways of genes). I model the unknown high-dimensional functions of multiple sets via multi-Gaussian kernel machines to account for the possibility that elements within the same set interact with each other. Hence, my variable selection can be considered Gaussian process selection. I develop my Gaussian process selection under the Bayesian variable selection framework. / Doctor of Philosophy / A network can be represented by nodes and edges between nodes. Under the assumption of a multivariate Gaussian distribution, a graphical model is called a Gaussian graphical model, where edges are undirected. The Gaussian graphical model has been studied for years to understand the conditional dependency structure between random variables. Two problems are discussed. In the first project, I consider learning multiple connected graphs among multilevel variables from unknown classes. I estimate the classes of the observations from the mixture distributions. This approach is able to identify the class membership and to reveal network structures for multilevel variables simultaneously. Unlike most existing methods, which solve this problem with frequentist approaches, I assess an alternative: a novel hierarchical Bayesian approach that incorporates prior knowledge. The second problem focuses on the analysis of correlated high-dimensional data, which has been useful in many applications. In this work, I consider the problem of detecting signals with a semiparametric regression model that can study the effects of fixed covariates (e.g., clinical variables) and sets of elements (e.g., pathways of genes). I model the unknown high-dimensional functions of multiple sets via multi-Gaussian kernel machines to account for the possibility that elements within the same set interact with each other. Hence, my variable selection can be considered Gaussian process selection. I develop my Gaussian process selection under the Bayesian variable selection framework.
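For readers unfamiliar with the underlying object, the short sketch below illustrates the precision-matrix/graph correspondence using a frequentist graphical-lasso fit from scikit-learn. It is only an illustration of how zeros in an estimated precision matrix define the edges of a Gaussian graphical model; it is not the Bayesian multilevel neighborhood-selection method developed in the thesis, and the simulated network is a toy example.

```python
# Toy illustration: zeros in the estimated precision matrix define the (undirected)
# edges of a Gaussian graphical model. This uses a frequentist graphical lasso,
# not the Bayesian approach developed in the thesis.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Simulate data from a sparse 5-node Gaussian graphical model
true_precision = np.eye(5)
true_precision[0, 1] = true_precision[1, 0] = 0.4
true_precision[2, 3] = true_precision[3, 2] = -0.3
cov = np.linalg.inv(true_precision)
X = rng.multivariate_normal(np.zeros(5), cov, size=500)

model = GraphicalLasso(alpha=0.05).fit(X)
prec = model.precision_

# An edge i--j is present when the corresponding precision entry is non-zero
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if abs(prec[i, j]) > 1e-3]
print("Estimated edges:", edges)
```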
409

Reliability Transform Method

Young, Robert Benjamin 22 July 2003 (has links)
Since the end of the Cold War, the United States has been the single dominant naval power in the world. The emphasis of the last decade has been to reduce cost while maintaining this status. As the Navy's infrastructure decreases, so too does its ability to be an active participant in all aspects of ship operations and design. One way the Navy has achieved large savings is by using the Military Sealift Command to manage day-to-day operations of the Navy's auxiliary and underway replenishment ships. While these ships are an active part of the Navy's fighting force, they are infrequently put in harm's way. The natural progression in the design of these ships is to have them fully classified under current American Bureau of Shipping (ABS) rules, as they closely resemble commercial ships. The first new design to be fully classed under ABS is the T-AKE. The Navy and ABS consider the T-AKE program a trial to determine whether a partnership between the two organizations can extend to the classification of all new naval ships. A major difficulty in this venture is how to translate the knowledge base that led to the development of current military specifications into rules that ABS can use for future ships. The specific task required by the Navy in this project is to predict the inherent availability of the new T-AKE class ship. To accomplish this task, the reliability of T-AKE equipment and machinery must be known. Under normal conditions, reliability data would be obtained from past ships with a similar mission, equipment, and machinery. Due to the unique nature of the T-AKE acquisition, this is not possible. Because of the use of commercial off-the-shelf (COTS) equipment and machinery, military equipment and machinery reliability data cannot be used directly to predict T-AKE availability. This problem is compounded by the fact that existing COTS equipment and machinery reliability data developed in commercial applications may not be applicable to a military application. A method for deriving reliability data for commercial equipment and machinery adapted or used in military applications is required. A Reliability Transform Method is developed that allows the interpolation of reliability data between commercial equipment and machinery operating in a commercial environment, commercial equipment and machinery operating in a military environment, and military equipment and machinery operating in a military environment. The reliability data for T-AKE is created using this Reliability Transform Method and the commercial reliability data. The reliability data is then used to calculate the inherent availability of T-AKE. / Master of Science
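For context, the inherent availability that the project is asked to predict is conventionally defined from mean time between failures (MTBF) and mean time to repair (MTTR), with a series arrangement of independent items taking the product of the item availabilities. These are standard reliability-engineering definitions, not formulas quoted from the thesis:

\[
A_i = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}, \qquad
A_{i,\mathrm{system}} = \prod_{k} A_{i,k} \quad \text{(independent items in series)} .
\]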
410

A Finite Element Model of the Pregnant Female Occupant: Analysis of Injury Mechanisms and Restraint Systems

Moorcroft, David Michael 16 August 2002 (has links)
For women of reproductive age, automobile crashes are the leading cause of death worldwide. It has been estimated that 40,000 women in the second half of pregnancy are involved in motor-vehicle crashes each year, and that between 300 and 3,800 of these women will experience a fetal loss. Placental abruption has been shown to account for 50% to 70% of fetal losses in motor vehicle crashes. While there is a growing database of medical case studies and retrospective studies describing the outcome of motor vehicle accidents involving pregnant occupants, as well as the effect of seatbelts on fetal survival, previous research has not produced a tool that engineers can use to improve the safety of a pregnant occupant in a motor vehicle. The goal of this project was to develop a model that can quantify the stresses and strains on the uterus of a pregnant woman in order to predict the risk of injury. A finite element uterine model of a 7-month pregnant female was created and integrated into a multi-body human model. Unrestrained, 3-pt belt, and 3-pt belt plus airbag tests were simulated at speeds ranging from 13 kph to 55 kph. Peak uterine strain was found to be a good predictor of fetal outcome. The uterine strain sufficient to cause placental abruption was seen in simulations known to have greater than 75% risk of adverse fetal outcome. The head injury criterion (HIC) and the viscous criterion (V*C) were examined as a check of overall occupant protection. The 3-pt belt plus airbag restraint provided the greatest amount of protection to the mother. The model proved successful at predicting risk of fetal demise from placental abruption and verified experimental findings noting the importance of proper restraint use for the pregnant occupant. / Master of Science
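The two occupant-protection checks mentioned above have standard definitions, reproduced here for reference (the thesis may apply particular time-window limits, e.g. HIC36 or HIC15):

\[
\mathrm{HIC} = \max_{t_1, t_2} \left\{ (t_2 - t_1) \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\, dt \right]^{2.5} \right\},
\qquad
V\!*\!C = \max_t \bigl[ V(t)\, C(t) \bigr],
\]

where a(t) is the resultant head acceleration in g, the window t_2 - t_1 is limited to a prescribed duration, V(t) is the chest deformation velocity, and C(t) is the normalized chest compression.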
