401 |
Simulation and analysis of the triangular Ising model / Metcalf, Bruce Dale January 1974
No description available.
|
402 |
Continuous Model Theory and Finite-Representability Between Banach Spaces / Conley, Sean 05 1900
In this thesis, we consider the problem of capturing finite-representability between Banach spaces using the tools of continuous model theory. We introduce predicates and additional sorts to capture finite-representability and show that these can be used to expand the language of Banach spaces. We then show that the class of infinite-dimensional Banach spaces expanded with this additional structure forms an elementary class K_G, and conclude that the theory T_G of K_G is interpretable in T^{eq}, where T is the theory of infinite-dimensional Banach spaces. Finally, we show that existential equivalence in a reduct of the language implies finite-representability. Relevant background on continuous model theory and Banach space theory is provided. / Thesis / Master of Science (MSc)
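For reference, the standard Banach-space notion the thesis sets out to capture can be stated as follows (a sketch of the usual definition, not the thesis's exact formalization in continuous logic):

```latex
% X is finitely representable in Y if every finite-dimensional subspace
% of X is (1+epsilon)-isomorphic to a subspace of Y:
\forall E \subseteq X \text{ with } \dim E < \infty,\ \forall \varepsilon > 0,\
\exists F \subseteq Y \text{ such that } d_{BM}(E, F) \leq 1 + \varepsilon,
% where d_{BM} denotes the Banach--Mazur distance between E and F.
```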
|
403 |
A Mechanistic Model to Examine Mercury in Aquatic Systems / Harris, Reed 03 1900
Elevated mercury levels have been observed in a wide variety of aquatic systems. A mass balance, non-steady-state model was developed to examine mercury cycling in lakes and reservoirs. Hg(II), methylmercury, Hg°, dimethylmercury and solid-phase HgS cycles were interconnected. Compartments included air, water, sediment, suspended solids, plankton, benthos, and two generic fish categories based on diet. Bioenergetics equations for individual fish were extended to consider mercury dynamics for entire fish populations. Biota represented large methylmercury fluxes in the water column and were found to be important methylmercury repositories. In a simulation of a generic well-mixed shield lake in Ontario, the fish population contained about 4 times as much methylmercury as the water. Uptake of methylmercury by individual walleye and yellow perch was predicted to be dominated by the food pathway (e.g., 99% of total uptake).
Based on simulations for the generic shield lake, the watershed has the potential to be an important source of methylmercury in some shield lakes (exceeding in-situ methylation in the generic simulation). Methylation in both the water column and the sediments was simulated to be significant. Simulated net production of methylmercury in the generic shield lake was on the order of 0.05 to 0.15 µg methylmercury m⁻² year⁻¹ in the water column, with similar rates in sediments. Simulated rates of net methylation in polluted systems were higher. Fractions of total dissolved Hg(II) or methylmercury available for methylation and demethylation in aerobic waters were thermodynamically predicted to be small (e.g., <1%). Dissolved organic carbon and sulphides (if present) were thermodynamically predicted to dominate Hg(II) and methylmercury complexation in freshwaters. Hg(II) burial and outflows represented about 85-90% of total mercury losses for the generic shield lake (2-year hydraulic retention time). Volatilization of Hg°, produced by demethylation and Hg(II) reduction, represented the remaining 10-15% of losses. Considerable system-to-system variability is expected for sources and sinks of total mercury and methylmercury in shield lakes. In simulations of two mercury-contaminated environments, Lake St. Clair and Clay Lake, Ontario, sediment return of Hg(II) caused the lakes to be net sources of mercury to downstream areas. Sediment return of mercury could partially explain the observed two-phase recoveries of fish methylmercury levels in some polluted systems. The time required for Hg(II) and methylmercury concentrations in various compartments to respond to changes in loads was simulated. There was a tendency towards relatively rapid internal cycling of Hg(II) and methylmercury, but slower overall system response times (e.g., years to decades to recover from flooding or pollution episodes). / Thesis / Master of Engineering (ME)
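To make the structure of such a model concrete, the following is a minimal sketch of a non-steady-state, interconnected mass balance of the kind described above (compartments are reduced to water and sediment, and all rate constants and loads are illustrative placeholders, not the thesis's calibrated values):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder first-order rate constants (1/yr) and external load.
k_meth, k_demeth = 0.05, 0.10        # methylation / demethylation, water
k_meth_s, k_demeth_s = 0.05, 0.10    # methylation / demethylation, sediment
k_settle, k_return = 0.30, 0.02      # settling to / return from sediment
k_out, k_burial = 0.50, 0.05         # outflow flushing, deep burial
load = 10.0                          # external Hg(II) load to the water column

def mercury_balance(t, y):
    hg2_w, mehg_w, hg2_s, mehg_s = y  # Hg(II) and MeHg in water, sediment
    d_hg2_w = (load + k_demeth * mehg_w + k_return * hg2_s
               - (k_meth + k_settle + k_out) * hg2_w)
    d_mehg_w = (k_meth * hg2_w + k_return * mehg_s
                - (k_demeth + k_settle + k_out) * mehg_w)
    d_hg2_s = (k_settle * hg2_w + k_demeth_s * mehg_s
               - (k_meth_s + k_return + k_burial) * hg2_s)
    d_mehg_s = (k_settle * mehg_w + k_meth_s * hg2_s
                - (k_demeth_s + k_return + k_burial) * mehg_s)
    return [d_hg2_w, d_mehg_w, d_hg2_s, d_mehg_s]

# Simulate the decades-long response to a constant load from clean conditions.
sol = solve_ivp(mercury_balance, (0.0, 50.0), [0.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 50.0, 201))
print(sol.y[:, -1])  # near-steady compartment masses after 50 years
```

The slow approach of the sediment compartments to steady state in a sketch like this mirrors the abstract's point that internal cycling is relatively rapid while whole-system recovery takes years to decades.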
|
404 |
Thermal Modeling and System Identification of In-Situ, Through-Ventilated Industrial DC Machines / Jackiw, Isaac January 2018
Concerns of the impact of greenhouse gasses (GHG) are leading heavy industry users to explore energy reduction strategies such as the conservation of electricity use in ventilated machines by the use of variable-cooling systems. For these strategies to be implemented, a thermal model of the system is required. This study focuses on the thermal modelling of through-ventilated, industrial, electric machines that employ a variable-cooling strategy, using only on-line data collected during regular machine operation. Two empirical thermal models were developed: a first-order model, and a second-order model which was extended from the first-order based on its performance.
By means of an energy balance, the first-order model was able to estimate the motor temperature using only a single parameter, and thus could be fit directly to complete process-cycle data to determine that parameter. Over the 18 process-cycle samples, this parameter was found to vary by as much as ±10%; therefore, when a generalized model was proposed using the median value of the parameter, the maximum error seen over the process cycles was 9.0 °C, with a maximum average error over a process cycle of 4.2 °C. An effort was made to determine the effects of reduced cooling on the model by performing reduced-cooling experiments during machine cool-downs; however, the thermal time constant, which directly relates the heat-transfer rate to the system capacitance, was found to vary by as much as 47%, suggesting that the system's capacitance was changing and that the first-order model was not accurate enough to distil these effects. A key observation of the first-order model's performance was that it would under-predict the machine temperature in heating and over-predict it in cooling, suggesting that an additional heat-transfer path existed to the cooling air through some additional thermal capacitance.
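For illustration, a first-order lumped-capacitance model of this general kind reduces, during a cool-down with no heat generation, to a single exponential whose time constant is the fitted parameter; the sketch below (synthetic data and illustrative names, not the study's actual identification scheme) shows one way such a fit could be performed:

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order energy balance: C * dT/dt = q_gen - (T - T_in) / R_th.
# With q_gen = 0 and constant inlet air temperature T_in, the solution
# is a single exponential with thermal time constant tau = R_th * C.
def cooldown(t, tau, T0, T_in):
    return T_in + (T0 - T_in) * np.exp(-t / tau)

# Synthetic cool-down record: time in seconds, temperature in degC.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 7200.0, 25)
T_meas = 25.0 + 55.0 * np.exp(-t / 1800.0) + rng.normal(0.0, 0.5, t.size)

(tau_hat, T0_hat, Tin_hat), _ = curve_fit(cooldown, t, T_meas,
                                          p0=(1500.0, 80.0, 25.0))
print(f"identified thermal time constant: {tau_hat:.0f} s")
```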
In an effort to include higher-order effects so that reduced-cooling effects could be established, a second-order model was developed by adding an additional lumped node to the system, introducing the supposed additional conductive/capacitive path, where the heat-generating node was considered analogous to the motor's armature and the additional node was treated as a thermal sink. This model was then numerically fit to the cool-down data for both the maximum and reduced flow-rate cases in order to identify the system's main heat-transfer parameters; however, once again, a large variance in the parameters was found. Through model simulation, this was determined to be the result of the system not starting at a steady-state temperature distribution, which caused the parameter estimation to under-predict the true values. As such, the upper limits of the parameter spreads were used to identify the model. Assuming the system's heat generation was due to Joule losses only, the second-order model was found to perform marginally better than the first-order model, with a maximum error of 8.6 °C and a maximum average error of 3.3 °C over the process cycles. Though the second-order model typically performed better than the first-order model in cooling, it was found to vary between over-predicting and under-predicting the machine temperature, indicating that additional, higher-order core losses may play a role in the heating of the machine.
Although the first-order model was found to be slightly less accurate than the second-order model, it has a much simpler and far less intrusive identification scheme, with a relatively small loss in accuracy. As a result, it would be possible to use the first-order model for on-line temperature monitoring of the machine by performing tests during operation in which the cooling rate is reduced to identify the change in the model parameter. However, a sufficient factor of safety (≈10 °C) would be required to account for the under-estimation that occurs in heating. For the second-order model to be implemented, more controlled testing is required in order to properly discern the effects of reduced cooling from the effects of the initial temperature distribution. Additionally, the inclusion of core losses in the machine's heat-generation term should be investigated to improve model performance. / Thesis / Master of Applied Science (MASc)
|
405 |
On generalized Jónsson classes. / Sevee, Denis Edward January 1972
No description available.
|
406 |
The Aggregated Spatial Logit Model: Theory, Estimation and Application / Ferguson, Richard Mark January 1995
In problems of spatial choice, the choice set is often more aggregated than the one considered by decision-makers, typically because choice data are available only at the aggregate level. These aggregate choice units will exhibit heterogeneity in utility and in size. To be consistent with utility maximization, a choice model must estimate choice probabilities on the basis of the maximum utility within heterogeneous aggregates. The ordinary multinomial logit model (OMNL) applied to aggregate choice units fails this criterion as it is estimated on the basis of average utility. In this thesis, the aggregated spatial logit model, which utilizes the theory underlying the nested logit model to estimate the appropriate maximum utilities of aggregates, is derived and discussed. Initially, the theoretical basis for the model is made clear and an asymptotic version of the model is derived. Secondly, the model is tested in a simulated environment to demonstrate that the OMNL model lacks the generality of the aggregated model in the presence of heterogeneous aggregates. Thirdly, full endogenous estimation of the aggregated model is studied with a view toward finding the best optimization algorithm. Finally, with all the elements in place, the model is tested in an application of migration from the Canadian Atlantic Provinces. / Doctor of Philosophy (PhD)
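A small sketch of the mechanism that separates the two models may help: the aggregated model replaces a zone's average utility with its inclusive value (logsum), which stands in for the expected maximum utility inside the aggregate. The zone contents and utilities below are illustrative, not the thesis's specification:

```python
import numpy as np

def aggregate_choice_probs(zone_utils, theta=1.0):
    # Inclusive value I_g = log(sum_j exp(V_j)) over the elemental
    # alternatives j inside aggregate g; an OMNL applied to aggregates
    # would use mean(V_j) here and understate large, heterogeneous zones.
    inclusive = np.array([np.log(np.sum(np.exp(v))) for v in zone_utils])
    expu = np.exp(theta * inclusive)
    return expu / expu.sum()

# Two aggregates with equal average utility (1.0); the larger, more
# heterogeneous zone correctly attracts a greater share under the logsum.
zones = [np.array([1.0, 1.0]), np.array([0.0, 2.0, 0.5, 1.5])]
print(aggregate_choice_probs(zones))
```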
|
407 |
Bayesian Multilevel-multiclass Graphical Model / Lin, Jiali 21 June 2019
The Gaussian graphical model has been a popular tool for investigating conditional dependency between random variables by estimating sparse precision matrices. Two problems are discussed. The first is to learn multiple Gaussian graphical models over multilevel variables from unknown classes; the second is to select Gaussian processes in semiparametric multi-kernel machine regression.
The first problem is approached through the Gaussian graphical model. In this project, I consider learning multiple connected graphs among multilevel variables from unknown classes. I estimate the classes of the observations from the mixture distributions by evaluating the Bayes factor, and learn the network structures by fitting a novel neighborhood selection algorithm. This approach is able to identify class membership and reveal network structures for multilevel variables simultaneously. Unlike most existing methods, which solve this problem by frequentist approaches, I pursue an alternative: a novel hierarchical Bayesian approach that incorporates prior knowledge.
The second problem focuses on the analysis of correlated high-dimensional data, which has been useful in many applications. In this work, I consider the problem of detecting signals with a semiparametric regression model that can study the effects of fixed covariates (e.g. clinical variables) and sets of elements (e.g. pathways of genes). I model the unknown high-dimensional functions of multiple sets via multi-Gaussian kernel machines to allow for the possibility that elements within the same set interact with each other. Hence, my variable selection can be considered Gaussian process selection. I develop my Gaussian process selection under the Bayesian variable selection framework. / Doctor of Philosophy / A network can be represented by nodes and edges between nodes. Under the assumption of a multivariate Gaussian distribution, a graphical model is called a Gaussian graphical model, where edges are undirected. The Gaussian graphical model has been studied for years to understand the conditional dependency structure between random variables. Two problems are discussed.
In the first project, I consider learning multiple connected graphs among multilevel variables from unknown classes. I estimate the classes of the observations from the mixture distributions. This approach is able to identify class membership and reveal network structures for multilevel variables simultaneously. Unlike most existing methods, which solve this problem by frequentist approaches, I pursue an alternative: a novel hierarchical Bayesian approach that incorporates prior knowledge.
The second problem focuses on the analysis of correlated high-dimensional data, which has been useful in many applications. In this work, I consider the problem of detecting signals with a semiparametric regression model that can study the effects of fixed covariates (e.g. clinical variables) and sets of elements (e.g. pathways of genes). I model the unknown high-dimensional functions of multiple sets via multi-Gaussian kernel machines to allow for the possibility that elements within the same set interact with each other. Hence, my variable selection can be considered Gaussian process selection. I develop my Gaussian process selection under the Bayesian variable selection framework.
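As a generic illustration of the neighborhood-selection idea used for graph estimation (a frequentist, lasso-based stand-in for exposition only, not the novel hierarchical Bayesian algorithm developed in the thesis):

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_select(X, alpha=0.1):
    # Regress each variable on all the others; nonzero coefficients
    # mark edges of the estimated conditional-dependency graph.
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = coef != 0
    return adj | adj.T  # symmetrize with the "OR" rule

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))  # placeholder data, no true structure
print(neighborhood_select(X))
```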
|
408 |
Reliability Transform Method / Young, Robert Benjamin 22 July 2003
Since the end of the Cold War, the United States has been the single dominant naval power in the world. The emphasis of the last decade has been to reduce cost while maintaining this status. As the Navy's infrastructure decreases, so too does its ability to be an active participant in all aspects of ship operations and design. One way the Navy has achieved large savings is by using the Military Sealift Command to manage the day-to-day operations of the Navy's auxiliary and underway replenishment ships. While these ships are an active part of the Navy's fighting force, they are infrequently put in harm's way. The natural progression in the design of these ships is to have them fully classified under current American Bureau of Shipping (ABS) rules, as they closely resemble commercial ships. The first new design to be fully classed under ABS is the T-AKE. The Navy and ABS consider the T-AKE program a trial to determine whether a partnership between the two organizations can extend to the classification of all new naval ships. A major difficulty in this venture is how to translate the knowledge base that led to the development of current military specifications into rules that ABS can use for future ships.
The specific task required by the Navy in this project is to predict the inherent availability of the new T-AKE class ship. To accomplish this task, the reliability of T-AKE equipment and machinery must be known. Under normal conditions, reliability data would be obtained from past ships with a similar mission, equipment and machinery. Due to the unique nature of the T-AKE acquisition, this is not possible. Because of the use of commercial off-the-shelf (COTS) equipment and machinery, military equipment and machinery reliability data cannot be used directly to predict T-AKE availability. This problem is compounded by the fact that existing COTS equipment and machinery reliability data developed in commercial applications may not be applicable to a military application. A method for deriving reliability data for commercial equipment and machinery adapted or used in military applications is required.
A Reliability Transform Method is developed that allows the interpolation of reliability data between commercial equipment and machinery operating in a commercial environment, commercial equipment and machinery operating in a military environment, and military equipment and machinery operating in a military environment. The reliability data for T-AKE is created using this Reliability Transform Method and the commercial reliability data. The reliability data is then used to calculate the inherent availability of T-AKE. / Master of Science
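One plausible shape for such a transform is multiplicative: scale a COTS item's commercial failure rate by the environment ratio observed for comparable equipment that has operated in both environments. The sketch below illustrates the idea only; the factor values and names are placeholders, not data or formulas from the thesis:

```python
# Interpolate a military-environment failure rate for a COTS item from
# its commercial rate and a reference item known in both environments.
def transform_failure_rate(lam_cots_commercial,
                           lam_ref_military, lam_ref_commercial):
    env_factor = lam_ref_military / lam_ref_commercial
    return lam_cots_commercial * env_factor

lam_pump = 2.0e-5  # failures/hour for a COTS pump in commercial service
lam_mil = transform_failure_rate(lam_pump,
                                 lam_ref_military=5.0e-5,
                                 lam_ref_commercial=2.5e-5)
mtbf = 1.0 / lam_mil
mttr = 8.0  # assumed mean time to repair, hours
print(f"inherent availability: {mtbf / (mtbf + mttr):.5f}")
```

The last line uses the standard definition of inherent availability, A = MTBF / (MTBF + MTTR), which is the quantity the project is asked to predict.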
|
409 |
Methods for Naval Ship Concept Exploration Interfacing Model Center and ASSET with Machinery System Tools / Strock, Justin William 24 June 2008
In response to the Fiscal Year 2006 National Defense Authorization Act, the US Navy conducted an evaluation of alternative propulsion methods for surface combatants and amphibious warfare ships. The study looked at current and future propulsion technology and propulsion alternatives for these warships. In their analysis the Navy developed 23 ship concepts, only 7 of which were variants of medium-size surface combatants (MSC, 21,000-26,000 MT). The report to Congress was based on a cost analysis and operational effectiveness analysis of these variants. The conclusions drawn were based only on the ship variants developed, not on a representative sample of the feasible, non-dominated designs in the design space.
This thesis revisits the Alternative Propulsion Study results for an MSC, which were constrained by the inability of the Navy's design tools to adequately search the full design space. This thesis also assesses automated methods to improve the APS approach, and examines a range of power generation alternatives using realistic operational profiles and requirements to develop a notional medium surface combatant (CGXBMD). It is essential to base conclusions on the non-dominated design space, and this new approach uses a multi-objective optimization to find non-dominated designs in the specified design space, together with new visualization tools to assess the characteristics of these designs. This automated approach and the new tools are evaluated in the context of the revisited study. / Master of Science
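For reference, a non-dominated (Pareto) filter of the kind such an optimization relies on can be written compactly; the helper and the cost/effectiveness pairs below are illustrative, not output from the Model Center/ASSET toolchain:

```python
import numpy as np

def non_dominated(points):
    # Keep designs for which no other design is at least as good in
    # every objective and strictly better in at least one (minimization).
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Objectives to minimize: (cost, -effectiveness). Placeholder variants.
designs = [(900.0, -0.70), (950.0, -0.72), (980.0, -0.71), (1100.0, -0.90)]
print(non_dominated(designs))  # indices of the non-dominated variants
```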
|
410 |
A Nonlinear Finite Element Model of the Human Eye to Investigate Ocular Injuries From Night Vision Goggles / Power, Erik D. 26 April 2001
Airbags have been saving lives in automobile crashes for many years and are now being used in helicopters. The purpose of this study was to investigate the potential for ocular injuries to helicopter pilots wearing night vision goggles when the airbag is deployed. A nonlinear finite element model of the human eye was constructed. Ocular structures never before included in finite element models of the eye, such as the fatty tissue, extraocular muscles, and bony orbit, were included in this model. In addition, the model includes material properties up to rupture, making it suitable for large-deformation applications.
The model was imported into MADYMO and used to determine the worst-case position of a helicopter pilot wearing night vision goggles, evaluated as the greatest von Mises stress in the eye when the airbag is deployed. The worst-case position was achieved by minimizing the distance between the eyes and goggles, having the occupant look directly into the airbag, and making initial contact with the airbag halfway through its full deployment. Removing the extraocular muscles decreased the stress sustained by the eye. Simulations were performed with the goggles both remaining fastened to and breaking away from the aviator helmet. Finally, placing a protective lens in front of the eyes was found to reduce the stress to the eye but increase the force experienced by the surrounding orbital bones.
The finite element model of the eye proved effective at evaluating the experimental boundary conditions, and could be used in the future to evaluate impact loading on eyes that have been surgically corrected and to model the geometry of the orbital bones. / Master of Science
|