551

Calculation of observables in the nuclear 0s1d shell using new model-independent two-body interactions.

Mkhize, Percival Sbusiso. January 2007 (has links)
The objective of the present investigation is to calculate observables for a variety of nuclear phenomena using the nuclear shell model, and to compare the results with experimental data as well as with the older interaction. The shell-model code OXBASH was employed for the calculations. Quantities investigated include energy level schemes, static magnetic dipole and electric quadrupole moments, electromagnetic transition probabilities, spectroscopic factors and beta decay rates.
552

Patient-Specific Modelling of the Cardiovascular System for Diagnosis and Therapy Assistance in Critical Care

Starfinger, Christina January 2008 (has links)
Critical care is provided to patients who require intensive monitoring and often the support of failing organs. Cardiovascular and circulatory diseases and dysfunctions are extremely common in this group of patients. However, cardiac disease states are highly patient-specific, and every patient has a unique expression of the disease or underlying dysfunction. Clinical staff must consider many combinations of different disease scenarios based on frequently conflicting or confusing measurements of a patient’s condition. Successful diagnosis and treatment therefore often rely on the experience and intuition of clinical staff, increasing the likelihood of clinical errors. A computerized cardiovascular system (CVS) model that uniquely represents the patient and underlying dysfunction or disease is developed. The CVS model is extended to account for the known physiologic mechanisms during spontaneous breathing and mechanical ventilation, thus increasing the model’s accuracy in representing a critically ill patient in the intensive care unit (ICU). The extended CVS model is validated by correctly simulating several well-known circulatory mechanisms and interactions. An integral-based system parameter identification method is refined and extended to work with much smaller subsets of input data, as typically available in critical care units. For example, instead of requiring the continuous ventricle pressure and volume waveforms, only the end-systolic (ESV) and end-diastolic (EDV) volume values are needed, which can be further reduced to using only the global end-diastolic volume (GEDV) and estimating the ventricle volumes. These changes make the CVS model and its application to monitoring more applicable to a clinical environment. The CVS model and integral-based parameter identification approach are validated on data from porcine experiments of pulmonary embolism (PE), positive end-expiratory pressure (PEEP) titrations at different volemic levels, and two different studies of induced endotoxic (septic) shock. They are also validated on three adrenaline dosing data sets obtained from published studies in humans. Overall, these studies are used to show how the model and realistic clinical measurements may be used to provide a clear clinical picture in real time. A wide range of clinically measured hemodynamics was successfully captured over time. The integral-based method identified all model parameters, typically with less than 10% error versus clinically measured pressure and volume signals. Moreover, patient-specific parameter relationships were formulated, allowing the forward prediction of the patient’s response to clinical interventions, such as administering a fluid bolus or changing the dose of an inotrope. Hence, the model and methods are able to provide diagnostic information and therapeutic decision support. In particular, tracking the model parameter changes over time can assist clinical staff in finding the right diagnosis; for example, an increase in pulmonary vascular resistance indicates a developing constriction in the pulmonary artery caused by an embolus. Furthermore, using the predictive ability of the model and developed methods, different treatment choices and their effect on the patient can be simulated. Thus, the best individual treatment for each patient can be developed and chosen, and unnecessary or even harmful interventions avoided.
This research thus increases confidence in the clinical applicability and validity of this overall diagnostic monitoring and therapy guidance approach. It accomplishes this goal using a novel physiological model of the heart and circulation. The integral-based parameter identification methods take dense, numerical data from diverse measurements and aggregate them into a clearer physiological picture of CVS status. Hence, the broader accomplishment of this thesis is the transformation, using computation and models, of diverse and often confusing measured data into a patient-specific physiological picture: a new model-based therapeutic.
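To illustrate the integral-based identification idea described above, the following minimal Python sketch applies it to a two-parameter windkessel model rather than the thesis’s full CVS model; the model, signals, and parameter values are illustrative assumptions, not taken from the thesis. Integrating the governing ODE turns parameter estimation into a single linear least-squares problem, avoiding differentiation of noisy measurements.

```python
# Integral-based parameter identification on a two-parameter windkessel
# model (an illustrative stand-in for the thesis's multi-chamber CVS
# model). dP/dt = -P/(R*C) + Q/C integrates to
# P(t) - P(0) = a*int(P) + b*int(Q) with a = -1/(R*C) and b = 1/C,
# so one least-squares solve recovers both parameters.
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic "measured" data from known parameters (R=1.0, C=1.5).
R_true, C_true = 1.0, 1.5
t = np.linspace(0.0, 10.0, 500)
Q = 5.0 + np.sin(2 * np.pi * t)          # inflow waveform (hypothetical)
P = np.empty_like(t)
P[0] = 8.0
dt = t[1] - t[0]
for i in range(1, len(t)):               # forward-Euler "truth" simulation
    dP = -P[i - 1] / (R_true * C_true) + Q[i - 1] / C_true
    P[i] = P[i - 1] + dt * dP

# Integral formulation: regress the rise of P on the running integrals.
IP = cumulative_trapezoid(P, t, initial=0.0)
IQ = cumulative_trapezoid(Q, t, initial=0.0)
A = np.column_stack([IP, IQ])
y = P - P[0]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

C_est = 1.0 / b
R_est = -1.0 / (a * C_est)
print(f"R = {R_est:.3f} (true {R_true}), C = {C_est:.3f} (true {C_true})")
```

The same linear-regression pattern extends chamber by chamber to a larger CVS model, which is what keeps the identification fast enough for bedside use.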
553

Evaluating the Effectiveness of Multiple Presentations for Open Student Model in EER-Tutor

Duan, Dandi January 2009 (has links)
As one of the central problems in the area of Intelligent Tutoring Systems (ITSs), student modelling has been widely used to assist in systems’ decision making and students’ learning. On the one hand, by reasoning about students’ knowledge in the instructional domain, a system is able to adapt its pedagogical actions in order to provide a customized learning environment. These actions may include individualized problem selection, tailored instructions and feedback, as well as updating the presentation of student models. On the other hand, students can reflect on their own learning progress by viewing individual Open Student Models (OSMs) and enhance their meta-cognitive skills by learning from the system’s estimation of their knowledge levels. It is believed that making the information in the student model available to students can raise students’ awareness of their strengths and weaknesses in the corresponding domain and hence allow them to develop a more effective and efficient way of learning. An OSM has been developed in EER-Tutor, a web-enhanced ITS that supports university students in learning conceptual database modelling. Students design Enhanced Entity-Relationship (EER) diagrams and receive different levels of feedback in a problem-solving environment. The pedagogical decisions on feedback generation and problem selection are made according to student models. Previously, student models in EER-Tutor were presented to students on request as skill meters. Skill meters have proven useful in helping students improve their meta-cognitive skills. However, as the simplest presentation of a student model, skill meters contain very limited information. Some studies show that an OSM with multiple views is more effective since it supports individual preferences and different educational purposes. The focus of our work is to develop a variety of presentations for the OSM in EER-Tutor. For this purpose, we have modified the system to include not only skill meters but also other presentation styles. An evaluation study was performed after the development, and both subjective and objective results were collected. This thesis presents the extended EER-Tutor, followed by the analysis of the evaluation study.
554

Model choice and variable selection in mixed & semiparametric models

Säfken, Benjamin 27 March 2015 (has links)
No description available.
555

Managing Consistency of Business Process Models across Abstraction Levels

ALMEIDA CASTELO BRANCO, MOISES January 2014 (has links)
Process models support the transition from business requirements to IT implementations. An organization that adopts process modeling often maintains several co-existing models of the same business process. These models target different abstraction levels and stakeholder perspectives. Maintaining consistency among these models has become a major challenge for such an organization. For instance, propagating changes requires identifying tacit correspondences among the models, which may exist only in the memories of their original creators or may be lost entirely. Although different tools target the specific needs of different roles, we lack appropriate support for checking whether related models maintained by different groups of specialists are still consistent after independent editing. As a result, typical consistency management tasks such as tracing, differencing, comparing, refactoring, merging, conformance checking, change notification, and versioning are frequently done manually, which is time-consuming and error-prone. This thesis presents the Shared Model, a framework designed to improve support for consistency management and impact analysis in process modeling. The framework is designed as a result of a comprehensive industrial study that elicited typical correspondence patterns between Business and IT process models and the meaning of consistency between them. The framework encompasses three major techniques and contributions: 1) matching heuristics to automatically discover complex correspondence patterns among the models and to maintain traceability among model parts (elements and fragments); 2) a generator of edit operations to compute the difference between process models; 3) a process model synchronizer, capable of consistently propagating changes made to any model to its counterpart. We evaluated the Shared Model experimentally. The evaluation shows that the framework can consistently synchronize Business and IT views related by correspondence patterns after non-simultaneous independent editing.
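To give a flavour of the correspondence-discovery problem, here is a toy Python sketch that matches business-level and IT-level element labels by lexical similarity. The element names, threshold, and greedy one-to-one matching are illustrative assumptions; the thesis’s actual heuristics also exploit model structure and fragments.

```python
# A toy illustration of one ingredient of correspondence discovery:
# matching elements of a business-level and an IT-level process model
# by lexical similarity of their labels. All names are hypothetical.
import re
from difflib import SequenceMatcher

business = ["Receive Order", "Check Credit", "Ship Goods", "Send Invoice"]
it_level = ["receiveOrderMsg", "creditCheckService", "invoiceSender",
            "shipGoodsTask", "logAudit"]

def normalize(label: str) -> str:
    # Split camelCase, lower-case, and sort tokens so word order is ignored.
    tokens = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", label).lower().split()
    return " ".join(sorted(tokens))

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Greedy one-to-one matching above a threshold; elements without a
# sufficiently similar counterpart (like the audit task) stay unmatched.
THRESHOLD = 0.5
pairs = sorted(((similarity(b, i), b, i) for b in business for i in it_level),
               reverse=True)
used_b, used_i = set(), set()
for score, b, i in pairs:
    if score >= THRESHOLD and b not in used_b and i not in used_i:
        used_b.add(b); used_i.add(i)
        print(f"{b!r} <-> {i!r}  (score {score:.2f})")
```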
556

Language Specific Analysis of State Machine Models of Reactive Systems

Zurowska, KAROLINA 25 June 2014 (has links)
Model Driven Development (MDD) is a paradigm introduced to overcome the complexities of modern software development. In MDD we use models as the primary artifact that is developed, tested and refined, with code being the result of code generation. Analysis and verification of models is an important aspect of the MDD paradigm, because they improve understanding of the developed system and enable discovery of faults early in the development. Even though many analysis methods exist (e.g., model checking, proof systems), they are not directly applicable in the context of industrial MDD tools such as IBM Rational Software Architect Real Time Edition (IBM RSA RTE). One of the main reasons for this inapplicability is the difference between the modeling languages used in MDD tools (e.g., the UML-RT language in IBM RSA RTE) and the languages used in existing analysis tools. These differences require an implementation of a transformation from a modeling language to an input language of a tool. UML-RT models, like other industrial MDD models, cannot be easily translated if the target languages do not directly support key model features. To address this problem we follow a research direction that deviates from the standard approaches: instead of bringing MDD models to analysis tools, the approach brings analysis "closer" to MDD models. We introduce analysis of UML-RT models dedicated to this modeling language. To this end we use a formal internal representation of UML-RT models that preserves the important features of these models, such as hierarchical structures of components, asynchronous communication and action code. This provides us with formalized models obtained via a straightforward transformation. In addition, this approach enables the use of MDD-specific abstractions aiming to reduce the size of the state space that must be explored. To this end we introduce several MDD-specific types of abstractions for data (using symbolic execution), structure and behavior. The work also includes model checking algorithms, which exploit the modular nature of UML-RT models. The proposed approach is implemented in a toolset that enables analysis directly on UML-RT models. We show the results of experiments with UML-RT models developed in-house and obtained from our industrial partner. / Thesis (Ph.D, Computing) -- Queen's University, 2014-06-24 17:58:05.973
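The following toy Python sketch conveys the core of analysing communicating state machines directly: an explicit-state reachability search over two machines with asynchronous FIFO queues. The machines, events, and transition encoding are hypothetical simplifications; the thesis’s toolset additionally handles hierarchical structure, action code, and the symbolic abstractions described above.

```python
# Explicit-state reachability over two communicating state machines
# with asynchronous FIFO queues (a toy analogue of UML-RT capsules).
from collections import deque

# transitions[machine][state] -> list of (event, next_state, sent_msgs);
# sent_msgs are (target_machine, event) pairs; 'tau' fires spontaneously.
TRANS = {
    "client": {"idle": [("tau", "waiting", [("server", "req")])],
               "waiting": [("ack", "done", [])]},
    "server": {"ready": [("req", "busy", [("client", "ack")])],
               "busy": [("tau", "ready", [])]},
}

def step(config):
    # config = (states, queues); queues are tuples so configs are hashable.
    states, queues = config
    for m, s in states.items():
        for event, nxt, sends in TRANS[m].get(s, []):
            q = queues[m]
            if event == "tau":
                consumed = q                    # spontaneous transition
            elif q and q[0] == event:
                consumed = q[1:]                # consume head of FIFO queue
            else:
                continue
            new_states = dict(states); new_states[m] = nxt
            new_queues = dict(queues); new_queues[m] = consumed
            for tgt, msg in sends:
                new_queues[tgt] = new_queues[tgt] + (msg,)
            yield (new_states, new_queues)

def freeze(c):
    states, queues = c
    return (tuple(sorted(states.items())), tuple(sorted(queues.items())))

init = ({"client": "idle", "server": "ready"}, {"client": (), "server": ()})
seen, frontier = {freeze(init)}, deque([init])
while frontier:                                  # breadth-first search
    c = frontier.popleft()
    for nc in step(c):
        if freeze(nc) not in seen:
            seen.add(freeze(nc)); frontier.append(nc)
print("reachable global configurations:", len(seen))
```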
557

Reduced order modeling of wind turbines in MatLab for grid integration and control studies

Antonelli, Jacopo January 2012 (has links)
The current trend in the wind power industry is to develop wind turbines of constantly increasing size and rated power, as well as wind farms of growing size and installed wind power. A careful study of the behaviour of wind turbines during operation is of crucial importance in the planning and design stages of a wind farm, in order to minimize the risks deriving from an inaccurate prediction of their impact on the electric grid, which can cause significant faults in the system. The need to analyze the impact of wind turbines on the system motivates the development of accurate yet simple models. To address this topic practically, a simple model of a wind turbine system is investigated and developed; it aims to describe the behaviour of a wind turbine in operation from a mechanical standpoint. The same reduced-order model can also be employed for control system studies: where the full generator model is impractical for controller design, the reduced model can be used instead. Together with the analytical description of the model, a MatLab code is written to numerically analyse the response of the system, and the results of simulations with this code are presented. The objective of this thesis has been to provide a simple benchmark tool in MatLab for grid integration and control studies for interested researchers.
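For concreteness, a reduced-order drivetrain of the kind described can be sketched as the classic two-mass model below. The thesis works in MatLab; this sketch uses Python/SciPy for illustration, and all parameter values are assumed, not taken from the thesis.

```python
# Two-mass drivetrain model: a rotor and a generator inertia coupled
# through a flexible shaft and gearbox -- a standard reduced-order
# wind turbine representation. Parameter values are illustrative.
from scipy.integrate import solve_ivp

J_r, J_g = 5.0e6, 500.0      # rotor / generator inertias [kg m^2]
K_s, D_s = 8.0e7, 8.0e5      # shaft stiffness [Nm/rad], damping [Nms/rad]
N = 80.0                     # gearbox ratio
T_aero = 2.0e6               # aerodynamic torque, held constant [Nm]
T_gen = T_aero / N           # generator torque balancing steady state [Nm]

def drivetrain(t, x):
    # State: rotor speed, generator speed, shaft twist (low-speed side).
    w_r, w_g, theta = x
    T_shaft = K_s * theta + D_s * (w_r - w_g / N)
    dw_r = (T_aero - T_shaft) / J_r
    dw_g = (T_shaft / N - T_gen) / J_g
    dtheta = w_r - w_g / N
    return [dw_r, dw_g, dtheta]

sol = solve_ivp(drivetrain, (0.0, 20.0), [1.6, 1.6 * N, 0.0], max_step=0.01)
print("final rotor speed [rad/s]:", sol.y[0, -1])
print("final shaft twist  [rad]:", sol.y[2, -1])
```

With balanced torques the twist settles near T_aero/K_s after a lightly damped oscillation, which is the kind of mechanical response such grid-integration studies examine.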
558

Simulating the present-day and future distribution of permafrost in the UVic Earth System Climate Model

Avis, Christopher Alexander 21 June 2012 (has links)
Warming over the past century has been greatest in high latitudes over land, and a number of environmental indicators suggest that the Arctic climate system is in the process of a major transition. Given the magnitude of observed and projected changes in the Arctic, it is essential that a better understanding of the characteristics of the Arctic climate system be achieved. In this work, I report on modifications to the UVic Earth System Climate Model that allow it to represent regions of perennially frozen ground, or permafrost. I examine the model’s representation of the Arctic climate during the 20th century and show that it capably represents the distribution and thermal state of permafrost in the present-day climate system. I use Representative Concentration Pathways to examine a range of possible future permafrost states to the year 2500. A suite of sensitivity experiments is used to better understand controls on permafrost. I demonstrate the potential for radical environmental changes in the Arctic over the 21st century, including continued warming, enhanced precipitation and a reduction of between 29% and 54% of the present-day permafrost area by 2100. Model projections show that widespread loss of high-latitude wetlands may accompany the loss of near-surface permafrost. / Graduate
559

Robust Model Predictive Control and Distributed Model Predictive Control: Feasibility and Stability

Liu, Xiaotao 03 December 2014 (has links)
An increasing number of applications, ranging from multi-vehicle systems, large-scale process control systems and transportation systems to smart grids, call for the development of cooperative control theory. Meanwhile, when designing the cooperative controller, the state and control constraints ubiquitously existing in physical systems have to be respected. Model predictive control (MPC) is one of the few techniques that can explicitly and systematically handle state and control constraints. This dissertation studies robust MPC and distributed MPC strategies. Specifically, the problems we investigate are: robust MPC for linear or nonlinear systems, distributed MPC for constrained decoupled systems, and distributed MPC for constrained nonlinear systems with coupled system dynamics. In the robust MPC controller design, three sub-problems are considered. Firstly, a computationally efficient multi-stage suboptimal MPC strategy is designed by exploiting the j-step admissible sets, where the j-step admissible set is the set of system states that can be steered to the maximum positively invariant set in j control steps. Secondly, for nonlinear systems with control constraints and external disturbances, a novel robust constrained MPC strategy is designed, where the cost function is in a non-squared form. Sufficient conditions for recursive feasibility and robust stability are established. Finally, by exploiting the contracting dynamics of a certain type of nonlinear systems, a less conservative robust constrained MPC method is designed. Compared to robust MPC strategies based on Lipschitz continuity, the strategy employed has the following advantages: 1) it can tolerate larger disturbances; and 2) it is feasible for a larger prediction horizon and enlarges the feasible region accordingly. For the distributed MPC of constrained continuous-time nonlinear decoupled systems, cooperation among the subsystems is realized by incorporating a coupling term in the cost function. To handle the effect of the disturbances, a robust control strategy is designed based on a two-layer invariant set. Provided that the initial state is feasible and the disturbance is bounded by a certain level, the recursive feasibility of the optimization is guaranteed by appropriately tuning the design parameters. Sufficient conditions are given ensuring that the states of each subsystem converge to the robust positively invariant set. Furthermore, a conceptually less conservative algorithm is proposed by exploiting the controllability set instead of the positively invariant set, which allows the adoption of a shorter prediction horizon and tolerates a larger disturbance level. For the distributed MPC of a large-scale system that consists of several dynamically coupled nonlinear systems with decoupled control constraints and disturbances, the dynamic couplings and the disturbances are accommodated by imposing new robustness constraints in the local optimizations. Relationships among, and design procedures for, the parameters involved in the proposed distributed MPC are derived to guarantee the recursive feasibility and robust stability of the overall system. It is shown that, for a given bound on the disturbances, recursive feasibility is guaranteed if the sampling interval is properly chosen. / Graduate / 0548 / 0544 / 0546 / liuxiaotao1982@gmail.com
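As background to the dissertation’s contributions, the following Python sketch (using the cvxpy modelling library) shows the basic receding-horizon MPC loop on a constrained double integrator: at each step a finite-horizon optimal control problem is solved and only the first input is applied. System matrices, weights, and constraints are illustrative assumptions; the robust and distributed machinery (invariant sets, robustness constraints) is beyond this sketch.

```python
# Nominal receding-horizon MPC on a constrained double integrator.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
B = np.array([[0.005], [0.1]])
Q, R, H = np.diag([10.0, 1.0]), np.array([[0.1]]), 15   # weights, horizon
u_max, x1_max = 2.0, 5.0                  # control and state constraints

x = np.array([4.0, 0.0])                  # initial state
for step in range(60):
    X = cp.Variable((2, H + 1))
    U = cp.Variable((1, H))
    cost, cons = 0, [X[:, 0] == x]
    for k in range(H):
        cost += cp.quad_form(X[:, k], Q) + cp.quad_form(U[:, k], R)
        cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                 cp.abs(U[:, k]) <= u_max,
                 cp.abs(X[0, k + 1]) <= x1_max]
    cost += cp.quad_form(X[:, H], Q)      # terminal cost (no terminal set here)
    cp.Problem(cp.Minimize(cost), cons).solve()
    u0 = U.value[:, 0]                     # apply only the first input
    x = A @ x + B @ u0
print("state after 60 steps:", x)
```

Recursive feasibility and stability guarantees of the kind the dissertation establishes require terminal sets and tightened constraints on top of this basic loop.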
560

Examining the Generalized Waring Model for the Analysis of Traffic Crashes

Peng, Yichuan 03 October 2013 (has links)
As major data analysis tools, statistical models play an important role in traffic safety analysis. A common situation associated with crash data is the phenomenon known as overdispersion, which has been discussed and investigated frequently in recent years. As such, researchers have proposed several models, such as the Poisson-Gamma (PG) or Negative Binomial (NB), the Poisson-lognormal, and the Poisson-Weibull, to handle overdispersion. Unfortunately, very few models have been proposed specifically for analyzing the sources of dispersion in the data. A better understanding of the sources of variation and overdispersion could help in managing safety more efficiently, such as by establishing relationships and applying appropriate treatments or countermeasures. Given the limitations of existing models for exploring the source of overdispersion in crash data, this research examined a new model that can be applied to explore sources of extra variability: the Generalized Waring (GW) model. This model, recently introduced by statisticians, divides the observed variability into three components: randomness, internal differences between road segments or intersections, and the variance caused by other external factors that have not been included as covariates in the model. To evaluate these models, GW models were examined using both simulated and empirical crash datasets, and the results were compared to the most commonly used NB model and the recently developed NB-Lindley models. For model parameter estimation, both the maximum likelihood method and a Bayesian approach were adopted for better comparison. A simulation study shows the better performance of this model compared to the NB model for overdispersed data, and an application to the empirical crash data illustrates its capability of modeling datasets with great accuracy and exploring the source of overdispersion. The performance of hotspot identification for these two kinds of models (i.e., GW models and NB models) was also examined and compared based on the models estimated from the empirical dataset. Finally, bias properties related to the choice of prior distributions for the parameters in the GW model were examined using a simulation study. In addition, suggestions on the choice of minimum sample size and priors are presented for different kinds of datasets.
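The three-way variance decomposition the abstract mentions reflects the hierarchical construction of the Generalized Waring distribution: a Poisson count (randomness) whose rate is Gamma-distributed (liability) with a Beta-prime-distributed scale (proneness). A minimal Python simulation of this construction, with illustrative parameter values, is sketched below.

```python
# Simulating the Generalized Waring model via its hierarchical
# construction: Poisson randomness, Gamma liability, Beta-prime
# proneness. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
a, k, rho = 2.0, 1.5, 4.0           # UGWD(a, k; rho) parameters, rho > 2
n = 200_000

u = rng.beta(a, rho, size=n)        # proneness: nu ~ BetaPrime(a, rho)
nu = u / (1.0 - u)
lam = rng.gamma(shape=k, scale=nu)  # liability: lambda | nu ~ Gamma(k, nu)
x = rng.poisson(lam)                # randomness: crash count X | lambda

mean_theory = a * k / (rho - 1.0)   # E[X] for rho > 1
print(f"empirical mean {x.mean():.3f} vs theoretical {mean_theory:.3f}")
print(f"empirical variance {x.var():.3f}  (overdispersion: var > mean)")
```

With these parameters the theoretical mean is 1.0, and the simulated variance comes out well above it, reproducing the overdispersion the GW model is designed to decompose.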
