  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Modelling And Controller Design Of The Gun And Turret System For An Aircraft

Mert, Ahmet 01 February 2009 (has links) (PDF)
Gun and gun turret systems are the primary units of an aircraft's weapon systems. They are required to hit targets accurately during operations, which is why complete, high-precision control of the weapon system is required. This function is provided by accurate modeling of the system and the design of a suitable controller. This study presents the modeling of, and controller design for, the gun and turret system of an aircraft. For the controller design, the mathematical model of the system is first constructed. The controller is then designed to position the turret system as the target comes into sight. The reference input to the controller is obtained either from a FLIR (Forward Looking Infrared) unit or from a HCU (Hand Control Unit). The basic specification for the controller is to hold the error signal within the 5.5° positioning envelope. This specification is satisfied by designing Linear Quadratic Gaussian and Internal Model Control type controllers. The performance of the overall system has been examined both in simulation studies and on the real physical system. The results show that the designed system is more than sufficient.
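The design flow sketched in the abstract — model the plant, then compute a feedback gain — can be illustrated with the LQR core of an LQG controller. The plant matrices, weights, and friction term below are invented placeholders, not the thesis model:

```python
import numpy as np

# Toy turret azimuth model (hypothetical parameters, not from the thesis):
# state x = [pointing error, angular rate], discretised with dt = 0.01 s.
dt = 0.01
A = np.array([[1.0, dt],
              [0.0, 1.0 - 0.5 * dt]])   # 0.5: assumed viscous friction coefficient
B = np.array([[0.0],
              [dt]])

# LQR weights: penalise pointing error heavily relative to control effort.
Q = np.diag([100.0, 1.0])
R = np.array([[0.1]])

# Value iteration on the discrete-time Riccati equation until P settles.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop x+ = (A - B K) x must be stable: all eigenvalues inside
# the unit circle, so the pointing error decays toward zero.
eigs = np.linalg.eigvals(A - B @ K)
print(np.abs(eigs))
```

The LQG controller in the thesis additionally estimates the state from noisy measurements with a Kalman filter; the gain computation above is only the deterministic half of that design.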
22

Sustainable energy system pathways : Development and assessment of an indicator-based model approach to enhance sustainability of future energy technology pathways in Germany (SEnSys)

Streicher, Kai Nino January 2014 (has links)
After the nuclear disaster in Japan, Germany decided to phase out nuclear energy while at the same time shifting its energy supply from fossil to renewable sources. This ambitious plan, known as the Energiewende, will require significant economic and structural efforts with profound impacts on the environment and on society itself. It is therefore crucial to identify technological pathways that can lead to a renewable energy supply while reducing negative impacts in a holistic scope. In order to analyse alternative energy technology scenarios in Germany, this thesis focuses on the development of an indicator-based numerical Sustainable Energy Systems (SEnSys) model approach. Unlike previous approaches, the SEnSys model considers the fully aggregated impacts of the technological pathways leading to future configurations. With the help of an exemplary case study on two alternative energy technology scenarios (Trieb1 and Trieb2), the feasibility of the SEnSys model for evaluating sustainability is subsequently assessed. The results confirm the findings of previous studies concerning lower economic and environmental impacts for scenario Trieb2, with small shares of renewable energy imports, compared to scenario Trieb1, which is based only on local but fluctuating renewables. The results are also in accordance with other relevant studies, while offering valuable new insights into the topic. Given a comprehensive review of the identified uncertainties and limitations, it can be stated that the SEnSys model bears the potential for further analysing and improving the sustainability of energy technology scenarios in Germany and other countries.
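The idea of scoring a whole pathway rather than only its end-state configuration can be sketched with a toy weighted-indicator aggregation. The indicator names, weights, and yearly scores below are hypothetical and are not taken from the SEnSys model:

```python
# Minimal sketch of pathway-wide indicator aggregation (all figures and
# indicator names are invented). Each year of a pathway is scored on
# normalised indicators in [0, 1] (lower is better), and the pathway score
# aggregates the full trajectory, not just the final-year configuration.

weights = {"cost": 0.4, "co2": 0.4, "land_use": 0.2}

def year_score(indicators):
    """Weighted sum of normalised indicator scores for one year."""
    return sum(weights[k] * v for k, v in indicators.items())

def pathway_score(years):
    """Average yearly score over the whole technology pathway."""
    return sum(year_score(y) for y in years) / len(years)

# Two toy pathways reaching a similar end state by different routes.
pathway_a = [
    {"cost": 0.9, "co2": 0.8, "land_use": 0.7},
    {"cost": 0.6, "co2": 0.6, "land_use": 0.6},
    {"cost": 0.5, "co2": 0.3, "land_use": 0.5},
]
pathway_b = [
    {"cost": 0.7, "co2": 0.5, "land_use": 0.6},
    {"cost": 0.6, "co2": 0.4, "land_use": 0.6},
    {"cost": 0.5, "co2": 0.3, "land_use": 0.5},
]

print(pathway_score(pathway_a), pathway_score(pathway_b))
```

Note how the two pathways share the same final year but receive different aggregate scores — the distinction an end-state-only assessment would miss.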
23

Temporal resolution in time series and probabilistic models of renewable power systems

Hoevenaars, Eric 27 April 2012 (has links)
There are two main types of logistical models used for long-term performance prediction of autonomous power systems: time series and probabilistic. Time series models are more common and are more accurate for sizing storage systems because they are able to track the state of charge. However, their computational time is usually greater than that of probabilistic models. It is common for time series models to perform 1-year simulations with a 1-hour time step, likely because of the limited availability of high-resolution data and the increase in computation time with a shorter time step. Computation time is particularly important because these types of models are often used for component size optimization, which requires many model runs. This thesis includes a sensitivity analysis examining the effect of the time step on these simulations. The results show that the effect can be significant, though it depends on the system configuration and site characteristics. Two probabilistic models are developed to estimate the temporal resolution error of a 1-hour simulation: a time series/probabilistic model and a fully probabilistic model. To demonstrate the application and evaluate the performance of these models, two case studies are analyzed: one for a typical residential system and one for a system designed to provide on-site power at an aquaculture site. The results show that the time series/probabilistic model would be a useful tool if accurate distributions of the sub-hour data can be determined. Additionally, the method of cumulant arithmetic is demonstrated to be a useful technique for incorporating multiple non-Gaussian random variables into a probabilistic model, a feature that other models, such as Hybrid2, currently lack. The results from the fully probabilistic model showed that some form of autocorrelation is required to account for seasonal and diurnal trends. / Graduate
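The temporal-resolution effect described above can be reproduced with a toy state-of-charge simulation on synthetic data (every parameter below is invented): averaging spiky sub-hour load into hourly values can let a small battery appear to cover peaks it would actually miss.

```python
import random

def simulate(load, gen, dt_hours, capacity_kwh):
    """Track battery state of charge; return total unmet load in kWh."""
    soc, unmet = capacity_kwh / 2, 0.0
    for l, g in zip(load, gen):
        net = (g - l) * dt_hours          # kWh surplus (+) or deficit (-)
        new_soc = soc + net
        if new_soc < 0.0:                 # battery empty: load goes unserved
            unmet += -new_soc
            new_soc = 0.0
        soc = min(capacity_kwh, new_soc)  # clip charging at full capacity
    return unmet

random.seed(1)
# 24 h of synthetic 10-minute data: constant 1 kW generation, spiky load.
gen_10min = [1.0] * 144
load_10min = [random.choice([0.2, 0.2, 0.2, 0.2, 0.2, 4.0]) for _ in range(144)]

def hourly(series):
    """Average each run of six 10-minute values into one hourly value."""
    return [sum(series[i:i + 6]) / 6 for i in range(0, len(series), 6)]

unmet_10min = simulate(load_10min, gen_10min, 1 / 6, capacity_kwh=1.0)
unmet_1h = simulate(hourly(load_10min), hourly(gen_10min), 1.0, capacity_kwh=1.0)
print(unmet_10min, unmet_1h)  # the coarse run typically reports less unmet load
```

The gap between the two unmet-load figures is exactly the kind of temporal resolution error the thesis's probabilistic models set out to estimate.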
24

Computational optimal control modeling and smoothing for biomechanical systems

Said, Munzir January 2007 (has links)
[Truncated abstract] The study of biomechanical system dynamics consists of research to obtain an accurate model of biomechanical systems and to find appropriate torques or forces that reproduce the motions of a biomechanical subject. In the first part of this study, specific computational models are developed to maintain relative angle constraints for 2-dimensional segmented bodies. This is motivated by the possibility that the segments of a body, moving under gravitational acceleration and joint torques, may move past their natural relative angle limits. Three models to maintain angle constraints between segments are proposed and compared: all-time angle constraints, a restoring torque in the state equations, and an exponential penalty model. The models are applied to a 2-D three-segment body to test the behaviour of each model when optimizing torques to minimize an objective. The optimization finds torques such that the end effector of the body follows the trajectory of a half circle. The results show the behaviour of each model in maintaining the angle constraints. The all-time constraints case does not allow torques (at a solution) that make segments move past the constraints, while the other two show a flexibility in handling the angle constraints that is more similar to a real biomechanical system. With three computational methods to represent the angle constraint, a workable set of initial torques for the motion of a segmented body can be obtained without causing integration failure in the ordinary differential equation (ODE) solver and without the need for the “blind man method” that restarts the optimal control many times. ... 
With one layer of penalty weights, balancing the trajectory compliance penalty against the other optimal control objectives (minimizing or smoothing torque) is already difficult to achieve (as explained by the L-curve phenomenon); adding a second layer of penalty weights for the closeness of fit of each body segment further complicates the weight balancing, and too much trial-and-error computation may be needed to obtain a reasonably good set of weighting values. Second-order regularization is also added to the optimal control objective, and the optimization obtains smoother torques for all body joints. To make the current approach more competitive with inverse dynamics, an algorithm to speed up the computation of the optimal control is required as potential future work.
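Of the three constraint-handling models named in the abstract, the exponential penalty can be sketched as a soft barrier term added to the objective. The angle limits and stiffness constant below are illustrative, not the values used in the thesis:

```python
import math

# Sketch of an exponential penalty on a relative joint angle theta (radians).
# The penalty is negligible inside the allowed range [lo, hi] and grows
# rapidly as the angle approaches or crosses either natural limit, steering
# the optimizer away without imposing a hard constraint.
# lo, hi, and k are illustrative placeholders.

def angle_penalty(theta, lo=-1.0, hi=2.0, k=20.0):
    """Soft penalty: ~0 inside [lo, hi], explodes outside it."""
    return math.exp(k * (lo - theta)) + math.exp(k * (theta - hi))

inside = angle_penalty(0.5)    # well within the range: essentially zero
outside = angle_penalty(2.2)   # past the upper limit: large penalty
print(inside, outside)
```

Because the penalty is smooth everywhere, the ODE solver and the gradient-based optimal control routine never see the non-differentiable kink a hard limit would introduce, which matches the abstract's point about avoiding integration failure.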
25

Vers une définition patient-spécifique du taux cible de facteur anti-hémophilique à partir de la génération de thrombine : Apports des approches expérimentales et des modèles dynamiques de la cascade de la coagulation / Toward a patient specific level of anti-haemophilic factor based on thrombin generation : Contributions of experimental approaches and dynamic modeling of the coagulation cascade

Chelle, Pierre 14 June 2017 (has links)
L’hémophilie est une maladie génétique se traduisant par la déficience des facteurs VIII et IX de la coagulation et conduisant à une tendance hémorragique. L’intensité des traitements substitutifs en facteur VIII et IX est définie essentiellement sur le taux basal du facteur déficitaire et non pas sur la capacité propre à chaque patient à générer de la thrombine qui est l’enzyme clé dans la formation du caillot de fibrine. Le test de génération de thrombine pourrait être utilisé pour permettre une individualisation du traitement anti-hémophilique. En effet, le taux de facteur VIII ou IX nécessaire à la normalisation de la génération de thrombine est potentiellement variable d’un patient à l’autre pour une même sévérité d’hémophilie. On peut donc se demander quelle approche expérimentale permettrait de mettre en exergue le lien entre taux de facteur anti-hémophilique et la génération de thrombine. Est-il possible de modéliser mathématiquement la coagulation pour obtenir une relation, soit explicite, soit implicite, entre taux de facteurs et génération de thrombine ? Les modèles existants permettent-ils d'obtenir une telle relation ? Une vaste campagne expérimentale a donc été menée pour mettre en place une base de données qui a permis d’identifier les facteurs déterminants de la génération de thrombine et la relation entre génération de thrombine et taux de facteur anti-hémophilique, de définir leurs valeurs de références, ainsi que d’évaluer et de paramétrer de manière sujet-spécifique des modèles mathématiques de la coagulation. / Haemophilia is a genetic disease corresponding to the deficiency of coagulation factor VIII or IX and leading to a bleeding tendency. The current substitutive treatment is defined essentially by the basal level of deficient factor and not the individual capacity to generate thrombin, a key enzyme of the clot formation. The thrombin generation assay could help in the individualisation of the anti-haemophilia treatment. 
Indeed, the factor VIII or IX level needed to normalise thrombin generation potentially varies from one patient to another for the same degree of severity. We can wonder which experimental approach could emphasise the relation between the level of anti-haemophilic factor and thrombin generation. Is it possible to mathematically model coagulation to obtain a relation, either explicit or implicit, between factor level and thrombin generation? Could existing models provide this relation? An extensive experimental campaign was carried out to build a database that has been used to identify the determinant coagulation factors of thrombin generation and the individual relation between thrombin generation and anti-haemophilic factor level, to define their reference values, and also to evaluate and parametrise mathematical models of the coagulation cascade in a subject-specific manner.
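As a purely illustrative answer to the modelling question above, a deliberately toy kinetic sketch shows how a dynamic model can relate factor level to thrombin generation. This is not one of the published coagulation models evaluated in the thesis; the species lumping and all rate constants are invented:

```python
# Toy two-species sketch: thrombin T is produced from prothrombin P at a
# rate scaled by the anti-haemophilic factor level f, and decays through
# inhibition. Euler integration; all constants are hypothetical.

def thrombin_curve(factor_level, k_gen=0.5, k_inh=0.2, dt=0.01, t_end=30.0):
    """Return the peak of the simulated thrombin generation curve."""
    P, T = 1.0, 0.0          # normalised prothrombin and thrombin
    peak = 0.0
    for _ in range(int(t_end / dt)):
        dP = -k_gen * factor_level * P
        dT = k_gen * factor_level * P - k_inh * T
        P += dP * dt
        T += dT * dt
        peak = max(peak, T)
    return peak

# Higher factor level -> higher thrombin peak: the qualitative relation
# the thrombin generation assay probes.
print(thrombin_curve(0.1), thrombin_curve(0.5), thrombin_curve(1.0))
```

Published coagulation-cascade models involve dozens of coupled species; the point of the sketch is only that the factor-level/thrombin-generation relation falls out of the dynamics rather than being imposed explicitly.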
26

Critical infrastructure protection by advanced modelling, simulation and optimization for cascading failure mitigation and resilience / Protection des infrastructures critiques par modélisation, simulation et optimisation avancées pour l’atténuation des défaillances en cascade et la résilience

Fang, Yiping 02 February 2015 (has links)
La complexité et l’interdépendance sans cesse croissantes des infrastructures critiques modernes, associées à des environnements de risque de plus en plus complexes, posent des défis uniques pour leur exploitation sûre, fiable et efficace. La présente thèse porte sur la modélisation, la simulation et l’optimisation des infrastructures critiques (par exemple, les réseaux de transport d’électricité) du point de vue de leur vulnérabilité et de leur résilience face aux défaillances en cascade. Cette étude aborde le problème en modélisant d’abord les infrastructures critiques à un niveau fondamental, en se concentrant sur la topologie du réseau et les flux physiques qui les traversent. Une technique de modélisation hiérarchique est introduite pour gérer la complexité du système. Au sein de ces cadres de modélisation, des techniques d’optimisation avancées (par exemple, l’algorithme d’évolution différentielle binaire à tri non dominé, NSBDE) sont utilisées pour maximiser à la fois la robustesse et la résilience (capacité de récupération) des infrastructures critiques face aux défaillances en cascade. Plus précisément, le premier problème est abordé du point de vue de la conception globale du système : certaines propriétés, telles que la topologie et les capacités des liaisons, sont reconçues de manière optimale afin de renforcer la capacité du système à résister aux défaillances systémiques. Des modèles de défaillance en cascade topologiques et physiques sont appliqués et leurs résultats sont comparés. Pour le deuxième problème, un nouveau cadre est proposé pour la sélection optimale des mesures de récupération, afin de maximiser la capacité du réseau d’infrastructure critique à se rétablir après un événement perturbateur. 
Un algorithme d’optimisation heuristique, peu coûteux en calcul, est proposé pour résoudre ce problème, en intégrant des concepts fondamentaux des flux de réseau et de l’ordonnancement de projet. Des exemples d’analyse sont réalisés sur plusieurs systèmes d’infrastructures critiques réalistes. / Continuously increasing complexity and interconnectedness of modern critical infrastructures, together with increasingly complex risk environments, pose unique challenges for their secure, reliable, and efficient operation. The focus of the present dissertation is on the modelling, simulation and optimization of critical infrastructures (CIs) (e.g., power transmission networks) with respect to their vulnerability and resilience to cascading failures. This study approaches the problem by firstly modelling CIs at a fundamental level, by focusing on network topology and physical flow patterns within the CIs. A hierarchical network modelling technique is introduced for the management of system complexity. Within these modelling frameworks, advanced optimization techniques (e.g., non-dominated sorting binary differential evolution (NSBDE) algorithm) are utilized to maximize both the robustness and resilience (recovery capacity) of CIs against cascading failures. Specifically, the first problem is taken from a holistic system design perspective, i.e. some system properties, such as its topology and link capacities, are redesigned in an optimal way in order to enhance system’s capacity of resisting to systemic failures. Both topological and physical cascading failure models are applied and their corresponding results are compared. With respect to the second problem, a novel framework is proposed for optimally selecting proper recovery actions in order to maximize the capacity of the CI network to recover from a disruptive event. 
A heuristic, computationally cheap optimization algorithm is proposed for the solution of the problem, by integrating fundamental concepts from network flows and project scheduling. Examples of analysis are carried out by referring to several realistic CI systems.
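The recovery-selection problem described above can be caricatured by a greedy heuristic that ranks candidate actions by restored capacity per unit cost. The action names, costs, and gains are hypothetical, and the thesis's actual heuristic, which integrates network flows and project scheduling, is richer than this sketch:

```python
# Minimal sketch of budget-constrained recovery-action selection
# (all data invented): greedily pick the actions that restore the most
# network capacity per unit cost until the budget is exhausted.

def select_recovery_actions(actions, budget):
    """actions: list of (name, cost, restored_capacity) tuples."""
    chosen, total_cost, restored = [], 0.0, 0.0
    # Sort by restored capacity per unit cost, best first.
    for name, cost, gain in sorted(actions, key=lambda a: a[2] / a[1], reverse=True):
        if total_cost + cost <= budget:
            chosen.append(name)
            total_cost += cost
            restored += gain
    return chosen, restored

actions = [
    ("repair_line_A", 4.0, 120.0),
    ("repair_line_B", 2.0, 90.0),
    ("mobile_generator", 1.0, 30.0),
    ("repair_substation", 5.0, 100.0),
]
chosen, restored = select_recovery_actions(actions, budget=7.0)
print(chosen, restored)
```

A greedy ratio rule ignores scheduling precedence and flow interactions between repairs, which is precisely what the dissertation's combined network-flow/project-scheduling formulation adds.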
27

The impact of innovative effluent permitting policy on urban wastewater system performance

Meng, Fanlin January 2015 (has links)
This thesis investigates innovative effluent point-source permitting approaches from an integrated urban wastewater system (UWWS) perspective, and demonstrates that three proposed permitting approaches based on optimal operational or control strategies of the wastewater system are effective in delivering multiple and balanced environmental benefits (water quality, GHG emissions) in a cost-efficient manner. Traditional permitting policy and current flexible permitting practices are first reviewed, and opportunities for permitting from an integrated UWWS perspective are identified. An operational strategy-based permitting approach is then developed through a four-step permitting framework. Based on integrated UWWS modelling, operational strategies are optimised with objectives including minimisation of operational cost, variability of treatment efficiency and environmental risk, subject to compliance with environmental water quality standards. As trade-offs exist between the three objectives, the optimal solutions are screened according to the decision-makers’ preferences and permits are derived from the selected solutions. The advantages of this permitting approach over the traditional regulatory method are: a) cost-effectiveness is considered in decision-making, and b) permitting based on operational strategies is more reliable in delivering desirable environmental outcomes. In the studied case, the selected operational strategies achieve over 78% lower environmental risk with at least 7% lower operational cost than the baseline scenario; in comparison, the traditional end-of-pipe limits can lead to expensive solutions with no better environmental water quality. 
The developed permitting framework facilitates the derivation of sustainable solutions because: a) stakeholders are involved at all points of the decision-making process, so that the various impacts of UWWS operation can be considered, and b) a multi-objective optimisation algorithm and a visual analytics tool are employed to efficiently optimise and select high-performance operational solutions. The second proposed permitting approach is based on optimal integrated real-time control (RTC) strategies. Permits are developed through a three-step decision-making analysis framework similar to the first approach. An off-line model-based predictive aeration control strategy is investigated for the case study, and further benefits (9% lower environmental risk and 0.6% less cost) are achieved by an optimal RTC strategy exploiting the dynamic assimilation capacity of the environment. A similar but simpler permitting approach is developed to derive operational/control strategy-based permits through an integrated cost-risk analysis framework. Less comprehensive modelling and optimisation skills are needed, as it couples a dynamic wastewater system model with a stochastic permitting model and uses sensitivity analysis and scenario analysis to optimise operational/control strategies; this approach can therefore be a good option for developing risk-based, cost-effective permits without intensive resources. Finally, roadmaps for the implementation of the three innovative permitting approaches are discussed. Current performance-based regulations and self-monitoring schemes are used as examples to visualise the new way of permitting. The viability of the proposed methods as alternative regulation approaches is evaluated against the core competencies of modern policy-making.
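The screening of optimal solutions under three competing objectives, as used in the first permitting approach, amounts to keeping the non-dominated (Pareto) set before decision-makers apply their preferences. A minimal sketch with invented strategy scores (cost, treatment-efficiency variability, environmental risk, all to be minimised):

```python
# Sketch of Pareto screening for candidate operational strategies.
# All objective values are invented placeholders, not case-study results.

def dominates(a, b):
    """a dominates b if a is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(strategies):
    """strategies: dict of name -> (cost, variability, risk)."""
    return {
        name: obj for name, obj in strategies.items()
        if not any(dominates(other, obj)
                   for oname, other in strategies.items() if oname != name)
    }

strategies = {
    "S1": (100.0, 0.30, 0.8),
    "S2": (120.0, 0.20, 0.5),
    "S3": (110.0, 0.35, 0.9),   # dominated by S1 on all three objectives
    "S4": (150.0, 0.10, 0.4),
}
front = pareto_front(strategies)
print(sorted(front))
```

Only the non-dominated strategies reach the stakeholder-preference step; dominated ones like S3 can be discarded without any value judgement.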
28

Stream Identification in Pinch Analysis : Fixed and Flexible flows

Montagna Cimarelli Viktor, Donna January 2018 (has links)
The purpose of this project is to find an identification tag that can be used in a future automated pinch analysis tool. The tag can be used to further analyse composite curves and pinch results by tracking the original streams from which they were constructed. In real-life situations, retrofitting a process industry's streams can decrease heat demands and costs. A pinch analysis and a heat exchanger network are created with fixed and flexible flows to show a recommendation for how the system model can handle these situations. The models have been created by hand with support from the pinch literature, and the calculations were validated with mathematical software such as MATLAB and other graphing tools. The literature study and pinch modelling resulted in a recommendation to tag Hstart and Hend for each individual stream. By using a geographical tag in a coordinate system, the analyst will be able to find the original streams in the pinch analysis and composite curves. The project also resulted in a heat exchanger network created from the fixed and flexible data set. The enthalpy differences between the ideal pinch result and the fixed data set are smaller than one might expect because of the enthalpy abundance in the specific intervals.
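The recommended Hstart/Hend tagging can be sketched as follows: build the hot composite curve, then record each stream's start and end coordinates on the enthalpy axis so it can be traced back after the curves are merged. The stream data below are invented, not the project's data set:

```python
# Sketch of Hstart/Hend tagging on a hot composite curve (data invented).
# Hot streams: (name, T_supply, T_target, CP in kW/K), temperatures in °C.
hot_streams = [("H1", 180.0, 80.0, 2.0), ("H2", 150.0, 60.0, 3.0)]

def hot_composite(streams):
    """Build the hot composite curve and tag each stream with (Hstart, Hend)."""
    # Temperature intervals bounded by every supply/target temperature.
    temps = sorted({t for _, ts, tt, _ in streams for t in (ts, tt)})
    curve, h = [(temps[0], 0.0)], 0.0
    for t_lo, t_hi in zip(temps, temps[1:]):
        # Sum CP of all streams spanning this interval.
        cp_sum = sum(cp for _, ts, tt, cp in streams if tt <= t_lo and ts >= t_hi)
        h += cp_sum * (t_hi - t_lo)
        curve.append((t_hi, h))
    # Tag: each stream's (Hstart, Hend) coordinates on the composite H axis,
    # read off at its target and supply temperatures.
    h_at = dict(curve)
    tags = {name: (h_at[tt], h_at[ts]) for name, ts, tt, _ in streams}
    return curve, tags

curve, tags = hot_composite(hot_streams)
print(curve)
print(tags)
```

With the tags stored, an analyst inspecting any point of the merged curve can recover which original streams contribute there, which is the traceability the project recommends.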
29

Ground Source Heat Pumps: Considerations for Large Facilities in Massachusetts

Wagner, Eric 02 April 2021 (has links)
There has been a significant increase in interest in and implementations of heat pump systems for HVAC purposes in general, and of ground source heat pumps (GSHPs) in particular. Though these systems have existed for decades, primarily in Europe, there has been an upward trend in the United States in recent years. With the worldwide push toward CO2 emissions reduction targets, interest in heat pump systems to reduce CO2 emissions from heating and cooling is likely to only increase in the future. However, more than ever, financial considerations are also key factors in the implementation of any system. GSHPs coupled to vertical borehole heat exchangers (BHEs) have been promoted as a viable heat pump system in climates where traditional air source heat pumps (ASHPs) may operate inefficiently. This type of system claims superior performance to ASHPs due to the relatively consistent temperature of the ground compared to the air, offering a higher-temperature heat source in the heating season and a lower-temperature sink in the cooling season. Projects designing and installing such GSHP systems have been implemented at large scale on several university campuses to provide heating and cooling. In this study, we aim to test the idea that a GSHP system, as a replacement for the existing CHP heating and conventional cooling systems, could reduce CO2 emissions as well as provide a cost benefit to a large energy consumer, in this case the University of Massachusetts. This is done using the existing heating and cooling loads served by the conventional system and an established technique for modeling the heat pumps and BHEs. The GSHP system is modeled to follow the parameters of industry standards and sized to provide the best overall lifetime cost. The effects on overall annual costs, emissions, and the university microgrid are considered.
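The core emissions comparison can be sketched with back-of-envelope arithmetic: a heat pump delivering a heating load at a given COP consumes load/COP units of electricity, whose grid carbon intensity is weighed against burning fuel on site. All figures below are hypothetical placeholders, not the UMass campus data:

```python
# Back-of-envelope GSHP vs gas-heating emissions comparison.
# Every number is an invented placeholder for illustration only.

heating_load_mwh = 50_000.0          # annual heat delivered (thermal MWh)

# Baseline: gas-fired heating at 85% efficiency.
gas_eff = 0.85
gas_co2_per_mwh = 0.20               # t CO2 per thermal MWh of gas burned
baseline_co2 = heating_load_mwh / gas_eff * gas_co2_per_mwh

# GSHP: seasonal COP of 3.5 running on grid electricity.
cop = 3.5
grid_co2_per_mwh = 0.30              # t CO2 per MWh of grid electricity
gshp_co2 = heating_load_mwh / cop * grid_co2_per_mwh

print(round(baseline_co2), round(gshp_co2))
```

The sign of the comparison flips with the grid's carbon intensity and the achieved seasonal COP, which is why the study's detailed BHE and load modeling matters: with a dirty grid or a degraded COP, the GSHP advantage can shrink or vanish.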
30

Modelování a řízení systému FytoScope / Modelling and control of FytoScope system

Stoklásek, Petr January 2017 (has links)
The aim of this thesis is the modelling of a chamber for plant cultivation under defined conditions. The first part of the thesis contains a theoretical introduction to thermodynamics and the properties of humid air. In the next part of the thesis, the growth chamber model is designed. The model parameters were obtained during identification of the phytotron. The chamber control algorithm was designed using the model, programmed into the chamber control unit, and its functionality was tested in practice.
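The chamber model and control design described above can be sketched, under strong simplifying assumptions, as a first-order thermal plant under PI control; none of the constants below are the identified FytoScope parameters:

```python
# Toy first-order growth-chamber temperature model with a PI controller
# (all constants invented). The chamber relaxes toward ambient temperature
# and is driven by a heater whose power the controller sets.

def run_chamber(setpoint=25.0, t_amb=20.0, dt=1.0, steps=3600,
                tau=600.0, gain=0.1, kp=8.0, ki=0.02):
    """Simulate `steps` seconds; return the final chamber temperature (°C)."""
    temp, integral = t_amb, 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        # Heater command in percent, clamped to its physical range.
        u = max(0.0, min(100.0, kp * error + ki * integral))
        # First-order plant: relaxation toward ambient plus heater input.
        temp += dt * ((t_amb - temp) / tau + gain * u / tau)
    return temp

final = run_chamber()
print(final)   # should settle near the 25 °C setpoint
```

A real chamber controller must also regulate humidity and light and cope with the coupling between them; this single-loop sketch only illustrates the model-based design step the abstract describes.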
