About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
561

Methodik zur funktionsorientierten Tolerierung mittels CAD-basierter Analysen / Methodology for function-oriented tolerancing by means of CAD-based analyses

Berndt, Karsten, Ebermann, Marko 26 June 2015 (has links) (PDF)
Part 1, Karsten Berndt: Specifying tolerances is an everyday task for the design engineer. However, calculating nonlinear tolerance chains takes considerable time, so their exact calculation is usually dispensed with and tolerances are instead specified on the basis of experience. The methodology presented here shows how reliable statements about the tolerances of complex mechanisms can be obtained quickly and early in the design process. To this end, so-called sensitivity analyses are performed and evaluated in the CAD software "Creo Elements". The result is a first set of concrete tolerance zones for all geometric dimensions describing the mechanism, suitable as starting values for the subsequent tolerance synthesis/analysis process. Part 2, Marko Ebermann: This second part of the presentation covers a possible procedure for tolerancing geometric deviations in the early design phase, using the coupler link of a packaging machine as an example. The starting point for this early tolerancing is the sensitivity analysis of the linkage treated in the first part, which provided information on how sensitive the functional dimensions are with respect to meeting the closing-dimension tolerance. The form and position tolerancing of the coupler link derived from it is then to be confirmed in the assembly context by subsequent tolerance analyses and to point towards possible manufacturing processes, without the exact shape of the components being known. In this way, expensive and time-consuming iteration loops in the design process can be minimized and the functionality ensured early on.
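The computation behind such a sensitivity-driven tolerance allocation can be sketched in a few lines. The following is a minimal illustration only, assuming an invented nonlinear closing dimension and invented nominal values; it is not the Creo Elements workflow or the mechanism from the talk.

```python
import math

# Hypothetical nonlinear closing dimension of a simple linkage: a
# coupler-point distance as a function of the link dimensions
# (purely illustrative, not the mechanism from the presentation).
def closing_dimension(d):
    a, b, c = d["a"], d["b"], d["c"]
    # law-of-cosines style coupling makes the tolerance chain nonlinear
    return math.sqrt(b**2 + c**2 - 2*b*c*math.cos(math.atan2(a, c)))

nominal = {"a": 40.0, "b": 120.0, "c": 90.0}   # mm, invented values

# Central-difference sensitivity of the closing dimension to each input
def sensitivities(f, x0, h=1e-6):
    s = {}
    for k in x0:
        hi = dict(x0); hi[k] += h
        lo = dict(x0); lo[k] -= h
        s[k] = (f(hi) - f(lo)) / (2*h)
    return s

s = sensitivities(closing_dimension, nominal)

# First-cut worst-case allocation: each of the n dimensions contributes
# equally to an assumed closing tolerance T, so t_i = T / (n * |dF/dx_i|).
T = 0.2  # mm, assumed closing-dimension tolerance
n = len(s)
for k, v in s.items():
    print(f"{k}: dF/d{k} = {v:+.3f}, start tolerance = +/-{T/(n*abs(v)):.4f} mm")
```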
562

Large Scale Solar Power Integration in Distribution Grids : PV Modelling, Voltage Support and Aggregation Studies

Samadi, Afshin January 2014 (has links)
Long-term support schemes for photovoltaic (PV) system installation have led to large numbers of PV systems being accommodated within load pockets in distribution grids. High penetrations of PV systems can cause new technical challenges, such as voltage rise due to reverse power flow during light-load, high-generation conditions. Therefore, new strategies are required to address these challenges. Moreover, because of these changes in distribution grids, a different response behavior of the distribution grid as seen from the transmission side can be expected. Hence, a new equivalent model of distribution grids with high penetration of PV systems needs to be developed for future power system studies. The thesis contributions lie in three parts. The first part deals with PV modelling: a non-proprietary PV model of a three-phase, single-stage PV system is developed in PSCAD/EMTDC and PowerFactory. Three different reactive power regulation strategies are incorporated into the models and their behavior is investigated on both simulation platforms using a distribution system with PV systems. In the second part, the voltage rise problem is remedied by the use of reactive power. However, with large numbers of PV systems in grids, unnecessary reactive power consumption by PV systems first increases total line losses, and second may jeopardize the stability of the network in the case of contingencies in the conventional power plants that supply reactive power. This thesis therefore develops novel schemes to reduce reactive power flows while still keeping the voltage within designated limits, via three different approaches: (i) decentralized voltage control to pre-defined set-points; (ii) a coordinated active power dependent (APD) voltage regulation Q(P) using local signals; (iii) a multi-objective coordinated droop-based voltage (DBV) regulation Q(V) using local signals. In the third part, gray-box load modeling is used to develop a new static equivalent model of a complex distribution grid with large numbers of PV systems embedded with voltage support schemes. In the proposed model, variations of voltage at the connection point drive variations of the model's active and reactive power. This model can simply be integrated into load-flow programs and replace the complex distribution grid, while still keeping the overall accuracy high. In conclusion, the thesis results demonstrate: i) rms-based simulations in PowerFactory provide results quite similar to those obtained with time-domain instantaneous values on the PSCAD platform; ii) decentralized voltage control to specific set-points through the PV systems in the distribution grid is fundamentally impossible due to the high level of voltage-control interaction and directionality among the PV systems; iii) the proposed APD method can regulate the voltage below the steady-state voltage limit and consumes less total reactive power than the standard characteristic Cosφ(P) proposed by the German Grid Codes; iv) the proposed optimized DBV method can directly address voltage and successfully regulate it to the upper steady-state voltage limit while causing minimum reactive power consumption as well as line losses; v) it is beneficial to treat PV systems as a separate entity when developing equivalents of distribution grids with a high density of PV systems.
/ The Doctoral Degrees issued upon completion of the programme are awarded by Comillas Pontifical University, Delft University of Technology and KTH Royal Institute of Technology. The degrees are official in Spain, the Netherlands and Sweden, respectively.
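As a rough sketch of the droop-based Q(V) idea described in the abstract (not the optimized DBV controller from the thesis), a piecewise-linear volt-var characteristic with an assumed dead band and assumed saturation points could look like this:

```python
def q_v_droop(v_pu, q_max=0.44, v_dead=(0.98, 1.02), v_sat=(0.94, 1.06)):
    """Piecewise-linear volt-var droop: inject Q below the dead band,
    absorb Q above it, saturating at +/- q_max (all per unit).
    The numeric break-points are illustrative assumptions only."""
    lo_sat, hi_sat = v_sat
    lo_dead, hi_dead = v_dead
    if v_pu <= lo_sat:
        return q_max                      # full injection (voltage support)
    if v_pu < lo_dead:
        return q_max * (lo_dead - v_pu) / (lo_dead - lo_sat)
    if v_pu <= hi_dead:
        return 0.0                        # dead band: no reactive exchange
    if v_pu < hi_sat:
        return -q_max * (v_pu - hi_dead) / (hi_sat - hi_dead)
    return -q_max                         # full absorption (voltage rise)

for v in (0.93, 0.97, 1.00, 1.04, 1.07):
    print(f"V = {v:.2f} pu -> Q = {q_v_droop(v):+.3f} pu")
```

The dead band keeps inverters from exchanging reactive power at healthy voltages, which is the mechanism behind the reduced line losses the abstract attributes to the local-signal schemes.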
563

Uncertainty and sensitivity analysis of a materials test reactor / Mogomotsi Ignatius Modukanele

Modukanele, Mogomotsi Ignatius January 2013 (has links)
This study was based on the uncertainty and sensitivity analysis of a generic 10 MW Materials Test Reactor (MTR). An uncertainty and sensitivity analysis methodology called Code Scaling, Applicability and Uncertainty (CSAU) was implemented. Although this methodology comprises 14 steps, only the following were carried out: scenario specification, nuclear power plant (NPP) selection, phenomena identification and ranking table (PIRT), selection of frozen code, provision of code documentation, determination of code applicability, determination of code and experiment accuracy, NPP sensitivity analysis calculations, combination of biases and uncertainties, and total uncertainty of the calculated scenario in the specific NPP. The thermal-hydraulic code Flownex® was used to model the reactor core only, in order to investigate the effects of the input parameters on the selected output parameters of the hot channel in the core. These output parameters were mass flow rate, coolant temperature, outlet pressure, fuel centreline temperature and cladding surface temperature. The PIRT process was used in conjunction with the sensitivity analysis results to select the input parameters that significantly influenced the selected output parameters. The input parameters with the largest effect on the selected output parameters were found to be the coolant flow channel width between the plates in the hot channel, the width of the fuel plates themselves in the hot channel, the heat generation in the fuel plate of the hot channel, the global mass flow rate, the global coolant inlet temperature, the coolant flow channel width between the plates in the cold channel, and the width of the fuel plates in the cold channel. The uncertainty of the input parameters was then propagated in Flownex using the Monte Carlo based uncertainty analysis function. From these results, the corresponding probability density function (PDF) of each selected output parameter was constructed. These functions were found to follow a normal distribution. / MIng (Nuclear Engineering), North-West University, Potchefstroom Campus, 2014
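The Monte Carlo propagation step described above can be illustrated independently of Flownex®. The sketch below uses a crude stand-in energy-balance model with invented input distributions; only the workflow (sample the uncertain inputs, run the model, characterize the output distribution) mirrors the study.

```python
import random
import statistics

# Stand-in single-channel model: a crude hot-channel coolant
# temperature rise, NOT the Flownex core model from the thesis.
def outlet_temperature(t_in, m_dot, q, cp=4180.0):
    return t_in + q / (m_dot * cp)        # steady-state energy balance

random.seed(1)
samples = []
for _ in range(20_000):
    t_in  = random.gauss(44.0, 0.5)       # inlet temperature [degC], assumed
    m_dot = random.gauss(0.30, 0.01)      # channel mass flow [kg/s], assumed
    q     = random.gauss(60_000, 2_000)   # channel heat input [W], assumed
    samples.append(outlet_temperature(t_in, m_dot, q))

mu = statistics.fmean(samples)
sd = statistics.stdev(samples)
print(f"outlet T: mean = {mu:.2f} degC, std = {sd:.2f} degC")
print(f"95% band ~ [{mu - 1.96*sd:.2f}, {mu + 1.96*sd:.2f}] degC (normal fit)")
```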
564

The Use of Simulation Methods to Understand and Control Pandemic Influenza

Beeler, Michael 20 November 2012 (has links)
This thesis investigates several uses of simulation methods to understand and control pandemic influenza in urban settings. An agent-based simulation, which models pandemic spread in a large metropolitan area, is used for two main purposes: to identify the shape of the distribution of pandemic outcomes, and to test for the presence of complex relationships between public health policy responses and underlying pandemic characteristics. The usefulness of pandemic simulation as a tool for assessing the cost-effectiveness of vaccination programs is critically evaluated through a rigorous comparison of three recent H1N1 vaccine cost-effectiveness studies. The potential for simulation methods to improve vaccine deployment is then demonstrated through a discrete-event simulation study of a mass immunization clinic.
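For readers unfamiliar with agent-based pandemic models, the sketch below shows the bare mechanics of one (random mixing, per-contact transmission, stochastic recovery). It is a toy stand-in with invented parameters, far simpler than the metropolitan-scale model used in the thesis.

```python
import random

# Toy agent-based SIR sketch of pandemic spread; parameters are invented.
random.seed(7)
N, P_TRANSMIT, CONTACTS, P_RECOVER = 10_000, 0.03, 12, 0.2

state = ["S"] * N
for seed in random.sample(range(N), 10):   # seed ten initial infections
    state[seed] = "I"

day, history = 0, []
while "I" in state:
    day += 1
    infected = [i for i, s in enumerate(state) if s == "I"]
    for i in infected:
        for j in random.choices(range(N), k=CONTACTS):   # random mixing
            if state[j] == "S" and random.random() < P_TRANSMIT:
                state[j] = "I"
        if random.random() < P_RECOVER:
            state[i] = "R"
    history.append(state.count("I"))

print(f"epidemic lasted {day} days; attack rate = {state.count('R')/N:.1%}")
print(f"peak prevalence = {max(history)/N:.1%}")
```

Repeating such a run many times is what produces the distribution of pandemic outcomes whose shape the thesis investigates.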
566

Análise de sensibilidade semi-analítica complexa geométrica e comportamento elastoplástico acoplado ao dano / Complex semianalytical sensitivity analysis applied to trusses with geometric nonlinearity and elastoplastic behavior coupled to damage

Haveroth, Geovane Augusto 30 November 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, a comprehensive study is developed on the application of the semianalytical sensitivity method using complex variables (SAC) to truss structures, considering geometrically and materially nonlinear behavior. Special emphasis is given to path-dependent problems and to the appropriate treatment for updating the internal variables, an aspect not found in the literature by the author and applicable to problems involving plasticity and damage. Previous studies show that for path-independent problems the SAC method offers great efficiency and storage economy, since the operations are performed at the element level. In this work it is verified that, when applied to path-dependent problems, the method retains the efficiency observed for the path-independent counterpart, at the expense of a slightly higher storage cost; the operations nevertheless remain at the element level. To carry out this study, the finite element formulation and the different methodologies for evaluating the sensitivity of structural responses, for both path-dependent and path-independent problems, are presented in detail, including the one proposed by the author. Finally, a comparative study between the different sensitivity methods is made for problems dominated by rigid-body rotation, problems involving discontinuous sensitivity coefficients, and cellular structures.
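The complex-variable trick underlying the SAC method can be shown on a single response function: perturbing a variable along the imaginary axis gives a derivative estimate free of the subtractive cancellation that limits finite differences. A minimal sketch, with a stand-in response function rather than the thesis's truss model:

```python
import cmath, math

# Illustrative response with a nonlinear dependence on a design variable x
# (a stand-in, not the thesis's truss formulation).
def response(x):
    return x**2 * cmath.exp(-x) + cmath.sin(x)

def complex_step(f, x, h=1e-200):
    """Complex-step derivative: df/dx ~ Im(f(x + i*h)) / h.
    No subtraction of nearly equal numbers, so h can be tiny."""
    return f(complex(x, h)).imag / h

def central_diff(f, x, h=1e-6):
    return (f(x + h).real - f(x - h).real) / (2 * h)

x0 = 1.3
exact = (2*x0 - x0**2) * math.exp(-x0) + math.cos(x0)
print(f"complex step : {complex_step(response, x0):.16f}")
print(f"central diff : {central_diff(response, x0):.16f}")
print(f"exact        : {exact:.16f}")
```

In the semianalytical setting the same imaginary perturbation is applied to the design variables inside the element-level residual, which is why the operations stay at the element level as the abstract notes.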
567

Probabilistic modelling of unsaturated slope stability accounting for heterogeneity

Arnold, Patrick January 2017 (has links)
The performance and safety assessment of geo-structures is strongly affected by uncertainty; that is, uncertainty due both to a subjective lack of knowledge and to objectively present, irreducible unknowns. Due to uncertainty in the non-linear variation of the matric-suction-induced effective stress as a function of the transient soil-atmosphere boundary conditions, the unsaturated state of the subsoil is generally not accounted for in a deterministic slope stability assessment. Probability theory, which accounts for uncertainties quantitatively rather than using "cautious estimates" of loads and resistances, may help to partly bridge the gap between unsaturated soil mechanics and engineering practice. This research investigates the effect of uncertainty in soil property values on the stability of unsaturated soil slopes. Two 2D Finite Element (FE) programs have been developed and implemented into a parallelised Reliability-Based Design (RBD) framework, which allows for the assessment of the failure probability, failure consequence and parameter sensitivity, rather than a deterministic factor of safety. Utilising the Random Finite Element Method (RFEM) within a Monte Carlo framework, multivariate cross-correlated random property fields have been mapped onto the FE mesh to assess the effect of isotropic and anisotropic moderate heterogeneity on the transient slope response, and thus on performance. The framework has been applied to a generic slope subjected to different rainfall scenarios. The performance was found to be sensitive to the uncertainty in the effective shear strength parameters, as well as in the parameters governing the unsaturated soil behaviour. The failure probability was found to increase most during prolonged rainfall events with a low precipitation rate. Nevertheless, accounting for the unsaturated state resulted in a higher slope reliability than when suction effects were not considered. In a heterogeneous deposit, failure is attracted to local zones of low shear strength, which, for an unsaturated soil, are a function both of the spatial variability of soil property values and of the soil-water dynamics, leading to a significant increase in the failure probability near the end of the main rainfall event.
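The RFEM workflow (generate a correlated random property field, analyse the realization, repeat, count failures) can be caricatured in one dimension. The sketch below uses an assumed AR(1) log-normal strength field and an infinite-slope style failure check; all numbers are invented and the thesis's 2D FE analysis is replaced by a one-line criterion.

```python
import math, random

# Monte Carlo over a spatially correlated shear strength field:
# 1-D AR(1) log-normal field along a slip surface, invented parameters.
random.seed(3)
N_CELLS, CORR = 50, 0.8                  # cells on slip surface, lag-1 correlation
MU_LN, SD_LN = math.log(30.0), 0.3       # ln undrained strength [kPa], assumed
DRIVING = 28.0                           # driving shear stress [kPa], assumed

def one_realization():
    z = random.gauss(0, 1)
    field = []
    for _ in range(N_CELLS):             # AR(1) keeps the marginal N(0,1)
        z = CORR * z + math.sqrt(1 - CORR**2) * random.gauss(0, 1)
        field.append(math.exp(MU_LN + SD_LN * z))
    # slope "fails" if average mobilized resistance < driving stress
    return sum(field) / N_CELLS < DRIVING

n_mc = 50_000
pf = sum(one_realization() for _ in range(n_mc)) / n_mc
print(f"estimated failure probability ~ {pf:.4f}")
```

Shortening the correlation length (lower CORR here) averages out weak zones and lowers the failure probability, which is the heterogeneity effect the RFEM studies quantify.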
568

Contribution à l'évaluation des risques liés au TMD (transport de matières dangereuses) en prenant en compte les incertitudes / Contribution to the risk assessment related to DGT (dangerous goods transportation) by taking into account uncertainties

Safadi, El Abed El 09 July 2015 (has links)
When an accidental event occurs, the process of technological risk assessment, in particular for Dangerous Goods Transportation (DGT), consists in assessing the level of potential risk of the impacted areas in order to size and quickly take prevention and protection actions (containment, evacuation, ...), with the objective of reducing and controlling the effects on people and the environment. The first issue of this work is therefore to evaluate the risk level of areas subjected to dangerous goods transportation. The quantification of the intensity of the occurring phenomena needed for this evaluation relies on effect models (analytical expressions or computer code). For the problem of toxic product dispersion, these models mainly contain input variables linked to different databases, such as exposure and meteorological data. The second issue lies in the uncertainties affecting some of the model inputs. To correctly map the danger zone, where the estimated risk level is deemed too high, it is necessary to identify the uncertainties on the inputs and propagate them through the effect model, so as to obtain a reliable evaluation of the risk level. The first phase of this work evaluates and propagates the uncertainty on the gas concentration induced by uncertain model inputs during its evaluation by dispersion models. Two approaches are used to model and propagate the uncertainties: a set-membership approach based on interval calculus, for analytical models, and the more classical probabilistic (Monte Carlo) approach, usable whether the dispersion model is an analytical expression or a computer code. The objective is to compare the two approaches and identify their advantages and disadvantages in terms of precision and computation time. To produce the risk maps, two dispersion models (Gaussian and SLAB) are used to evaluate the risk intensity in the contaminated area. The mapping itself is carried out with a probabilistic method (Monte Carlo), which solves an inverse problem on the effect model, and with a generic set-membership method, which formulates the problem as a constraint satisfaction problem (CSP) and solves it by set inversion. The second phase establishes a general methodology for producing the risk maps and improving performance in terms of computation time and precision. This methodology is based on three steps: first, a preliminary analysis of the effect models used; second, a new approach for uncertainty propagation that mixes the probabilistic and set-membership approaches, taking advantage of both, and is suited to any type of spatialized, static effect model; and finally, the production of the risk maps by inverting the effect models. The sensitivity analysis in the first step classically applies to probabilistic models. The validity of using Sobol-type indices for interval models is discussed, and a new, purely interval sensitivity index is proposed.
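The contrast between the two propagation approaches can be seen on a simplified Gaussian-plume centreline formula. In the sketch below the input boxes and dispersion coefficients are assumptions, and the monotonicity of this particular formula makes the exact interval bound available from corner points; the thesis treats far more general models.

```python
import math, random

# Ground-level centreline concentration of a simplified Gaussian plume.
# Simplified placeholder, not the thesis's Gaussian/SLAB configurations.
def concentration(Q, u, sigma_y, sigma_z):
    return Q / (math.pi * u * sigma_y * sigma_z)

# Uncertain inputs as intervals: release rate Q [kg/s] and wind speed u [m/s]
Q_lo, Q_hi = 0.8, 1.2
u_lo, u_hi = 2.0, 4.0
sy, sz = 25.0, 12.0                      # dispersion coefficients, fixed here

# Set-membership propagation: the model is monotone (increasing in Q,
# decreasing in u), so the exact output interval comes from corner points.
c_lo = concentration(Q_lo, u_hi, sy, sz)
c_hi = concentration(Q_hi, u_lo, sy, sz)
print(f"interval bound : [{c_lo:.3e}, {c_hi:.3e}] kg/m^3")

# Probabilistic propagation: Monte Carlo with uniform inputs on the same box
random.seed(11)
samples = sorted(
    concentration(random.uniform(Q_lo, Q_hi), random.uniform(u_lo, u_hi), sy, sz)
    for _ in range(100_000)
)
print(f"MC 95% range   : [{samples[2_500]:.3e}, {samples[97_500]:.3e}] kg/m^3")
# The interval result is guaranteed but wider; Monte Carlo is tighter but
# only statistically valid: the precision/guarantee trade-off the work compares.
```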
569

Méthode et outils pour l'identification de défauts des bâtiments connectés performants / Method and tools for fault detection in smart high-performance buildings

Josse, Rozenn 13 November 2017 (has links)
This thesis deals with the development of a new methodology for fault detection in smart high-performance buildings, in support of performance guarantees. We first place this work in the current energy context, highlighting the major role of buildings in reducing energy consumption. We then introduce the methodology, discussing the various techniques that could be used before making a final choice. The methodology consists of two main parts: the first reduces the uncertainties due to the occupants and the environment, and the second studies the gap between simulation and measurement through a sensitivity analysis coupled with a Bayesian algorithm. We implemented it in a tool named REFATEC and subjected the methodology to various tests under controlled conditions in order to evaluate its precision and computation time. This step showed that the methodology is effective but has some weaknesses when the studied period is in summer or when a fault is very localized. Finally, we confronted the methodology with a real case, addressing the many questions raised by the use of in-situ measurements for performance guarantees and fault detection, in particular the reliability of the measurements and the numerous remaining uncertainties that must be dealt with.
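The second block of the methodology (comparing simulation to measurement with a Bayesian algorithm) can be illustrated on a deliberately tiny example. Everything below is invented: a one-parameter static heat-loss model, synthetic measurements and a flat prior; it only mirrors the shape of the calibration step, not REFATEC itself.

```python
import math, random

# Calibrate one influential parameter (a stand-in envelope heat-loss
# coefficient UA [W/K]) against measured heating power.
def heating_power(ua, delta_t=20.0):
    return ua * delta_t                   # toy static model, W

random.seed(5)
UA_TRUE, NOISE = 120.0, 40.0
measured = [heating_power(UA_TRUE) + random.gauss(0, NOISE) for _ in range(30)]

# Grid-based Bayesian update with a flat prior over a plausible UA range
grid = [80.0 + 0.5 * i for i in range(161)]        # 80..160 W/K
def log_like(ua):
    return sum(-(m - heating_power(ua))**2 / (2 * NOISE**2) for m in measured)

logs = [log_like(ua) for ua in grid]
m = max(logs)
w = [math.exp(l - m) for l in logs]                # normalized later
z = sum(w)
post_mean = sum(ua * wi for ua, wi in zip(grid, w)) / z
post_sd = math.sqrt(sum((ua - post_mean)**2 * wi for ua, wi in zip(grid, w)) / z)
print(f"posterior UA ~ {post_mean:.1f} +/- {post_sd:.1f} W/K (true: {UA_TRUE})")
```

A preceding sensitivity analysis decides which few parameters deserve this treatment, which is why the two blocks are coupled in the methodology.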
570

Umělé neuronové sítě a jejich využití při extrakci znalostí / Artificial Neural Networks and Their Usage For Knowledge Extraction

Petříčková, Zuzana January 2015 (has links)
Title: Artificial Neural Networks and Their Usage For Knowledge Extraction Author: RNDr. Zuzana Petříčková Department: Department of Theoretical Computer Science and Mathematical Logic Supervisor: doc. RNDr. Iveta Mrázová, CSc., Department of Theoretical Computer Science and Mathematical Logic Abstract: The model of multi-layered feed-forward neural networks is well known for its ability to generalize well and to find complex non-linear dependencies in the data. On the other hand, it tends to create complex internal structures, especially for large data sets. Efficient solutions to demanding tasks currently dealt with require fast training, adequate generalization and a transparent and simple network structure. In this thesis, we propose a general framework for the training of BP-networks. It is based on the fast and robust scaled conjugate gradient technique. This classical training algorithm is enhanced with analytical or approximative sensitivity inhibition during training and enforcement of a transparent internal knowledge representation. Redundant hidden and input neurons are pruned based on internal representation and sensitivity analysis. The performance of the developed framework has been tested on various types of data with promising results. The framework provides a fast training algorithm,...
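The pruning idea (rank neurons or inputs by sensitivity and drop the insensitive ones) can be sketched for network inputs as follows. The weights are random stand-ins rather than a trained BP-network, and two inputs are deliberately attenuated so the ranking has something to find.

```python
import math, random

# Rank the inputs of a small feed-forward network by mean |dy/dx_i|
# over the data and drop the weakest; a sketch, not the thesis framework.
random.seed(2)
N_IN, N_HID = 5, 4
W1 = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_HID)]
W2 = [random.gauss(0, 1) for _ in range(N_HID)]
W1 = [[w if j < 3 else 0.01 * w for j, w in enumerate(row)] for row in W1]
# inputs 3 and 4 are nearly disconnected above, so they should rank lowest

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(v * hi for v, hi in zip(W2, h))

def input_sensitivities(data, h=1e-5):
    s = [0.0] * N_IN
    for x in data:
        for i in range(N_IN):
            xp = list(x); xp[i] += h
            xm = list(x); xm[i] -= h
            s[i] += abs(forward(xp) - forward(xm)) / (2 * h)
    return [v / len(data) for v in s]

data = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(200)]
sens = input_sensitivities(data)
keep = [i for i, v in enumerate(sens) if v > 0.1 * max(sens)]
print("mean |dy/dx_i|:", [f"{v:.3f}" for v in sens])
print("inputs kept after pruning:", keep)
```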
