  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

[en] PROBABILISTIC METHODS APPLIED TO SOIL SLOPE STABILITY ANALYSIS / [pt] MÉTODOS PROBABILÍSTICOS APLICADOS NA ANÁLISE DA ESTABILIDADE DE TALUDES EM SOLO

CARLOS NACIANCENO MEZA LOPEZ 28 February 2018
Slope stability analyses are usually carried out with deterministic methods, which aim at calculating a single safety factor while assuming that the shear strength parameters are representative and fixed. These methods fail to assess the uncertainties in soil properties and do not indicate how much each strength parameter influences the final value of the safety factor. Probabilistic methods, based on probability, reliability and statistical theories, allow the influence of these uncertainties on the deterministic calculations to be estimated, making it possible to predict more broadly the risk of failure associated with a geotechnical slope stability project. This dissertation studies the application of three probabilistic methods (Monte Carlo, Latin Hypercube, and Alternative Point Estimates) to the evaluation of slope stability, with the aid of limit equilibrium methods for the calculation of safety factors. In order to infer the impact of the random variables on the probability and reliability estimates, as well as the importance of an adequate quantification of the standard deviation values, the results obtained with probabilistic methods are compared with deterministic ones (method of slices, finite element method), and the main advantages, difficulties and limitations of their application to soil slope stability problems are discussed.
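The Monte Carlo approach described in this abstract can be sketched in a few lines; the factor-of-safety function below is a toy infinite-slope formula, and the strength-parameter distributions and their moments are hypothetical stand-ins, not values from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(42)

def safety_factor(c, phi_deg):
    # Toy infinite-slope factor of safety (purely illustrative):
    # FS = c / (gamma * h * sin(b) * cos(b)) + tan(phi) / tan(b)
    gamma, h, beta = 18.0, 5.0, np.radians(30.0)  # unit weight kN/m3, depth m, slope angle
    phi = np.radians(phi_deg)
    return c / (gamma * h * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)

# Assumed means/standard deviations for cohesion (kPa) and friction angle (deg)
n = 100_000
c = rng.normal(15.0, 3.0, n)
phi = rng.normal(28.0, 2.0, n)

fs = safety_factor(c, phi)
p_failure = np.mean(fs < 1.0)            # fraction of samples with FS below 1
beta_rel = (fs.mean() - 1.0) / fs.std()  # simple reliability index

print(f"mean FS = {fs.mean():.2f}, P(failure) = {p_failure:.4f}, beta = {beta_rel:.2f}")
```

A Latin Hypercube variant would replace the two `rng.normal` draws with stratified samples per variable, reducing the sample count needed for a stable failure-probability estimate.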
242

Otimizando o teste estrutural de programas concorrentes: uma abordagem determinística e paralela / Improving the structural testing of concurrent programs: a deterministic and parallel approach

Raphael Negrisoli Batista 27 March 2015
The testing of concurrent programs is an expensive task, mainly because a high number of synchronization sequences must be exercised in order to validate such programs. One of the most widely used techniques to test the communication and synchronization of concurrent programs is the automatic generation of different synchronization pairs, i.e. the generation of race variants. Race variants are generated from the trace files of a nondeterministic execution, and deterministic re-executions force the coverage of different synchronizations. This work approaches the problem comprehensively, with the main goal of reducing the response time of the structural testing of concurrent programs when different race variants are executed. There are three main contributions: (1) the generation of trace files and total/partial deterministic execution, (2) the automatic generation of race variants, and (3) the parallelization of the execution of race variants. Unlike other work in the literature, the proposed algorithms consider concurrent programs that interact simultaneously through message passing and shared memory, covering six primitives with distinct semantics: blocking and non-blocking point-to-point, one-to-all/all-to-one/all-to-all collectives, and semaphores. The algorithms were implemented at the application level in Java, are orthogonal to the programming language under test, and do not require system privileges to execute. These three contributions are described together with their algorithms, along with the results of the experiments performed during the validation and evaluation of each one. The results show that the proposed objectives were reached: from the tester's viewpoint, the response time of the structural testing of concurrent programs was reduced, while the coverage of concurrent programs using both paradigms increased through automated and transparent procedures. The experiments show speedups close to linear when the sequential and parallel versions of the algorithms are compared.
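The race-variant idea can be illustrated crudely: from a recorded trace of matched messages, alternative receive orders at each receiver are enumerated as candidate variants to be forced deterministically. The trace format and variant rule below are illustrative stand-ins, not the dissertation's algorithms:

```python
from itertools import permutations

# A toy trace: each entry is (receiver, sender) for one matched message
trace = [("r1", "s1"), ("r1", "s2"), ("r2", "s3")]

def race_variants(trace):
    """Enumerate alternative sender orders per receiver: every distinct
    reordering of messages arriving at the same receiver is a candidate
    race variant, excluding the order actually observed."""
    receivers = {}
    for recv, send in trace:
        receivers.setdefault(recv, []).append(send)
    variants = set()
    for recv, senders in receivers.items():
        for perm in permutations(senders):
            if list(perm) != senders:   # skip the observed synchronization order
                variants.add((recv, perm))
    return sorted(variants)

print(race_variants(trace))
```

Each variant would then be handed to a deterministic-execution layer that delays receives until the prescribed sender order is possible, which is where the bulk of the dissertation's machinery lives.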
243

Improving time series modeling by decomposing and analysing stochastic and deterministic influences / Modelagem de séries temporais por meio da decomposição e análise de influências estocásticas e determinísticas

Ricardo Araújo Rios 22 October 2013
This thesis presents a study on time series analysis, conducted under the following hypothesis: time series influenced by additive noise can be decomposed into stochastic and deterministic components whose individual models can be combined into a hybrid model of improved accuracy. This hypothesis was confirmed in two steps. In the first one, we developed a formal analysis using the Nyquist-Shannon sampling theorem, proving that Intrinsic Mode Functions (IMFs) extracted by the Empirical Mode Decomposition (EMD) method can be combined, according to their frequency intensities, to form stochastic and deterministic components. Based on this proof, we designed two approaches to decompose time series, which were evaluated in synthetic and real-world scenarios. Experimental results confirmed the importance of decomposing time series and individually modeling the deterministic and stochastic components, proving the second part of our hypothesis. Furthermore, we noticed that the individual analysis of both components plays an important role in detecting patterns and extracting implicit information from time series. In addition to these approaches, this thesis presents two new measurements. The first one evaluates the accuracy of time series modeling in forecasting observations; it was motivated by the fact that existing measurements only consider the perfect match between expected and predicted values, and it overcomes this issue by also analyzing the global time series behavior. The second measurement presented important results for assessing the influence of the deterministic and stochastic components on time series observations, supporting the decomposition process. Finally, this thesis presents a Systematic Literature Review, which collected important information on related work, and two new methods to produce surrogate data, which permit investigating the presence of linear and nonlinear Gaussian processes in time series, irrespective of the influence of nonstationary behavior.
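The frequency-based split described in this abstract can be sketched as follows, assuming the IMFs have already been extracted by EMD (here they are stood in by synthetic components); the spectral-centroid criterion and its cutoff are illustrative choices, not the rule derived in the thesis:

```python
import numpy as np

def spectral_centroid(x, fs=100.0):
    """Power-weighted mean frequency of a signal (Hz)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(np.sum(freqs * spec) / np.sum(spec))

def split_imfs(imfs, fs=100.0, cutoff_hz=5.0):
    """Sum high-frequency IMFs into a stochastic component and
    low-frequency IMFs into a deterministic one."""
    stochastic = np.zeros(imfs.shape[1])
    deterministic = np.zeros(imfs.shape[1])
    for imf in imfs:
        if spectral_centroid(imf, fs) > cutoff_hz:
            stochastic += imf
        else:
            deterministic += imf
    return stochastic, deterministic

# Synthetic stand-ins for IMFs: broadband noise plus a slow sine
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)                 # sampling rate fs = 100 Hz
noise = rng.normal(0, 1, t.size)           # high-frequency component
trend = np.sin(2 * np.pi * 0.5 * t)        # 0.5 Hz deterministic component

stoch, determ = split_imfs(np.vstack([noise, trend]))
```

Each component would then be modeled separately (e.g. a stochastic model for `stoch`, a deterministic one for `determ`) and the forecasts summed into the hybrid prediction.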
244

Análise de texturas dinâmicas baseada em sistemas complexos / Dynamic texture analysis based on complex system

Lucas Correia Ribas 27 April 2017
Dynamic texture analysis has become a growing and promising research area in computer vision in recent years. Dynamic textures are sequences of texture images (i.e. videos) that represent dynamic objects. Examples of dynamic textures include the evolution of bacterial colonies, the growth of human body tissues, a moving escalator, waterfalls, smoke, and metal corrosion processes. Although there is research related to the topic with promising results, most methods in the literature have limitations. Moreover, in many cases dynamic textures result from complex phenomena, making the characterization task even more challenging. This scenario calls for a paradigm of methods based on complexity. Complexity can be understood as a measure of the irregularity of dynamic textures, making it possible to measure the structure of the pixels and to quantify spatial and temporal aspects. In this context, this Master's research aims to study and develop methods for the characterization of dynamic textures based on complexity methodologies from the area of complex systems. In particular, two methodologies already used in computer vision problems are considered: complex networks and the partially self-repulsive deterministic walk. From these methodologies, three methods for characterizing dynamic textures were developed: (i) based on diffusion in networks; (ii) based on the partially self-repulsive deterministic walk; (iii) based on networks generated by the partially self-repulsive deterministic walk. The developed methods were applied to problems in nanotechnology and vehicle traffic, presenting promising results and contributing to the development of both areas.
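The partially self-repulsive deterministic walk mentioned here (often called the deterministic tourist walk) can be sketched on a 1-D signal; the memory size, start index, and two-neighbor topology are simplifying assumptions for illustration, while the thesis applies the walk to image and video data:

```python
import numpy as np

def tourist_walk(values, start, memory=2):
    """Deterministic walk: always move to the neighbor with the closest
    value that is not in the recent-memory window; stop when trapped."""
    path = [start]
    recent = [start]
    pos = start
    for _ in range(len(values)):           # hard cap on walk length
        neighbors = [i for i in (pos - 1, pos + 1)
                     if 0 <= i < len(values) and i not in recent]
        if not neighbors:
            break                          # walker is trapped: transient ends
        pos = min(neighbors, key=lambda i: abs(values[i] - values[pos]))
        path.append(pos)
        recent.append(pos)
        recent = recent[-memory:]          # partial self-repulsion window
    return path

signal = np.array([5, 1, 4, 2, 8, 3])
print(tourist_walk(signal, start=0))
```

Texture descriptors are then built from statistics of many such walks (e.g. transient and attractor lengths over all starting pixels), rather than from a single path as shown here.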
245

Contrôle optimal de modèles de neurones déterministes et stochastiques, en dimension finie et infinie. Application au contrôle de la dynamique neuronale par l'Optogénétique / Optimal control of deterministic and stochastic neuron models, in finite and infinite dimension. Application to the control of neuronal dynamics via Optogenetics

Renault, Vincent 20 September 2016
The aim of this thesis is to propose different mathematical neuron models that take Optogenetics into account, and to study their optimal control. We first define a controlled version of finite-dimensional, deterministic, conductance-based neuron models. We study a minimal-time problem for a single-input affine control system and its singular extremals, and implement a direct numerical method to observe the optimal trajectories and controls. Optogenetic control appears as a new way to assess the capability of conductance-based models to reproduce the characteristics of the membrane potential dynamics observed experimentally. We then define an infinite-dimensional stochastic model to take into account the stochastic nature of ion channel mechanisms and the propagation of action potentials along the axon. It is a controlled piecewise deterministic Markov process (PDMP) taking values in a Hilbert space. We define a large class of infinite-dimensional controlled PDMPs and prove that these processes are strongly Markovian. We address a finite-horizon optimal control problem, study the Markov decision process (MDP) embedded in the PDMP, and show the equivalence of the two control problems. We give sufficient conditions for the existence of an optimal control for the MDP, and thus for the initial PDMP as well. The theoretical framework is large enough to accommodate several variants of the infinite-dimensional stochastic optogenetic model. Finally, we study the extension of the model to a reflexive Banach space and, in a particular case, to a nonreflexive Banach space.
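For reference, the minimal-time setting for the single-input affine system mentioned above can be written out in the standard Pontryagin form; this is generic notation, not taken from the thesis, and Lie-bracket sign conventions vary between texts:

```latex
% Single-input control-affine system with bounded control
\dot{x} = f(x) + u\, g(x), \qquad |u| \le u_{\max},
\qquad H(x,p,u) = \langle p,\, f(x) + u\, g(x) \rangle .

% Maximizing H gives a bang-bang law driven by the switching function
\varphi(t) = \langle p(t),\, g(x(t)) \rangle, \qquad
u^*(t) = u_{\max} \operatorname{sign} \varphi(t) \quad \text{if } \varphi(t) \ne 0 .

% On a singular arc, \varphi \equiv 0; differentiating twice along the flow:
\dot{\varphi} = \langle p,\, [f,g](x) \rangle = 0, \qquad
\ddot{\varphi} = \langle p,\, [f,[f,g]](x) \rangle + u\, \langle p,\, [g,[f,g]](x) \rangle = 0,

% which, when \langle p, [g,[f,g]](x)\rangle \ne 0, yields the singular control
u_{\mathrm{sing}} = - \frac{\langle p,\, [f,[f,g]](x) \rangle}{\langle p,\, [g,[f,g]](x) \rangle} .
```

Studying where this denominator vanishes, and the associated Legendre-Clebsch condition, is the usual route to classifying the singular extremals of such systems.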
246

Utvärdering av stråldoser för personal verksamma inom diagnostisk nuklearmedicin / Evaluation of radiation doses to staff working in diagnostic nuclear medicine

Mohammed, Aya January 2018
Staff in nuclear medicine are exposed to radiation in different ways: mainly through the administration of radiopharmaceuticals, such as injection or waste handling, but also by being close to the patient after the radiopharmaceutical has been injected. Radiation carries a risk of damage at the cellular level; two types of effects occur when tissue is irradiated, deterministic and stochastic injuries. To reduce the risk of injury, the Swedish Radiation Safety Authority (SSM) has prescribed dose limits that may not be exceeded, including annual limits for the fingers. The purpose of the study was to survey radiation doses to personnel working in nuclear medicine. Finger doses were monitored for PET/CT personnel, with small thermoluminescent dosimeters (TLDs) placed on six fingertips. Radiation doses were measured during three tasks: unpacking of 18F, injection with an automatic injector, and manual drawing-up and injection of an 18F-labelled drug. To determine the risk of internal contamination of personnel performing ventilation studies with 99mTc aerosol, staff members were placed under a gamma camera and the number of counts detected was converted into activity through a phantom measurement (a cylinder filled with a known activity). In addition, the dose rate around patients injected with 18F-labelled drugs was measured with a dose rate instrument (Ram GENE Mark III) at seven measuring points and three distances. None of the results reached SSM's dose limits. The comparison between manual injection and the automatic injector showed a large variation in the results obtained. The dose rate measurements showed a very clear reduction each time the distance increased. The internal contamination measurements showed that the staff were not exposed to high radiation doses with respect to internal contamination; the minimum detectable activity was 0.0008 MBq. The study concludes that handling 18F gives higher radiation doses than 99mTc (200 keV), since it has a much higher photon energy (511 keV).
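The distance dependence reported in this study follows the familiar inverse-square behavior of an idealized point source; the sketch below is a generic illustration with made-up dose-rate values, not data from the study:

```python
def dose_rate_at(d_ref, rate_ref, d):
    """Inverse-square estimate: the dose rate falls with the square of the
    distance from an (idealized) point source of radiation."""
    return rate_ref * (d_ref / d) ** 2

# Assumed reference reading: 100 uSv/h at 0.5 m from the patient
for d in (0.5, 1.0, 2.0):
    print(f"{d:>4} m: {dose_rate_at(0.5, 100.0, d):6.2f} uSv/h")
```

Real measurements around a patient deviate from the pure inverse square (the source is extended and attenuated by tissue), which is why the study measured at several points and distances rather than extrapolating from one reading.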
247

SUBHARMONIC FREQUENCIES IN GUITAR SPECTRA

Bunnell, Leah M. 24 June 2021
No description available.
248

Stabilizace chaosu: metody a aplikace / The Control of Chaos: Methods and Applications

Hůlka, Tomáš January 2017
This thesis focuses on deterministic chaos and selected methods of chaos control. It briefly introduces deterministic chaos and presents commonly used tools for analyzing dynamical systems that exhibit chaotic behavior. A list of frequently studied chaotic systems is presented, followed by a description of chaos control methods and their optimization. The practical part is dedicated to the stabilization of two model systems and one real system using the described methods.
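The flavor of chaos control can be shown on the logistic map: wait until the chaotic orbit wanders near the unstable fixed point, then apply a small corrective nudge that cancels the linear instability. This is a generic OGY-style sketch, not one of the thesis's specific methods; the map, gain, and window are illustrative choices:

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def stabilize(x0, r=3.9, steps=2000, window=0.05):
    """Stabilize the unstable fixed point x* = 1 - 1/r of the chaotic
    logistic map using small state perturbations near x* only."""
    x_star = 1.0 - 1.0 / r
    k = r - 2.0                      # cancels f'(x*) = 2 - r in the linearization
    x = x0
    for _ in range(steps):
        e = x - x_star
        u = k * e if abs(e) < window else 0.0   # perturb only near the target
        x = logistic(x, r) + u
    return x, x_star

x_final, x_star = stabilize(0.3)
print(f"stabilized at {x_final:.6f}, target fixed point {x_star:.6f}")
```

With the control off the orbit is chaotic; once it enters the window, the residual error shrinks quadratically each step, so tiny actuation suffices, which is the central appeal of chaos control.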
249

Modélisation hydrologique déterministe pour les systèmes d'aide à la décision en temps réel : application au bassin versant Var, France / Deterministic hydrological modelling for real time decision support systems : application to the Var catchment, France

Ma, Qiang 14 March 2018
Water resources are commonly considered among the most important natural resources for social development, especially in supporting domestic, agricultural and industrial uses. Over the last decades, due to the increase of human activities such as urbanization and industrialization, the impacts of society on the natural environment have become more and more intense, and water problems have consequently become more complicated. To deal with such complex problems, it has been recognized since the 1970s, starting in the corporate world, that Decision Support Systems (DSS) offer clear advantages. Moreover, with the development of computer science and web techniques, DSSs are commonly applied to support local decision makers in managing regional natural resources, especially water resources. Hydrological modelling, which represents the catchment characteristics, plays a significant role in an Environmental Decision Support System (EDSS). Among the different kinds of models, a deterministic distributed hydrological model can describe the real condition of the study area in a more detailed and accurate way; the main obstacle limiting the application of this kind of model is the large amount of data required for its set-up. In this study of hydrological modelling assessment in the AquaVar project, a deterministic distributed model (MIKE SHE) is built for the whole Var catchment, for which little field information is available. Through a reasonable modelling strategy, several hypotheses are formulated to handle the missing-data problems at daily and hourly time intervals. The simulation is calibrated at both daily and hourly time scales from 2008 to 2011, a period that contains an extreme flood event in 2011. Because missing data affect both the model inputs and the observations, the evaluation of the calibration is based not only on statistical coefficients such as the Nash coefficient, but also on physical factors (e.g. peak values and total discharge). The calibrated model is able to describe the usual condition of the Var hydrological system, and also represents unusual phenomena in the catchment such as flood and drought events. The validation process, implemented from 2011 to 2014 at both daily and hourly time intervals, further confirms the good performance of the simulation in the Var. The MIKE SHE simulation of the Var is one of the main parts of the deterministic distributed modelling system in the EDSS of AquaVar. After calibration and validation, the model can be used to forecast the impacts of coming meteorological events (e.g. extreme floods) in the region and to produce the boundary conditions for the other deterministic distributed models in the system. The EDSS architecture, modelling strategy and modelling evaluation process presented in this research could be applied as a standard workflow for solving similar problems in other regions.
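The Nash coefficient used for calibration above is the Nash-Sutcliffe efficiency (NSE); a minimal sketch with made-up discharge values (not data from the Var catchment) follows:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model is
    no better than always predicting the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical daily discharges (m3/s)
obs = [12.0, 15.0, 40.0, 80.0, 55.0, 30.0, 20.0]
sim = [10.0, 16.0, 35.0, 75.0, 60.0, 28.0, 22.0]
print(f"NSE = {nse(obs, sim):.3f}")
```

Because the NSE is dominated by squared errors at high flows, pairing it with physical checks such as peak values and total discharge, as done in this study, guards against a model that scores well while misrepresenting the hydrograph.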
250

Investigation and forecasting drift component of a gas sensor

Chowdhury Tondra, Farhana January 2021
Chemical-sensor-based systems used for the detection, identification or quantification of various gases are complex in nature. Sensor response data, collected as multivariate time series signals, undergo a gradual change of the sensor characteristics (known as sensor drift) for several reasons. In this thesis, the drift component of a silicon carbide Field-Effect Transistor (SiC-FET) sensor was analyzed using time series methods. The data came from an experiment measuring the output response of the sensor to gases emitted by a test object at different temperatures. An Augmented Dickey-Fuller (ADF) test was carried out to analyze the sensor drift, revealing that a stochastic trend together with a deterministic trend characterized the drift components of the sensor; drift was already visible in the daily measurements and contributed to the total drift. A traditional Autoregressive Integrated Moving Average (ARIMA) model and a deep-learning-based Long Short-Term Memory (LSTM) algorithm were applied to forecast the sensor drift on a reduced dataset. However, reducing the data size degraded the forecasting accuracy and caused loss of information. Therefore, a careful data selection using only one temperature from the temperature cycle, instead of all time points, was chosen. Forecasts of sensor drift on this selected sensor-array data outperformed those on the reduced dataset, for both the traditional and the deep learning method.
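The distinction drawn above between a deterministic trend (removed by fitting, e.g. a line) and a stochastic trend (removed by differencing) can be illustrated as follows; the synthetic series is a stand-in for the SiC-FET response, not the thesis data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
t = np.arange(n)

deterministic = 0.02 * t                       # linear drift component
stochastic = np.cumsum(rng.normal(0, 0.1, n))  # random-walk drift component
series = deterministic + stochastic

# Deterministic trend: estimate and subtract a fitted line
slope, intercept = np.polyfit(t, series, 1)
detrended = series - (slope * t + intercept)

# Stochastic trend: first differencing makes the series stationary
differenced = np.diff(series)

print(f"fitted slope ~ {slope:.3f} (true deterministic slope 0.02)")
print(f"std of differenced series ~ {differenced.std():.3f}")
```

An ADF test distinguishes the two cases formally (unit root vs. trend stationarity); in practice, as in this thesis, a series can exhibit both components at once, and each must be handled before forecasting.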
