About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Completely Recursive Least Squares and Its Applications

Bian, Xiaomeng 02 August 2012
The matrix-inversion-lemma based recursive least squares (RLS) approach is of a recursive form and free of matrix inversion, and has excellent performance regarding computation and memory in solving the classic least-squares (LS) problem. It is important to generalize RLS to the generalized LS (GLS) problem, and it is also of value to develop an efficient initialization for any RLS algorithm. In Chapter 2, we develop a unified RLS procedure to solve the unconstrained/linear-equality (LE) constrained GLS. We also show that the LE constraint is in essence a set of special error-free observations, and further consider the GLS with implicit LE constraint in observations (ILE-constrained GLS). Chapter 3 treats the RLS initialization-related issues, including rank check, a convenient method to compute the involved matrix inverse/pseudoinverse, and resolution of underdetermined systems. Based on auxiliary observations, the RLS recursion can start from the first real observation, and possible LE constraints are also imposed recursively. The rank of the system is checked implicitly; if the rank is deficient, a set of refined non-redundant observations is determined instead. In Chapter 4, based on [Li07], we show that the linear minimum mean square error (LMMSE) estimator, as well as the optimal Kalman filter (KF) considering various correlations, can be calculated by solving an equivalent GLS using the unified RLS. In Chapters 5 and 6, an approach of joint state-and-parameter estimation (JSPE) in power systems monitored by synchrophasors is adopted, where the original nonlinear parameter problem is reformulated as two loosely-coupled linear subproblems: state tracking and parameter tracking. Chapter 5 deals with state tracking, which determines the voltages in JSPE; the dynamic behavior of voltages under possible abrupt changes is studied. Chapter 6 focuses on the subproblem of parameter tracking in JSPE, where a new prediction model for parameters with moving means is introduced. Adaptive filters are developed for the two subproblems, respectively, both based on the optimal KF accounting for various correlations. Simulations indicate that the proposed approach yields accurate parameter estimates and improves the accuracy of the state estimation, compared with existing methods.
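For context, the classic matrix-inversion-lemma RLS recursion that this work generalizes can be written in a few lines. The sketch below is a minimal illustration of the standard algorithm, with an assumed diffuse initialization of the inverse information matrix; it is not the unified GLS procedure developed in the thesis.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step via the matrix inversion lemma.
    theta: parameter estimate (n,); P: inverse information matrix (n, n);
    x: new regressor (n,); y: new scalar observation; lam: forgetting factor."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector -- no matrix inversion
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam
    return theta, P

# Streaming fit of y = 2*x1 - 3*x2; the large initial P acts as a diffuse prior.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
theta, P = np.zeros(2), 1e6 * np.eye(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = x @ true_w + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)   # close to [2, -3]
```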
12

Efficient Factor Graph Fusion for Multi-robot Mapping

Natarajan, Ramkumar 12 June 2017
"This work presents a novel method to efficiently factorize the combination of multiple factor graphs having common variables of estimation. The fast-paced innovation in the algebraic graph theory has enabled new tools of state estimation like factor graphs. Recent factor graph formulation for Simultaneous Localization and Mapping (SLAM) like Incremental Smoothing and Mapping using the Bayes tree (ISAM2) has been very successful and garnered much attention. Variable ordering, a well-known technique in linear algebra is employed for solving the factor graph. Our primary contribution in this work is to reuse the variable ordering of the graphs being combined to find the ordering of the fused graph. In the case of mapping, multiple robots provide a great advantage over single robot by providing a faster map coverage and better estimation quality. This coupled with an inevitable increase in the number of robots around us produce a demand for faster algorithms. For example, a city full of self-driving cars could pool their observation measurements rapidly to plan a traffic free navigation. By reusing the variable ordering of the parent graphs we were able to produce an order-of-magnitude difference in the time required for solving the fused graph. We also provide a formal verification to show that the proposed strategy does not violate any of the relevant standards. A common problem in multi-robot SLAM is relative pose graph initialization to produce a globally consistent map. The other contribution addresses this by minimizing a specially formulated error function as a part of solving the factor graph. The performance is illustrated on a publicly available SuiteSparse dataset and the multi-robot AP Hill dataset."
13

Improving Short-Range Cloud Forecasts in HARMONIE-AROME Through Cloud Initialization Using MESAN Cloud Data

Pyykkö, Joakim January 2019
Previous studies, such as van der Veen (2012) and White et al. (2017), have demonstrated the potential of using measurement-based cloud data to improve Numerical Weather Prediction (NWP) based cloud forecasts. This can be done through cloud initialization: a process of injecting cloud data after the regular data assimilation in an NWP model. The purpose of this study was to use cloud data from the Mesoscale Analysis system MESAN to investigate cloud initialization in the HARMONIE-AROME model system for improving short-range cloud forecasts. The cloud initialization method used was similar to that of van der Veen (2012), where specific humidities, temperatures, and hydrometeor concentrations are altered using information on cloud fractions, cloud base heights, and cloud top heights. Analyses of the MESAN input data as well as cloud initialization investigations were carried out. The input data analyses revealed significant differences in cloud fractions between MESAN and the background model field in MESAN. Overestimations of cloud fractions in MESAN over sea were caused by satellite data, particularly due to the inclusion of the fractional cloud category. Underestimations of cloud fractions over land were caused by limitations of the synoptic weather (SYNOP) stations in measuring clouds. Furthermore, larger differences between MESAN and SYNOP were found over Sweden and Finland compared to Norway, which may be tied to Norway having mostly manual SYNOP stations, while Sweden and Finland have mostly automatic stations. Shortcomings were found in the investigated cloud initialization method, involving a limit check on the specific humidity change, the cloud initialization being repeated for an unnecessarily large number of iterations, and the use of a sub-optimal profile of critical relative humidity. Using MUSC, a one-dimensional vertical column version of HARMONIE-AROME, to integrate forward in time revealed a large sensitivity to the use of forcing profiles and forcing time scales. Alterations made through cloud initialization were found to last over 12 h, with varying effects depending on the investigated height. A reasonably good agreement was found between MUSC results and results from the three-dimensional version of HARMONIE-AROME. The findings of this thesis point to the potential to further enhance the HARMONIE-AROME cloud initialization technique, through a revised MESAN cloud product and by addressing some flaws in the cloud initialization method. / In an operational weather model, various measurement data, such as temperature and atmospheric pressure, are incorporated at regular intervals. Cloud cover is not usually part of these cycles; instead, clouds are formed by the model from balances in the other physical fields. This project involved directly introducing cloud measurements from the weather analysis system MESAN into the weather model system HARMONIE-AROME through a method called cloud initialization. Improvements to short-range forecasts were of particular interest. MESAN is a system whose products merge a background field from a weather model run with various measurement data. In MESAN, cloud data come from three sources: the background field, satellite data, and synoptic weather station (SYNOP) data. Analyses of the MESAN input data and of the cloud initialization method were carried out. The input data analyses revealed overestimations of clouds in satellite data over sea and underestimations of clouds in SYNOP data over land.
For the satellite data this was caused by the inclusion of small-scale or very thin clouds, while for SYNOP it was due to limitations of the measurement methods. There was also a difference in SYNOP data quality between Sweden and Finland on the one hand and Norway on the other, which may be because most stations in Norway are manual while most in Sweden and Finland are automatic. The cloud initialization method consisted of extracting cloud base height and cloud top height data from MESAN, and then modifying humidity, temperature, and hydrometeors (such as cloud droplets and ice crystals) in HARMONIE-AROME according to the positions of the clouds. Shortcomings in the method were found. The initialization process was repeated a suboptimal number of times. A limit on how much the humidity may be modified changes during the initialization process and did not work as intended. Moreover, comparisons with radiosonde data indicate that the relative humidity thresholds at which clouds form were not initially set correctly. The effects of the method could last for over 12 hours, and this study points to further likely opportunities for improvement in HARMONIE-AROME through a revised version of the method and improved satellite products.
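As a rough illustration of what such a cloud initialization step does, the sketch below moistens a model column toward a critical relative humidity wherever an observation reports cloud between a base and top height. It is a schematic stand-in, not HARMONIE-AROME code; the Magnus saturation formula, the rh_crit value, and the max_dq cap are all assumptions made for the example.

```python
import numpy as np

def saturation_q(T, p):
    """Approximate saturation specific humidity (kg/kg) over liquid water,
    using the Magnus formula for saturation vapor pressure (T in K, p in Pa)."""
    es = 610.94 * np.exp(17.625 * (T - 273.15) / (T - 30.11))
    return 0.622 * es / (p - 0.378 * es)

def inject_cloud(q, T, p, z, base, top, rh_crit=0.85, max_dq=2e-3):
    """Raise specific humidity toward rh_crit * saturation inside an observed
    cloud layer [base, top] (m), with a cap on each increment."""
    q = q.copy()
    qs = saturation_q(T, p)
    in_cloud = (z >= base) & (z <= top)
    dq = np.clip(rh_crit * qs[in_cloud] - q[in_cloud], 0.0, max_dq)  # moisten only
    q[in_cloud] += dq
    return q

# A dry 10-level column; the observation says cloud between 1 and 2 km.
z = np.linspace(0, 5000, 10)                    # height (m)
T = 288.0 - 0.0065 * z                          # standard lapse rate (K)
p = 101325.0 * (T / 288.0) ** 5.256             # hydrostatic-like pressure (Pa)
q = 0.3 * saturation_q(T, p)                    # 30% relative humidity
print(inject_cloud(q, T, p, z, base=1000.0, top=2000.0) - q)  # increments
```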
14

Análise de uma aula de biologia com base nas interações discursivas / Biology class analysis based on discursive interactions

Vieira, João Luís de Abreu 10 September 2018
Dialogued expository lessons are those in which teachers converse with their students, acting as mediators of the lessons. Dialogue is one of the moments in which the teaching and learning process can take place. In this work, we analyzed a dialogued expository lesson that was delivered twice by the same first-year teacher at a regular public high school in the west zone of the city of São Paulo. In this lesson, exercises from an activity sheet on population dynamics were corrected. The sheet was part of an inquiry-based teaching sequence (SEI) that included several activities. The lessons were audio- and video-recorded, transcribed, and analyzed in terms of three aspects drawn from two methodological frameworks: the types of initiations/questions (MEHAN, 1979), and the interaction patterns and communicative approach (MORTIMER & SCOTT, 2002). These frameworks were combined with one another to make our analysis more complete. The interaction patterns were associated with the initiation types, and the patterns were also analyzed individually, whether triadic or open or closed non-triadic. The communicative approach was likewise analyzed individually. Our initial hypothesis was that metaprocess initiations would trigger longer and more interactive interaction sequences. However, this hypothesis was not confirmed: we found that simpler initiations, such as choice and product initiations, triggered the longer interaction sequences. Regarding the interaction patterns, most were closed non-triadic, which showed that the lesson was quite dynamic and interactive, with the interaction usually ending in an evaluation by the teacher. Regarding the communicative approach, the interactive/authoritative approach was present in most of the discourse, showing that the teacher's goal was to hear the students' points of view and guide them toward the point of view of school science. The interactive/dialogic approach was also present in some parts of the discourse, when the teacher did not evaluate the students' utterances. / Dialogued expository lessons are those in which teachers interact with their students, becoming mediators. This interaction is one of the moments in which the teaching and learning process can happen. This research analyzed such a lesson, given twice by the same teacher at a public school in the west zone of São Paulo. In the lesson, the teacher corrected questions from an activity form about population dynamics; the form was part of a didactic sequence comprising several activities. The lessons were recorded, transcribed, and analyzed with respect to three aspects of two methodological frameworks: initiation/question types (MEHAN, 1979), and interaction patterns and communicative approach (MORTIMER & SCOTT, 2002). These frameworks were combined to make our analysis more complete. The interaction patterns were associated with the initiation types, and the same patterns were analyzed individually in relation to triadic and non-triadic patterns; there were two types of non-triadic patterns, open and closed. The communicative approach was also analyzed individually. Our initial hypothesis was that metaprocess initiations would trigger longer and more interactive interaction sequences, but this was not confirmed: simpler initiations, such as the choice and product types, triggered the longer interaction sequences.
In relation to interaction patterns, most were closed non-triadic, which meant that the lesson was very dynamic and interactive, and the interaction usually ended with an evaluation by the teacher. With respect to the communicative approach, the interactive/authoritative approach was present in most of the discourse, showing that the teacher's aim was to listen to the students' points of view and lead them to the point of view of school science. The dialogic interactive approach was present in some parts of the discourse, when the teacher did not evaluate the students' utterances.
15

An investigation into fuzzy clustering quality and speed: fuzzy C-means with effective seeding

Stetco, Adrian January 2017
Cluster analysis, the automatic procedure by which large data sets can be split into similar groups of objects (clusters), has innumerable applications in a wide range of problem domains. Improvements in clustering quality (as captured by internal validation indexes) and speed (number of iterations until cost function convergence), the main focus of this work, have many desirable consequences. They can result, for example, in faster and more precise detection of illness onset based on symptoms, or provide investors with rapid detection and visualization of patterns in financial time series. Partitional clustering, one of the most popular ways of doing cluster analysis, can be classified into two main categories: hard (where the clusters discovered are disjoint) and soft (also known as fuzzy, where clusters are non-disjoint, or overlapping). In this work we consider how improvements in the speed and solution quality of the soft partitional clustering algorithm Fuzzy C-means (FCM) can be achieved through more careful and informed initialization based on data content. The resulting FCM++ approach samples starting cluster centers during the initialization phase in a way that disperses them through the data space; because the centers are well spread in the input space, both faster convergence times and higher quality solutions result. Moreover, we allow the user to specify a parameter indicating how far apart the cluster centers should be picked in the data space right at the beginning of the clustering procedure. We show FCM++'s superior behaviour in both convergence times and quality compared with existing methods, on a wide range of artificially generated and real data sets. We consider a case study where we propose a methodology based on FCM++ for pattern discovery on synthetic and real-world time series data. We discuss a method that uses both Pearson correlation and Multi-Dimensional Scaling to reduce data dimensionality, remove noise, and make the dataset easier to interpret and analyse. We show that by using FCM++ we can make a positive impact on quality (with the Xie-Beni index being lower in nine out of ten cases for FCM++) and speed (on average 6.3 iterations compared with 22.6) when clustering these lower-dimensional, noise-reduced representations of the time series. This methodology provides a clearer picture of the cluster analysis results and helps in detecting similarly behaving time series, whatever domain they come from. Further, we investigate the use of Spherical Fuzzy C-Means (SFCM) with the seeding mechanism used for FCM++ on news text data retrieved from a popular British newspaper. The methodology allows us to visualize and group hundreds of news articles based on the topics discussed within. The positive impact made by SFCM++ translates into a faster process (on average 12.2 iterations compared with the 16.8 needed by the standard SFCM) and a higher quality solution (with the Xie-Beni index lower for SFCM++ in seven out of ten runs).
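A minimal sketch of the idea: fuzzy c-means seeded k-means++-style, so that each new starting center is drawn far from those already chosen. The dispersion-control parameter described in the abstract is omitted here; this is plain distance-proportional sampling, a sketch of the general approach rather than the thesis's exact FCM++ implementation.

```python
import numpy as np

def spread_seeds(X, c, rng):
    """k-means++-style seeding: each new center is drawn with probability
    proportional to squared distance from the nearest already-chosen center."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(c - 1):
        d2 = np.min(((X[:, None, :] - np.array(centers)) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def fcm(X, c, m=2.0, tol=1e-5, max_iter=300, seed=0):
    """Fuzzy c-means with dispersed seeding; returns centers, memberships, iters."""
    rng = np.random.default_rng(seed)
    V = spread_seeds(X, c, rng)
    for it in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))      # memberships before normalization
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if np.linalg.norm(V_new - V) < tol:
            return V_new, U, it + 1
        V = V_new
    return V, U, max_iter

# Two well-separated blobs; convergence is typically fast from spread seeds.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
V, U, iters = fcm(X, c=2)
print(V, iters)
```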
16

Developing Modeling, Optimization, and Advanced Process Control Frameworks for Improving the Performance of Transient Energy-Intensive Applications

Safdarnejad, Seyed Mostafa 01 May 2016
The increasing trend of worldwide energy consumption emphasizes the importance of ongoing optimization of new and existing technologies. In this dissertation, two energy-intensive systems are simulated and optimized. Advanced estimation, optimization, and control techniques, such as a moving horizon estimator and a model predictive controller, are developed to enhance the profitability, product quality, and reliability of the systems. An enabling development is presented for the solution of complex dynamic optimization problems: an initialization approach for large-scale system models that enhances both the computational performance and the ability of the solver to converge to an optimal solution. One particular application of this approach is the modeling and optimization of a batch distillation column. For estimation of unknown parameters, an L1-norm method is utilized that is less sensitive to outliers than a squared-error objective. The results obtained from the simple model match the experimental data and the predictions of a more rigorous model. A nonlinear statistical analysis and a sensitivity analysis are also implemented to verify the reliability of the estimated parameters. The reduced-order model developed for the batch distillation column is computationally fast, reasonably accurate, and applicable for real-time control and online optimization purposes. As in estimation, an L1-norm objective function is applied for optimization of the column operation; the L1-norm permits explicit prioritization of multi-objective problems and adds only linear terms to the problem. Dynamic optimization of the column results in a 14% increase in the methanol product obtained from the column at 99% purity. In a second application of the methodology, results are presented from optimization of a hybrid system of cryogenic carbon capture (CCC) and power generation units. Cryogenic carbon capture is a novel technology for CO2 removal from power generation units, with superior features such as low energy consumption, large-scale energy storage, and fast response to fluctuations in electricity demand. Grid-level energy storage of the CCC process enables 100% utilization of renewable power sources while 99% of the CO2 produced from fossil-fueled power plants is captured. In addition, the energy demand of the CCC process is effectively managed by deploying the energy storage capability of this process. By exploiting time-of-day pricing, the profit obtained from dynamic optimization of this hybrid energy system offsets a significant fraction of the cost of construction of the cryogenic carbon capture plant.
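To illustrate why an L1-norm estimation objective resists outliers while adding only linear terms, here is a generic least-absolute-deviations fit posed as a linear program; this is a sketch of the general technique under synthetic data, not the dissertation's model or solver.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, y):
    """Least-absolute-deviations fit: min_theta ||y - A theta||_1,
    posed as an LP with slack variables t >= |residual| (linear terms only)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])        # minimize sum(t)
    # Encode  A theta - t <= y  and  -A theta - t <= -y.
    G = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    h = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m        # theta free, t >= 0
    res = linprog(c, A_ub=G, b_ub=h, bounds=bounds)
    return res.x[:n]

# Outlier-contaminated data: L1 stays near the true slope, least squares drifts.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 3.0 * x + 0.05 * rng.normal(size=50)
y[::10] += 5.0                                           # gross outliers
A = x[:, None]
print("L1 slope:", l1_fit(A, y))
print("LS slope:", np.linalg.lstsq(A, y, rcond=None)[0])
```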
17

An improved fully connected hidden Markov model for rational vaccine design

Zhang, Chenhong 24 February 2005
Large-scale, in vitro vaccine screening is an expensive and slow process, while rational vaccine design is faster and cheaper. As opposed to the empirical ways of designing vaccines in biology laboratories, rational vaccine design models the structure of vaccines with computational approaches. Building an effective predictive computer model requires extensive knowledge of the process or phenomenon being modelled. Given current knowledge about the steps involved in immune system responses, computer models currently focus on one or two of the most important and best-known steps, for example, the presentation of antigens by major histocompatibility complex (MHC) molecules. In this step, the MHC molecule selectively binds to some peptides derived from antigens and then presents them to the T-cell. One current focus in rational vaccine design is the prediction of peptides that can be bound by MHC.

Theoretically, predicting which peptides bind to a particular MHC molecule involves discovering patterns in known MHC-binding peptides and then searching for peptides which conform to these patterns in new antigenic protein sequences. According to previous work, hidden Markov models (HMMs), a machine learning technique, are among the most effective approaches for this task. Unfortunately, for computer models like HMMs, the number of parameters to be determined is larger than the number that can be estimated from the available training data.

Thus, heuristic approaches have to be developed to determine the parameters. In this research, two heuristic approaches are proposed. The first initializes the HMM transition and emission probability matrices by assigning biological meanings to the states. The second approach tailors the structure of a fully connected HMM (fcHMM) to increase specificity. The effectiveness of these two approaches is tested on two human leukocyte antigen (HLA) alleles, HLA-A*0201 and HLA-B*3501. The results indicate that these approaches can improve predictive accuracy. Further, the HMM implementation incorporating the above heuristics can outperform a popular profile HMM (pHMM) program, HMMER, in terms of predictive accuracy.
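For readers unfamiliar with the machinery, a fully connected HMM scores a candidate peptide with the forward algorithm; the toy sketch below illustrates this. The three states, the Dirichlet-sampled matrices, and the example 9-mer are placeholders standing in for the biologically informed initialization the thesis proposes.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids

def forward_log_prob(seq, pi, T, E):
    """Log-likelihood of a peptide under an HMM (scaled forward algorithm).
    pi: (S,) initial probs; T: (S, S) transitions; E: (S, 20) emissions."""
    obs = [AA.index(a) for a in seq]
    alpha = pi * E[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ T) * E[:, o]
        c = alpha.sum()               # rescale to avoid numerical underflow
        log_p += np.log(c)
        alpha /= c
    return log_p

# Toy fully connected 3-state HMM; each row of T and E is a distribution.
rng = np.random.default_rng(0)
S = 3
pi = np.full(S, 1.0 / S)
T = rng.dirichlet(np.ones(S), size=S)           # fully connected transitions
E = rng.dirichlet(0.5 * np.ones(20), size=S)    # skewed residue preferences
print(forward_log_prob("ALAKAAAAV", pi, T, E))  # score a hypothetical 9-mer
```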
19

Analyzing methods of mitigating initialization bias in transportation simulation models

Taylor, Stephen Luke 22 November 2010
All computer simulation models require some form of initialization before their outputs can be considered meaningful. Simulation models are typically initialized in a particular, often "empty" state and therefore must be "warmed up" for an unknown amount of simulation time before reaching a "quasi-steady state" representative of the system's performance. The portion of the output series that is influenced by the arbitrary initialization is referred to as the initial transient, a widely recognized problem in simulation analysis. Although several methods exist for removing the initial transient, none performs well in all applications. This research evaluates the effectiveness of several techniques for reducing initialization bias in simulations using the commercial transportation simulation model VISSIM®. The three methods ultimately selected for evaluation are Welch's Method, the Marginal Standard Error Rule (MSER), and the Volume Balancing Method currently used by the CORSIM model. Three model instances (a single intersection, a corridor, and a large network) were created to analyze the length of the initial transient under high- and low-demand scenarios. After presenting the results of each initialization method, advantages and criticisms of each are discussed, as well as issues that arose during implementation. The estimates of the extent of the initial transient are compared across methods and across the varying model sizes and volume levels. Based on the results of this study, Welch's Method is recommended for its consistency and ease of implementation.
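As a concrete reference point, the Marginal Standard Error Rule named above picks the warm-up length that minimizes a scaled variance of the retained data. Here is a minimal sketch on a synthetic output series; the exponential transient is an assumed illustration, not VISSIM output.

```python
import numpy as np

def mser_truncation(x, max_frac=0.5):
    """Marginal Standard Error Rule: choose the warm-up length d minimizing
    sum_{i>=d} (x_i - mean(x[d:]))**2 / (n - d)**2 over d < max_frac * n."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_d, best_stat = 0, np.inf
    for d in range(int(max_frac * n)):
        tail = x[d:]
        stat = np.sum((tail - tail.mean()) ** 2) / (n - d) ** 2
        if stat < best_stat:
            best_d, best_stat = d, stat
    return best_d

# Synthetic series: an initial bias decaying into steady-state noise.
rng = np.random.default_rng(2)
t = np.arange(2000)
x = 10.0 * np.exp(-t / 150) + rng.normal(0, 1, size=t.size)
print("estimated warm-up length:", mser_truncation(x))
```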
20

Examination of Initialization Techniques for Nonnegative Matrix Factorization

Frederic, John 21 November 2008
While much research has been done regarding different Nonnegative Matrix Factorization (NMF) algorithms, less time has been spent examining initialization techniques. In this thesis, four different initializations are considered. After a brief discussion of NMF, the four initializations are described and each one is independently examined, followed by a comparison of the techniques. Next, each initialization's performance is investigated with respect to changes in the size of the data set. Finally, a method by which smaller data sets may be used to determine how to treat larger data sets is examined.
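To make the question concrete, the sketch below runs the standard Lee-Seung multiplicative updates from two different starting points: random and a crude SVD-based seeding. Both the synthetic data and the SVD variant are illustrative assumptions, not the four initializations studied in the thesis.

```python
import numpy as np

def nmf_residual(V, W, H, iters=200):
    """Lee-Seung multiplicative updates for V ~ W @ H (Frobenius norm);
    returns the final reconstruction error. Updates preserve nonnegativity."""
    eps = 1e-10
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return np.linalg.norm(V - W @ H)

def svd_seed(V, r):
    """Crude SVD-based seeding: clamp the leading singular factors positive."""
    U, s, Vt = np.linalg.svd(V, full_matrices=False)
    W = np.maximum(U[:, :r] * np.sqrt(s[:r]), 1e-3)
    H = np.maximum(np.sqrt(s[:r])[:, None] * Vt[:r], 1e-3)
    return W, H

# Nonnegative rank-4 data plus a little noise.
rng = np.random.default_rng(3)
V = rng.random((60, 4)) @ rng.random((4, 40)) + 0.01 * rng.random((60, 40))
r = 4
print("random seed:", nmf_residual(V, rng.random((60, r)), rng.random((r, 40))))
print("SVD seed:   ", nmf_residual(V, *svd_seed(V, r)))
```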
