11

Método para a determinação do número de gaussianas em modelos ocultos de Markov para sistemas de reconhecimento de fala contínua / A new method for determining the number of Gaussians in hidden Markov models for continuous speech recognition systems

Yared, Glauco Ferreira Gazel 20 April 2006 (has links)
Advisor: Fabio Violaro / Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Nowadays, HMM-based speech recognition systems are used in many real-time applications, from cell phones to automobiles. In this context, one important aspect to be considered is the HMM complexity, which directly determines the system's computational load. Thus, in order to make the system feasible for practical purposes, it is worthwhile to optimize the HMM size subject to a minimum acceptable recognition performance. Furthermore, topology optimization is also important for reliable parameter estimation. Previous works in this area have used likelihood measures in order to obtain models with a better compromise between acoustic resolution and robustness. This work presents the new Gaussian Elimination Algorithm (GEA), which is based on a discriminative analysis and on an internal analysis, for determining the most suitable HMM complexity. The new approach is compared to the classical Bayesian Information Criterion (BIC), to an entropy-based method, to a discriminative method for increasing the acoustic resolution of the models, and to systems containing a fixed number of Gaussians per state. / Doctorate / Telecommunications and Telematics / Doctor in Electrical Engineering
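For readers unfamiliar with the BIC baseline mentioned in this abstract, the sketch below shows how the classical criterion selects a per-state Gaussian count; it is a generic illustration using scikit-learn's GaussianMixture on synthetic frames, not the thesis's GEA method or its actual acoustic data.

```python
# Minimal sketch of per-state Gaussian count selection via BIC
# (the classical baseline GEA is compared against; data is synthetic).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for acoustic feature vectors aligned to one HMM state.
frames = np.vstack([rng.normal(-2.0, 1.0, (200, 13)),
                    rng.normal(3.0, 0.5, (150, 13))])

def select_num_gaussians(X, max_components=8):
    """Return the mixture size that minimizes BIC on X."""
    best_k, best_bic = 1, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=0).fit(X)
        bic = gmm.bic(X)  # -2 log L + n_params * log(n_samples)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

print(select_num_gaussians(frames))
```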
12

Urban building energy modeling : A systematic evaluation of modeling and simulation approaches

Johari, Fatemeh January 2021 (has links)
Urban energy system planning can play a pivotal role in the transition of urban areas towards energy efficiency and carbon neutrality. With the building sector being one of the main components of the urban energy system, there is a great opportunity for improving energy efficiency in cities if the spatio-temporal patterns of energy use in the building sector are accurately identified. A bottom-up engineering energy model of buildings, known as an urban building energy model (UBEM), is an analytical tool for modeling buildings at the city level and evaluating scenarios for an energy-efficient built environment, not only at the building level but also at the district and city levels. Methods for developing a UBEM vary, yet most existing models take the same approach of incorporating an established building energy simulation engine into the core of the model. Due to difficulties in accessing building-specific information on the one hand, and the computational cost of UBEMs on the other, simplified building modeling is the most common way to make the modeling procedure more efficient. This thesis contributes to the state of the art and the advancement of urban building energy modeling by analyzing the capability of conventional building simulation tools to handle a UBEM and by suggesting modeling guidelines on the zoning configuration and level of detail of the building models. The results of this thesis indicate that, with a 16% relative difference from annual measurements, EnergyPlus is the most suitable software for handling large-scale building energy models efficiently. The results also show that, at the individual building level, a simplified single-zone model deviates from a detailed multi-zone model by a mean absolute percentage deviation (MAPD) of 6%. The thesis proposes that, at aggregated levels, simplified building models could contribute to the development of a fast but still accurate UBEM.
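As a side note on the deviation figure quoted above, mean absolute percentage deviation (MAPD) can be computed as in the minimal sketch below; the two series are hypothetical stand-ins for single-zone and multi-zone simulation outputs, not results from the thesis.

```python
# Minimal sketch of the MAPD comparison between a simplified single-zone
# model and a detailed multi-zone reference; both series are invented.
import numpy as np

def mapd(simplified, reference):
    """Mean absolute percentage deviation of `simplified` from `reference`."""
    simplified, reference = np.asarray(simplified), np.asarray(reference)
    return 100.0 * np.mean(np.abs(simplified - reference) / np.abs(reference))

multi_zone  = np.array([12.1, 15.4, 18.9, 14.2])   # e.g. kWh/m2 per month
single_zone = np.array([11.5, 16.2, 19.8, 13.6])
print(f"MAPD: {mapd(single_zone, multi_zone):.1f}%")
```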
13

An analysis of hydraulic, environmental and economic impacts of flood polder management at the Elbe River

Förster, Saskia January 2008 (has links)
Flood polders are part of the flood risk management strategy for many lowland rivers. They are used for the controlled storage of flood water so as to lower the peak discharges of large floods. Consequently, utilising a flood polder decreases the flood hazard in adjacent and downstream river reaches. Flood polders are usually dry storage reservoirs, typically characterised by agricultural activities or other land use of low economic and ecological vulnerability. The objective of this thesis is to analyse the hydraulic, environmental and economic impacts of the utilisation of flood polders in order to draw conclusions for their management. For this purpose, hydrodynamic and water quality modelling as well as an economic vulnerability assessment are employed in two study areas on the Middle Elbe River in Germany. One study area is an existing flood polder system on the tributary Havel, which was put into operation during the Elbe flood in summer 2002. The second study area is a planned flood polder, which is currently in the early planning stages. Furthermore, numerical models of different spatial dimensionality, ranging from zero- to two-dimensional, are applied in order to evaluate their suitability for hydrodynamic and water quality simulations of flood polders with regard to performance and modelling effort. The thesis concludes with overall recommendations on the management of flood polders, including operational schemes and land use. In view of future changes in flood frequency and the further increasing value of private and public assets in flood-prone areas, flood polders can be effective and flexible technical flood protection measures that contribute to successful flood risk management for large lowland rivers.
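To make the zero-dimensional end of the model hierarchy concrete, the following is a minimal level-pool sketch of a controlled flood polder intake that clips a flood peak; the hydrograph, threshold, and storage capacity are invented for illustration and do not come from the thesis.

```python
# Minimal sketch of a zero-dimensional (level-pool) flood polder model:
# the polder diverts river discharge above a threshold until its storage
# is full. All numbers are invented for illustration.
import numpy as np

dt = 3600.0                       # time step [s]
inflow = np.array([200, 400, 900, 1400, 1100, 600, 300], float)  # river [m3/s]
threshold = 800.0                 # diversion starts above this discharge
capacity = 5.0e6                  # polder storage volume [m3]

storage, downstream = 0.0, []
for q in inflow:
    divert = min(max(q - threshold, 0.0),        # excess discharge
                 (capacity - storage) / dt)      # remaining storage limits intake
    storage += divert * dt
    downstream.append(q - divert)

print(downstream)  # peak is clipped toward the threshold while storage lasts
```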
14

Entwicklung eines aggregierten Modells zur Simulation der Gewässergüte in Talsperren als Baustein eines Flussgebietsmodells / Development of an aggregated model for simulating water quality in reservoirs as a component of a river basin model

Siemens, Katja 20 January 2010 (has links) (PDF)
The large-scale extraction of lignite in Lusatia severely affected the water balance of the Spree river catchment in the past. The restoration and flooding of the opencast pits will place heavy demands on the existing surface waters for a long time, and the resulting artificial lakes have to be integrated into the riverine network. Coupling water management models with water quality models makes it possible to consider both the availability and distribution of the limited water resources in the catchment and the water quality that results from management decisions. This corresponds to the principles of the EU Water Framework Directive (2000) for integrated river basin management, which calls for a basin-wide consideration of the available resources, taking into account all influencing and influenced factors. Coupling models that describe systems of different sensitivity and complexity requires adapting their data structures and time scales. The main focus of this work was the development of simple, robust simulation tools for predicting water quality in the Bautzen and Quitzdorf reservoirs. The complex lake water quality model SALMO served as the basis. The model was first extended with simple algorithms so that it produced plausible results despite a strongly reduced data basis. Stochastically generated management scenarios, together with the corresponding water quality results simulated by the complex model, were then used as training data for an Artificial Neural Network (ANN). The ANNs trained for the two reservoirs are efficient black-box modules that reproduce the complex system behaviour of the deterministic model SALMO. Coupling the developed ANNs with the management model WBalMo makes it possible to evaluate management alternatives in terms of their consequences for water quality. ANNs are system-specific models that cannot be transferred to other aquatic systems; however, the methodology developed here represents a sound approach that can be applied to develop further aggregated water quality modules within integrated management models.
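The surrogate idea described above — train a small neural network on scenario/result pairs generated by a deterministic water quality model — can be sketched as follows. SALMO and WBalMo are not public Python libraries, so the deterministic model here is a hypothetical stand-in function and all variable names are assumptions.

```python
# Minimal sketch of the surrogate approach: train a small neural network
# on (management scenario -> simulated water quality) pairs produced by a
# deterministic model. `complex_model` is a stand-in for SALMO.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def complex_model(scenario):
    """Stand-in for the deterministic water quality model (e.g. SALMO)."""
    inflow, p_load, temperature = scenario
    return 0.8 * p_load / inflow + 0.05 * temperature + rng.normal(0, 0.01)

# Stochastically generated management scenarios (training data).
scenarios = rng.uniform([0.5, 10.0, 5.0], [5.0, 100.0, 25.0], size=(500, 3))
quality = np.array([complex_model(s) for s in scenarios])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(scenarios, quality)

# The cheap surrogate can now replace the complex model inside a
# management-model loop (WBalMo in the thesis).
print(surrogate.predict([[2.0, 50.0, 15.0]]))
```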
16

Exploring complexity metrics for artifact-centric business process models

Marin, Mike Andy 02 1900 (has links)
This study explores complexity metrics for business artifact process models described by the Case Management Model and Notation (CMMN). Process models are usually described using Business Process Management (BPM), which is a relatively mature discipline with a large number of practitioners. Over the last few decades a new way of describing data-intensive business processes has emerged in the BPM literature, for which traditional BPM is no longer adequate. This emerging method, used to describe more flexible processes, is called business artifacts with Guard-Stage-Milestone (GSM). The work on GSM influenced CMMN, which was created to fill a market need for more flexible case management processes for knowledge workers. Complexity metrics have been developed for traditional BPM models, such as the Business Process Model and Notation (BPMN). However, traditional BPM is not suitable for describing GSM or CMMN process models. Therefore, complexity metrics developed for traditional process models may not be applicable to business artifact process models such as CMMN. This study addresses this gap by exploring complexity metrics for business artifact process models using CMMN. The findings of this study have practical implications for the CMMN standard and for the commercial products implementing it. This research makes the following contributions:
• The development of a formal description of CMMN using first-order logic.
• An exploration of the relationship between CMMN and GSM and the development of transformation procedures between them.
• A comparison between the method complexity of CMMN and other popular process methods, including BPMN, Unified Modeling Language (UML) activity diagrams, and Event-driven Process Chains (EPC).
• A systematic literature review of complexity metrics for process models, conducted to inform the creation of CMMN metrics.
• The identification of a set of complexity metrics for the CMMN standard, which underwent theoretical and empirical validation.
This research advances the literature in the areas of method complexity, complexity metrics for process models, declarative processes, and research on CMMN by characterizing CMMN method complexity, identifying complexity metrics for CMMN, and exploring the relationship between CMMN and GSM. / School of Computing / Ph. D. (Computer Science)
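To give a flavor of what a simple size-based complexity metric over a CMMN case model could look like, here is a minimal sketch; the dictionary representation, the element weights, and the metric itself are hypothetical illustrations, not the validated metrics identified in the thesis.

```python
# Minimal sketch of a size-based complexity metric over a CMMN case model.
# The model representation and weights are hypothetical illustrations.
from collections import Counter

case_model = {
    "stages": ["Intake", "Assessment"],
    "tasks": ["Collect documents", "Review claim", "Approve claim"],
    "milestones": ["Claim approved"],
    "sentries": ["Review complete", "Documents received"],  # guards on stages/tasks
}

def size_complexity(model, weights=None):
    """Weighted element count: more (and more heavily guarded) elements
    yields a higher score under this toy metric."""
    weights = weights or {"stages": 1.0, "tasks": 1.0,
                          "milestones": 1.0, "sentries": 2.0}
    counts = Counter({kind: len(elems) for kind, elems in model.items()})
    return sum(weights.get(kind, 1.0) * n for kind, n in counts.items())

print(size_complexity(case_model))  # 2 + 3 + 1 + 2*2 = 10.0
```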
18

Sign of the Times : Unmasking Deep Learning for Time Series Anomaly Detection / Skyltarna på Tiden : Avslöjande av djupinlärning för detektering av anomalier i tidsserier

Richards Ravi Arputharaj, Daniel January 2023 (has links)
Time series anomaly detection has been a longstanding area of research with applications across various domains. In recent years, there has been a surge of interest in applying deep learning models to this problem domain. This thesis presents a critical examination of the efficacy of deep learning models in comparison to classical approaches for time series anomaly detection. Contrary to the widespread belief in the superiority of deep learning models, our research findings suggest that their performance may be misleading and the progress illusory. Through rigorous experimentation and evaluation, we reveal that classical models outperform their deep learning counterparts in various scenarios, challenging the prevailing assumptions. In addition to model performance, our study delves into the intricacies of the evaluation metrics commonly employed in time series anomaly detection. We uncover how they inadvertently inflate the performance scores of models, potentially leading to misleading conclusions. By identifying and addressing these issues, our research provides valuable insights for researchers, practitioners, and decision-makers in the field of time series anomaly detection, encouraging a critical reevaluation of the role of deep learning models and of the metrics used to assess their performance.
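One well-documented instance of the score inflation discussed above is the point-adjustment protocol, under which flagging any single point of a true anomaly segment credits the entire segment as detected. The sketch below demonstrates the effect on recall; it is a generic illustration and not necessarily the exact protocol analyzed in the thesis.

```python
# Minimal sketch of "point adjustment": flagging one point of a labeled
# anomaly segment counts the whole segment as detected, inflating recall.
import numpy as np

def point_adjust(pred, label):
    """Mark whole ground-truth segments as detected if any point was flagged."""
    pred, label = pred.copy(), label.astype(bool)
    # Find contiguous anomaly segments in the ground truth.
    edges = np.flatnonzero(np.diff(np.concatenate([[0], label, [0]])))
    for start, end in zip(edges[::2], edges[1::2]):
        if pred[start:end].any():
            pred[start:end] = 1
    return pred

labels = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0])
preds  = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0])  # hits 1 of 5 anomaly points

raw_recall = (preds & labels).sum() / labels.sum()      # 0.2
adj_recall = (point_adjust(preds, labels) & labels).sum() / labels.sum()  # 1.0
print(raw_recall, adj_recall)
```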
19

Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms

Vestin, Albin, Strandberg, Gustav January 2019 (has links)
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
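The causal-versus-non-causal distinction at the heart of this thesis can be illustrated with a scalar constant-velocity example: a Kalman filter uses only past measurements, while a Rauch-Tung-Striebel (RTS) smoother re-runs over the whole recorded sequence. The following is a generic textbook sketch, not the thesis's multi-sensor implementation; all noise parameters are invented.

```python
# Minimal sketch of causal Kalman filtering vs. non-causal RTS smoothing on
# a 1-D constant-velocity target; generic textbook example, invented numbers.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.1, 100
F = np.array([[1, dt], [0, 1]])        # constant-velocity motion model
H = np.array([[1.0, 0.0]])             # position-only measurement
Q = 0.01 * np.eye(2)                   # process noise covariance
R = np.array([[0.5]])                  # measurement noise covariance

# Simulate a ground-truth trajectory and noisy position measurements.
x_true = np.zeros((n, 2)); x_true[0] = [0.0, 1.0]
for k in range(1, n):
    x_true[k] = F @ x_true[k-1] + rng.multivariate_normal([0, 0], Q)
z = x_true[:, :1] + rng.normal(0, np.sqrt(R[0, 0]), (n, 1))

# Causal pass: standard Kalman filter.
xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))
xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(n):
    xp[k], Pp[k] = F @ x, F @ P @ F.T + Q           # predict
    K = Pp[k] @ H.T @ np.linalg.inv(H @ Pp[k] @ H.T + R)
    x = xp[k] + (K @ (z[k] - H @ xp[k])).ravel()    # update
    P = (np.eye(2) - K @ H) @ Pp[k]
    xf[k], Pf[k] = x, P

# Non-causal pass: RTS smoother runs backward over the recorded sequence.
xs, Ps = xf.copy(), Pf.copy()
for k in range(n - 2, -1, -1):
    G = Pf[k] @ F.T @ np.linalg.inv(Pp[k+1])
    xs[k] = xf[k] + G @ (xs[k+1] - xp[k+1])
    Ps[k] = Pf[k] + G @ (Ps[k+1] - Pp[k+1]) @ G.T

print("filter RMSE:  ", np.sqrt(np.mean((xf[:, 0] - x_true[:, 0])**2)))
print("smoother RMSE:", np.sqrt(np.mean((xs[:, 0] - x_true[:, 0])**2)))
```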
