  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
311

Deep Learning Empowered Unsupervised Contextual Information Extraction and its applications in Communication Systems

Gusain, Kunal 16 January 2023 (has links)
Master of Science / There has been an astronomical increase in data at the network edge due to the rapid development of 5G infrastructure and the proliferation of the Internet of Things (IoT). To improve the network controller's decision-making capabilities and the user experience, it is of paramount importance to analyze this data properly. However, transporting such a large amount of data from edge devices to the network controller requires large bandwidth and incurs increased latency, presenting a significant challenge to resource-constrained wireless networks. Using information processing techniques, one could effectively address this problem by sending only pertinent and critical information to the network controller. Nevertheless, finding critical information in high-dimensional observations is not an easy task, especially when large amounts of background information are present. This thesis proposes to extract critical but low-dimensional information from high-dimensional observations using an information-theoretic deep learning framework. We focus on two distinct problems where critical information extraction is imperative. In the first, we study feature extraction from video frames collected in a dynamic environment and showcase its effectiveness using a video game simulation experiment. In the second, we investigate the detection of anomalous signals in the spectrum by extracting and analyzing useful features from spectrograms. Using extensive simulation experiments based on a practical dataset, we conclude that our proposed approach is highly effective in detecting anomalous signals over a wide range of signal-to-noise ratios.
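As a rough illustration of the kind of low-dimensional feature extraction described above, the sketch below compresses spectrogram patches with a small autoencoder and scores anomalies by reconstruction error. It is not the thesis's information-theoretic framework; the network shape, dimensions, and function names are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class SpectrogramEncoder(nn.Module):
    """Compress a flattened spectrogram patch into a low-dimensional code."""
    def __init__(self, n_in=1024, n_code=16):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_code))
        self.decode = nn.Sequential(nn.Linear(n_code, 128), nn.ReLU(), nn.Linear(128, n_in))

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z), z

def anomaly_scores(model, patches):
    """Reconstruction error per patch; a high error suggests an unusual signal."""
    with torch.no_grad():
        recon, _ = model(patches)
        return ((patches - recon) ** 2).mean(dim=1)

# Toy usage: random "spectrogram patches" standing in for real data.
model = SpectrogramEncoder()
patches = torch.randn(32, 1024)
scores = anomaly_scores(model, patches)
```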
312

Advances in Document Layout Analysis

Bosch Campos, Vicente 05 March 2020 (has links)
[EN] Handwritten Text Segmentation (HTS) is a task within the Document Layout Analysis field that aims to detect and extract the different page regions of interest found in handwritten documents. HTS remains an active topic that has gained importance over the years, due to the increasing demand to provide textual access to the myriad handwritten document collections held by archives and libraries. This thesis considers HTS as a task that must be tackled in two specialized phases: detection and extraction. We see the detection phase fundamentally as a recognition problem that yields the vertical positions of each region of interest as a by-product. The extraction phase consists of calculating the best contour coordinates of the region using the position information provided by the detection phase. Our proposed detection approach allows us to address both higher-level regions (paragraphs, diagrams, etc.) and lower-level regions such as text lines. In the case of text line detection we model the problem to ensure that the system's yielded vertical position approximates the fictitious line that connects the lower part of the grapheme bodies in a text line, commonly known as the baseline. One of the main contributions of this thesis is that the proposed modelling approach allows us to include prior information regarding the layout of the documents being processed. This is performed via a Vertical Layout Model (VLM). We develop a Hidden Markov Model (HMM) based framework to tackle both region detection and classification as an integrated task and study the performance and ease of use of the proposed approach on many corpora. We review the modelling simplicity of our approach to process regions at different levels of information: text lines, paragraphs, titles, etc. We study the impact of adding deterministic and/or probabilistic prior information and restrictions via the VLM that our approach provides. Having a separate phase that accurately yields the detection position (baselines in the case of text lines) of each region greatly simplifies the problem that must be tackled during the extraction phase. In this thesis we propose to use a distance map that takes into consideration the grey-scale information in the image. This allows us to yield extraction frontiers which are equidistant to the adjacent text regions. We study how our approach scales its accuracy with the quality of the provided detection vertical position. Our extraction approach gives near-perfect results when human-reviewed baselines are provided. / Bosch Campos, V. (2020). Advances in Document Layout Analysis [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138397
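A minimal sketch of the extraction idea described in the abstract: each pixel is assigned to its nearest detected baseline via a distance transform, and the frontier where ownership changes separates adjacent text regions. It omits the grey-level weighting the thesis builds into its distance map; the function name and the plain Euclidean transform are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def extraction_frontier(shape, baseline_rows):
    """Assign each pixel to its nearest baseline; frontiers lie where ownership changes.

    `shape` is (height, width); `baseline_rows` are the detected vertical positions.
    Grey-level information, which the thesis folds into its distance map, is ignored here.
    """
    dists = []
    for r in baseline_rows:
        seeds = np.ones(shape, dtype=bool)
        seeds[r, :] = False                      # distance is measured from this baseline
        dists.append(distance_transform_edt(seeds))
    dists = np.stack(dists)                      # (n_baselines, height, width)
    owner = dists.argmin(axis=0)                 # nearest-baseline label per pixel
    frontier = np.diff(owner, axis=0) != 0       # rows where the label changes
    return owner, frontier

owner, frontier = extraction_frontier((200, 400), baseline_rows=[60, 140])
```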
313

Word Classes in Language Modelling

Erikson, Emrik, Åström, Marcus January 2024 (has links)
This thesis concerns itself with word classes and their application to language modelling. Considering a purely statistical Markov model trained on sequences of word classes in the Swedish language, different problems in language engineering are examined. The problems considered are part-of-speech tagging, evaluating text modifiers such as translators with the help of probability measurements and matrix norms, and lastly detecting different types of text using the Fourier transform of cross-entropy sequences of word classes. The results show that the word class language model is quite weak by itself but that it is able to improve part-of-speech tagging for 1- and 2-letter models. There are indications that a stronger word class model could aid 3-letter and potentially even stronger models. For evaluating modifiers, the model is often able to distinguish shuffled, and sometimes translated, text, and to assign a score for how much a text has been modified. Future work on this should, however, take better care to ensure large enough test data. The results from the Fourier approach indicate that a Fourier analysis of the cross-entropy sequence between word classes may allow the model to distinguish A.I.-generated text, as well as translated text, from human-written text. Future work on machine-learning word class models could be carried out to get further insights into the role of word class models in modern applications. The results could also give interesting insights in linguistic research regarding word classes.
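To make the word-class Markov model concrete, the following sketch (an assumed implementation, not taken from the thesis) estimates transition probabilities between integer-coded word classes, computes the per-transition cross-entropy sequence, and takes its Fourier spectrum as described above.

```python
import numpy as np

def fit_markov(tag_sequences, n_tags, alpha=1.0):
    """Estimate first-order transition probabilities between word classes (POS tags),
    with add-alpha smoothing. Tags are integer-coded 0..n_tags-1."""
    counts = np.full((n_tags, n_tags), alpha)
    for seq in tag_sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def cross_entropy_sequence(trans, seq):
    """Per-transition negative log-probability; its Fourier spectrum can then be inspected."""
    return np.array([-np.log2(trans[a, b]) for a, b in zip(seq[:-1], seq[1:])])

# Toy usage with three hypothetical word classes.
trans = fit_markov([[0, 1, 2, 0, 1], [1, 2, 0, 1, 2]], n_tags=3)
ce = cross_entropy_sequence(trans, [0, 1, 2, 2, 0])
spectrum = np.abs(np.fft.rfft(ce - ce.mean()))
```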
314

Predictive maintenance using the classification of time series

Siddik, Md Abu Bakar January 2024 (has links)
In today's industrial landscape, the pursuit of operational excellence has driven organizations to seek innovative approaches to ensure the uninterrupted functionality of machinery and equipment. Predictive maintenance (PM) provides a pivotal strategy to achieve this goal by detecting faults earlier and scheduling maintenance before the system enters a critical state. This thesis proposes a fault detection and diagnosis (FDD) method for predictive maintenance using particle filter resampling and a particle tracking technique. To develop this FDD method, the efficiency of the particle filter and the hidden Markov model in forecasting system state variables is studied on a hydraulic wind power transfer system with different noise levels and system faults. Furthermore, a particle tracker is developed to analyze the particle filter's resampling process and to study the particle selection process. The proposed FDD method is then developed and validated through three simulation tests employing system degradation models. Finally, the system's remaining useful life (RUL) is estimated for those simulation tests.
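The particle-filter resampling step that the thesis instruments with a particle tracker could look like the generic systematic-resampling routine below; the routine and its toy weights are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng()):
    """Systematic resampling: draw one uniform offset, then take evenly spaced
    points through the cumulative weights. Returns the indices of kept particles."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                      # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

# Toy usage: resampling concentrates copies on high-weight particles,
# which is the behaviour a particle tracker can monitor for fault signatures.
weights = np.array([0.05, 0.05, 0.6, 0.3])
kept = systematic_resample(weights / weights.sum())
```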
315

[en] GOAL-BASED INVESTMENTS: A DYNAMIC STOCHASTIC PROGRAMMING APPROACH / [pt] POLÍTICA DE INVESTIMENTO ORIENTADA A OBJETIVO DE LONGO PRAZO

ANDRE FREDERICO MACIEL GUTIERREZ 13 June 2024 (has links)
[en] The aim of this study is to develop an investment policy that minimizes the total contribution required to achieve a long-term financial objective. To achieve this goal, we developed a multi-stage optimization problem that integrates a Hidden Markov Model to capture the stochastic dynamics of asset returns. Unlike conventional portfolio optimization models, which are based on unrealistic assumptions, our approach is based on the goal-oriented investment framework, which provides a more practical and effective solution. In addition, by using the Hidden Markov Model in our optimization process, we obtain a more accurate estimate of the dynamics of asset returns, which translates into better investment decision-making. By using our model, the contribution required to achieve a desired financial goal is minimized through an investment policy that considers the current level of wealth and prevailing economic conditions.
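A small sketch of the regime-estimation ingredient, assuming the hmmlearn library is available: a two-state Gaussian hidden Markov model is fitted to a toy return series, and its transition matrix provides the conditioning information a multi-stage program would use. The data, state count, and library choice are assumptions; the thesis's full stochastic program is not shown.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Toy monthly returns standing in for real asset data.
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(0.01, 0.02, 120),    # calm regime
                          rng.normal(-0.01, 0.05, 60)])   # stressed regime

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
hmm.fit(returns.reshape(-1, 1))

regimes = hmm.predict(returns.reshape(-1, 1))      # most likely regime per month
next_regime_probs = hmm.transmat_[regimes[-1]]     # conditioning information for the next stage
```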
316

Data Transformation Trajectories in Embedded Systems

Kasinathan, Gokulnath January 2016 (has links)
Mobile phone tracking is the determination of the position or location of a mobile phone as it moves from one place to another. Location-based service solutions include mobile positioning systems that can be used for a wide array of consumer-demand services such as search, mapping, navigation, road traffic management and emergency-call positioning. The Mobile Positioning System (MPS) supports complementary positioning methods for 2G, 3G and 4G/LTE (Long Term Evolution) networks. A mobile phone is known as a UE (User Equipment) in LTE. A prototype method for live trajectory estimation of massive numbers of UEs in an LTE network is proposed in this thesis work. RSRP (Reference Signal Received Power) values and TA (Timing Advance) values are part of the LTE events for a UE. These specific LTE events can be streamed to a system from the eNodeB of LTE in real time by activating measurements on UEs in the network. AoA (Angle of Arrival) and TA values are used to estimate the UE position, with the AoA calculation performed using RSRP values. The calculated UE positions are filtered using a Particle Filter (PF) to estimate the trajectory. To obtain live trajectory estimation for massive numbers of UEs, the LTE event streamer is modelled to produce several task units with event data for the UEs. The task-level modelled data structures are scheduled across an Arm Cortex-A15 based MPCore with multiple threads. Finally, for massive UE live trajectory estimation, the IMSI (International Mobile Subscriber Identity) is used to maintain the hidden Markov requirements of the particle filter functionality while maintaining load balance across the 4 Arm A15 cores. This is demonstrated by serial and parallel performance engineering. Future work is proposed on decentralized task-level scheduling with a hash function for the IMSI, extension to more cores, and a concentric-circles method for improved AoA accuracy.
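A hedged sketch of the position fix described above: the timing advance gives an approximate range from the eNodeB and the angle of arrival gives a bearing. The 78.125 m-per-TA-unit constant and the function name are assumptions for illustration; the thesis's AoA computation from RSRP and the particle-filter smoothing are not shown.

```python
import math

TA_STEP_M = 78.125   # approximate one-way distance per LTE timing-advance unit (assumption)

def ue_position(enb_x, enb_y, ta, aoa_deg):
    """Rough UE position estimate: TA gives range from the eNodeB, AoA gives bearing.
    A particle filter would then smooth a sequence of such fixes into a trajectory."""
    r = ta * TA_STEP_M
    theta = math.radians(aoa_deg)
    return enb_x + r * math.cos(theta), enb_y + r * math.sin(theta)

# Toy usage: a UE 20 TA units away at a 30-degree bearing from the eNodeB.
x, y = ue_position(0.0, 0.0, ta=20, aoa_deg=30.0)
```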
317

  • 有記憶性信用價差期間結構模型 (A Term Structure Model of Credit Spreads with Memory)

李弘道 Unknown Date (has links)
In this thesis we develop a credit migration model with memory for the term structure of credit risk spreads. Our model incorporates stochastic default probability, stochastic recovery rate, and the correlation between the recovery rate and the term structure of risk-free interest rates. We derive valuation formulae for a credit spread option and a plain vanilla option with counterparty risk. This model provides greater variability in credit spreads, and it has properties in line with what has been observed in practice: (1) credit spreads show diffusion-like behavior even though the credit rating of the firm has not changed; (2) the model injects correlation between spreads and the term structure of interest rates; (3) the model enables firm-specific and security-specific variability of spreads to be accommodated; and (4) the model enables us to estimate the yield curves corresponding to the positive and negative trends of credit ratings and to match observed risky bond prices more precisely. This model is useful for pricing and hedging OTC derivatives with counterparty risk, for pricing and hedging credit derivatives, and for risk management. Key Words: Credit Risk, Credit Risk Spread, Markov Model, Credit Derivative.
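For intuition about the Markov rating-migration component, the sketch below simulates rating paths from a hypothetical one-year transition matrix. The matrix values are invented for illustration, and the sketch omits the memory (rating-trend) effect that is the thesis's central extension.

```python
import numpy as np

# Hypothetical one-year rating transition matrix over states (A, B, C, Default).
P = np.array([
    [0.90, 0.07, 0.02, 0.01],
    [0.05, 0.85, 0.07, 0.03],
    [0.01, 0.09, 0.80, 0.10],
    [0.00, 0.00, 0.00, 1.00],   # default is absorbing
])

def simulate_rating_path(start, years, rng=np.random.default_rng(0)):
    """Sample a rating trajectory by repeatedly drawing from the transition matrix."""
    path = [start]
    for _ in range(years):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

path = simulate_rating_path(start=1, years=10)   # start in rating class B
```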
318

Mathematical modelling and analysis of aspects of bacterial motility

Rosser, Gabriel A. January 2012 (has links)
The motile behaviour of bacteria underlies many important aspects of their actions, including pathogenicity, foraging efficiency, and ability to form biofilms. In this thesis, we apply mathematical modelling and analysis to various aspects of the planktonic motility of flagellated bacteria, guided by experimental observations. We use data obtained by tracking free-swimming Rhodobacter sphaeroides under a microscope, taking advantage of the availability of a large dataset acquired using a recently developed, high-throughput protocol. A novel analysis method using a hidden Markov model for the identification of reorientation phases in the tracks is described. This is assessed and compared with an established method using a computational simulation study, which shows that the new method has a reduced error rate and less systematic bias. We proceed to apply the novel analysis method to experimental tracks, demonstrating that we are able to successfully identify reorientations and record the angle changes of each reorientation phase. The analysis pipeline developed here is an important proof of concept, demonstrating a rapid and cost-effective protocol for the investigation of myriad aspects of the motility of microorganisms. In addition, we use mathematical modelling and computational simulations to investigate the effect that the microscope sampling rate has on the observed tracking data. This is an important, but often overlooked aspect of experimental design, which affects the observed data in a complex manner. Finally, we examine the role of rotational diffusion in bacterial motility, testing various models against the analysed data. This provides strong evidence that R. sphaeroides undergoes some form of active reorientation, in contrast to the mainstream belief that the process is passive.
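A minimal sketch of HMM-based segmentation of tracks into run and reorientation phases, assuming the hmmlearn library: a two-state Gaussian HMM is fitted to instantaneous speeds and the low-speed state is read as reorientation. The features, state count, and library are assumptions; the thesis's observation model differs.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Toy speed track: fast "runs" interrupted by a slow "reorientation" phase.
rng = np.random.default_rng(1)
speeds = np.concatenate([rng.normal(40, 5, 200),   # run (um/s)
                         rng.normal(5, 2, 20),     # reorientation
                         rng.normal(40, 5, 200)])

hmm = GaussianHMM(n_components=2, n_iter=200, random_state=0)
hmm.fit(speeds.reshape(-1, 1))
states = hmm.predict(speeds.reshape(-1, 1))        # 0/1 label per frame

# The state with the lower mean speed is interpreted as the reorientation phase.
reorient_state = int(np.argmin(hmm.means_.ravel()))
reorientations = states == reorient_state
```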
319

A comparative study between algorithms for time series forecasting on customer prediction : An investigation into the performance of ARIMA, RNN, LSTM, TCN and HMM

Almqvist, Olof January 2019 (has links)
Time series prediction is one of the main areas of statistics and machine learning. In 2018, two new algorithms, the higher-order hidden Markov model and the temporal convolutional network, were proposed and emerged as challengers to the more traditional recurrent neural network and long short-term memory network, as well as the autoregressive integrated moving average (ARIMA). In this study, most major algorithms, together with recent innovations for time series forecasting, are trained and evaluated on two datasets from the theme park industry with the aim of predicting the future number of visitors. To develop the models, the Python libraries Keras and Statsmodels were used. Results from this thesis show that the neural network models are slightly better than ARIMA and the hidden Markov model, and that the temporal convolutional network does not perform significantly better than the recurrent or long short-term memory networks, despite having the lowest prediction error on one of the datasets. Interestingly, the Markov model performed worse than all neural network models even when using no independent variables.
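As an example of one arm of such a comparison, the sketch below fits an ARIMA model with Statsmodels (one of the libraries named in the abstract) to toy visitor counts and measures forecast error on a hold-out split. The model order, data, and error metric are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Toy daily visitor counts with weekly seasonality standing in for the park data.
rng = np.random.default_rng(2)
t = np.arange(400)
visitors = 1000 + 200 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 50, t.size)

train, test = visitors[:350], visitors[350:]
fit = ARIMA(train, order=(2, 1, 2)).fit()          # order chosen arbitrarily for illustration
forecast = fit.forecast(steps=len(test))
mae = np.mean(np.abs(forecast - test))             # one error measure such a comparison could use
```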
320

A Novel Cloud Broker-based Resource Elasticity Management and Pricing for Big Data Streaming Applications

Runsewe, Olubisi A. 28 May 2019 (has links)
The pervasive availability of streaming data from various sources is driving today's enterprises to acquire low-latency big data streaming applications (BDSAs) for extracting useful information. In parallel, recent advances in technology have made it easier to collect, process and store these data streams in the cloud. For most enterprises, gaining insights from big data is immensely important for maintaining competitive advantage. However, the majority of enterprises have difficulty managing the multitude of BDSAs and the complex issues cloud technologies present, giving rise to the incorporation of cloud service brokers (CSBs). Generally, the main objective of the CSB is to maintain the heterogeneous quality of service (QoS) of BDSAs while minimizing costs. To achieve this goal, the cloud, although it has many desirable features, presents two major challenges for CSBs: resource prediction and resource allocation. First, most stream processing systems allocate a fixed amount of resources at runtime, which can lead to under- or over-provisioning as BDSA demands vary over time. Thus, obtaining an optimal trade-off between QoS violation and cost requires an accurate demand prediction methodology to prevent waste, degradation or shutdown of processing. Second, coordinating resource allocation and pricing decisions for self-interested BDSAs to achieve fairness and efficiency can be complex. This complexity is exacerbated by the recent introduction of containers. This dissertation addresses these cloud resource elasticity management issues for CSBs as follows. First, we provide two contributions to the resource prediction challenge: we propose a novel layered multi-dimensional hidden Markov model (LMD-HMM) framework for managing time-bounded BDSAs and a layered multi-dimensional hidden semi-Markov model (LMD-HSMM) to address unbounded BDSAs. Second, we present a container resource allocation mechanism (CRAM) for optimal workload distribution to meet the real-time demands of competing containerized BDSAs. We formulate the problem as an n-player non-cooperative game among a set of heterogeneous containerized BDSAs. Finally, we incorporate a dynamic incentive-compatible pricing scheme that coordinates the decisions of self-interested BDSAs to maximize the CSB's surplus. Experimental results demonstrate the effectiveness of our approaches.
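As a point of contrast with the game-theoretic mechanism described above, the sketch below shows a naive proportional split of CPU capacity across containerized streaming applications; the application names and figures are hypothetical, and the dissertation's CRAM mechanism and pricing scheme are not reproduced here.

```python
def proportional_allocation(capacity, predicted_demand):
    """Naive baseline: split a node's CPU capacity across containerized streaming
    applications in proportion to their predicted demand. The dissertation instead
    formulates allocation as an n-player non-cooperative game with pricing."""
    total = sum(predicted_demand.values())
    return {app: capacity * d / total for app, d in predicted_demand.items()}

# Toy usage with hypothetical application names and demand forecasts (cores).
shares = proportional_allocation(16.0, {"bdsa-a": 6.0, "bdsa-b": 2.0, "bdsa-c": 8.0})
```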
