1. Conditional Random People: Tracking Humans with CRFs and Grid Filters. Taycher, Leonid; Shakhnarovich, Gregory; Demirdjian, David; Darrell, Trevor (01 December 2005)
We describe a state-space tracking approach based on a Conditional Random Field (CRF) model, where the observation potentials are learned from data. We find functions that embed both state and observation into a space where similarity corresponds to L1 distance, and define an observation potential based on distance in this space. This potential is extremely fast to compute and, in conjunction with a grid-filtering framework, can be used to reduce a continuous state estimation problem to a discrete one. We show how a state temporal prior in the grid filter can be computed in a manner similar to a sparse HMM, resulting in real-time system performance. The resulting system is used for human pose tracking in video sequences.
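A minimal sketch of the kind of update the abstract describes, assuming learned embedding functions that map state and observation into a common space where similarity is L1 distance. The names `embed_state`, `observation_potential`, and the `scale` temperature are hypothetical, not taken from the paper:

```python
import numpy as np

def observation_potential(state_emb, obs_emb, scale=1.0):
    """Potential from L1 distance between embedded state and observation.
    `scale` is a hypothetical temperature parameter."""
    return np.exp(-scale * np.abs(state_emb - obs_emb).sum())

def grid_filter_step(grid_states, prior, transition, obs_emb, embed_state):
    """One grid-filter update: temporal prediction, then reweighting by the
    learned observation potential, over a discrete grid of pose states.

    grid_states : (N, d) array of discretized states
    prior       : (N,) probability vector over grid cells
    transition  : (N, N) sparse-HMM-style state temporal prior
    """
    predicted = transition.T @ prior                 # temporal prediction
    likelihood = np.array(
        [observation_potential(embed_state(s), obs_emb) for s in grid_states]
    )
    posterior = predicted * likelihood               # fold in observation
    return posterior / posterior.sum()               # renormalize

# Toy usage with an identity embedding on a 1-D grid
grid = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
T = np.full((5, 5), 0.2)                             # uniform temporal prior
p = grid_filter_step(grid, np.ones(5) / 5, T, np.array([0.5]), lambda s: s)
```

The appeal of this formulation is that the expensive image-likelihood evaluation collapses to an L1 distance in the embedded space, which is what makes the discrete filter cheap enough for real time.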
2. Prediction-based failure management for supercomputers. Ge, Wuxiang (January 2011)
The growing requirements of a diversity of applications necessitate the deployment of large and powerful computing systems, and failures in these systems may cause severe damage, ranging from loss of human life to economic harm. However, current fault tolerance techniques cannot meet the increasing requirements for reliability, so new solutions are urgently needed; research on proactive schemes is one direction that may offer better efficiency. This thesis proposes a novel proactive failure management framework. Its goal is to reduce failure penalties and improve fault tolerance efficiency in supercomputers running complex applications. The proposed proactive scheme builds on two core components: failure prediction and proactive failure recovery. More specifically, the failure prediction component is based on the assessment of system events and employs semi-Markov models to capture the dependencies between failures and other events for forecasting forthcoming failures. Furthermore, a two-level failure prediction strategy is described that not only estimates future failure occurrence but also identifies the specific failure categories. Based on accurate failure forecasting, a prediction-based coordinated checkpoint mechanism is designed to construct extra checkpoints just before each predicted failure occurrence, so that wasted computational time can be significantly reduced. Moreover, a theoretical model has been developed to assess the proactive scheme, enabling calculation of the overall wasted computational time.

The prediction component has been applied to industrial data from the IBM BlueGene/L system. The results show a marked improvement in prediction accuracy over three other well-known prediction approaches, and demonstrate that the semi-Markov-based predictor, which achieves a precision of 87.41% and a recall of 77.95%, outperforms the other predictors.
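The reported precision and recall have their usual definitions over predicted versus actual failure events. A small sketch of how such an evaluation might be computed; the matching rule here is simplified, since the thesis's exact evaluation windows are not given:

```python
def precision_recall(predicted, actual):
    """Precision and recall over sets of predicted and actual failure events.
    An exact match counts as a true positive here; a real evaluation would
    allow a lead-time window around each failure."""
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical events keyed by (node, hour)
predicted = {("R21-M0", 14), ("R21-M0", 20), ("R33-M1", 7)}
actual = {("R21-M0", 14), ("R33-M1", 7), ("R02-M1", 3)}
print(precision_recall(predicted, actual))  # (0.666..., 0.666...)
```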
3. Algorithms for Table Structure Recognition. Yosveni Escalona Escalona (26 June 2020)
Tables are widely adopted to organize and publish data. For example, the Web has an enormous number of tables, published in HTML, embedded in PDF documents, or simply available for download from Web pages. However, tables are not always easy to interpret because of the variety of features and formats used. Indeed, a large number of methods and tools have been developed to interpret tables. This dissertation presents the implementation of an algorithm, based on Conditional Random Fields (CRFs), to classify the rows of a table as header rows, data rows, or metadata rows. The implementation is complemented by two algorithms for table recognition in spreadsheet documents, based respectively on rules and on region detection. Finally, the dissertation describes the results and benefits obtained by applying the implemented algorithms to HTML tables obtained from the Web and to spreadsheet tables downloaded from the Web site of the Brazilian National Petroleum Agency.
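A hedged sketch of this kind of row classification with a linear-chain CRF, using the sklearn-crfsuite package (an assumption; the dissertation's own implementation and feature set are not specified here). Each table is a sequence of per-row feature dictionaries:

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def row_features(table, i):
    """Illustrative per-row features; real features would be richer."""
    row = table[i]
    return {
        "n_cells": len(row),
        "all_numeric": all(c.replace(".", "", 1).isdigit() for c in row if c),
        "is_first": i == 0,
        "is_last": i == len(table) - 1,
    }

def featurize(table):
    return [row_features(table, i) for i in range(len(table))]

# Tiny made-up training corpus: one table, labelled row by row
tables = [[["Name", "Age"], ["Alice", "30"], ["Bob", "25"], ["Source: census"]]]
labels = [["HEADER", "DATA", "DATA", "METADATA"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit([featurize(t) for t in tables], labels)
print(crf.predict([featurize(tables[0])]))
```

The linear-chain structure is what lets the model exploit row order, e.g. that header rows tend to precede data rows and metadata rows tend to close a table.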
4. Logarithmic opinion pools for conditional random fields. Smith, Andrew (January 2007)
Since their recent introduction, conditional random fields (CRFs) have been successfully applied to a multitude of structured labelling tasks in many different domains, including natural language processing (NLP), bioinformatics and computer vision. Within NLP itself we have seen many different application areas, such as named entity recognition, shallow parsing, information extraction from research papers and language modelling. Most of this work has demonstrated the need, directly or indirectly, to employ some form of regularisation when applying CRFs in order to overcome their tendency to overfit. To date, a popular method for regularising CRFs has been to fit a Gaussian prior distribution over the model parameters. In this thesis we explore other methods of CRF regularisation, investigating their properties and comparing their effectiveness. We apply our ideas to sequence labelling problems in NLP, specifically part-of-speech tagging and named entity recognition.

We start with an analysis of conventional approaches to CRF regularisation, and investigate possible extensions to such approaches. In particular, we consider choices of prior distribution other than the Gaussian, including the Laplacian and hyperbolic; we look at the effect of regularising different features separately, to differing degrees, and explore how we may define an appropriate level of regularisation for each feature; we investigate the effect of allowing the mean of a prior distribution to take on non-zero values; and we look at the impact of relaxing the feature expectation constraints satisfied by a standard CRF, leading to a modified CRF model we call the inequality CRF. Our analysis leads to the general conclusion that although there is some capacity for improvement of conventional regularisation through modification and extension, it is quite limited. Conventional regularisation with a prior is in general hampered by the need to fit a hyperparameter or set of hyperparameters, which can be an expensive process.

We then approach the CRF overfitting problem from a different perspective. Specifically, we introduce a form of CRF ensemble called a logarithmic opinion pool (LOP), where CRF distributions are combined under a weighted product. We show how a LOP has theoretical properties which provide a framework for designing new overfitting reduction schemes in terms of diverse models, and demonstrate how such diverse models may be constructed in a number of different ways. Specifically, we show that by constructing CRF models from manually crafted partitions of a feature set and combining them with equal weight under a LOP, we may obtain an ensemble that significantly outperforms a standard CRF trained on the entire feature set, and is competitive in performance with a standard CRF regularised with a Gaussian prior. The great advantage of the LOP approach is that, unlike the Gaussian prior method, it does not require us to search a hyperparameter space.

Having demonstrated the success of LOPs in the simple case, we then move on to consider more complex uses of the framework. In particular, we investigate whether it is possible to further improve the LOP ensemble by allowing parameters in different models to interact during training in such a way that diversity between the models is encouraged. Lastly, we show how the LOP approach may be used as a remedy for a problem from which standard CRFs can sometimes suffer: in certain situations, negative effects may be introduced to a CRF by the inclusion of highly discriminative features. An example is provided by gazetteer features, which encode a word's presence in a gazetteer. We show how LOPs may be used to reduce these negative effects, providing some insight into how gazetteer features may be more effectively handled in CRFs, and in log-linear models in general.
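The weighted-product combination has a simple closed form: the pooled distribution is proportional to the weighted product of the member distributions, p_LOP(y|x) ∝ Π_i p_i(y|x)^(w_i). A sketch in log space, enumerating candidate labelings explicitly; for sequence models the normalization would instead be computed with dynamic programming:

```python
import numpy as np

def logarithmic_opinion_pool(log_probs, weights):
    """Pool per-model distributions under a weighted product.

    log_probs : (n_models, n_labelings) array of log p_i(y|x)
    weights   : (n_models,) nonnegative weights; equal weights reproduce
                the simple case described in the abstract
    """
    pooled = weights @ log_probs          # sum_i w_i * log p_i(y|x)
    pooled -= pooled.max()                # numerical stability
    p = np.exp(pooled)
    return p / p.sum()                    # normalize over labelings

# Three models over four candidate labelings, combined with equal weight
log_p = np.log([[0.6, 0.2, 0.1, 0.1],
                [0.5, 0.3, 0.1, 0.1],
                [0.4, 0.4, 0.1, 0.1]])
print(logarithmic_opinion_pool(log_p, np.ones(3) / 3))
```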
5. Scanning file systems to find personal data: with linear chain conditional random field and regular expression. Afram, Gabriel (January 2018)
The new General Data Protection Regulation (GDPR) applies to all companies within the European Union from 25 May 2018. This means stricter legal requirements for companies that in any way store personal data. The goal of this project is therefore to make it easier for companies to meet the new legal requirements, by creating a tool that searches file systems and visually shows the user, in a graphical user interface, which files contain personal data. The tool uses named entity recognition with the linear chain conditional random field algorithm, a supervised learning method in machine learning, to find names and addresses in files. The different models are trained with different parameters, and the training is done using the Stanford NER library in Java. The models are tested on a test file containing 45,000 words, for which each model predicts the class of every word. The models are then compared with each other using precision, recall and F-score to find the best model. The tool also uses regular expressions to find emails, IP numbers, and social security numbers. The results for the final machine learning model show that it does not find all names and addresses, but this could be improved with more training data; that, however, requires a more powerful computer than the one used in this project. An analysis of the structure of the Swedish language would also be needed in order to choose the most appropriate parameters for training the model.
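The regular-expression side of the tool is straightforward to sketch; the patterns below are illustrative assumptions, not the thesis's exact expressions:

```python
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ip": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    # Swedish personnummer: YYMMDD-XXXX, optionally with century digits
    "personnummer": re.compile(r"\b(?:\d{2})?\d{6}[-+]?\d{4}\b"),
}

def scan_text(text):
    """Return every match of each personal-data pattern in `text`."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

sample = "Kontakta anna@example.se, server 192.168.1.10, pnr 850709-1234."
print(scan_text(sample))
# {'email': ['anna@example.se'], 'ip': ['192.168.1.10'],
#  'personnummer': ['850709-1234']}
```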
6. Porosity Estimation from Seismic Attributes with Simultaneous Classification of Spatially Structured Latent Facies. Luiz Alberto Barbosa de Lima (26 April 2018)
Estimating porosity in oil and gas reservoirs is a crucial and challenging task in the oil industry. A novel nonlinear model for porosity estimation is proposed, which handles sedimentary facies as latent variables. It successfully combines the concepts of conditional random fields (CRFs), transductive learning and ridge regression. The proposed Transductive Conditional Random Field Regression (TCRFR) takes seismic impedance volumes as input, conditioned on the porosity values from the available wells in the reservoir, and simultaneously and automatically provides as output both the porosity estimate and the facies classification for the whole volume. The method infers the latent facies states by combining the local, labeled and accurate porosity information available at well locations with the plentiful but imprecise impedance information available everywhere in the reservoir volume. That accurate information is propagated through the reservoir with conditional-random-field probabilistic graphical models, greatly reducing uncertainty. In addition, two new techniques are introduced as preprocessing steps for applying TCRFR in the extreme but realistic case where only a scarce amount of porosity-labeled samples is available in a few exploratory wells, a typical situation for geologists evaluating a reservoir in the exploration phase. Experiments on both synthetic and real-world data demonstrate the usefulness of the proposed methodology: it outperforms previous automatic estimation methods on synthetic data, and on real-world data it provides results comparable to the traditional geostatistics approach, which demands great manual effort from specialists.
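In schematic form, such a model couples a per-facies regression of porosity on impedance with a spatial smoothness prior over the latent facies. The factorization below is a hedged reconstruction from the abstract, not the thesis's exact formulation:

```latex
% Latent facies z_i on a spatial graph (V, E); impedance x_i; porosity y_i.
% Each facies value z_i selects its own (ridge-regularized) regression weights.
p(\mathbf{y}, \mathbf{z} \mid \mathbf{x}) \;\propto\;
  \prod_{i \in V} \mathcal{N}\!\bigl(y_i \,;\, \mathbf{w}_{z_i}^{\top}\mathbf{x}_i,\ \sigma_{z_i}^{2}\bigr)
  \prod_{(i,j) \in E} \psi(z_i, z_j)
```

Transduction enters because the unlabeled voxels' impedance values participate in the inference, so the sparse well porosity labels are propagated along the inferred facies field.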
7. Audiovisual segmentation and identification of persons in broadcast news. Gay, Paul (25 March 2015)
This PhD thesis is about speaker and face identification in broadcast news. The identification relies on names automatically extracted from the overlaid texts that are commonly used to announce speakers. Since those names appear sparsely in the video, identification performance depends on diarization performance, i.e. the capacity to detect and cluster together all the moments when a given person appears or speaks. However, intra-person variability in the video signal makes this task difficult. In the audio modality, this variability comes from overlapping speech and background noise. In the video modality, it consists mainly of head pose variations in studio scenes, with additional lighting variations (especially in report scenes). A context-aware model is proposed to optimize the diarization for better identification. First, a Conditional Random Field (CRF) model is proposed to perform the diarization jointly over the speech segments and the face tracks. Second, an identification system is designed, based on the combination of a naming CRF at cluster level and the diarization CRF developed earlier. In particular, context information extracted from the image background, together with the names extracted from the overlaid texts, is integrated into the diarization CRF at segment level. These elements improve the diarization and yield significant identification gains in studio scenes.
8. Extracting Particular Information from Swedish Public Procurement Using Machine Learning. Waade, Eystein (January 2020)
The Swedish procurement process has a yearly value of 706 billion SEK across approximately 18,000 procurements. Each process involves many documents, written in different formats, that must be understood by anyone wishing to submit a tender. With the development of new technology and the rise of machine learning, it is of great interest to investigate how this knowledge can be used to improve the way we procure. The goal of this project was to investigate whether Swedish public procurement documents in PDF format can be parsed and segmented into a structured format. The process was divided into three parts: pre-processing, annotation, and training/evaluation. Pre-processing was accomplished using an open-source PDF parser called pdfalto, which produces structured XML files with layout and lexical information. The annotation process consisted of generalizing a procurement into high-level segments applicable to different document structures, as well as finding relevant features. This was accomplished by identifying frequent document formats so that many documents could be annotated using deterministic rules. Finally, a linear-chain Conditional Random Field was trained and tested to segment the documents. The models showed high performance when tested on documents of the same format they were trained on. However, the data from five different documents were not sufficient or general enough for the model to make reliable predictions on a sixth format it had not seen before. The best result was a total accuracy of 90.6%, where two of the labels had an F1-score above 95% and the other two labels had F1-scores of 51.8% and 63.3%.
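A sketch of the pre-processing step, assuming pdfalto's ALTO XML output in which `<TextLine>` elements carry layout attributes and contain `<String CONTENT=...>` word elements (attribute names per the ALTO schema; treating them as an assumption here):

```python
import xml.etree.ElementTree as ET

def lines_with_layout(alto_path):
    """Extract each text line plus simple layout features from an ALTO file.
    Matching tags by suffix sidesteps the XML namespace."""
    lines = []
    for elem in ET.parse(alto_path).iter():
        if not (isinstance(elem.tag, str) and elem.tag.endswith("TextLine")):
            continue
        words = [s.get("CONTENT", "") for s in elem if s.tag.endswith("String")]
        lines.append({
            "text": " ".join(words),
            "hpos": float(elem.get("HPOS", 0)),  # indentation feature
            "vpos": float(elem.get("VPOS", 0)),  # vertical position feature
        })
    return lines

# Each line's feature dict can then feed a linear-chain CRF, with the
# document as the sequence and the high-level segments as the tag set.
```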
9. An AI-based System for Assisting Planners in a Supply Chain with Email Communication. Dantu, Sai Shreya Spurthi; Yadlapalli, Akhilesh (January 2023)
Background: Communication plays a crucial role in supply chain management (SCM), as it facilitates the flow of information, materials, and goods across the stages of the supply chain. In supply planning, each planner manages thousands of supply chain entities and spends much time reading and responding to high volumes of emails about part orders, delays, and backorders, which can lead to information overload and hinder workflow and decision-making. Streamlining communication and enhancing email management are therefore essential for optimizing supply chain efficiency.

Objectives: This study aims to create an automated system that summarizes email conversations between planners, suppliers, and other stakeholders. The goal is to increase communication efficiency by using Natural Language Processing (NLP) algorithms to extract important information from lengthy conversations. The study also explores the effectiveness of using conditional random fields (CRFs) to filter out irrelevant content during preprocessing.

Methods: We chose four advanced pre-trained abstractive dialogue summarization models (BART, PEGASUS, T5, and CODS) and two evaluation metrics (ROUGE and BERTScore) to compare their performance in summarizing our email conversations. We used a CRF to preprocess raw data from around 400 planner-supplier email conversations, extracting the important sentences in dialogue format and labelling them with specific dialogue act tags. We then manually summarized the 400 conversations and fine-tuned the four chosen models. Finally, we evaluated the models using the ROUGE and BERTScore metrics to determine their similarity to the human references.

Results: The performance of the summarization models improved significantly after fine-tuning on domain-specific data. The BART model achieved the highest scores: a ROUGE-1 of 0.65, a ROUGE-L of 0.56, and a BERTScore of 0.95. Additionally, CRF-based preprocessing proved crucial for extracting essential information and minimizing unnecessary detail ahead of summarization.

Conclusions: This study shows that advanced NLP techniques can make supply chain communication workflows more efficient. The BART-based email summarization tool we created showed great potential for providing important insights and helping planners deal with information overload.
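A minimal inference sketch with the Hugging Face transformers library, using a public BART summarization checkpoint as a stand-in; the thesis fine-tunes on its own ~400 annotated planner-supplier conversations, which are not available here, and the dialogue text below is hypothetical:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

name = "facebook/bart-large-cnn"  # public checkpoint standing in for the
tokenizer = BartTokenizer.from_pretrained(name)  # fine-tuned model
model = BartForConditionalGeneration.from_pretrained(name)

dialogue = (  # hypothetical conversation in the extracted dialogue format
    "Planner: We need 500 units of part A-113 by Friday.\n"
    "Supplier: Only 300 are in stock; the rest ship next week.\n"
    "Planner: Send the 300 now and confirm a date for the backorder."
)
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```

Fine-tuning would follow the standard sequence-to-sequence recipe, pairing each CRF-filtered dialogue with its manual summary as the training target.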
10. Email Thread Summarization with Conditional Random Fields. Shockley, Darla Magdalene (23 August 2010)
No description available.