221 |
Summarization of video sequences in the compressed domain (Δημιουργία περιλήψεων από ακολουθίες βίντεο στο συμπιεσμένο πεδίο)
Ρήγας, Ιωάννης, 08 December 2008
In this thesis we build a video summarization system. All the required steps (feature extraction, shot detection, keyframe extraction) are implemented in order to extract a set of frames (keyframes) that captures the semantic content of a video sequence. The video is processed directly in the compressed domain, specifically on MPEG-1/2 compressed files, so that results are produced in relatively little time and with relatively low demands on storage space and processing power.
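A minimal sketch of the shot-detection and keyframe-selection steps described above, assuming the MPEG DC coefficients have already been decoded into low-resolution grayscale "DC images" (one NumPy array per frame). The function names and the threshold value are illustrative, not taken from the thesis:

```python
import numpy as np

def dc_histogram(frame, bins=32):
    """Normalized intensity histogram of one DC image."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def detect_shots(dc_images, threshold=0.4):
    """Mark a shot boundary wherever consecutive histograms differ strongly."""
    boundaries = [0]
    for i in range(1, len(dc_images)):
        diff = np.abs(dc_histogram(dc_images[i]) - dc_histogram(dc_images[i - 1])).sum()
        if diff > threshold:  # large histogram change suggests a cut
            boundaries.append(i)
    return boundaries

def select_keyframes(dc_images, boundaries):
    """Pick the middle frame of each shot as its keyframe."""
    shots = zip(boundaries, boundaries[1:] + [len(dc_images)])
    return [(start + end) // 2 for start, end in shots]
```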
|
222 |
Representative Subsets for Preference Queries
Chester, Sean, 26 August 2013
We focus on the two overlapping areas of preference queries and dataset summarization. A (linear) preference query specifies the relative importance of the attributes in a dataset and asks for the tuples that best match those preferences. Dataset summarization is the task of representing an entire dataset by a small, representative subset. Within these areas, we address three important sub-problems, significantly advancing the state-of-the-art in each.
We begin with an investigation into a new formulation of preference queries, identifying a neglected and important subclass that we call threshold projection queries. While the literature typically constrains the attribute preferences (which are real-valued weights) so that they sum to one, we show that this introduces bias when querying by threshold rather than by cardinality. Using projection, rather than the inner product used in that literature, removes the bias. We then give algorithms for building and querying indices for this class of query, based, in the general case, on geometric duality and halfspace range searching, and, in an important special case, on stereographic projection.
In the second part of the dissertation, we investigate the monochromatic reverse top-k (mRTOP) query in two dimensions. Given a tuple and a dataset, an mRTOP query asks for the linear preference queries on the dataset whose results will include the given tuple. Towards this goal, we consider the novel scenario of building an index to support mRTOP queries, using geometric duality and plane sweep. We show theoretically and empirically that the index is quick to build, small on disk, and very efficient at answering mRTOP queries. As a corollary to these efforts, we defined the top-k rank contour, which encodes the k-ranked tuple for every possible linear preference query. This is tremendously useful in answering mRTOP queries, but also, we posit, of significant independent interest for its relation to myriad related linear preference query problems. Intuitively, the top-k rank contour is the minimum possible representation of the knowledge needed to identify the k-ranked tuple for any query, without a priori knowledge of that query.
We also introduce k-regret minimizing sets, a very succinct approximation of a numeric dataset. The purpose of the approximation is to represent the entire dataset by just a small subset that nonetheless contains a tuple within or near the top-k for any linear preference query. We show that the problem of finding k-regret minimizing sets, and indeed the problem in the literature that it generalizes, is NP-hard. Still, for the special case of two dimensions, we provide a fast, exact algorithm based on the top-k rank contour. For arbitrary dimension, we introduce a novel greedy algorithm based on linear programming and randomization that performs excellently in our empirical investigation. / Graduate
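As a rough illustration of the k-regret idea (here with k = 1), the sketch below estimates a subset's regret ratio by sampling linear preference weights and grows a representative subset greedily. It assumes positive attribute values and is only a simplified stand-in for the dissertation's algorithms:

```python
import numpy as np

def regret_ratio(data, subset, weights):
    """Worst case, over sampled weight vectors, of how far the subset's best
    score falls short of the full dataset's best score."""
    full = (weights @ data.T).max(axis=1)    # best score in the dataset
    sub = (weights @ subset.T).max(axis=1)   # best score in the subset
    return ((full - sub) / full).max()

def greedy_regret_set(data, size, num_weights=1000, seed=0):
    """Greedily add the tuple that most reduces the sampled regret ratio."""
    rng = np.random.default_rng(seed)
    weights = rng.random((num_weights, data.shape[1]))
    weights /= weights.sum(axis=1, keepdims=True)  # weights sum to one
    chosen = [int(np.argmax(data.sum(axis=1)))]    # seed with a strong tuple
    while len(chosen) < size:
        candidates = [i for i in range(len(data)) if i not in chosen]
        best = min(candidates,
                   key=lambda i: regret_ratio(data, data[chosen + [i]], weights))
        chosen.append(best)
    return sorted(chosen)
```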
|
223 |
Using citations for automatic summarization of the contributions of scientific articles (Utilisation des citations pour le résumé automatique de la contribution d'articles scientifiques)
Malenfant, Bruno, 12 1900
No description available.
|
224 |
Techniques for understanding execution traces of object-oriented programs (Técnicas para compreensão de rastros de execução de programas orientados a objetos)
Silva, Luciana Lourdes, 22 February 2011
Several approaches have been proposed to facilitate understanding the behavior of software systems. Perfective changes to well-established software systems are easier to perform when the development team has a solid understanding of the internals. It is also reasonable to assume that reusing an open-source system to incorporate new features into a new software product is more appealing than coding a product from scratch. Given this scenario, and given that poorly documented systems are not uncommon, there is no widely accepted approach that guides perfective maintenance for developers with little understanding of the system, or that recovers high-level information about both the structure and the behavior of large systems.
This work proposes a new approach to simplify comprehension tasks for object-oriented programs through the analysis of summarized execution traces. The approach rests on two techniques. The first separates the common parts of the source code from the specific parts related to the important features that guide the addition of a new one; an evaluation, conducted on real-world systems with meaningful evolution tasks, verifies whether summarized execution traces help the technique locate candidate code elements that can guide the development of a new feature. The second reconstructs high-level structural and behavioral diagrams from the analysis of summarized execution traces; precision and recall were evaluated on two third-party open-source systems, including the Tomcat web server. The results suggest that the approach is feasible for real-world, large-scale systems. / Mestre em Ciência da Computação (Master in Computer Science)
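As a hypothetical illustration of the kind of compaction that trace summarization relies on, one can run-length-compress consecutive repetitions of the same call (for example, loop iterations); the summarization used in the thesis is more elaborate:

```python
from itertools import groupby

def summarize_trace(events):
    """Collapse consecutive identical call events into (call, count) pairs."""
    return [(call, sum(1 for _ in group)) for call, group in groupby(events)]

trace = ["Parser.next", "Parser.next", "Parser.next", "Lexer.token", "Parser.next"]
print(summarize_trace(trace))
# [('Parser.next', 3), ('Lexer.token', 1), ('Parser.next', 1)]
```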
|
225 |
Reading assistance for texts in facilitated Portuguese: accessibility issues (Auxílio à leitura de textos em português facilitado: questões de acessibilidade)
Willian Massami Watanabe, 05 August 2010
The Web's vast capacity for providing information translates into multiple possibilities and opportunities for its users, who can retrieve content from anywhere on the planet regardless of where they are. These possibilities, however, are not extended to everyone: more than access to a computer and the Internet is required. Individuals with special needs (visual or cognitive disabilities, impaired mobility, among others) are barred from sites and web applications that misuse web technologies or publish content without due care for accessibility. One group excluded from this environment is people with reading difficulties (functionally illiterate users): the heavy use of text in interface design creates an accessibility barrier for those who cannot read fluently in their mother tongue, owing both to text length and to linguistic complexity. In this context, this work develops assistive technologies that facilitate the reading and comprehension of websites and web applications for functionally illiterate users.
These assistive technologies use natural language processing (NLP) techniques to maximize users' comprehension of the content: syntactic simplification, automatic summarization, lexical elaboration, and named entity recognition, applied with the goal of automatically adapting textual content available on the Web for users with low literacy levels. The work describes the accessibility characteristics and design principles for low-literacy users incorporated into the two resulting assistive technologies (Facilita and Educational Facilita). Its contributions include the identification of accessibility requirements for low-literacy users, an accessibility model for automating WCAG conformance, and the development of accessibility solutions in the user-agent layer of web applications.
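Of the NLP techniques listed, automatic summarization is the simplest to sketch. The following is a minimal frequency-based extractive summarizer of the kind such a pipeline might include; it is illustrative only and does not reproduce Facilita's actual components:

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Keep the sentences whose words are most frequent in the whole text,
    preserving their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                    reverse=True)
    return ' '.join(sentences[i] for i in sorted(ranked[:max_sentences]))
```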
|
226 |
Graph mining for object tracking in videos (Fouille de graphes pour le suivi d'objets dans les vidéos)
Diot, Fabien, 03 June 2014
Detecting and following the main objects of a video is a necessary step toward describing its content, for example to allow relevant indexing of multimedia content by search engines. Current object-tracking approaches either require the user to select the targets to follow, or rely on classifiers pre-trained to detect particular classes of objects, such as pedestrians or cars. Since these methods depend on user intervention or on prior knowledge of the content to process, they cannot be applied automatically to amateur videos such as those found on YouTube.
To solve this problem, we build on the hypothesis that, in videos with a moving background, the main objects appear more frequently than the background. Moreover, in a video, the topology of the visual elements composing an object is assumed to be consistent from one frame to the next. We represent each image of a video as a plane graph modeling its topology, and then search for substructures that appear frequently in the database of plane graphs thus created to represent each video. This approach lets us detect and track a video's main objects without supervision, based solely on pattern frequency. Our contributions span the fields of graph mining and object tracking. In the first field, our first contribution is an efficient plane-graph mining algorithm, named PLAGRAM, which exploits the planarity of the graphs and a new strategy for extending patterns. The next contributions introduce spatio-temporal constraints into the mining process to exploit the fact that, in a video, objects move only slightly from one frame to the next: we constrain the occurrences of a same pattern to be close in space and time by limiting the number of frames and the spatial distance separating them. We present two new algorithms: DYPLAGRAM, which uses the temporal constraint to limit the number of extracted patterns, and DYPLAGRAM_ST, which efficiently mines frequent spatio-temporal patterns from the datasets representing the videos. In the field of object tracking, our contributions are two approaches that use the spatio-temporal patterns to track the main objects in videos: the first is based on a search for the shortest path in a graph connecting the spatio-temporal patterns, while the second uses a clustering approach to regroup the patterns in order to follow the objects for a longer period of time. We also present two industrial applications of our method.
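The spatio-temporal constraint can be pictured with a small sketch: occurrences of one pattern, given as (frame, x, y) triples, are linked into chains only when consecutive occurrences are close in both time and space. The chaining strategy and parameter names below are illustrative assumptions, not the DYPLAGRAM_ST algorithm itself:

```python
def chain_occurrences(occurrences, max_gap=3, max_dist=50.0):
    """Greedily link (frame, x, y) occurrences of one pattern into chains
    whose consecutive links are close in time (<= max_gap frames) and in
    space (<= max_dist pixels)."""
    chains = []
    for occ in sorted(occurrences):          # sorted by frame number first
        frame, x, y = occ
        for chain in chains:
            f0, x0, y0 = chain[-1]
            close_in_time = 0 < frame - f0 <= max_gap
            close_in_space = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 <= max_dist
            if close_in_time and close_in_space:
                chain.append(occ)
                break
        else:                                # no chain accepts it: start one
            chains.append([occ])
    return chains
```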
|
227 |
Application of text topic and relationship mining for multi-document summarization: using movie reviews as an example (應用文本主題與關係探勘於多文件自動摘要方法之研究:以電影評論文章為例)
林孟儀, Unknown Date
The rapid development of information technology over the past decades has dramatically increased the amount of online information, and users spend considerable time searching, organizing, and reading it. This thesis therefore presents a method that applies text topic and relationship mining to generate a single summary from multiple documents, helping users grasp the theme of a document set quickly without reading the whole documents.
Using movie reviews as the example domain and drawing on the concept of article structure, the method divides a review summary into three parts: film data, film orientation (plot), and conclusion. Film data and conclusion paragraphs are identified by matching against a movie-domain thesaurus built for this thesis. The remaining paragraphs are assigned to the film orientation section and clustered into topics with Latent Dirichlet Allocation (LDA). A text relationship map, a network whose nodes are paragraphs and whose edges indicate that the corresponding paragraphs are related, is then used to extract the most important paragraph in each topic and order them. Finally, conjunctions are removed and pronouns are replaced with the names they refer to in each extracted paragraph, producing a bullet-point review summary.
The results for the three films tested show that the generated summaries cover more of the source content and improve the diversity of the summary; similarity against the best sample summaries increased by 10.8228%, 14.0123%, and 25.8142%, respectively. The method thus grasps the key content of the documents effectively and generates a more comprehensive summary, letting users aggregate movie reviews automatically into a concise digest that reduces the time spent searching and reading, so that they can quickly understand the information and opinions about a film.
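A minimal sketch of the LDA step using scikit-learn: cluster paragraphs by their dominant topic and pick the strongest paragraph of each topic as its representative. The text-relationship-map ordering and the movie-domain thesaurus are omitted, and all parameter values are illustrative:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def representative_paragraphs(paragraphs, n_topics=3, seed=0):
    """Return indices of one representative paragraph per LDA topic,
    in original document order."""
    counts = CountVectorizer(stop_words='english').fit_transform(paragraphs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    doc_topic = lda.fit_transform(counts)     # paragraph-by-topic weights
    dominant = doc_topic.argmax(axis=1)
    reps = []
    for topic in range(n_topics):
        members = np.where(dominant == topic)[0]
        if len(members):                      # a topic may have no paragraphs
            reps.append(int(members[np.argmax(doc_topic[members, topic])]))
    return sorted(reps)
```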
|
228 |
Processing of User Reviews (Zpracování uživatelských recenzí)
Cihlářová, Dita, January 2019
People very often buy goods on the Internet that they cannot see or try, so they rely on the reviews of other customers. There may, however, be too many reviews for a person to process quickly and comfortably. The aim of this work is to offer an application that can recognize, in Czech reviews, which features of a product are commented on most and whether each comment is positive or negative. The results can save e-shop customers a lot of time and provide interesting feedback to the manufacturers of the products.
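A toy sketch of the feature-and-polarity idea, using English words for readability: count positive and negative words that co-occur in a sentence with a product feature. The lexicons are invented placeholders; the thesis itself works on Czech text with more sophisticated methods:

```python
from collections import defaultdict

POSITIVE = {"great", "sturdy", "fast", "beautiful"}    # placeholder lexicons
NEGATIVE = {"broken", "slow", "flimsy", "noisy"}
FEATURES = {"battery", "screen", "delivery", "price"}

def aspect_sentiment(reviews):
    """Map each product feature to [positive, negative] mention counts."""
    counts = defaultdict(lambda: [0, 0])
    for review in reviews:
        for sentence in review.lower().split('.'):
            words = set(sentence.split())
            for feature in FEATURES & words:
                counts[feature][0] += len(POSITIVE & words)
                counts[feature][1] += len(NEGATIVE & words)
    return dict(counts)

print(aspect_sentiment(["The battery is great. The screen is flimsy."]))
# {'battery': [1, 0], 'screen': [0, 1]}
```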
|
229 |
Financial Analysis of the Selected Firm (Finanční analýza vybrané firmy)
Valihrachová, Lea, January 2013
At a time of ongoing global economic crisis, the successful management of a company depends on the ability and experience of its managers and on the quality of the information on which decisions are based. Financial analysis provides a number of methods for evaluating the situation a company is in, in order to supply the basis for effective business management. The result is a healthy company with a positive outlook for the future.
|
230 |
Evaluation of the Financial Situation in the Company and Proposals to Its Improvement (Hodnocení finanční situace podniku a návrhy na její zlepšení)
Koláčná, Veronika, January 2015
This thesis assesses the financial situation of a company focused on surface finishing over the period 2009-2013. The thesis is divided into several parts. The first part discusses the theoretical methods for analyzing and rating a company. The second part is practical and applies the theoretical knowledge from the previous section. The last chapter builds on the results of the analysis and, according to these results, proposes solutions that can lead to an improvement of the company's situation.
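Both of these theses rest on standard ratio indicators of financial analysis. A minimal sketch of how such ratios are computed from summary statement figures (the field names are illustrative assumptions):

```python
def financial_ratios(figures):
    """Common liquidity, leverage, and profitability ratios computed from a
    dict of balance-sheet and income-statement figures."""
    return {
        "current_ratio": figures["current_assets"] / figures["current_liabilities"],
        "debt_ratio": figures["total_liabilities"] / figures["total_assets"],
        "return_on_assets": figures["net_income"] / figures["total_assets"],
        "return_on_equity": figures["net_income"] / figures["equity"],
        "net_profit_margin": figures["net_income"] / figures["revenue"],
    }

print(financial_ratios({
    "current_assets": 500, "current_liabilities": 250,
    "total_liabilities": 600, "total_assets": 1200,
    "net_income": 90, "equity": 600, "revenue": 1500,
}))
```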
|