About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Transformer-based Source Code Description Generation : An ensemble learning-based approach / Transformatorbaserad Generering av Källkodsbeskrivning : En ensemblemodell tillvägagångssätt

Antonios, Mantzaris January 2022 (has links)
Code comprehension can benefit significantly from high-level source code summaries. For most developers, understanding another developer's code, or code they themselves wrote in the past, is a time-consuming and frustrating task. Yet it is necessary in software maintenance and whenever several people work on the same project. A fast, reliable and informative source code description generator can automate this procedure, which developers often avoid. The rise of Transformers has drawn attention to them, leading to the development of various Transformer-based models that tackle source code summarization from different perspectives. Most of these models, however, are treated as competitors, even though their complementarity could prove beneficial. To this end, an ensemble learning-based approach is followed to explore the feasibility and effectiveness of combining more than one powerful Transformer-based model. The base models are PLBart and GraphCodeBERT, two models with different focuses, and the ensemble technique is stacking. The results show that such a model can improve the performance and informativeness of the individual models. However, it requires changes to the configuration of the respective models, which might harm them, as well as further fine-tuning at the aggregation phase to find the most suitable combination of base-model weights and next-token probabilities for the ensemble at hand. The results also revealed the need for human evaluation, since metrics like BiLingual Evaluation Understudy (BLEU) are not always representative of the quality of the produced summary. Even though the outcome is promising, further work driven by this approach, and addressing the limitations left unresolved here, should follow toward the development of a potential State Of The Art (SOTA) model.
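As a rough illustration of stacking at the next-token level, the sketch below blends two base models' next-token probability distributions with fixed weights and picks the argmax. It is only a toy: the weights, the five-token vocabulary and the assumption that both models share one vocabulary are hypothetical, and the thesis's actual aggregation tunes the combination rather than fixing it.

```python
import numpy as np

def ensemble_next_token(p_plbart: np.ndarray, p_gcb: np.ndarray,
                        w_plbart: float = 0.6, w_gcb: float = 0.4) -> int:
    """Combine two models' next-token distributions by weighted averaging
    (one simple token-level form of stacking) and return the argmax token id.

    Assumes both distributions are defined over the same vocabulary, which in
    practice requires aligning the two models' tokenizers."""
    p = w_plbart * p_plbart + w_gcb * p_gcb
    return int(np.argmax(p))

# Toy illustration over a 5-token vocabulary (weights are hypothetical).
p1 = np.array([0.10, 0.55, 0.05, 0.20, 0.10])   # e.g. PLBart
p2 = np.array([0.05, 0.30, 0.40, 0.15, 0.10])   # e.g. GraphCodeBERT
print(ensemble_next_token(p1, p2))               # index of the blended argmax
```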
212

SynopSys: Large Graph Analytics in the SAP HANA Database Through Summarization

Rudolf, Michael, Paradies, Marcus, Bornhövd, Christof, Lehner, Wolfgang 19 September 2022 (has links)
Graph-structured data is ubiquitous and, with the advent of social networking platforms, has recently seen a significant increase in popularity amongst researchers. However, many business applications also deal with this kind of data and can therefore benefit greatly from graph processing functionality offered directly by the underlying database. This paper summarizes the current state of graph data processing capabilities in the SAP HANA database and describes our efforts to enable large graph analytics in the context of our research project SynopSys. With powerful graph pattern matching support at the core, we envision OLAP-like evaluation functionality exposed to the user in the form of easy-to-apply graph summarization templates. By combining them, the user is able to produce concise summaries of large graph-structured datasets. We also point out open questions and challenges that we plan to tackle in future development on our way towards large graph analytics.
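To make the idea of a grouping-style graph summary concrete, here is a minimal sketch (not SAP HANA functionality or its API) that collapses a property graph into a summary graph by grouping nodes on an attribute and counting cross-group edges, using networkx; the attribute name and toy data are invented for illustration.

```python
import networkx as nx
from collections import Counter

def summarize_by_attribute(g: nx.Graph, attr: str) -> nx.Graph:
    """Collapse nodes sharing the same value of `attr` into one super-node and
    count the edges running between groups - a simple grouping-style summary
    in the spirit of OLAP-like graph summarization templates."""
    summary = nx.Graph()
    group_of = {n: g.nodes[n].get(attr, "unknown") for n in g}
    for grp, cnt in Counter(group_of.values()).items():
        summary.add_node(grp, size=cnt)
    for u, v in g.edges():
        gu, gv = group_of[u], group_of[v]
        w = summary.get_edge_data(gu, gv, {}).get("weight", 0)
        summary.add_edge(gu, gv, weight=w + 1)
    return summary

# Toy example: people grouped by department.
g = nx.Graph()
g.add_nodes_from([(1, {"dept": "sales"}), (2, {"dept": "sales"}), (3, {"dept": "hr"})])
g.add_edges_from([(1, 2), (2, 3)])
s = summarize_by_attribute(g, "dept")
print(list(s.nodes(data=True)), list(s.edges(data=True)))
```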
213

Enhancing factuality and coverage in summarization via referencing key extracted content

Belanger Albarran, Georges 04 1900 (has links)
Abstractive summaries of dialogues allow people to quickly understand key aspects of conversations that might otherwise take considerable effort to synthesize. Despite the tremendous progress made by large language models (LLMs), even the most powerful models still suffer from hallucinations when generating abstractive summaries and fail to cover important aspects of the underlying content. Furthermore, human verification of the factuality of an abstractive summary can entail significant effort. One way to minimize the cognitive load of quality-checking an abstractive summary is to have the summary cite sentences within the original content. However, it is uncommon for abstractive summarization datasets to cite passages of text from the original content, and even the best LLMs struggle to perform citation-backed summarization. To address this issue, we create the Tweetsumm++ dataset, composed of citation-backed abstractive summaries of dialogues between customers and companies on Twitter. We also examine a multi-task problem formulation and training method that learns to jointly perform extractive summarization and abstractive summarization that references the extracted content. In our setup, the model is also tasked with tagging key sentences into categories such as ISSUE, RESOLUTION, WORKAROUND, and others that represent the main key elements of a dialogue. We explore the impact of fine-tuning an open-source Mixtral LLM to perform citation-backed abstractive summarization and key-sentence categorization. Further, since acquiring labels for such a dataset is costly, we explore a novel self-labeling method based on AI feedback that benefits from the citation-based summarization format and can improve models with respect to citation quality.
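As an illustration of what a citation-backed summary with tagged key sentences might look like, the sketch below shows one hypothetical record and a small helper that extracts the cited sentence ids; the field names and bracket-citation format are assumptions, not the actual Tweetsumm++ schema.

```python
import re

# A minimal sketch of the kind of record a citation-backed dialogue-summary
# dataset might hold: extracted sentences are tagged and numbered, and the
# abstractive summary cites them by index. Field names are illustrative only.
example = {
    "dialogue": [
        "Customer: My order #123 never arrived.",
        "Agent: Sorry about that, I've reissued the shipment.",
        "Agent: You'll receive tracking info within 24 hours.",
    ],
    "extracted": [
        {"id": 0, "tag": "ISSUE",      "text": "My order #123 never arrived."},
        {"id": 1, "tag": "RESOLUTION", "text": "I've reissued the shipment."},
    ],
    "abstractive_summary": (
        "The customer reported a missing order [0]; "
        "the agent reissued the shipment [1]."
    ),
}

def citation_ids(summary: str) -> list[int]:
    """Pull cited sentence ids out of a summary, e.g. to check that every
    citation points at an extracted sentence."""
    return [int(m) for m in re.findall(r"\[(\d+)\]", summary)]

assert citation_ids(example["abstractive_summary"]) == [0, 1]
```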
214

Contextual short-term memory for LLM-based chatbot / Kontextuellt korttidsminne för en LLM-baserad chatbot

Lauri Aleksi Törnwall, Mikael January 2023 (has links)
The evolution of Language Models (LMs) has enabled building chatbot systems that are capable of human-like dialogue without the need to fine-tune the chatbot for a specific task. LMs are stateless, which means that an LM-based chatbot has no recollection of the past conversation unless it is explicitly included in the input prompt. LMs limit the length of the input prompt, and longer input prompts require more computational and monetary resources, so for longer conversations it is often infeasible to include the whole conversation history in the input prompt. In this project a short-term memory module is designed and implemented to provide the chatbot with context from the past conversation. We introduce two methods, the LimContext method and the FullContext method, for producing an abstractive summary of the conversation history, which captures much of the relevant conversation history in a compact form that can then be supplied with the input prompt in a resource-effective way. To test these short-term memory implementations in practice, a user study is conducted in which the two methods are introduced to 9 participants. Data is collected during the user study and each participant answers a survey after the conversation. The results are analyzed to assess the user experience of each method and to compare the two, and to assess the effectiveness of the prompt design for both the answer generation and the abstractive summarization tasks. According to the statistical analysis, the FullContext method produced a better user experience, a finding in line with the user feedback.
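A minimal sketch of the general rolling-summary idea follows: older turns are folded into an abstractive summary and only that summary plus the most recent turns are placed in the prompt. The class, its parameters and the stand-in summarizer are illustrative assumptions, not the LimContext or FullContext implementations.

```python
from typing import Callable, List

class ShortTermMemory:
    """Rolling-summary memory: older turns are folded into an abstractive
    summary via a caller-supplied `summarize` function (in practice an LLM
    call), and only the summary plus the most recent turns go in the prompt."""

    def __init__(self, summarize: Callable[[str], str], keep_last: int = 4):
        self.summarize = summarize
        self.keep_last = keep_last
        self.summary = ""
        self.turns: List[str] = []

    def add_turn(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.keep_last:
            overflow = self.turns.pop(0)
            # Fold the oldest turn into the running summary.
            self.summary = self.summarize(f"{self.summary}\n{overflow}")

    def build_prompt(self, user_message: str) -> str:
        recent = "\n".join(self.turns)
        return (f"Conversation summary:\n{self.summary}\n\n"
                f"Recent turns:\n{recent}\n\nUser: {user_message}")

# Toy usage with a stand-in summarizer that simply truncates.
mem = ShortTermMemory(summarize=lambda text: text[-200:], keep_last=2)
for t in ["User: Hi", "Bot: Hello!", "User: Tell me about Kista."]:
    mem.add_turn(t)
print(mem.build_prompt("And its history?"))
```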
215

Περίληψη βίντεο με μη επιβλεπόμενες τεχνικές ομαδοποίησης / Video summarization using unsupervised clustering techniques

Μπεσύρης, Δημήτριος 11 October 2013 (has links)
The rapid development witnessed in recent years in various fields of computer technology and image/video understanding, enabling the storage and processing of huge amounts of data, has given new impetus to the field of video manipulation, browsing, indexing, and retrieval. Video summarization, as a static sequence of key frames, reduces the amount of information required for video searching, while providing the basis for understanding the semantic content in video retrieval applications. The research subject of this doctoral thesis is the incorporation of graph theory and unsupervised clustering algorithms in automatic video summarization of large video sequences. In this context, every frame of a video sequence is not processed as a discrete element; instead, the relations between frames are considered. Thus, the clustering problem is transformed from a typical computation procedure into a problem of data structure analysis. A new technique is also presented for improving the frame similarity measure, based on the theoretical formalism of semi-supervised learning but using dynamic compression algorithms to represent the visual content of the frames. Detailed experimental results demonstrate the performance improvement provided by the proposed methods in comparison with well-known video summarization techniques from the literature. Finally, future research directions are proposed, directly applicable to the fields of image and video retrieval.
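For intuition about the graph formulation of key-frame selection, the sketch below links frames whose feature similarity exceeds a threshold, treats connected components as clusters and keeps one representative frame per cluster. The histogram features, threshold and component-based clustering are simplifications; the thesis develops more elaborate unsupervised clustering on the frame-similarity graph.

```python
import numpy as np
import networkx as nx

def key_frames(histograms: np.ndarray, threshold: float = 0.8) -> list[int]:
    """Graph-based key-frame selection sketch: frames become nodes, edges join
    frames whose (cosine) similarity exceeds a threshold, connected components
    act as clusters, and each cluster contributes the frame closest to its
    centroid."""
    n = len(histograms)
    sims = histograms @ histograms.T  # cosine similarity if rows are L2-normalised
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n)
                     if sims[i, j] >= threshold)
    chosen = []
    for comp in nx.connected_components(g):
        comp = sorted(comp)
        centroid = histograms[comp].mean(axis=0)
        chosen.append(comp[int(np.argmin(
            np.linalg.norm(histograms[comp] - centroid, axis=1)))])
    return sorted(chosen)

# Toy run with 4 random, L2-normalised "histograms".
h = np.random.rand(4, 16)
h /= np.linalg.norm(h, axis=1, keepdims=True)
print(key_frames(h, threshold=0.9))
```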
216

Extractive Multi-document Summarization of News Articles

Grant, Harald January 2019 (has links)
Publicly available data grows exponentially through web services and technological advancements. To comprehend large data streams, multi-document summarization (MDS) can be used. In this research, the area of multi-document summarization is investigated. Multiple systems for extractive multi-document summarization are implemented using modern techniques, in the form of the pre-trained BERT language model for word embeddings and sentence classification. This is combined with well-proven techniques, in the form of the TextRank ranking algorithm, the Waterfall architecture and anti-redundancy filtering. The systems are evaluated on the DUC-2002, 2006 and 2007 datasets using the ROUGE metric. The results show that the BM25 sentence representation implemented in the TextRank model, using the Waterfall architecture and an anti-redundancy technique, outperforms the other implementations, providing results competitive with other state-of-the-art systems. A cohesive model is derived from the leading system and tried in a user study using a real-world application. The user study is conducted using a real-time news detection application with users from the news domain. The study shows a clear preference for cohesive summaries in the case of extractive multi-document summarization, with the cohesive summary preferred in the majority of cases.
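The sketch below illustrates the extract-rank-filter pipeline in miniature: a sentence graph is ranked with PageRank (as in TextRank) and a greedy anti-redundancy filter discards near-duplicates of already selected sentences. Plain word-overlap similarity stands in here for the BM25 sentence representation used in the thesis, and the toy sentences are invented.

```python
import re
import networkx as nx

def textrank_summary(sentences: list[str], k: int = 3,
                     max_overlap: float = 0.5) -> list[str]:
    """Build a sentence-similarity graph, rank sentences with PageRank
    (TextRank), then greedily skip any sentence too similar to one already
    picked (anti-redundancy filtering)."""
    tokens = [set(re.findall(r"\w+", s.lower())) for s in sentences]
    jaccard = lambda a, b: len(a & b) / (len(a | b) or 1)
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            w = jaccard(tokens[i], tokens[j])
            if w > 0:
                g.add_edge(i, j, weight=w)
    scores = nx.pagerank(g, weight="weight")
    picked: list[int] = []
    for idx in sorted(scores, key=scores.get, reverse=True):
        if all(jaccard(tokens[idx], tokens[p]) <= max_overlap for p in picked):
            picked.append(idx)
        if len(picked) == k:
            break
    return [sentences[i] for i in sorted(picked)]

docs = ["The summit addressed climate change.",
        "Leaders met to discuss climate change policy.",
        "A new trade agreement was also signed.",
        "Observers praised the new trade agreement."]
print(textrank_summary(docs, k=2))
```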
217

Auxílio à leitura de textos em português facilitado: questões de acessibilidade / Reading assistance for texts in facilitated Portuguese: accessibility issues

Watanabe, Willian Massami 05 August 2010 (has links)
The Web's large capacity for providing information translates into multiple possibilities and opportunities for its users. The development of high-performance networks and ubiquitous devices allows users to retrieve content from any location and in the different scenarios or situations they may face in their lives. Unfortunately, the possibilities offered by the Web are not currently available to all. Individuals who do not have fully compliant software or hardware able to deal with the latest technologies, or who have some kind of physical or cognitive disability, find it difficult to interact with web pages, depending on the page structure and the way the content is made available. Considering cognitive disabilities specifically, users classified as functionally illiterate face severe difficulties accessing web content. The heavy use of text in interface design creates an accessibility barrier for those who cannot read fluently in their mother tongue, due to both text length and linguistic complexity. In this context, this work aims at developing assistive technologies that assist functionally illiterate users in reading and understanding the textual content of websites.
These assistive technologies make use of natural language processing (NLP) techniques that maximize reading comprehension: syntactic simplification, automatic summarization, lexical elaboration and named entity recognition. The techniques are used with the goal of automatically adapting textual content available on the Web for users with low literacy levels. This work describes the accessibility characteristics incorporated into the two resulting applications (Facilita and Educational Facilita), which address low-literacy users' limitations in computer usage and experience. This work contributed the identification of accessibility requirements for low-literacy users, an accessibility model for automating WCAG conformance, and accessible solutions in the user-agent layer of web applications.
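As one small illustration of the lexical elaboration technique mentioned above, the sketch below appends a simpler gloss after difficult words. The gloss dictionary and example sentence are invented; a real system such as Facilita relies on proper lexical resources rather than a hand-written dictionary.

```python
import re

# Toy gloss dictionary; a real system would draw on lexical resources such as
# a thesaurus or simplified-vocabulary lists (hypothetical entries below).
GLOSSES = {
    "remuneração": "pagamento",
    "imprescindível": "necessário",
}

def lexical_elaboration(text: str, glosses: dict[str, str] = GLOSSES) -> str:
    """Lexical elaboration sketch: keep the original word but append a simpler
    gloss in parentheses, giving low-literacy readers an in-place hint."""
    def annotate(match: re.Match) -> str:
        word = match.group(0)
        gloss = glosses.get(word.lower())
        return f"{word} ({gloss})" if gloss else word
    return re.sub(r"\w+", annotate, text)

print(lexical_elaboration("A remuneração é imprescindível."))
# -> "A remuneração (pagamento) é imprescindível (necessário)."
```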
218

Sumarização e extração de conceitos de notas explicativas em relatórios financeiros: ênfase nas notas das principais práticas contábeis / Summarization and concept extraction from explanatory notes in financial statements: emphasis on the notes on main accounting practices

Cagol, Adriano 27 April 2017 (has links)
Cagol, Adriano 27 April 2017 (has links)
Financial statements present the financial performance of companies and are an important tool for analyzing their financial and equity situation, as well as for the decision-making of investors, creditors, suppliers, customers and others. They include explanatory notes that describe in detail the company's accounting practices and policies, along with additional information. Depending on the objectives, a correct analysis of an entity's situation from its financial statements is not possible without interpreting and analyzing the accompanying explanatory notes. Despite their importance, however, automatic analysis of the explanatory notes to the financial statements remains an obstacle. In view of this gap, this work proposes a model that applies text-mining techniques to extract concepts from, and summarize, the explanatory notes on the main accounting practices adopted by the company, in order to identify and structure the main methods used to determine accounting balances and to generate summaries. A concept-extraction algorithm and six summarization algorithms were applied to the explanatory notes of the financial statements of companies registered with the Brazilian Securities and Exchange Commission (CVM).
The work shows that concept extraction yields promising results for identifying the method used to determine an accounting balance, reaching 100% accuracy on the inventory and the property, plant and equipment notes, and 96.97% accuracy on the revenue-recognition note. In addition, the summarization algorithms are evaluated with the ROUGE measure, pointing out the most promising ones; LexRank stands out, obtaining the best overall evaluations.
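For readers unfamiliar with the ROUGE measure used in the evaluation, the sketch below computes plain ROUGE-N recall between a candidate and a reference summary; the example strings are invented, and real toolkits (such as the rouge-score package) add stemming, multiple references and F-measures.

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """Minimal ROUGE-N recall: the fraction of reference n-grams that also
    appear in the candidate."""
    def ngrams(text: str) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())   # clipped n-gram matches
    total = sum(ref.values())
    return overlap / total if total else 0.0

ref = "estoques avaliados ao custo médio de aquisição"
hyp = "os estoques são avaliados ao custo médio"
print(round(rouge_n(hyp, ref, n=1), 3))    # ROUGE-1 recall, ~0.714 here
```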
219

Da improcedência à procedência liminar: hipóteses de incidência e aplicação da norma do art. 285-A do Código de Processo Civil de lege lata e de lege ferenda / From the injunction dismissal to the judgment of injunction on merit: hypotheses of the incidence and application of the rule of article 285-A of the Brazilian Code of Civil Procedure de lege lata and de lege ferenda

Lima, Lucas Rister de Sousa 09 October 2014 (has links)
Over time and as society has evolved, the civil procedural system has tended to conceive techniques intended to expedite judicial protection and promote case-law uniformity, in order to optimize the services provided by the Judiciary and make them more efficient. Article 285-A of the Brazilian Code of Civil Procedure embodies this trend, with features of both aspects; ultimately, in addition to abiding by the constitutional model in force, it attempts to align with and adapt to the new prevailing social standards (particularly in connection with dual jurisdiction) on behalf of procedural economy and rationality. This rule stands as a very important tool for better utilization of the civil procedural system in general, as a time-saving method for judges, clerks of justice and other practitioners of the law, avoiding activities with little or no influence on the outcome of proceedings and thereby contributing to better adjudication results with decreased expenditure of time and energy, as prescribed by the principle of timely judicial protection. Moreover, as it implies a substantial change in how procedural acts unfold (beginning, in fact, at 'the end' of a proceeding's first phase), empirical application of the technique is somewhat hampered, which is not to say that it should cease to be applied or that its contribution to the improvement of the system as a whole should be denied, as this study attempts to demonstrate. The technique's power and potential in the face of an increasingly mass-oriented society with countless repetitive activities (and its clear reflections on the design of the Judiciary itself) allow concluding, without offense to the Constitution (especially the principle of due process and the adversarial principle) and in clear obedience to the principle of equality, in favor of extending the rule of article 285-A of the Brazilian Code of Civil Procedure to the plaintiff as well, who would, through legislative change, be granted the same benefit afforded to the defendant under similar circumstances.
220

Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera

Lai, Po Kong 26 September 2019 (has links)
In this thesis we explore the concepts and components which can be used as individual building blocks for producing immersive virtual reality (VR) content from a single RGB-D sensor. We identify the properties of immersive VR videos and propose a system composed of a foreground/background separator, a dynamic scene re-constructor and a shape completer. We initially explore the foreground/background separator component in the context of video summarization. More specifically, we examined how to extract trajectories of moving objects from video sequences captured with a static camera. We then present a new approach for video summarization via minimization of the spatial-temporal projections of the extracted object trajectories. New evaluation criteria are also presented for video summarization. These concepts of foreground/background separation can then be applied towards VR scene creation by extracting the relevant objects of interest. We present an approach for the dynamic scene re-constructor component using a single moving RGB-D sensor. By tracking the foreground objects and removing them from the input RGB-D frames we can feed the background-only data into existing RGB-D SLAM systems. The result is a static 3D background model onto which the foreground frames are then superimposed to produce a coherent scene with dynamic moving foreground objects. We also present a specific method for extracting moving foreground objects from a moving RGB-D camera, along with an evaluation dataset with benchmarks. Lastly, the shape completer component takes a single-view depth map of an object as input and "fills in" the occluded portions to produce a complete 3D shape. We present an approach that utilizes a new data-minimal representation, the additive depth map, which allows traditional 2D convolutional neural networks to accomplish the task. The additive depth map represents the amount of depth required to transform the input into the "back depth map" which would exist if there were a sensor exactly opposite the input. We train and benchmark our approach using existing synthetic datasets and also show that it can perform shape completion on real-world data without fine-tuning. Our experiments show that our data-minimal representation can achieve results comparable to existing state-of-the-art 3D networks while also being able to produce higher-resolution outputs.
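To make the additive-depth-map idea concrete, the sketch below recovers a back depth map by adding a predicted additive depth to the input (front) depth, restricted to an object mask; the array values and mask handling are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def back_depth_from_additive(front: np.ndarray, additive: np.ndarray,
                             mask: np.ndarray) -> np.ndarray:
    """Additive-depth-map sketch: per pixel, the predicted additive depth is
    how much to add to the input (front) view to reach the opposing (back)
    surface; summing the two recovers the back depth map on masked pixels."""
    return np.where(mask, front + additive, 0.0)

# Toy 2x2 example: a slab roughly 0.3 units thick, about 1 unit from the camera.
front = np.array([[1.0, 1.1], [1.0, 1.2]])
additive = np.full((2, 2), 0.3)          # predicted by the 2D CNN in practice
mask = np.ones((2, 2), dtype=bool)
print(back_depth_from_additive(front, additive, mask))
```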
