Recovering the Semantics of Tabular Web Data

Braunschweig, Katrin 26 October 2015 (has links) (PDF)
The Web provides a platform for people to share their data, leading to an abundance of accessible information. In recent years, significant research effort has been directed especially at tables on the Web, which form a rich resource for factual and relational data. Applications such as fact search and knowledge base construction benefit from this data, as it is often less ambiguous than unstructured text. However, many traditional information extraction and retrieval techniques are not well suited for Web tables, as they generally do not consider the role of the table structure in reflecting the semantics of the content. Tables provide a compact representation of similarly structured data. Yet, on the Web, tables are very heterogeneous, often with ambiguous semantics and inconsistencies in the quality of the data. Consequently, recognizing the structure and inferring the semantics of these tables is a challenging task that requires a dedicated table recovery and understanding process. In the literature, many important contributions have been made to implement such a table understanding process that specifically targets Web tables, addressing tasks such as table detection or header recovery. However, the precision and coverage of the data extracted from Web tables are often still quite limited. Due to the complexity of Web table understanding, many techniques developed so far make simplifying assumptions about the table layout or content to limit the number of contributing factors that must be considered. Thanks to these assumptions, many sub-tasks become manageable. However, the resulting algorithms and techniques often have a limited scope, leading to imprecise or inaccurate results when applied to tables that do not conform to these assumptions. In this thesis, our objective is to extend the Web table understanding process with techniques that enable some of these assumptions to be relaxed, thus improving the scope and accuracy. We have conducted a comprehensive analysis of tables available on the Web to examine their characteristic features and to identify the unique challenges that these characteristics pose for the table understanding process. To extend the scope of the table understanding process, we introduce extensions to the sub-tasks of table classification and conceptualization. First, we review various table layouts and evaluate alternative approaches to incorporating layout classification into the process. Instead of assuming a single, uniform layout across all tables, recognizing different table layouts enables a wide range of tables to be analyzed in a more accurate and systematic fashion. In addition to the layout, we also consider the conceptual level. To relax the single-concept assumption, which expects all attributes in a table to describe the same semantic concept, we propose a semantic normalization approach. By decomposing multi-concept tables into several single-concept tables, we further extend the range of Web tables that can be processed correctly, enabling existing techniques to be applied without significant changes. Furthermore, we address the quality of data extracted from Web tables by studying the role of context information. Supplementary information from the context is often required to correctly understand the table content; however, the verbosity of the surrounding text can also mislead table relevance decisions.
We first propose a selection algorithm to evaluate the relevance of context information with respect to the table content in order to reduce the noise. Then, we introduce a set of extraction techniques to recover attribute-specific information from the relevant context in order to provide a richer description of the table content. With the extensions proposed in this thesis, we increase the scope and accuracy of Web table understanding, leading to a better utilization of the information contained in tables on the Web.
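The semantic normalization step can be illustrated with a minimal sketch: given a column-to-concept assignment produced by an upstream conceptualization step, a multi-concept table is decomposed into one single-concept table per concept, each retaining the subject column so the decomposition is lossless. The function, column names, and data below are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of semantic normalization for a multi-concept Web table.
# The column-to-concept assignment is assumed to be given by an upstream
# conceptualization step; all names here are hypothetical.
import pandas as pd

def decompose_multi_concept(table: pd.DataFrame,
                            concept_of: dict,
                            key: str) -> dict:
    """Split a multi-concept table into one single-concept table per concept,
    each keeping the key (subject) column."""
    tables = {}
    for concept in set(concept_of.values()):
        cols = [c for c in table.columns
                if c == key or concept_of.get(c) == concept]
        tables[concept] = table[cols].drop_duplicates()
    return tables

# Example: a table mixing attributes of a 'city' and its 'country' concept.
t = pd.DataFrame({
    "city": ["Dresden", "Lyon"],
    "population": [556000, 516000],
    "country": ["Germany", "France"],
    "country_gdp_trillion": [3.8, 2.6],
})
parts = decompose_multi_concept(
    t,
    {"population": "city", "country": "country", "country_gdp_trillion": "country"},
    key="city")
```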

Návrh vestavaného systému inteligentného vidění na platformě NVIDIA / Embedded Vision System on NVIDIA platform

Krivoklatský, Filip January 2019 (has links)
This diploma thesis deals with the design of an embedded computer vision system and the porting of an existing computer vision application for 3D object detection from Windows to the designed embedded system running Linux. The thesis focuses on the design of a communication interface for system control and for the transfer of compressed camera video over a local network. The detection algorithm is then accelerated by offloading computationally expensive functions to the GPU using CUDA technology. Finally, a user application with a graphical interface is designed for controlling the system from the Windows platform.
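The offloading pattern described above can be sketched in a few lines. The thesis implements it with CUDA kernels inside the detection code; this sketch instead uses the CuPy library purely to illustrate the host-to-device transfer and GPU computation of a costly image operation (the specific operation is a hypothetical example).

```python
# Minimal sketch of offloading an expensive image-processing step to the GPU.
import numpy as np
import cupy as cp

def gradient_magnitude_gpu(image: np.ndarray) -> np.ndarray:
    """Compute an image gradient magnitude on the GPU and copy it back."""
    d = cp.asarray(image, dtype=cp.float32)            # host -> device copy
    gx = d[:, 1:] - d[:, :-1]                          # horizontal differences
    gy = d[1:, :] - d[:-1, :]                          # vertical differences
    mag = cp.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2)   # computed on the device
    return cp.asnumpy(mag)                             # device -> host copy

frame = np.random.rand(480, 640).astype(np.float32)
edges = gradient_magnitude_gpu(frame)
```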

Měření kvality pro HEVC / Video Quality Measurement for HEVC

Klejmová, Eva January 2014 (has links)
This diploma thesis deals with standard objective and subjective video quality assessment methods and with an analysis of their applicability to HEVC. A basic description of the H.265/HEVC video compression standard is also presented. The main focus of the thesis is the creation of a database of compressed video sequences; important parameters and features of the reference encoder HM-12 are discussed. Selected objective video quality assessment methods are applied to the created database. The thesis also proposes a method for subjective video quality assessment, applies it, and describes the associated data collection. The resulting data are statistically analyzed and their correlation with the objective tests is discussed.
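A typical objective metric in such evaluations is PSNR, and correlating per-sequence metric scores with subjective ratings (e.g., mean opinion scores) is the standard way to validate a metric. The sketch below assumes hypothetical score arrays; it is not taken from the thesis.

```python
import numpy as np
from scipy.stats import pearsonr

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of equal shape."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical per-sequence results: mean PSNR over frames vs. subjective MOS.
psnr_scores = np.array([38.2, 35.1, 41.7, 33.4])
mos_scores = np.array([4.1, 3.5, 4.6, 3.0])
r, p_value = pearsonr(psnr_scores, mos_scores)  # linear correlation coefficient
```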

The political relationship between Caesar and Cicero to the conclusion of the Civil War.

Pitt, Edith Seaton. January 1943 (has links)
No description available.

Differential splicing in lymphoma

Zimmermann, Karin 05 September 2018 (has links)
Alternative splicing is a crucial mechanism in eukaryotes that provides the ample protein diversity necessary for maintaining an organism, and it strongly influences tissue specificity and developmental processes. In contrast, aberrant (alternative) splicing may lead to altered protein isoforms that contribute to diseases such as cancer or to changes in drug efficacy. In this thesis, we study differential splicing in cancer, i.e., splicing changes observed between cancerous and control tissues. We seek to identify the methods best suited for the detection of differential splicing, we investigate regulatory factors potentially causal for the observed splicing changes, and we study the comparability of two data types obtained from different technologies with respect to differential splicing detection. The first part of the thesis assesses the performance of methods for detecting differential splicing from exon arrays, as existing methods are often of low concordance. Using artificial and validated experimental data, we examine global data parameters and their potential influence on results and method performance. Overall, our evaluation identifies methods that perform robustly well across artificial and experimental data, as well as data parameters that impact result quality. The second part aims at identifying the regulatory factors responsible for the splicing changes observed between cancer and healthy tissue. To this end, we develop a novel, network-based approach that first integrates differentially spliced exons with splicing regulatory proteins (splicing factors), using transcriptomics data, and then ranks the splicing factors according to their potential involvement in cancer. Third, we compare differential splicing detection based on RNA sequencing and exon array data by developing a multi-level comparison framework that uses two differential splicing detection methods applicable to both data types, thereby avoiding method-inherent bias. Applying our multi-level framework to two data sets reveals similar trends in comparability, despite differing overall concordance.
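The network-based ranking can be sketched as follows: splicing factors and differentially spliced exons form a graph per condition, and factors are ranked by how much their centrality changes between conditions. The graph contents and the use of betweenness centrality here are illustrative assumptions; the thesis's actual integration and scoring may differ.

```python
import networkx as nx

def rank_splicing_factors(edges_cancer, edges_control, factors):
    """Rank splicing factors by the change in their betweenness centrality
    between a cancer-derived and a control-derived factor-exon network."""
    g_cancer, g_control = nx.Graph(edges_cancer), nx.Graph(edges_control)
    c_cancer = nx.betweenness_centrality(g_cancer)
    c_control = nx.betweenness_centrality(g_control)
    scores = {f: abs(c_cancer.get(f, 0.0) - c_control.get(f, 0.0)) for f in factors}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical factor-exon edges: ('SF1', 'exonA') means SF1 regulates exonA.
cancer_edges = [("SF1", "exonA"), ("SF1", "exonB"), ("SF2", "exonC")]
control_edges = [("SF2", "exonC"), ("SF3", "exonA")]
ranking = rank_splicing_factors(cancer_edges, control_edges, ["SF1", "SF2", "SF3"])
```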

Snapshots in large-scale distributed file systems

Stender, Jan 21 January 2013 (has links)
Snapshots are present in many modern file systems, where they allow users to create consistent on-line backups, to roll back corruptions or inadvertent changes of files, and to keep a record of changes to files and directories. While most previous work on file system snapshots refers to local file systems, modern trends like cloud and cluster computing have shifted the focus towards distributed storage infrastructures. Such infrastructures often comprise large numbers of storage servers, which presents particular challenges in terms of scalability, availability, and failure tolerance. This thesis describes a snapshot algorithm for large-scale distributed file systems and its integration into XtreemFS, a scalable object-based file system for grid and cloud computing environments. The two building blocks of the algorithm are a version management scheme, which efficiently records versions of file content and metadata, and a scalable, failure-tolerant mechanism that aggregates specific versions in a snapshot. To overcome the lack of a global time in a distributed system, the algorithm implements a relaxed consistency model for snapshots, which is based on timestamps assigned by loosely synchronized server clocks. The main contributions of the thesis are: 1) a formal model of snapshots and snapshot consistency in distributed file systems; 2) the description of efficient schemes for the management of metadata and file content versions in object-based file systems; 3) the formal presentation of a scalable, fault-tolerant snapshot algorithm for large-scale object-based file systems; 4) a detailed description of the implementation of the algorithm as part of XtreemFS. An extensive evaluation shows that the proposed algorithm has no severe impact on user I/O, and that it scales to large numbers of snapshots and versions.
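The relaxed, timestamp-based consistency model can be sketched as follows: each version carries a timestamp from a loosely synchronized server clock, and a snapshot taken at time t binds each file to its latest version whose timestamp is not after t, with no global coordination. This is a deliberately simplified sketch; the actual XtreemFS algorithm additionally handles clock drift and server failures.

```python
from bisect import bisect_right

class VersionLog:
    """Per-file version log; versions arrive with loosely synchronized timestamps."""
    def __init__(self):
        self.timestamps, self.versions = [], []

    def append(self, ts: float, version: bytes) -> None:
        self.timestamps.append(ts)   # assumed non-decreasing per server
        self.versions.append(version)

    def version_at(self, snapshot_ts: float):
        """Latest version with timestamp <= snapshot_ts, or None if none exists."""
        i = bisect_right(self.timestamps, snapshot_ts)
        return self.versions[i - 1] if i > 0 else None

# A snapshot at time t simply evaluates version_at(t) on every file's log.
log = VersionLog()
log.append(10.0, b"v1"); log.append(12.5, b"v2")
assert log.version_at(11.0) == b"v1"
```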

Delphin 6 Output File Specification

Vogelsang, Stefan, Nicolai, Andreas 12 April 2016 (has links) (PDF)
This paper describes the file formats of the output data and geometry files generated by the Delphin program, a simulation model for hygrothermal transport in porous media. The output data format is suitable for any kind of output generated by transient transport simulation models. Implementing support for the Delphin output format enables the use of the advanced post-processing and dedicated physical analysis functionality provided by the Delphin post-processing tool.

Verification of Data-aware Business Processes in the Presence of Ontologies

Santoso, Ario 14 November 2016 (has links) (PDF)
The interplay of data, processes, and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. The various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Much research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This situation can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the aspects above into account makes the problem more challenging. In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that is able to elegantly accommodate a plethora of inconsistency-aware semantics based on the well-known notion of repair, and this leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs that take contextual information into account during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs, which allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results, but also an implementation of the SEDAP concept. We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established data-aware process framework), under suitable boundedness assumptions on the number of objects freshly introduced into the system while it evolves.
Notably, all proposed GKAB extensions have no negative impact on computational complexity.

Human Mobility and Application Usage Prediction Algorithms for Mobile Devices

Baumann, Paul 27 October 2016 (has links) (PDF)
Mobile devices such as smartphones and smart watches are ubiquitous companions in humans' daily life. Since 2014, there have been more mobile devices on Earth than humans. Mobile applications utilize the sensors and actuators of these devices to support individuals in their daily life. In particular, 24% of Android applications leverage users' mobility data. For instance, this data allows applications to understand which places an individual typically visits, and thus to provide transportation information, location-based advertisements, or smart home heating control. These and similar scenarios require Internet access from everywhere and at any time; accordingly, 83% of the applications available in the Android Play Store require the Internet to operate properly. Mobile applications such as Google Now or Apple Siri utilize human mobility data to anticipate where a user will go next or which information she is likely to access en route to her destination. However, predicting human mobility is a challenging task. Existing mobility prediction solutions are typically optimized a priori for a particular application scenario and mobility prediction task. There is no approach that allows for automatically composing a mobility prediction solution depending on the underlying prediction task and other parameters. Such an approach is required if mobile devices are to support a plethora of mobile applications, each of which supports its users by leveraging mobility predictions in a distinct application scenario. Mobile applications also rely strongly on the availability of the Internet to work properly, yet mobile cellular network providers are struggling to provide the necessary cellular resources. In 2015, mobile applications generated a monthly average mobile traffic volume that ranged between 1 GB in Asia and 3.7 GB in North America. The Ericsson Mobility Report Q1 2016 predicts that by the end of 2021 this mobile traffic volume will experience a 12-fold increase. The consequences are higher costs for both providers and consumers, and a reduced quality of service due to congested mobile cellular networks. Several countermeasures can be applied to cope with these problems. For instance, mobile applications can apply caching strategies to prefetch application content by predicting which applications will be used next. However, existing solutions suffer from two major shortcomings: they either (1) do not incorporate traffic volume information into their prefetching decisions, and thus generate a substantial amount of cellular traffic, or (2) require a modification of mobile application code. In this thesis, we present novel human mobility and application usage prediction algorithms for mobile devices. These two major contributions address the aforementioned problems of (1) selecting a human mobility prediction model and (2) prefetching mobile application content to reduce cellular traffic. First, we address the selection of human mobility prediction models. We report on an extensive analysis of the influence of temporal, spatial, and phone context data on the performance of mobility prediction algorithms. Building upon our analysis results, we present (1) SELECTOR, a novel algorithm for selecting individual human mobility prediction models, and (2) MAJOR, an ensemble learning approach for human mobility prediction. Furthermore, we introduce population mobility models and demonstrate their practical applicability. In particular, we analyze techniques that focus on the detection of wrong human mobility predictions; among these, we design and evaluate an ensemble learning algorithm called LOTUS. Second, we present EBC, a novel algorithm for prefetching mobile application content. EBC's goal is to reduce cellular traffic consumption while keeping application content fresh. With respect to existing solutions, EBC contributes novel techniques (1) to apply different prefetching strategies depending on the available network type and (2) to incorporate application traffic volume predictions into the prefetching decisions. EBC also achieves a reduction in application launch time at the cost of a negligible increase in energy consumption. Developing human mobility and application usage prediction algorithms requires access to human mobility and application usage data. To this end, we leverage three publicly available data sets in this thesis. Furthermore, we address the shortcomings of these data sets, namely (1) the lack of ground-truth mobility data and (2) the lack of human mobility data at short-term events like conferences. With JK2013 and the UbiComp Data Collection Campaign (UbiDCC), we contribute two human mobility data sets that address these shortcomings. We also developed LOCATOR, a mobile application that was used to collect our data sets, and make it publicly available. In summary, the contributions of this thesis provide a step towards better supporting mobile applications and their users. With SELECTOR, we contribute an algorithm that optimizes the quality of human mobility predictions by appropriately selecting prediction models. To reduce the cellular traffic footprint of mobile applications, we contribute EBC, a novel approach for prefetching mobile application content that leverages application usage predictions. Furthermore, we provide insights into how, and to what extent, wrong and uncertain human mobility predictions can be detected. Lastly, with our mobile application LOCATOR and two human mobility data sets, we contribute practical tools for researchers in the human mobility prediction domain.
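EBC's two key ideas, network-type-dependent strategies and traffic-volume-aware decisions, can be sketched as a simple decision rule. The thresholds and structure below are illustrative assumptions, not the thesis's actual policy.

```python
from dataclasses import dataclass

@dataclass
class AppForecast:
    usage_probability: float   # predicted probability the app is launched soon
    traffic_mb: float          # predicted volume of a content refresh, in MB

def should_prefetch(app: AppForecast, network: str,
                    wifi_threshold: float = 0.2,
                    cellular_threshold: float = 0.7,
                    cellular_budget_mb: float = 5.0) -> bool:
    """Prefetch aggressively on Wi-Fi; on cellular, prefetch only likely-used,
    low-volume content. Thresholds are hypothetical, not from the thesis."""
    if network == "wifi":
        return app.usage_probability >= wifi_threshold
    return (app.usage_probability >= cellular_threshold
            and app.traffic_mb <= cellular_budget_mb)

# Example: a likely-used but traffic-heavy app is refreshed on Wi-Fi only.
news = AppForecast(usage_probability=0.8, traffic_mb=12.0)
assert should_prefetch(news, "wifi") and not should_prefetch(news, "cellular")
```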

Simulation Of Conjugate Heat Transfer Problems Using Least Squares Finite Element Method

Goktolga, Mustafa Ugur 01 October 2012 (has links) (PDF)
In this thesis study, a least-squares finite element method (LSFEM) based conjugate heat transfer solver was developed. In this solver, fluid flow and heat transfer computations are performed separately: the velocity values calculated in the flow part are exported to the heat transfer part, where they are used in the convective term of the energy equation. The incompressible Navier-Stokes equations were used in the flow simulations. Conjugate heat transfer computations require calculating the heat transfer in both the flow field and the solid region. In this study, the conjugate behavior was accomplished in a fully coupled manner, i.e., the energy equation for the fluid and solid regions was solved simultaneously and no boundary conditions were defined on the fluid-solid interface. To verify that the developed solver works properly, lid-driven cavity flow, backward-facing step flow, and thermally driven cavity flow problems were simulated in three dimensions, and the findings compared well with the available data from the literature. Couette flow and thermally driven cavity flow with conjugate heat transfer were modeled in two dimensions to further validate the solver. Finally, a microchannel conjugate heat transfer problem was simulated. In the flow solution part of the microchannel problem, conservation of mass was not achieved. This problem was expected, since the LSFEM has known mass conservation deficiencies, especially in high-aspect-ratio channels. To overcome this problem, the weight of the continuity equation was increased by multiplying it by a constant. The weighting worked for the microchannel problem and the mass conservation issue was resolved. The results obtained for the microchannel heat transfer problem were generally in good agreement with previous experimental and numerical works. In the first computations with the solver, quadrilateral and triangular elements were tried for two-dimensional problems, and hexahedral and tetrahedral elements for three-dimensional problems. However, since only the quadrilateral and hexahedral elements gave satisfactory results, they were used in all of the simulations mentioned above.
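The continuity weighting admits a compact schematic form: the least-squares functional is a sum of squared equation residuals, and the continuity residual is scaled by a constant weight w > 1. This is only a sketch of the idea; LSFEM formulations typically recast the governing equations as a first-order system before building the functional, and the thesis's exact residual set may differ.

```latex
% Schematic least-squares functional with a weighted continuity residual:
% R_mom is the momentum-equation residual; w > 1 strengthens mass conservation.
J(\mathbf{u}, p) =
  \int_\Omega \left\| \mathbf{R}_{\mathrm{mom}}(\mathbf{u}, p) \right\|^2 \, d\Omega
  + w^2 \int_\Omega \left( \nabla \cdot \mathbf{u} \right)^2 \, d\Omega
```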
