41

Gaius Marius : a political biography

Evans, Richard J., 1954- 01 1900 (has links)
The political career of Gaius Marius (ca. 157-86 BC), which spans the years between 120 and 86 BC, was memorable not only for its unprecedented personal and public triumphs, but was also of momentous significance in the whole history of the Roman Republic. At precisely the time that Marius achieved a supreme position in the state, the military might of the Romans, hitherto invincible at least in fairly recent times (the second century), had been dealt a series of humiliating setbacks abroad: first in North Africa by a rather minor despot, Jugurtha, the king of Numidia, and then, much closer to home, in Illyria and in southern Gaul by the migrating Germanic tribes, the Cimbri and the Teutones. Against this background of quite unremitting disaster, Marius obtained a place in republican political life which had not been witnessed before. In his pursuit of senatorial offices, Marius initially experienced both victories and disappointments (success in the tribunician elections but failure in elections for the aedileship) before finally winning the prestigious consulship in the elections held in 108. Thereafter, he was consul a further six times, and five of these consulships were held in successive years between 104 and 100. Just as he was dominant on the field of battle against the Numidians and the Germanic tribes, so, too, did he control the politics of the city during the decade from 108 to 99. The chapters which follow set out to trace Marius' long rise to pre-eminence, his contribution to the intricate tribunician legislation of the period in which he flourished and, moreover, his involvement with the other senior political figures who were his contemporaries. Furthermore, this biographical study seeks to expose fully the fact that, as a result of his participation in the politics of the time, Marius' career became an obvious example which other equally ambitious politicians (for instance, Sulla, Pompey, Crassus, Caesar and Octavian) sought to emulate or even to surpass. Consequently, Marius may not have realised the extent of the dangers which he bequeathed to the res publica but, inadvertently or not, he set in motion the fall of the Roman Republic. / D. Litt. et Phil. (Ancient History) / History
42

Transcoding H.265/HEVC

Tamanna, Sina January 2013 (has links)
Video transcoding is the process of converting compressed video signals to adapt video characteristics such as video bit rate, video resolution, or video codec, so as to meet the specifications of communication channels and endpoint devices. A straightforward transcoding solution is to fully decode and re-encode the video. However, this method is computationally expensive and thus unsuitable in applications with tight resource constraints, such as a software-based real-time environment. Therefore, efficient transcoding methods are required that reduce the transcoding complexity while preserving video quality. Prior transcoding methods are suitable for video coding standards such as H.264/AVC and MPEG-2. H.265/HEVC has introduced new coding concepts, e.g., the quad-tree-based block structure, that are fundamentally different from those in prior standards. These concepts require existing transcoding methods to be adapted and novel solutions to be developed. This work primarily addresses the issue of efficient HEVC transcoding for bit rate adaptation (reduction). The goal is to understand the transcoding behaviour of some straightforward transcoding strategies, and to subsequently optimize the complexity/quality trade-off by providing heuristics that reduce the number of coding options to evaluate. A transcoder prototype is developed based on the HEVC reference software HM-8.2. The proposed transcoder reduces the transcoding time compared to full decoding and encoding by at least 80% while inducing a coding performance drop within a margin of 5%. The thesis has been carried out in collaboration with Ericsson Research in Stockholm. / Video content is produced daily through a variety of electronic devices; however, storing and transmitting video signals in raw format is impractical due to their excessive resource requirements. Today, popular video coding standards such as MPEG-4 and H.264 are used to compress video signals before storing and transmitting them. Accordingly, efficient video coding plays an important role in video communications. As video applications become widespread, there is a need for high-compression, low-complexity video coding algorithms that preserve image quality. Standards organizations such as ISO and ITU-T's VCEG, together with collaborations of many companies, have developed video coding standards in the past to meet the video coding requirements of the day. The Advanced Video Coding (AVC/H.264) standard is the most widely used video coding method. AVC is commonly known as one of the major standards used for video compression on Blu-ray devices. It is also widely used by video streaming services, TV broadcasting, and video conferencing applications. Currently, the most important development in this area is the introduction of the H.265/HEVC standard, which was finalized in January 2013. The aim of the standardization is to produce a video compression specification that is capable of compression twice as effective as the H.264/AVC standard in terms of coding complexity and quality. There is a wide range of platforms that receive digital video. TVs, personal computers, mobile phones, and tablets each have different computational, display, and connectivity capabilities, so video has to be converted to meet the specifications of the target platform. This conversion is achieved through video transcoding. For transcoding, the straightforward solution is to decode the compressed video signal and re-encode it into the target compression format, but this process is computationally complex.
Particularly in real-time applications, there is a need to exploit the information that is already available in the compressed video bit-stream to speed up the conversion. The objective of this thesis is to investigate efficient transcoding methods for HEVC. Using decode/re-encode as the performance reference, methods for advanced transcoding will be investigated.
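As an illustrative aside (not taken from the thesis or from the HM-based prototype), the following minimal Python sketch contrasts the two strategies described above: a baseline full decode/re-encode versus a guided re-encode that reuses decisions, such as quad-tree CU depths, recovered from the input bit-stream. The decoder, encoder, and side-information structure are hypothetical stand-ins, not a real codec API.

```python
# Conceptual sketch: full decode/re-encode vs. guided re-encode that reuses
# coding decisions from the input stream. decode_hevc/encode_hevc are
# hypothetical stand-ins, not a real codec API.

from dataclasses import dataclass, field

@dataclass
class CodingInfo:
    cu_depths: list = field(default_factory=list)   # quad-tree split depth per CTU
    modes: list = field(default_factory=list)       # intra/inter decision per CU

def decode_hevc(bitstream):
    """Stand-in decoder: returns dummy frames plus side information."""
    frames = [bytes(16)]                             # placeholder "pixels"
    return frames, CodingInfo(cu_depths=[2], modes=["inter"])

def encode_hevc(frames, target_bitrate, candidate_depths=None, candidate_modes=None):
    """Stand-in encoder: a real encoder runs a rate-distortion search,
    restricted to the given candidates in the guided case."""
    searched = candidate_depths if candidate_depths is not None else range(4)
    return b"encoded@%d:%d-options" % (target_bitrate, len(list(searched)))

def full_transcode(bitstream, target_bitrate):
    """Baseline: fully decode, then re-encode from scratch (most expensive)."""
    frames, _info = decode_hevc(bitstream)
    return encode_hevc(frames, target_bitrate)

def guided_transcode(bitstream, target_bitrate):
    """Reuse partitioning/mode decisions from the input stream to prune the
    encoder's search space, trading a small quality loss for a large speed-up."""
    frames, info = decode_hevc(bitstream)
    return encode_hevc(frames, target_bitrate,
                       candidate_depths=info.cu_depths,
                       candidate_modes=info.modes)

print(full_transcode(b"...", 2_000_000))
print(guided_transcode(b"...", 2_000_000))
```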
43

A Family of Role-Based Languages

Kühn, Thomas 29 August 2017 (has links) (PDF)
Role-based modeling was proposed in 1977 by Charles W. Bachman as a means to model complex and dynamic domains, because roles are able to capture both the context-dependent and the collaborative behavior of objects. Consequently, they were introduced in various fields of research, ranging from data modeling and conceptual modeling to programming languages. More importantly, because current software systems are characterized by increased complexity and context-dependence, there is a strong demand for new concepts beyond object-oriented design. Although mainstream modeling languages, e.g., the Entity-Relationship Model and the Unified Modeling Language, are good at capturing a system's structure, they lack ways to model the system's behavior as it dynamically emerges through collaborating objects. In turn, roles are a natural concept for capturing the behavior of participants in a collaboration. Moreover, roles permit the specification of interactions independently of the interacting objects. Similarly, more recent approaches use roles to capture context-dependent properties of objects. The notion of roles can thus help to tame the increased complexity and context-dependence. Despite all that, these years of research have had almost no influence on current software development practice. To make things worse, until now there has been no common understanding of roles in the research community, and no approach fully incorporates both the context-dependent and the relational nature of roles. In this thesis, I will devise a formal model for a family of role-based modeling languages to capture the various notions of roles. Together with a software product line of Role Modeling Editors, this, in turn, enables the generation of a role-based language family for Role-based Software Infrastructures (RoSI).
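To make the role concept concrete, here is a minimal, hypothetical Python sketch (not taken from the thesis or from RoSI) in which a core object acquires context-dependent behavior by playing a role inside a compartment; all names are invented for illustration.

```python
# Minimal illustration of roles: a Person behaves differently depending on
# the role it plays inside a compartment (context). Names are hypothetical.

class Person:
    def __init__(self, name):
        self.name = name

class Course:
    def __init__(self, name, credits):
        self.name, self.credits = name, credits

class Student:
    """Role type: only meaningful inside a University compartment."""
    def __init__(self, player, university):
        self.player = player          # the core object playing this role
        self.university = university  # the context the role lives in
        self.credits = 0

    def enroll(self, course):
        self.credits += course.credits
        return f"{self.player.name} enrolled in {course.name} at {self.university.name}"

class University:
    """Compartment: binds players to roles and scopes their collaboration."""
    def __init__(self, name):
        self.name = name
        self.students = []

    def admit(self, person):
        role = Student(person, self)
        self.students.append(role)
        return role

tud = University("TU Dresden")
alice = Person("Alice")
alice_as_student = tud.admit(alice)          # Alice plays the Student role
print(alice_as_student.enroll(Course("Modeling", 6)))
```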
44

Recovering the Semantics of Tabular Web Data

Braunschweig, Katrin 26 October 2015 (has links) (PDF)
The Web provides a platform for people to share their data, leading to an abundance of accessible information. In recent years, significant research effort has been directed especially at tables on the Web, which form a rich resource for factual and relational data. Applications such as fact search and knowledge base construction benefit from this data, as it is often less ambiguous than unstructured text. However, many traditional information extraction and retrieval techniques are not well suited for Web tables, as they generally do not consider the role of the table structure in reflecting the semantics of the content. Tables provide a compact representation of similarly structured data. Yet, on the Web, tables are very heterogeneous, often with ambiguous semantics and inconsistencies in the quality of the data. Consequently, recognizing the structure and inferring the semantics of these tables is a challenging task that requires a designated table recovery and understanding process. In the literature, many important contributions have been made to implement such a table understanding process that specifically targets Web tables, addressing tasks such as table detection or header recovery. However, the precision and coverage of the data extracted from Web tables are often still quite limited. Due to the complexity of Web table understanding, many techniques developed so far make simplifying assumptions about the table layout or content to limit the number of contributing factors that must be considered. Thanks to these assumptions, many sub-tasks become manageable. However, the resulting algorithms and techniques often have a limited scope, leading to imprecise or inaccurate results when applied to tables that do not conform to these assumptions. In this thesis, our objective is to extend the Web table understanding process with techniques that enable some of these assumptions to be relaxed, thus improving the scope and accuracy. We have conducted a comprehensive analysis of tables available on the Web to examine the characteristic features of these tables, but also to identify unique challenges that arise from these characteristics in the table understanding process. To extend the scope of the table understanding process, we introduce extensions to the sub-tasks of table classification and conceptualization. First, we review various table layouts and evaluate alternative approaches to incorporate layout classification into the process. Instead of assuming a single, uniform layout across all tables, recognizing different table layouts enables a wide range of tables to be analyzed in a more accurate and systematic fashion. In addition to the layout, we also consider the conceptual level. To relax the single concept assumption, which expects all attributes in a table to describe the same semantic concept, we propose a semantic normalization approach. By decomposing multi-concept tables into several single-concept tables, we further extend the range of Web tables that can be processed correctly, enabling existing techniques to be applied without significant changes. Furthermore, we address the quality of data extracted from Web tables by studying the role of context information. Supplementary information from the context is often required to correctly understand the table content; however, the verbosity of the surrounding text can also mislead table relevance decisions.
We first propose a selection algorithm to evaluate the relevance of context information with respect to the table content in order to reduce the noise. Then, we introduce a set of extraction techniques to recover attribute-specific information from the relevant context in order to provide a richer description of the table content. With the extensions proposed in this thesis, we increase the scope and accuracy of Web table understanding, leading to a better utilization of the information contained in tables on the Web.
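As a small illustration of the semantic normalization idea described above, the following Python/pandas sketch splits a multi-concept web table into single-concept tables. It is not code from the thesis: the column-to-concept assignment is hard-coded here, whereas the actual approach infers it from the table and its context.

```python
# Sketch of semantic normalization: decompose a multi-concept table into
# single-concept tables. The concept assignment is hard-coded for the
# example; a real approach would infer it from the data and its context.

import pandas as pd

# A flat web table mixing two concepts: countries and their capital cities.
web_table = pd.DataFrame({
    "Country":            ["Germany", "France"],
    "Population (mill.)": [83.2, 68.0],
    "Capital":            ["Berlin", "Paris"],
    "Capital population": [3.7, 2.1],
})

# Assumed output of concept detection: which columns describe which concept.
concepts = {
    "Country": ["Country", "Population (mill.)"],
    "City":    ["Capital", "Capital population", "Country"],  # keep the key
}

single_concept_tables = {
    name: web_table[cols].copy() for name, cols in concepts.items()
}

for name, table in single_concept_tables.items():
    print(f"--- {name} ---")
    print(table, "\n")
```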
45

Návrh vestavaného systému inteligentného vidění na platformě NVIDIA / Embedded Vision System on NVIDIA platform

Krivoklatský, Filip January 2019 (has links)
This diploma thesis deals with the design of an embedded computer vision system and with porting an existing computer vision application for 3D object detection from Windows to the designed embedded system running Linux. The thesis focuses on the design of a communication interface for system control and for transferring compressed camera video over a local network. The detection algorithm is then accelerated by offloading computationally expensive functions to the GPU using CUDA. Finally, a user application with a graphical interface is designed for controlling the system from Windows.
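As an illustrative sketch of the kind of compressed video transfer mentioned above (not the thesis' actual interface or protocol), the following Python snippet sends JPEG-compressed camera frames over a TCP socket; it assumes OpenCV is available, and the host, port, and framing scheme are invented for the example.

```python
# Minimal sender sketch: grab frames, JPEG-compress them, and push them over
# a TCP socket with a simple length prefix. This is an illustrative stand-in
# for the thesis' interface, not its actual protocol; host/port are made up.

import socket
import struct
import cv2

HOST, PORT = "192.168.0.10", 5000   # hypothetical receiver on the local network

cap = cv2.VideoCapture(0)           # default camera
sock = socket.create_connection((HOST, PORT))

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Compress the raw frame to JPEG to keep network bandwidth low.
        ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
        if not ok:
            continue
        payload = jpeg.tobytes()
        # Length-prefixed framing so the receiver knows where each image ends.
        sock.sendall(struct.pack(">I", len(payload)) + payload)
finally:
    cap.release()
    sock.close()
```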
46

Měření kvality pro HEVC / Video Quality Measurement for HEVC

Klejmová, Eva January 2014 (has links)
This diploma thesis deals with standard objective and subjective video quality assessments and with an analysis of their applicability to HEVC. A basic description of the video compression standard H.265/HEVC is also presented. The main focus of the thesis is the creation of a database of compressed video sequences. Important parameters and features of the reference encoder HM-12 are discussed. Selected methods of objective video quality assessment are applied to the created database. Part of this thesis is also a proposal of a method for objective video quality assessment, the application of this method, and the associated data collection. The final data are statistically analyzed and their correlation with the objective tests is discussed.
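As an example of a standard objective metric of the kind applied to such a database of compressed sequences, here is a minimal NumPy sketch of PSNR between a reference and a distorted frame; it is illustrative only and not code from the thesis.

```python
# Minimal sketch of PSNR, one of the standard objective quality metrics that
# can be applied frame-by-frame to a database of compressed sequences.
# Frames are assumed to be 8-bit arrays of identical shape.

import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy usage with random data standing in for decoded frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
dist = np.clip(ref.astype(np.int16) + rng.integers(-5, 6, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dist):.2f} dB")
```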
47

The political relationship between Caesar and Cicero to the conclusion of the Civil War.

Pitt, Edith Seaton. January 1943 (has links)
No description available.
48

Delphin 6 Output File Specification

Vogelsang, Stefan, Nicolai, Andreas 12 April 2016 (has links) (PDF)
This paper describes the file formats of the output data and geometry files generated by the Delphin program, a simulation model for hygrothermal transport in porous media. The output data format is suitable for any kind of simulation output generated by transient transport simulation models. Implementing support for the Delphin output format enables use of the advanced post-processing functionality provided by the Delphin post-processing tool and its dedicated physical analysis functionality.
49

Verification of Data-aware Business Processes in the Presence of Ontologies

Santoso, Ario 14 November 2016 (has links) (PDF)
The interplay of data, processes, and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification also comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained within the research on data-aware business processes, which studies the marriage between static and dynamic aspects of a system within a unified framework. However, several limitations are still present. Various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., reject inconsistent system states). Much research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed along the system evolution (context-independent), which might be too restrictive. Moreover, the information model of data-aware processes sometimes relies on relatively simple structures. This situation might cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking into account all of the aspects above makes the problem more challenging. In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while at the same time addressing all limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), by leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically-rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that is able to elegantly accommodate a plethora of inconsistency-aware semantics based on the well-known notion of repair, and this leads us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs that take into account the contextual information during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce the so-called Alternating-GKABs that allow for a more fine-grained analysis over the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables us to have a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of SEDAPs. We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established data-aware processes framework), under suitable boundedness assumptions for the number of objects freshly introduced in the system while it evolves.
Notably, all proposed GKAB extensions have no negative impact on computational complexity.
50

Human Mobility and Application Usage Prediction Algorithms for Mobile Devices

Baumann, Paul 27 October 2016 (has links) (PDF)
Mobile devices such as smartphones and smart watches are ubiquitous companions of humans’ daily life. Since 2014, there have been more mobile devices on Earth than humans. Mobile applications utilize sensors and actuators of these devices to support individuals in their daily life. In particular, 24% of the Android applications leverage users’ mobility data. For instance, this data allows applications to understand which places an individual typically visits. This makes it possible to provide her with transportation information or location-based advertisements, or to enable smart home heating systems. These and similar scenarios require the possibility to access the Internet from everywhere and at any time. To realize these scenarios, 83% of the applications available in the Android Play Store require the Internet to operate properly and therefore access it from everywhere and at any time. Mobile applications such as Google Now or Apple Siri utilize human mobility data to anticipate where a user will go next or which information she is likely to access en route to her destination. However, predicting human mobility is a challenging task. Existing mobility prediction solutions are typically optimized a priori for a particular application scenario and mobility prediction task. There is no approach that allows for automatically composing a mobility prediction solution depending on the underlying prediction task and other parameters. Such an approach is required to allow mobile devices to support a plethora of mobile applications running on them, while each of the applications supports its users by leveraging mobility predictions in a distinct application scenario. Mobile applications rely strongly on the availability of the Internet to work properly. However, mobile cellular network providers are struggling to provide the necessary cellular resources. Mobile applications generate a monthly average mobile traffic volume that ranged between 1 GB in Asia and 3.7 GB in North America in 2015. The Ericsson Mobility Report Q1 2016 predicts that by the end of 2021 this mobile traffic volume will experience a 12-fold increase. The consequences are higher costs for both providers and consumers and a reduced quality of service due to congested mobile cellular networks. Several countermeasures can be applied to cope with these problems. For instance, mobile applications apply caching strategies to prefetch application content by predicting which applications will be used next. However, existing solutions suffer from two major shortcomings. They either (1) do not incorporate traffic volume information into their prefetching decisions and thus generate a substantial amount of cellular traffic or (2) require a modification of mobile application code. In this thesis, we present novel human mobility and application usage prediction algorithms for mobile devices. These two major contributions address the aforementioned problems of (1) selecting a human mobility prediction model and (2) prefetching of mobile application content to reduce cellular traffic. First, we address the selection of human mobility prediction models. We report on an extensive analysis of the influence of temporal, spatial, and phone context data on the performance of mobility prediction algorithms. Building upon our analysis results, we present (1) SELECTOR – a novel algorithm for selecting individual human mobility prediction models and (2) MAJOR – an ensemble learning approach for human mobility prediction.
Furthermore, we introduce population mobility models and demonstrate their practical applicability. In particular, we analyze techniques that focus on the detection of wrong human mobility predictions. Among these techniques, an ensemble learning algorithm, called LOTUS, is designed and evaluated. Second, we present EBC – a novel algorithm for prefetching mobile application content. EBC’s goal is to reduce the cellular traffic consumed to improve application content freshness. With respect to existing solutions, EBC presents novel techniques (1) to incorporate different strategies for prefetching mobile applications depending on the available network type and (2) to incorporate application traffic volume predictions into the prefetching decisions. EBC also achieves a reduction in application launch time at the cost of a negligible increase in energy consumption. Developing human mobility and application usage prediction algorithms requires access to human mobility and application usage data. To this end, in this thesis we leverage three publicly available data sets. Furthermore, we address the shortcomings of these data sets, namely, (1) the lack of ground-truth mobility data and (2) the lack of human mobility data at short-term events like conferences. With JK2013 and the UbiComp Data Collection Campaign (UbiDCC), we contribute two human mobility data sets that address these shortcomings. We also develop and make publicly available a mobile application called LOCATOR, which was used to collect our data sets. In summary, the contributions of this thesis provide a step further towards supporting mobile applications and their users. With SELECTOR, we contribute an algorithm that allows optimizing the quality of human mobility predictions by appropriately selecting parameters. To reduce the cellular traffic footprint of mobile applications, we contribute EBC, a novel approach for prefetching mobile application content by leveraging application usage predictions. Furthermore, we provide insights about how and to what extent wrong and uncertain human mobility predictions can be detected. Lastly, with our mobile application LOCATOR and two human mobility data sets, we contribute practical tools for researchers in the human mobility prediction domain.
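For readers unfamiliar with the problem, the following minimal Python sketch shows a first-order Markov next-place predictor, a common baseline in human mobility prediction; it is purely illustrative and is not the SELECTOR, MAJOR, or LOTUS algorithm contributed by the thesis.

```python
# Minimal sketch of a first-order Markov next-place predictor, a common
# baseline for human mobility prediction (not the SELECTOR/MAJOR/LOTUS
# algorithms contributed by the thesis).

from collections import Counter, defaultdict

class MarkovPredictor:
    def __init__(self):
        # transitions[current_place] counts how often each next place follows it
        self.transitions = defaultdict(Counter)

    def fit(self, visit_sequence):
        """Learn transition counts from an ordered sequence of visited places."""
        for current, nxt in zip(visit_sequence, visit_sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current_place):
        """Return the most likely next place, or None for unseen places."""
        counts = self.transitions.get(current_place)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Toy visit history: home -> work -> gym -> home -> work -> home ...
history = ["home", "work", "gym", "home", "work", "home", "work", "gym"]
model = MarkovPredictor()
model.fit(history)
print(model.predict("work"))   # most frequent successor of "work"
```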
