About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Large-scale Geometric Data Decomposition, Processing and Structured Mesh Generation

Yu, Wuyi 11 April 2016 (has links)
Mesh generation is a fundamental and critical problem in geometric data modeling and processing. In most scientific and engineering tasks that involve numerical computations and simulations on 2D/3D regions or on curved geometric objects, discretizing or approximating the geometric data using polygonal or polyhedral meshes is always the first step of the procedure. The quality of this tessellation often dictates the accuracy, efficiency, and numerical stability of the subsequent computation. Compared with unstructured meshes, structured meshes are favored in many scientific/engineering tasks due to their desirable properties. However, generating high-quality structured meshes remains challenging, especially for complex or large-scale geometric data. In industrial Computer-aided Design/Engineering (CAD/CAE) pipelines, the geometry processing needed to create a desirable structured mesh of a complex model is the most costly step; it is semi-manual and often takes up to several weeks to finish. Several technical challenges remain unsolved in existing structured mesh generation techniques. This dissertation studies the effective generation of structured meshes on large and complex geometric data. We study a general geometric computation paradigm that solves this problem via model partitioning and divide-and-conquer. To apply divide-and-conquer effectively, we study two key technical components: shape decomposition in the divide stage, and structured meshing in the conquer stage. We test our algorithm on various data sets; the results demonstrate the efficiency and effectiveness of our framework. The comparisons also show that our algorithm outperforms existing partitioning methods in final meshing quality. We also show that our pipeline scales up efficiently in an HPC environment.
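As a concrete illustration of the "conquer" stage described above, the sketch below generates a structured grid on a single four-sided patch via transfinite (Coons) interpolation. It is a minimal, generic example; the boundary curves, grid resolutions, and function names are assumptions for illustration, not the dissertation's actual pipeline.

```python
# A minimal sketch of structured meshing on one four-sided patch via
# transfinite (Coons) interpolation; the boundary curves are hypothetical.
import numpy as np

def coons_patch(bottom, top, left, right, nu, nv):
    """Structured (nu x nv) grid of 2D points from four boundary curves.

    Each argument is a function of a parameter in [0, 1] returning an
    (x, y) point; the curves are assumed to meet at shared corners.
    """
    u = np.linspace(0.0, 1.0, nu)
    v = np.linspace(0.0, 1.0, nv)
    grid = np.zeros((nu, nv, 2))
    c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
    c01, c11 = np.array(top(0.0)), np.array(top(1.0))
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            # Linear blend of opposite boundary curves ...
            ruled_u = (1 - vj) * np.array(bottom(ui)) + vj * np.array(top(ui))
            ruled_v = (1 - ui) * np.array(left(vj)) + ui * np.array(right(vj))
            # ... minus the bilinear corner correction.
            corners = ((1 - ui) * (1 - vj) * c00 + ui * (1 - vj) * c10
                       + (1 - ui) * vj * c01 + ui * vj * c11)
            grid[i, j] = ruled_u + ruled_v - corners
    return grid

# Example patch with one curved (sinusoidal) bottom edge.
grid = coons_patch(
    bottom=lambda t: (t, 0.2 * np.sin(np.pi * t)),
    top=lambda t: (t, 1.0),
    left=lambda t: (0.0, t),
    right=lambda t: (1.0, t),
    nu=20, nv=10)
print(grid.shape)  # (20, 10, 2): a structured 20x10 grid of 2D points
```

In a divide-and-conquer pipeline, each patch produced by the shape decomposition would receive its own structured grid in this manner, with shared boundary discretizations keeping neighboring grids conforming.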
42

Bounded rationality in decision making under uncertainty: Towards optimal granularity

Lorkowski, Joseph A. 28 January 2016 (has links)
Starting from the well-known studies by Kahneman and Tversky, researchers have found many examples in which our decision making seems to be irrational. We show that this seemingly irrational decision making can be explained if we take into account that human abilities to process information are limited. As a result, instead of the exact values of different quantities, we operate with granules that contain these values. Through several examples, we show that optimization under such granularity restrictions indeed leads to the observed human decision making. Thus, granularity helps explain seemingly irrational human decision making.

Similar arguments can be used to explain the success of heuristic techniques in expert decision making. We use these explanations to predict the quality of the resulting decisions. Finally, we explain how we can improve on the existing heuristic techniques by formulating and solving the corresponding optimization problems.
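To make the granularity idea concrete, here is a toy sketch (not the thesis's formal model) in which two options that an exact comparison distinguishes become indistinguishable once their values are replaced by coarse granules; the option values and granule size are made up for illustration.

```python
# A toy illustration of granule-based comparison: values are rounded to
# the nearest granule before being compared.
def to_granule(x, granule_size):
    """Replace an exact value with the center of its containing granule."""
    return round(x / granule_size) * granule_size

# Two options with close expected payoffs (hypothetical numbers).
option_a, option_b = 102.0, 98.0

exact_choice = "A" if option_a > option_b else "B"
coarse_a = to_granule(option_a, 10.0)   # -> 100.0
coarse_b = to_granule(option_b, 10.0)   # -> 100.0
granular_choice = ("tie" if coarse_a == coarse_b
                   else ("A" if coarse_a > coarse_b else "B"))

print(exact_choice)     # A: exact values distinguish the options
print(granular_choice)  # tie: with 10-unit granules the options look the same
```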
43

The Performance of Random Prototypes in Hierarchical Models of Vision

Stewart, Kendall Lee 30 December 2015 (has links)
I investigate properties of HMAX, a computational model of hierarchical processing in the primate visual cortex. High-level cortical neurons have been shown to respond strongly to particular natural shapes, such as faces. HMAX models this property with a dictionary of natural shapes, called prototypes, that respond to the presence of those shapes. The resulting set of similarity measurements is an effective descriptor for classifying images. Curiously, prior work has shown that replacing the dictionary of natural shapes with entirely random prototypes has little impact on classification performance. This work explores that phenomenon by studying the performance of random prototypes on natural scenes, and by comparing their performance to that of sparse random projections of low-level image features.
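A rough sketch of the random-prototype idea discussed above: each feature is the maximum radial-basis similarity between image patches and one randomly generated prototype. The patch size, prototype count, and kernel width below are illustrative assumptions, loosely in the spirit of HMAX C2 responses; this is not the actual HMAX implementation studied in the thesis.

```python
# Similarity-to-random-prototypes descriptor (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(image, size=4, stride=2):
    """All size x size patches of a 2D image, flattened into rows."""
    h, w = image.shape
    patches = [image[i:i + size, j:j + size].ravel()
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.array(patches)

def random_prototype_descriptor(image, prototypes, sigma=1.0):
    """One similarity value per prototype: the max over patch locations."""
    patches = extract_patches(image)
    dists = np.linalg.norm(patches[:, None, :] - prototypes[None, :, :], axis=2)
    return np.exp(-dists ** 2 / (2 * sigma ** 2)).max(axis=0)

prototypes = rng.standard_normal((50, 16))   # 50 random 4x4 "shapes"
image = rng.standard_normal((32, 32))        # stand-in for a natural image
descriptor = random_prototype_descriptor(image, prototypes)
print(descriptor.shape)  # (50,): one response per random prototype
```

The resulting fixed-length descriptor can then be fed to any standard classifier, which is the setting in which random prototypes are compared against sparse random projections.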
44

Software signature derivation from sequential digital forensic analysis

Nelson, Alexander J. 22 July 2016 (has links)
Hierarchical storage system namespaces are notorious for their immense size, which is a significant hindrance for any computer inspection. File systems for computers start with tens of thousands of files, and the Registries of Windows computers start with hundreds of thousands of cells. An analysis of a storage system, whether for digital forensics or for locating old data, depends on being able to reduce the namespaces down to the features of interest. Typically, having such large volumes to analyze is seen as a challenge to identifying relevant content. However, if the origins of files can be identified, particularly distinguishing between software and human origins, large counts of files become a boon to profiling how a computer has been used. It becomes possible to identify software that has influenced the computer's state, which gives an important overview of storage system contents not available to date.

In this work, I apply document search to observed changes in a class of forensic artifact, cell names of the Windows Registry, to identify effects of software on storage systems. Under the search model, a system's Registry becomes a query for matching software signatures. To derive signatures, file system differential analysis is extended from comparing two storage system states to comparing many sequences of states. The workflow that creates these signatures is an example of analytics on data lineage, from branching data histories. The signatures independently indicate past presence or usage of software, based on consistent creation of measurably distinct artifacts. A signature search engine is demonstrated against a machine with a selected set of applications installed and executed. The search engine configuration that is optimal for that machine is then run against a separate corpus of machines whose present applications were identified by several non-Registry forensic artifact sources, including file systems, memory, and network captures. The signature search engine corroborates those findings using only the Windows Registry.
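The signature-search idea can be sketched with plain sets standing in for Registry cell names. The snapshot sequences, cell names, and TF-IDF-style scoring below are hypothetical illustrations of the workflow, not the thesis's actual derivation procedure or search engine.

```python
# Illustrative signature derivation (differencing snapshot sequences) and
# document-search-style scoring of a machine's Registry against signatures.
from collections import Counter
import math

def derive_signature(snapshots):
    """Cell names that consistently appear after the first snapshot."""
    new_cells = [set(later) - set(snapshots[0]) for later in snapshots[1:]]
    # Keep names created in every observed run of the software.
    return set.intersection(*new_cells) if new_cells else set()

def score(query_cells, signature, idf):
    """TF-IDF-like score of a machine's Registry against one signature."""
    return sum(idf.get(c, 0.0) for c in signature if c in query_cells)

# Hypothetical snapshot sequences for two applications.
app_a_runs = [{"Setup"},
              {"Setup", "Software\\AppA\\Install"},
              {"Setup", "Software\\AppA\\Install", "Software\\AppA\\Run"}]
app_b_runs = [{"Setup"},
              {"Setup", "Software\\AppB\\Install"}]

signatures = {"AppA": derive_signature(app_a_runs),
              "AppB": derive_signature(app_b_runs)}

# Inverse document frequency over signature cells: rarer cells weigh more.
df = Counter(c for sig in signatures.values() for c in sig)
idf = {c: math.log(1 + len(signatures) / df[c]) for c in df}

machine_registry = {"Setup", "Software\\AppA\\Install", "Unrelated\\Key"}
ranking = sorted(signatures,
                 key=lambda app: score(machine_registry, signatures[app], idf),
                 reverse=True)
print(ranking)  # ['AppA', 'AppB']: AppA's signature matches the machine best
```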
45

Neuroevolution Based Inverse Reinforcement Learning

Budhraja, Karan Kumar 23 July 2016 (has links)
Motivated by imitation learning in nature, the problem of Learning from Demonstration targets learning to perform tasks based on observed examples. One approach to Learning from Demonstration is Inverse Reinforcement Learning, in which actions are observed to infer rewards. This work combines a feature-based state evaluation approach to Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying neural networks based on their performance on a given task. Neural networks are used to learn from a demonstrated expert policy and are evolved to generate a policy similar to the demonstration. The algorithm is discussed and evaluated against competitive feature-based Inverse Reinforcement Learning approaches. At the cost of execution time, neural networks allow non-linear combinations of features in state evaluations; these evaluations may correspond to state values or state rewards. This results in better correspondence to the observed examples than linear combinations provide. This work also extends existing work on Bayesian Non-Parametric Feature construction for Inverse Reinforcement Learning by using non-linear combinations of intermediate data to improve performance. The algorithm is observed to be particularly suitable for linearly solvable non-deterministic Markov Decision Processes in which multiple rewards are sparsely scattered in the state space. The algorithm's performance is shown to be limited by the parameters used, implying adjustable capability. A conclusive performance hierarchy among the evaluated algorithms is constructed.
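A compact sketch of the neuroevolution ingredient, under strong simplifying assumptions: a tiny one-layer network provides a non-linear state evaluation, the greedy policy it induces is compared against a demonstrated policy, and the network weights are evolved by mutation. The environment, features, fitness definition, and hyperparameters below are made up for illustration and are not the thesis's algorithm.

```python
# Evolving a state-evaluation network to imitate a demonstrated policy.
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_FEATURES = 8, 3
features = rng.standard_normal((N_STATES, N_FEATURES))
# Hypothetical demonstrated policy: the expert always moves to state (s+1) mod N.
expert_next = {s: (s + 1) % N_STATES for s in range(N_STATES)}
neighbors = {s: [(s + 1) % N_STATES, (s - 1) % N_STATES] for s in range(N_STATES)}

def value(weights, state):
    """Non-linear (tanh) combination of state features: the network's evaluation."""
    return np.tanh(features[state] @ weights)

def fitness(weights):
    """Fraction of states where the greedy policy under `value` matches the expert."""
    greedy = {s: max(neighbors[s], key=lambda n: value(weights, n))
              for s in range(N_STATES)}
    return sum(greedy[s] == expert_next[s] for s in range(N_STATES)) / N_STATES

# Simple (mu + lambda)-style evolution: keep the best weight vectors, mutate them.
population = [rng.standard_normal(N_FEATURES) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = parents + [p + 0.3 * rng.standard_normal(N_FEATURES)
                            for p in parents for _ in range(3)]

print(f"best imitation accuracy: {fitness(max(population, key=fitness)):.2f}")
```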
46

Quantifying the effects of uncertainty to manage cyber-security risk and enable adaptivity in power grid wide area monitoring and control applications

Wang, Yujue 19 July 2016 (has links)
The smooth operation of the power grid depends on effective Wide Area Monitoring and Control (WAMC) systems, which are expected to provide reliable and secure communication of data. Due to the complexity of the system and inaccuracies in modeling, uncertainty is unavoidable in such systems, so it is of great interest to characterize and quantify this uncertainty properly, which is significant to the functionality of the power grid.

Trust, a subjective and expressive concept connoting one party's (the trustor's) reliance on and belief in the performance of another party (the trustee), is modeled to help administrators (trustors) of WAMC systems evaluate the trustworthiness of data sources (trustees), which is essentially a measurement of the uncertainty of the system. Both evidence-based and data-based methods are developed to evaluate trustworthiness and to describe uncertainty, respectively.

By modeling aleatory and epistemic uncertainty with subjective logic and probability distributions, a framework for quantifying uncertainty is proposed. Quantifying the uncertainties can greatly help system administrators select the security implementation that best achieves both security and QoS with a given confidence. Based on the quantification framework, an adaptive security mechanism is prototyped that can adjust the security scheme online, according to dynamic requirements and environmental changes, to make the best ongoing trade-off between security assurance and QoS.
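The trust-modeling step can be illustrated with a standard binomial subjective-logic opinion formed from evidence counts. The evidence values and the data-source scenario below are hypothetical, and this is only a sketch of one building block rather than the thesis's full quantification framework.

```python
# Forming a binomial subjective-logic opinion (belief, disbelief, uncertainty)
# about a data source from positive/negative evidence counts.
from dataclasses import dataclass

PRIOR_WEIGHT = 2.0  # non-informative prior weight W used in binomial opinions

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    @classmethod
    def from_evidence(cls, positive: float, negative: float) -> "Opinion":
        total = positive + negative + PRIOR_WEIGHT
        return cls(positive / total, negative / total, PRIOR_WEIGHT / total)

    def expected_trust(self) -> float:
        """Projected probability: belief plus the base rate's share of uncertainty."""
        return self.belief + self.base_rate * self.uncertainty

# A measurement source with 90 consistent and 10 anomalous observations (hypothetical).
source_opinion = Opinion.from_evidence(positive=90, negative=10)
print(source_opinion)                   # belief, disbelief, and uncertainty sum to 1
print(source_opinion.expected_trust())  # ~0.89 expected trustworthiness
```

An administrator could compare such expected-trust values, and their uncertainty components, across sources when deciding how much protection a given data path warrants.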
47

ALGORITHMS AND TECHNIQUES FOR TRANSITIONING TO SOFTWARE DEFINED NETWORKS

Patil, Prithviraj Pradiprao 22 July 2016 (has links)
Software Defined Networking (SDN) has seen growing deployment in large wired data center networks due to advantages such as better network manageability and higher-level abstractions. At the core of SDN is the separation and centralization of the control plane away from the forwarding elements of the network, as opposed to the distributed control plane of current networks. However, various issues need to be addressed for an efficient transition from existing legacy networks to SDN. In this thesis, we address the following three challenges. (1) The task of deploying distributed controllers continues to be performed in a manual and static way. To address this problem, we present a novel approach called InitSDN for bootstrapping the distributed software-defined network architecture and deploying the distributed controllers. (2) Data center networks (DCNs) rely heavily on group communication for tasks such as management utilities, collaborative applications, and distributed databases. SDN provides new opportunities for re-engineering multicast protocols to address current limitations of IP multicast. To that end, we present a novel approach that uses SDN-based multicast (SDMC) for flexible, network load-aware, and switch memory-efficient group communication in DCNs. (3) SDN has been slower to appear in wireless scenarios such as wireless mesh networks (WMNs) than in wired data center networks, because SDN (and its underlying OpenFlow protocol) was initially designed for wired networks in which the SDN controller has wired access to all switches. To address this challenge, we propose a purely OpenFlow-based approach for adapting SDN to wireless mesh networks by extending the current OpenFlow protocol for routing in the wireless network.
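One concrete piece of the SDN-based multicast idea is that a centralized controller can compute a distribution tree and derive per-switch forwarding state from it. The sketch below does this with a shortest-path tree over a made-up topology; it illustrates the concept only and is not the SDMC algorithm or an OpenFlow rule format.

```python
# Controller-side computation of multicast forwarding state (illustrative).
from collections import deque

def shortest_path_tree(graph, source):
    """BFS parent pointers from `source` over an undirected adjacency dict."""
    parent, frontier = {source: None}, deque([source])
    while frontier:
        node = frontier.popleft()
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                frontier.append(neighbor)
    return parent

def multicast_entries(graph, source, members):
    """Map each switch to the set of tree links it must forward the group on."""
    parent = shortest_path_tree(graph, source)
    entries = {}
    for member in members:
        node = member
        while parent[node] is not None:               # walk up toward the source
            entries.setdefault(parent[node], set()).add(node)
            node = parent[node]
    return entries

# Hypothetical small topology of four switches.
topology = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}
print(multicast_entries(topology, source="s1", members=["s2", "s4"]))
# {'s1': {'s2'}, 's2': {'s4'}}: each switch forwards only on tree links
```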
48

A Novel Technique and Infrastructure for Online Analytics of Social Networks

Liu, Lian 22 July 2016 (has links)
The popularity of online social networks has grown at an exponential scale since they connect people all over the world, enabling them to remain in touch with each other despite the geographical distance between them. These networks are a source of an enormous amount of data that can be analyzed to make informed decisions on a variety of aspects, ranging from addressing societal problems to discovering potential security- and terrorism-related events. Unfortunately, most efforts at analyzing such data tend to be offline, which may not be useful when actions must be taken in a timely fashion or when the volume of generated data overwhelms computation, storage, and networking resources. This Master's thesis investigates novel mechanisms for online processing of social network data. To validate the ideas, the thesis uses the LDBC social network benchmark provided as a challenge problem at the ACM Distributed and Event-based Systems (DEBS) conference, and demonstrates the techniques developed to address the first query from the challenge problem. The thesis also discusses the architectural choices we made in developing an online social network analysis solution.
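The flavor of the online processing involved can be sketched as a continuously updated, windowed top-k ranking over an event stream. The event fields, window length, and scoring below are assumptions for illustration; the exact query semantics come from the DEBS/LDBC challenge specification.

```python
# Streaming, windowed top-k ranking over an event stream (illustrative).
import heapq
from collections import defaultdict, deque

def top_k_stream(events, k=3, window=3600):
    """Yield the current top-k item ids after each event.

    `events` is an iterable of (timestamp, item_id, score_delta) tuples,
    assumed to arrive in timestamp order; contributions older than
    `window` seconds are expired before each ranking is emitted.
    """
    scores = defaultdict(float)
    contributions = deque()            # (timestamp, item_id, delta), kept for expiry
    for ts, item, delta in events:
        scores[item] += delta
        contributions.append((ts, item, delta))
        # Expire contributions that fell out of the sliding window.
        while contributions and contributions[0][0] <= ts - window:
            old_ts, old_item, old_delta = contributions.popleft()
            scores[old_item] -= old_delta
            if scores[old_item] <= 0:
                del scores[old_item]
        yield heapq.nlargest(k, scores, key=scores.get)

# Hypothetical scoring events: (timestamp in seconds, post id, score contribution).
events = [(0, "post1", 10), (100, "post2", 10), (200, "post1", 10), (4000, "post3", 10)]
for ranking in top_k_stream(events):
    print(ranking)
# The final ranking contains only post3: the earlier contributions have expired.
```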
49

A binary classifier for test case feasibility applied to automatically generated tests of event-driven software

Robbins, Bryan Thomas, III 29 June 2016 (has links)
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide.

In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments.

The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier.

To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
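The classification step can be sketched with a standard bag-of-event-IDs representation and scikit-learn's logistic regression. The event IDs and feasibility labels below are made up; the actual study extracts its features from MBT tool outputs and uses executed test suites as ground truth.

```python
# Bag-of-event-IDs feasibility classifier (illustrative sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" is the sequence of event IDs in one generated test case.
test_cases = [
    "open_file click_ok close_window",
    "open_file click_cancel close_window",
    "click_ok click_ok save_as",
    "save_as click_cancel",
]
feasible = [1, 1, 0, 0]   # hypothetical ground-truth labels from execution

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(test_cases, feasible)

print(model.predict(["open_file click_ok save_as"]))        # predicted feasibility (0 or 1)
print(model.predict_proba(["open_file click_ok save_as"]))  # class probabilities
```

With a real training suite, the same pipeline generalizes directly; alternative featurizations of the event IDs simply replace the vectorizer.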
50

Top-K Query Processing in Edge-Labeled Graph Data

Park, Noseong 29 June 2016 (has links)
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed.

Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and the graph queries of other graph DBMSs can also be viewed as subgraph matching over large graphs.

Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features.

The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. A probability is calculated from various aspects of each answer, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned.

An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation they consider important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask.

The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY, and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
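The pruning pattern shared by such top-k algorithms can be sketched generically: keep a bounded heap of the best answers found so far, and skip (or stop at) candidates whose score upper bound cannot beat the current k-th best score. This is an illustration of the general technique, not the proposed SIQ/VIQ/AIQ/PIQ algorithms; the candidate ordering, scoring function, and upper bound below are assumptions.

```python
# Generic top-k answering with upper-bound pruning (illustrative sketch).
import heapq

def top_k(candidates, k, upper_bound, score):
    """Return the k highest-scoring candidates, pruning by `upper_bound`.

    `upper_bound(c)` must be a cheap overestimate of `score(c)`; candidates
    are assumed to arrive sorted by decreasing upper bound, so the search
    can stop early once the bound falls below the k-th best exact score.
    """
    heap = []  # min-heap of (score, candidate): the k best found so far
    for candidate in candidates:
        if len(heap) == k and upper_bound(candidate) <= heap[0][0]:
            break                      # no remaining candidate can enter the top-k
        s = score(candidate)           # expensive exact scoring
        if len(heap) < k:
            heapq.heappush(heap, (s, candidate))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, candidate))
    return sorted(heap, reverse=True)

# Toy example: candidates are numbers, the score is the value itself, and the
# upper bound adds a small slack.
candidates = sorted([7, 3, 9, 1, 8, 2], reverse=True)
print(top_k(candidates, k=2, upper_bound=lambda c: c + 0.5, score=lambda c: c))
# [(9, 9), (8, 8)]: later candidates were pruned without exact scoring
```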
