  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Bag-of-Concepts as a Movie Genome and Representation

Zhou, Colin 30 August 2016 (has links)
As online retailers, media providers, and others have learned, the ability to provide quality recommendations is a strong profit multiplier: it drives sales, maintains retention, and provides value for users. Video streaming, retail, and video database services in particular have devoted extensive effort and research to finding effective representations. One of these is MovieLens.org, a video recommender website that uses a representation called the tag genome to understand movies and make tuned recommendations. The tag genome is built from user input on the website: users apply tags to movies and rate them, providing the information needed to make recommendations.

In this work, we draw on research in information retrieval to implement a bag-of-concepts representation for movies and make tuned recommendations in the same manner as MovieLens. Our implementation is fully unsupervised and does not require the user data that MovieLens depends on, while still having properties similar to the tag genome that enable interesting tuned recommendations.
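The bag-of-concepts idea can be sketched as follows (a toy illustration, not the thesis's actual implementation; the concept lexicon and movie descriptions are invented): concepts rather than raw words become the vector dimensions, and cosine similarity between concept vectors drives recommendations.

```python
from collections import Counter
from math import sqrt

# Hypothetical concept lexicon: surface words mapped to broader concepts.
CONCEPT_MAP = {
    "spaceship": "sci-fi", "alien": "sci-fi", "robot": "sci-fi",
    "sword": "fantasy", "dragon": "fantasy", "wizard": "fantasy",
    "detective": "crime", "murder": "crime", "heist": "crime",
}

def bag_of_concepts(description: str) -> Counter:
    """Map a plot description to counts over concepts rather than words."""
    words = description.lower().split()
    return Counter(CONCEPT_MAP[w] for w in words if w in CONCEPT_MAP)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

m1 = bag_of_concepts("an alien spaceship lands and a robot emerges")
m2 = bag_of_concepts("a robot detective investigates a murder")
m3 = bag_of_concepts("a wizard and a dragon guard a sword")
print(cosine(m1, m2) > cosine(m1, m3))  # True: sci-fi overlap beats none
```

Because the mapping from words to concepts can come from unsupervised sources (e.g. clustering of word embeddings), no user tagging data is needed, which matches the abstract's claim.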
82

Visualization of spatio-temporal data in two dimensional space

Baskaran, Savitha 11 March 2017 (has links)
Spatio-temporal data has become very common in recent times, as a large number of datasets collect both location and time information in real time. The main challenge is that extracting useful insights from such large datasets is extremely complex and laborious. In this thesis, we propose a novel 2D technique to visualize spatio-temporal big data. Visualizing the combined interaction between the spatial and temporal dimensions is important for uncovering insights and identifying trends within the data.

Maps have long been a successful way to represent spatial information. In this work, color is additionally used to represent the temporal dimension. Each data point's timestamp is converted into a color based on the HSV color model; variation in time is represented by a transition from one color to another, providing smooth interpolation. The proposed solution helps the user quickly understand the data and gain insights.
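The HSV mapping described above might look like the following sketch (the red-to-blue hue arc and the date range are illustrative choices, not necessarily those made in the thesis):

```python
import colorsys
from datetime import datetime

def time_to_rgb(t: datetime, start: datetime, end: datetime) -> tuple:
    """Map a timestamp to a color: the earliest time maps to red (hue 0),
    the latest to blue (hue 2/3), interpolating smoothly in between."""
    frac = (t - start) / (end - start)           # 0.0 .. 1.0
    hue = (2.0 / 3.0) * frac                     # restrict to the red..blue arc
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)    # full saturation and value

start = datetime(2017, 1, 1)
end = datetime(2017, 12, 31)
print(time_to_rgb(start, start, end))  # (1.0, 0.0, 0.0): pure red
print(time_to_rgb(end, start, end))    # (0.0, 0.0, 1.0): pure blue
```

Because hue is a continuous angle, nearby timestamps get visually similar colors, which is what gives the smooth interpolation the abstract mentions.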
83

Decentralized Scheduling for Many-Task Applications in the Hybrid Cloud

Peterson, Brian Lyle 02 May 2017 (has links)
While Cloud Computing has transformed how we solve many computing tasks, some scientific and many-task applications are not efficiently executed on cloud resources. Decentralized scheduling, as studied in grid computing, can provide a scalable system to organize cloud resources and schedule a variety of work. By measuring simulations of two algorithms, the fully decentralized Organic Grid and the partially decentralized Air Traffic Controller from IBM, we establish that decentralization is a workable approach, and that there are bottlenecks that can impact partially centralized algorithms. Through measurements in the cloud, we verify that our simulation approach is sound, and assess the variable performance of cloud resources.

We propose a scheduler that measures the capabilities of the resources available to execute a task and distributes work dynamically at run time. Our scheduling algorithm is evaluated experimentally, and we show that performance-aware scheduling in a cloud environment can provide improvements in execution time. This provides a framework by which a variety of parameters can be weighed to make job-specific and context-aware scheduling decisions. Our measurements examine the usefulness of benchmarking as a metric to measure a node's performance and to drive scheduling. Benchmarking provides an advantage over simple queue-based scheduling on distributed systems whose members vary in actual performance, but the NAS benchmark we use does not always correlate perfectly with actual performance. The utilized hardware is examined, as are enforced performance variations, and we observe changes in performance that result from running on a system in which different workers receive different CPU allocations. As we see that performance metrics are useful near the end of the execution of a large job, we create a new metric from historical data of partially completed work, and use it to drive execution time down further.

Interdependent task graph work is introduced and described as a next step in improving cloud scheduling. Realistic task graph problems are defined and a scheduling approach is introduced. This dissertation lays the groundwork to expand the types of problems that can be solved efficiently in the cloud environment.
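A minimal sketch of the performance-aware idea (purely illustrative; the dissertation's scheduler, benchmarks, and weighting are more sophisticated): keep a per-worker estimate of seconds-per-chunk, blend it with observations from completed work, and dispatch each new chunk to the fastest estimated worker.

```python
class PerformanceAwareScheduler:
    """Toy performance-aware dispatcher. Estimates start from a benchmark
    score and are updated from the history of partially completed work."""

    def __init__(self, benchmark_scores):
        # benchmark_scores: initial estimate of seconds-per-chunk per worker
        self.estimates = dict(benchmark_scores)

    def record(self, worker, seconds):
        # Blend the prior estimate with the newest observation (equal weight),
        # so estimates track actual performance as chunks complete.
        self.estimates[worker] = 0.5 * self.estimates[worker] + 0.5 * seconds

    def pick_worker(self):
        # Dispatch the next chunk to the fastest estimated worker.
        return min(self.estimates, key=self.estimates.get)

sched = PerformanceAwareScheduler({"node-a": 2.0, "node-b": 2.0})
sched.record("node-a", 4.0)   # node-a slowed down (smaller CPU allocation?)
sched.record("node-b", 1.0)   # node-b is completing chunks quickly
print(sched.pick_worker())    # node-b
```

A plain queue-based scheduler would treat both nodes identically; here the observed slowdown of `node-a` immediately shifts work toward `node-b`, mirroring the benefit the abstract describes.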
84

Game-Theoretically Allocating Resources to Catch Evaders and Payments to Stabilize Teams

Li, Yuqian January 2016 (has links)
Allocating resources optimally is a nontrivial task, especially when multiple self-interested agents with conflicting goals are involved. This dissertation uses techniques from game theory to study two classes of such problems: allocating resources to catch agents that attempt to evade them, and allocating payments to agents in a team in order to stabilize it. Besides discussing what allocations are optimal from various game-theoretic perspectives, we also study how to efficiently compute them, and, where no such algorithms are found, what computational hardness results can be proved.

The first class of problems is inspired by real-world applications such as the TOEFL iBT test, course final exams, driver's license tests, and airport security patrols. We call them test games and security games. This dissertation first studies test games separately, and then proposes a framework of Catcher-Evader games (CE games) that generalizes both test games and security games. We show that the optimal test strategy can be efficiently computed for scored test games, but is hard to compute for many binary test games. Optimal Stackelberg strategies are hard to compute for CE games, but we give an empirically efficient algorithm for computing their Nash equilibria. We also prove that the Nash equilibria of a CE game are interchangeable.

The second class of problems involves how to split a reward that is collectively obtained by a team: for example, how a startup should distribute its shares, or what salary an enterprise should pay its employees. Several stability-based solution concepts in cooperative game theory, such as the core, the least core, and the nucleolus, are well suited to this purpose when the goal is to prevent coalitions of agents from breaking off. We show that some of these solution concepts can be justified as the most stable payments under noise. Moreover, by adjusting the noise models (to be arguably more realistic), we obtain new solution concepts including the partial nucleolus, the multiplicative least core, and the multiplicative nucleolus. We then study the computational complexity of these solution concepts under the constraint of superadditivity. Our result is based on what we call Small-Issues-Large-Team games and applies to popular representation schemes such as MC-nets. / Dissertation
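The core, the stability concept the abstract builds on, can be illustrated with a toy check (a hypothetical three-player game, not one from the dissertation): an efficient payment vector is in the core if no coalition is paid less than the value it could earn on its own.

```python
from itertools import combinations

def in_core(payments, value):
    """Check the core constraints: every coalition S must receive at least
    v(S). Assumes the vector is efficient, i.e. payments sum to v(N)."""
    players = sorted(payments)
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if sum(payments[p] for p in S) < value(frozenset(S)) - 1e-9:
                return False  # coalition S would rather break off
    return True

# Toy superadditive game: any pair earns 6, the grand coalition earns 9.
def v(S):
    if len(S) == 3: return 9
    if len(S) == 2: return 6
    return 0

print(in_core({"a": 3, "b": 3, "c": 3}, v))  # True: every pair receives 6
print(in_core({"a": 5, "b": 3, "c": 1}, v))  # False: {b, c} only receives 4
```

The least core and nucleolus refine this by minimizing the worst coalition deficit, but the exponential loop over coalitions above hints at why computing them under compact representations raises the complexity questions the dissertation studies.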
85

Configuration and management of wireless sensor networks

Kim, Min Y. 12 1900 (has links)
Wireless sensor networks (WSNs) are expected to play an essential role in the upcoming age of pervasive computing. As a new research area, there are several open problems that need to be investigated; one such problem is the configuration and management of WSNs. To deploy sensors efficiently over a wide area, we need to consider coverage, purpose, and the geographic situation. By considering these elements, we can devise general deployment strategies. Another issue is the management of diverse sensors over a wide area. To handle these issues, we need approaches from different views: management levels, WSN functionalities, and management functional areas. In this thesis, I describe some of the key configuration and management problems in WSNs, and then present a newly developed application to address these problems.
86

Validating network security policies via static analysis of router ACL configuration

Wong, Eric Gregory Wen Wie 12 1900 (has links)
Approved for public release, distribution unlimited / The security of a network depends on how its design fulfills the organization's security policy. One aspect of security is reachability: whether two hosts can communicate. Network designers and operators face a very difficult problem in verifying the reachability of a network because of the lack of automated tools, and calculations by hand are impractical because of the sheer size of networks. The reachability of a network is influenced by packet filters, routing protocols, and packet transformations. A general framework for calculating the joint effect of these three factors was published recently. This thesis partially validates that framework through a detailed Java implementation: an automated solution which demonstrates that the effect of statically configured packet filters on the reachability upper bounds of a network can be computed efficiently. The automated solution performs its computations purely on data obtained by parsing router configuration files. Mapping all packet filter rules into a data structure called PacketSet, consisting of tuples of permitted ranges of packet header fields, is the key to easy manipulation of the data obtained from the router configuration files. This novel approach facilitates the validation of the security policies of very large networks, which was previously not possible, and paves the way for a complete automated solution for static analysis of network reachability.
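The PacketSet idea can be sketched as follows (a simplified three-field version; the thesis's structure covers full packet headers, and the field values here are invented): each rule is a tuple of permitted inclusive ranges, and composing filters along a path reduces to field-wise range intersection.

```python
# A simplified PacketSet entry: tuples of inclusive (lo, hi) ranges over
# three header fields, e.g. (src_addr, dst_addr, dst_port).
Rule = tuple  # ((lo, hi), (lo, hi), (lo, hi))

def intersect(a: Rule, b: Rule):
    """Intersect two rules field by field; None if any field is disjoint."""
    out = []
    for (alo, ahi), (blo, bhi) in zip(a, b):
        lo, hi = max(alo, blo), min(ahi, bhi)
        if lo > hi:
            return None  # no packet can satisfy both rules
        out.append((lo, hi))
    return tuple(out)

def matches(packet, rule: Rule) -> bool:
    """Does a concrete packet (one value per field) fall inside a rule?"""
    return all(lo <= f <= hi for f, (lo, hi) in zip(packet, rule))

permit = ((0, 255), (10, 20), (80, 80))       # filter on one router
reach = ((100, 200), (0, 255), (0, 65535))    # set reachable so far
joint = intersect(permit, reach)
print(joint)                        # ((100, 200), (10, 20), (80, 80))
print(matches((150, 15, 80), joint))  # True
```

Chaining `intersect` along every filter on a path yields the set of packets that can traverse it, which is exactly the upper-bound reachability computation the abstract describes.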
87

A performance analysis of ad-hoc ocean sensor network

Lim, Kwang Yong. 12 1900 (has links)
This thesis presents the simulation results and performance analysis of IEEE 802.15.4 in an oceanic environment. The 802.15.4 standard allows simple sensors and actuators to co-exist on a single wireless platform. The simulation is performed using Network Simulator, version 2 (NS2), an open-source, event-driven network simulator developed at UC Berkeley that simulates a variety of networks. Leveraging the capabilities of NS2, the performance of the IEEE 802.15.4 protocol is studied under variations in node density, mobility, and loading conditions. The mobility model selected for the simulation considers the effects of the ocean on the mobile nodes, in particular the surface current. However, the available mobility models (Random Waypoint, Gauss-Markov, Manhattan Grid and Reference Point Group) do not represent real-life mobility in an oceanic environment. As a result, actual surface measurement data from the Monterey Bay area is used to generate the node movements. The results from this analysis provide insights into the performance of IEEE 802.15.4 and its suitability for operating in an oceanic environment.
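A current-driven mobility step might look like the following sketch (illustrative only; the thesis replays actual Monterey Bay surface-current measurements rather than the constant current and jitter used here):

```python
import random

def drift_step(pos, current, dt=1.0, jitter=0.1):
    """Advance a floating node one time step: deterministic advection by the
    local surface current plus a small random perturbation (wind, waves)."""
    x, y = pos
    cx, cy = current
    return (x + cx * dt + random.uniform(-jitter, jitter),
            y + cy * dt + random.uniform(-jitter, jitter))

random.seed(0)
pos = (0.0, 0.0)
for _ in range(10):
    pos = drift_step(pos, current=(0.5, 0.1))  # eastward surface current
print(pos)  # drifted roughly 5 units east and 1 unit north
```

Replacing the constant `current` with a lookup into measured current vectors at the node's position would turn this into the data-driven movement generation the abstract describes.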
88

Information Centric Strategies for Scalable Data Transport in Cyber Physical Systems (CPSs)

Kavuri, Ajay Krishna Teja 26 May 2017 (has links)
Cyber-Physical Systems (CPSs) represent the next generation of computing that is ubiquitous, wireless, and intelligent. These networked sensing systems are at the intersection of sensing, communication, control, and computing [16]. Such systems will have applications in numerous fields such as vehicular systems and transportation, medical and health care systems, smart homes and buildings, etc. The proliferation of such sensing systems will trigger an exponential increase in the computational devices that exchange data over existing network infrastructure.

Transporting data at scale in such systems is a challenge [21], mainly due to the underlying network infrastructure, which is still resource-constrained and bandwidth-limited. Efforts have been made to improve the network infrastructure [5] [2] [15]. The focus of this thesis is to put forward information-centric strategies that optimize data transport over the existing network infrastructure.

This thesis proposes four different information-centric strategies: (1) a strategy to minimize network congestion in a generic sensing system by estimating data with adaptive updates, (2) an adaptive information exchange strategy based on the rate of change of state for static and mobile networks, (3) a spatio-temporal strategy that maintains spatial resolution by reducing redundant transmissions, and (4) a proximity-dependent data transfer strategy to ensure the most updated information in high-density regions. Each of these strategies is experimentally verified to optimize data transport in its respective setting.
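Strategy (1)'s adaptive-update idea resembles classic send-on-delta reporting; a minimal sketch (the threshold and sensor readings are invented): a node transmits a reading only when it differs enough from the last transmitted value, and the receiver estimates the state by holding that value.

```python
def send_on_delta(readings, threshold):
    """Transmit a reading only when it differs from the last transmitted
    value by more than `threshold`; intermediate readings are suppressed."""
    sent = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            sent.append(r)  # worth a transmission: state changed materially
            last = r
    return sent

readings = [20.0, 20.1, 20.2, 23.0, 23.1, 19.0]
print(send_on_delta(readings, threshold=0.5))  # [20.0, 23.0, 19.0]
```

Here six readings shrink to three transmissions while the receiver's held estimate never deviates from the truth by more than the threshold, which is the congestion-versus-accuracy trade-off the strategy exploits.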
89

Machine Learning for the Automated Identification of Cyberbullying and Cyberharassment

Ducharme, Daniel N. 21 April 2017 (has links)
Cyberbullying and cyberharassment are a growing issue that is straining the resources of human moderation teams, and they are contributing to an increase in suicide among affected teens who are unable to get away from the harassment. By utilizing n-grams and support vector machines, this research was able to classify YouTube comments with an overall accuracy of 81.8%, which increased to 83.9% when retraining added the misclassified comments to the training set. To accomplish this, the LibSVM implementation of the support vector machine algorithm was used with a 350-comment balanced training set, the 7% of length-3 n-grams with the highest entropy, and a polynomial kernel with a C error factor of 1, a degree of 2, and a Coef0 of 1. The 350 comments were also trimmed with a k-nearest neighbor algorithm where k was set to 4% of the training set size. With the algorithm designed to be heavily multi-threaded and capable of being run across multiple servers, the system was able to achieve that accuracy while classifying 3 comments per second, running on consumer-grade hardware over Wi-Fi.
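The n-gram feature extraction step might be sketched as follows (illustrative only; the thesis additionally filters to the highest-entropy length-3 n-grams and feeds the resulting vectors to LibSVM's polynomial-kernel SVM):

```python
from collections import Counter

def ngrams(text, n=3):
    """Count word n-grams of length 1..n; these counts are the feature
    vector a downstream SVM would classify."""
    words = text.lower().split()
    feats = Counter()
    for k in range(1, n + 1):
        for i in range(len(words) - k + 1):
            feats[" ".join(words[i:i + k])] += 1
    return feats

feats = ngrams("you are so dumb", n=3)
print(feats["you are"])      # 1
print(feats["are so dumb"])  # 1
print(sum(feats.values()))   # 9: four 1-grams, three 2-grams, two 3-grams
```

Multi-word n-grams like "are so dumb" capture harassing phrasings that individual words miss, which is why n-grams outperform bag-of-words features on this task.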
90

Gridlock in Networks: The Leximin Method for Hierarchical Community Detection

McCarthy, Arya D. 09 June 2017 (has links)
Community detection (CD) is an important task in network science. Identifying the community structure and hierarchy of communities reveals latent properties of the network. This task has real-world relevance in social network analysis, taxonomy, bioinformatics, and graph mining in general. Nevertheless, there is no common definition of a community and no common, efficient method of identifying communities. As is common, we formulate CD as optimization of modularity. Modularity quantifies the separation of a network into distinct, highly interconnected groups. Maximizing modularity is NP-hard.

To solve the optimization problem, we present a polynomial-time approximation method. It greedily maximizes modularity with a heuristic for sparsest cuts in a network. This involves maximizing max-min fair throughput between all pairs of network nodes. We evaluate the approximation's effectiveness for CD on synthetic networks with known community structure. We show competitive results in terms of the standard measure of CD accuracy, normalized mutual information (NMI). Further, our method is less sensitive to network perturbations than existing community detection algorithms. Our method also detects ties in hierarchical structure, which other techniques do not.

In graphs without a strong community structure, our method does not impose arbitrary structure. In these cases, we can show that the max-min fair flow can be split onto edge-disjoint paths of a multigraph corresponding to the original network.
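The modularity objective being maximized can be computed directly on a toy graph (a brute-force sketch of Newman's Q, not the thesis's leximin heuristic; the two-triangle graph is invented for illustration):

```python
def modularity(edges, communities):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)
    for an undirected edge list; m is the number of edges."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    nodes = list(degree)
    for i in nodes:
        for j in nodes:
            if communities[i] != communities[j]:
                continue  # delta term: only same-community pairs contribute
            a_ij = sum(1 for e in edges if e in ((i, j), (j, i)))
            q += a_ij - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge, split at the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, comm), 4))  # 0.3571
```

Cutting at the bridge gives a clearly positive Q, while putting all six nodes in one community gives Q = 0; searching over such partitions is what makes exact maximization NP-hard.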
