121

Leveraging Internet Background Radiation for Opportunistic Network Analysis

Benson, Karyn 14 October 2016 (has links)
In this dissertation, we evaluate the potential of unsolicited Internet traffic, called Internet Background Radiation (IBR), to provide insights into address space usage and network conditions. IBR is primarily collected through darknets, which are blocks of IP addresses dedicated to collecting unsolicited traffic resulting from scans, backscatter, misconfigurations, and bugs. We expect these pervasively sourced components to yield visibility into networks that are hard to measure with traditional active and passive techniques (e.g., hosts behind firewalls or absent from logs). Using the largest collections of IBR available to academic researchers, we test this hypothesis by: (1) identifying the phenomena that induce many hosts to send IBR, (2) characterizing the factors that influence our visibility, including aspects of the traffic itself and the measurement infrastructure, and (3) extracting insights from 11 diverse case studies, after excluding obvious cases of sender inauthenticity.

Through IBR, we observe traffic from nearly every country, most ASes with routable prefixes, and millions of /24 blocks. Misconfigurations and bugs, often involving P2P networks, yield the widest coverage in terms of visible networks, while scanning traffic, due to its large volume, suits in-depth and repeated analysis. We find, notwithstanding the extraordinary popularity of some IP addresses, similar observations using IBR collected in different darknets, and a predictable degradation with smaller darknets. Although the mix of IBR components evolves, our observations are consistent over time.

Our case studies highlight the versatility of IBR and help establish guidelines for when researchers should consider using unsolicited traffic for opportunistic network analysis. Based on our experience, IBR may assist in: corroborating inferences made through other datasets (e.g., DHCP lease durations), supplementing current state-of-the-art techniques (e.g., IPv4 address space utilization), exposing weaknesses in other datasets (e.g., missing router interfaces), identifying abused resources (e.g., open resolvers), testing Internet tools by acting as a diverse traffic sample (e.g., uptime heuristics), and reducing the number of required active probes (e.g., path change inferences). In nearly every case study, IBR improves our analysis of an Internet-wide behavior. We expect future studies to reap similar benefits by including IBR.
122

B-Activ - Health care Android framework

Kamathi, Anand 28 September 2016 (has links)
The healthcare application domain has potential for research in the computer science field and the Android domain. The built-in sensors and virtual-reality interfaces of the Android platform make it a viable option for developers and end users. Unlike other healthcare applications, the B-Activ Android application builds a platform that provides the user with the essential input needed to lead an active life. External factors such as climate, local pollution levels, and the user's Body Mass Index (BMI) affect a person's involvement in exercise and are central to the B-Activ application. B-Activ also lets users in the same city interact through traffic and pollution updates. The scope of B-Activ is to keep the user active through simple exercises that help control cholesterol levels and obesity, thereby reducing the risk of serious disease.
123

Soft Shadow Mip-Maps

Shen, Yang 28 September 2016 (has links)
This document introduces the Soft Shadow Mip-Maps technique, which consists of three methods for overcoming the fundamental limitations of filtering-oriented soft shadows. Filtering-oriented soft shadowing techniques filter shadow maps with varying filter sizes determined by the desired penumbra widths. Different varieties of this approach have been commonly applied in interactive and real-time applications. Nonetheless, they share some fundamental limitations. First, the soft shadow filter size is not guaranteed to produce the right penumbra width for the given light source size. Second, filtering with large kernels for soft shadows requires a large number of samples, thereby increasing the cost of filtering. Stochastic approximations for filtering introduce noise, and prefiltering leads to inaccuracies. Finally, calculating shadows based on a single blocker estimation can produce significantly inaccurate penumbra widths when the shadow penumbras of different blockers overlap.

We discuss three methods to overcome these limitations. First, we introduce a method for computing the soft shadow filter size for a receiver given a blocker distance. Then, we present a filtering scheme based on shadow mip-maps. Mipmap-based filtering uses shadow mip-maps to efficiently generate soft shadows, using a constant-size filter kernel for each layer and linear interpolation between layers. Next, we introduce an improved blocker estimation approach, which accounts for the shadow contribution of every blocker by calculating the light occluded by potential blockers; hence, the calculated penumbra areas correspond correctly to their blockers. Finally, we discuss how to select filter kernels for filtering.

These approaches successively solve issues regarding shadow penumbra width calculation apparent in prior techniques. Our results show that we can produce correct penumbra widths, as evident in our comparisons to ray-traced soft shadows. Nonetheless, the Soft Shadow Mip-Maps technique suffers from light bleeding, because our method calculates shadows using only the geometry available in the shadow depth map; occluded geometry is not taken into consideration, which leads to light bleeding. Another limitation is that using lower-resolution shadow mip-map layers limits the resolution of shadow placement. As a result, when a blocker moves slowly, its shadow follows it in discrete steps whose size is determined by the corresponding mip-map layer resolution.
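The flavor of the first method (filter size from blocker distance) can be illustrated with the classic similar-triangles penumbra estimate, mapped to a mip layer with a constant-size kernel. The sketch below assumes a PCSS-style estimate and a hypothetical 4-texel base kernel; it is an illustration of the general idea, not the dissertation's exact derivation.

```python
import math

def penumbra_width(light_size, d_blocker, d_receiver):
    """Similar-triangles penumbra estimate (PCSS-style):
    light_size : width of the area light
    d_blocker  : distance from light to the blocker
    d_receiver : distance from light to the shadow receiver
    """
    return light_size * (d_receiver - d_blocker) / d_blocker

def mip_level_for_filter(filter_size_texels, kernel_texels=4.0):
    """Pick the shadow mip-map layer whose constant-size kernel covers the
    requested filter width; each coarser layer doubles the kernel's footprint."""
    if filter_size_texels <= kernel_texels:
        return 0
    return math.ceil(math.log2(filter_size_texels / kernel_texels))
```

For example, a light of width 2 with a blocker halfway to the receiver yields a penumbra of width 2, which fits the base kernel at layer 0; a 16-texel filter needs layer 2.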
124

Predicting Transcript Production Rates in Yeast With Sparse Linear Models

Huang, Yezhou January 2016 (has links)
To provide biological insights into transcriptional regulation, several groups have recently presented models relating the transcription factors (TFs) bound at promoter DNA to a downstream gene's mean transcript level or transcript production rate over time. However, transcript production is dynamic, responding to changes in TF concentrations over time. Also, TFs are not the only factors binding to promoters; other DNA binding factors (DBFs), especially nucleosomes, bind as well, resulting in competition between DBFs for binding at the same genomic location. Additionally, elements other than TFs regulate transcription. Within the core promoter, various regulatory elements influence RNAPII recruitment, PIC formation, RNAPII searching for the TSS, and RNAPII initiating transcription. Moreover, it has been proposed that, downstream from the TSS, nucleosomes resist RNAPII elongation.

Here, we provide a machine learning framework to predict transcript production rates from DNA sequences. We applied this framework in the yeast S. cerevisiae in two scenarios: a) predicting the dynamic transcript production rate during the cell cycle for native promoters; b) predicting the mean transcript production rate over time for synthetic promoters. As far as we know, our framework is the first successful attempt at a model that predicts dynamic transcript production rates from DNA sequences alone: on the cell cycle data set, we obtained a Pearson correlation coefficient Cp = 0.751 and a coefficient of determination r2 = 0.564 on the test set for predicting the dynamic transcript production rate over time. Also, in the DREAM6 Gene Promoter Expression Prediction challenge, our fitted model outperformed all participating teams, the best of those teams, and a model combining the best team's k-mer-based sequence features with another paper's biologically mechanistic features, in terms of all scoring metrics.

Moreover, our framework shows its capability of identifying generalizable features by interpreting the highly predictive models, thereby providing support for associated hypothesized mechanisms of transcriptional regulation. With the learned sparse linear models, we obtained results supporting the following biological insights: a) TFs govern the probability of RNAPII recruitment and initiation, possibly through interactions with PIC components and transcription cofactors; b) the core promoter amplifies transcript production, probably by influencing PIC formation, RNAPII recruitment, DNA melting, RNAPII searching for and selecting the TSS, the release of RNAPII from general transcription factors, and thereby initiation; c) there is strong transcriptional synergy between TFs and core promoter elements; d) the regulatory elements within the core promoter region are more than the TATA box and nucleosome-free region, suggesting the existence of still unidentified TAF-dependent and cofactor-dependent core promoter elements in the yeast S. cerevisiae; e) nucleosome occupancy is helpful for representing the regulatory roles of the +1 and -1 nucleosomes on transcription. / Dissertation
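The general pipeline of sparse linear modeling on sequence features can be sketched in a few lines: count k-mers in promoter sequences, then fit an L1-regularized linear model whose surviving coefficients point at candidate regulatory elements. This is a generic illustration, assuming a simple k-mer featurization and a textbook coordinate-descent Lasso; it is not the dissertation's actual feature set or fitting procedure.

```python
import numpy as np
from itertools import product

def kmer_features(seqs, k=2):
    """Count k-mer occurrences in each sequence (hypothetical featurization)."""
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    idx = {km: i for i, km in enumerate(kmers)}
    X = np.zeros((len(seqs), len(kmers)))
    for r, s in enumerate(seqs):
        for j in range(len(s) - k + 1):
            X[r, idx[s[j:j + k]]] += 1
    return X

def lasso_cd(X, y, lam=0.05, iters=200):
    """Textbook coordinate-descent Lasso:
    minimize (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(d):
            if col_sq[j] == 0:
                continue
            r = y - X @ w + X[:, j] * w[j]        # residual excluding feature j
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

The L1 penalty drives irrelevant coefficients to exactly zero, which is what makes the fitted model interpretable: nonzero weights identify the sequence features the model relies on.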
125

Algorithms for Geometric Matching, Clustering, and Covering

Pan, Jiangwei January 2016 (has links)
With the popularization of GPS-enabled devices such as mobile phones, location data are becoming available at an unprecedented scale. The locations may be collected from many different sources such as vehicles moving around a city, user check-ins in social networks, and geo-tagged micro-blogging photos or messages. Besides the longitude and latitude, each location record may also have a timestamp and additional information such as the name of the location. Time-ordered sequences of these locations form trajectories, which together contain useful high-level information about people's movement patterns.

The first part of this thesis focuses on a few geometric problems motivated by the matching and clustering of trajectories. We first give a new algorithm for computing a matching between a pair of curves under existing models such as dynamic time warping (DTW). The algorithm is more efficient than standard dynamic programming algorithms both theoretically and practically. We then propose a new matching model for trajectories that avoids the drawbacks of existing models. For trajectory clustering, we present an algorithm that computes clusters of subtrajectories, which correspond to common movement patterns. We also consider trajectories of check-ins, and propose a statistical generative model, which identifies check-in clusters as well as the transition patterns between the clusters.

The second part of the thesis considers the problem of covering shortest paths in a road network, motivated by an EV charging station placement problem. More specifically, a subset of vertices in the road network are selected to place charging stations so that every shortest path contains enough charging stations and can be traveled by an EV without draining the battery. We first introduce a general technique for the geometric set cover problem. This technique leads to near-linear-time approximation algorithms, which are the state-of-the-art algorithms for this problem in either running time or approximation ratio. We then use this technique to develop a near-linear-time algorithm for this shortest-path cover problem. / Dissertation
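The baseline that the thesis's curve-matching algorithm improves on is the standard O(mn) dynamic program for DTW. A minimal sketch for 1-D sequences, shown for context (the thesis's contribution is an algorithm faster than this):

```python
import math

def dtw(P, Q, dist=lambda a, b: abs(a - b)):
    """Textbook O(mn) dynamic program for the DTW distance between
    sequences P and Q: each cell extends the cheapest of the three
    adjacent warping-path predecessors."""
    m, n = len(P), len(Q)
    D = [[math.inf] * (n + 1) for _ in range(m + 1)]
    D[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = dist(P[i - 1], Q[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[m][n]
```

Because DTW allows one point to match a run of points, repeating a sample does not change the distance: `dtw([1, 1, 2, 3], [1, 2, 3])` is still 0.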
126

Quality Guided Variable Bit Rate Texture Compression

Griffin, Wesley 06 October 2016 (has links)
The primary goal of computer graphics is to create images by rendering a scene under two constraints: quality, producing the image with as few artifacts as possible, and time, producing the image as fast as possible. Technology advances have both helped to satisfy these constraints, with Graphics Processing Unit (GPU) advances reducing image rendering times, and exacerbated them, with new HD and virtual reality displays increasing rendering resolutions. To meet both constraints, rendering uses texture mapping, which maps 2D textures onto scene objects. Over time, the count and resolution of textures have increased, resulting in dramatic growth of data storage requirements. Compression can help to reduce these storage requirements.

I present a rigorous texture compression evaluation methodology using final rendered images. My method can account for masking effects introduced by the texture mapping process while leveraging the perceptual rigor of current Image Quality Assessment metrics. Building on this evaluation methodology, I present a demonstration of guided texture compression optimization that minimizes the bitrate of compressed textures while maximizing the quality of final rendered images. Guided texture compression will help with the scalability problem for optimizing texture compression in real-world scenarios.
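The rate-quality trade-off at the heart of guided compression can be conveyed with a greedy per-texture sketch: for each texture, take the lowest bitrate whose final-rendered-image quality score clears a threshold. All names here are hypothetical, and `quality_of` stands in for a full render-and-IQA evaluation; the dissertation's actual optimization may search jointly across textures.

```python
def choose_bitrates(textures, quality_of, levels, threshold):
    """Greedy quality-guided bitrate selection sketch.
    quality_of(texture, level) -> IQA score of the final rendered
    image when `texture` is compressed at bitrate `level`."""
    choice = {}
    for t in textures:
        for level in sorted(levels):          # ascending bitrate
            if quality_of(t, level) >= threshold:
                choice[t] = level             # cheapest acceptable level
                break
        else:
            choice[t] = max(levels)           # nothing passes: best quality
    return choice
```

With a toy score that grows linearly in bitrate, the selector picks the first level crossing the threshold and falls back to the highest bitrate when no level suffices.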
127

Finding important entities in graphs

Mavroforakis, Charalampos 05 February 2019 (has links)
Graphs are established as one of the most prominent means of data representation. They are composed of simple entities -- nodes and edges -- and reflect the relationships between them. Their impact extends to a broad variety of domains, e.g., biology, sociology, and the Web. In these settings, much of the data's value can be captured by a simple question: how can we evaluate the importance of these entities? The aim of this dissertation is to explore novel importance measures that are meaningful and can be computed efficiently on large datasets. First, we focus on the spanning edge centrality, an edge importance measure recently introduced to evaluate phylogenetic trees. We propose very efficient methods that approximate this measure in near-linear time and apply them to large graphs with millions of nodes. We demonstrate that this centrality measure is a useful tool for the analysis of networks outside its original application domain. Next, we turn to importance measures for nodes and propose the absorbing random walk centrality. This measure evaluates a group of nodes in a graph according to how central they are with respect to a set of query nodes. Specifically, given a query set and a candidate group of nodes, we start random walks from the queries and measure their length until they reach one of the candidates. The most central group of nodes will collectively minimize the expected length of these random walks. We prove several computational properties of this measure and provide an algorithm whose solutions offer an approximation guarantee. Additionally, we develop efficient heuristics that allow us to use this importance measure in large datasets. Finally, we consider graphs in which each node is assigned a set of attributes. We define an important connected subgraph to be one for which the total weight of its edges is small, while the number of attributes covered by its nodes is large.
To select such an important subgraph, we develop an efficient approximation algorithm based on the primal-dual schema.
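The absorbing-walk idea is easy to demonstrate with a small Monte Carlo sketch: start walks at the query nodes and count steps until one of the candidates absorbs them. The dissertation computes this expectation exactly and optimizes over candidate groups; the simulation below only estimates it for a fixed group, and all names are illustrative.

```python
import random

def absorbing_walk_length(adj, queries, candidates,
                          trials=2000, max_steps=10000, seed=0):
    """Monte Carlo estimate of the expected number of steps a random walk,
    started at a uniformly chosen query node, takes before being absorbed
    by any node in `candidates`. Lower values = more central candidates."""
    rng = random.Random(seed)
    absorbed = set(candidates)
    total = 0
    for _ in range(trials):
        v = rng.choice(queries)
        steps = 0
        while v not in absorbed and steps < max_steps:
            v = rng.choice(adj[v])   # uniform step to a neighbor
            steps += 1
        total += steps
    return total / trials
```

On the 3-node path 0-1-2 with query {0}, candidate {1} absorbs in exactly one step, while candidate {2} takes 4 steps in expectation, so {1} is the more central choice.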
128

The Representation of Association Semantics with Annotations in a Biodiversity Informatics System

Unknown Date (has links)
A specialized variation of associations for biodiversity data is defined and developed that makes the capture and discovery of information about biological images easier and more efficient. Biodiversity is the study of the diversity of plants and animals within a given region. Storing, understanding, and retrieving biodiversity data is a complex problem. Biodiversity experts disagree on the structure and the basic ontologies. Much of the knowledge on this subject is contained in private collections, paper notebooks, and the minds of biologists. Collaboration among scientists is still problematic because of the logistics involved in sharing collections. This research adds value to image repositories by collecting and publishing semantically rich, user-specified associations among images and other objects. Current database and annotation techniques rely on structured data sets and ontologies to make storing, associating, and retrieving data efficient and reliable. A problem with biodiversity data is that the information is usually stored as ad-hoc text associated with non-standardized schemas and ontologies. This research developed a method that allows the storage of ad-hoc semantic associations through a complex relationship of working sets, phylogenetic character states, and image annotations. MorphBank is a collaborative research project supported by an NSF BDI grant (0446224 - $2,249,530.00) titled "Web Image Database Technology for Comparative Morphology and Biodiversity Research". MorphBank is an on-line museum-quality collection of biological images that facilitates the collaboration of biologists from around the world. This research demonstrates the viability of using association semantics through annotations in biodiversity informatics for the storage and discovery of new information. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the Requirements for the degree of Doctor of Philosophy. / Degree Awarded: Spring Semester, 2007.
/ Date of Defense: December 8, 2006. / Association Semantics, Annotations, Biodiversity, Computer Science, Database, Information System / Includes bibliographical references. / Greg Riccardi, Professor Directing Dissertation; Fredrik Ronquist, Outside Committee Member; Robert van Engelen, Committee Member; Ashok Srinivasan, Committee Member.
129

Evaluating Urban Deployment Scenarios for Vehicular Wireless Networks

Unknown Date (has links)
Vehicular wireless networks are gaining commercial interest. Mobile connectivity, road safety, and traffic congestion management are some applications that have arisen with this networking paradigm. Existing research primarily focuses on developing mobility models and evaluating routing protocols in ideal open-field environments. It provides limited information about whether vehicular networks can be deployed in an urban setting. This thesis evaluates the practicality of deployment scenarios for a vehicular ad hoc network with wireless mesh infrastructure support. The deployment scenarios include: (1) a mesh-enhanced peer-to-peer ad hoc routing deployment model where both the mobile nodes and static wireless infrastructure nodes participate in routing, (2) a mesh-enhanced infrastructural routing deployment model where only the static wireless infrastructure nodes participate in routing, and (3) a scenario where the static wireless infrastructure nodes in deployments (1) and (2) have the ability to communicate over multiple wireless channels. These deployment scenarios are evaluated with a mobility model that restricts the movement of vehicles to street boundaries based on real-world maps and imposes simple traffic rules. This study also proposes a method of capturing the effect of obstacles on wireless communication based on empirical experiments in urban environments.
The results indicate that (1) the mesh-enhanced infrastructural routing deployment yields significantly better performance than the mesh-enhanced peer-to-peer ad hoc routing deployment; (2) in the mesh-enhanced infrastructural routing deployment scenario, increasing the density of infrastructure nodes is beneficial while increasing the density of mobile nodes has no significant effect; (3) in the mesh-enhanced peer-to-peer ad hoc routing deployment scenario, a higher density of infrastructure nodes as well as mobile nodes can lead to decreased performance; (4) using multiple channels of communication on infrastructure nodes yields substantially better performance; and (5) the effect of obstacles can be represented in simulations through parameters set based on empirical experiments. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Summer Semester, 2006. / Date of Defense: June 19, 2006. / Deployment, Vehicular Networks, Infrastructure / Includes bibliographical references. / Kartik Gopalan, Professor Co-Directing Thesis; An-I Andy Wang, Professor Co-Directing Thesis; Zhenhai Duan, Committee Member.
130

Bcq a Bin-Based Core Stateless Packet Scheduler for Scalable and Flexible Support of Guaranteed Services

Unknown Date (has links)
IP networks have become an integral part of our daily lives. As we become more dependent on this technology, we realize the importance and use of networks that can be configured to cater to various classes of services and users. Given their potential scalability in providing Quality of Service (QoS), core-stateless packet scheduling algorithms have attracted a lot of attention in recent years. Unlike traditional stateful packet schedulers, which require routers to maintain per-flow state and perform per-flow operations, core-stateless packet schedulers service packets based on some state carried in packet headers (such as the reservation rate of a flow). As a consequence, no per-flow state needs to be maintained at core routers and no per-flow operations performed, which significantly reduces the complexity and improves the scalability of the packet scheduling algorithms. On the other hand, although core-stateless packet schedulers remove the requirement of per-flow state and operations, they aim to emulate the scheduling operations of the corresponding stateful packet schedulers. An important implication of this emulation is that they need to sort packets according to the control state carried in the packet headers and service packets in that order. This sorting operation can be quite expensive when the packet queue is long, which may not be acceptable in high-speed backbone networks. In this thesis, we present a bin-based core-stateless packet scheduling algorithm, BCQ, to overcome this problem. Like other core-stateless packet scheduling algorithms, BCQ does not require core routers to maintain per-flow state or perform per-flow operations. It schedules packets based on the notion of virtual time stamps, which are computed using only some control state that can be carried in packet headers (and a few constant parameters of the scheduler).
However, unlike current core-stateless packet scheduling algorithms, a BCQ scheduler maintains a number of packet bins, each representing a range of virtual times. Packets arriving at a BCQ scheduler are classified into these bins based on their virtual time stamps. Bins are serviced according to the range of virtual times they represent: packets in bins with earlier virtual times are serviced first, and packets within each bin are serviced in FIFO order. We formally present the BCQ scheduler in this thesis and conduct simulations to study its performance. Our simulation results show that BCQ is a scalable and flexible packet scheduling algorithm. By controlling the size of the bins (and therefore the cost of BCQ), BCQ can achieve different desirable performance trade-offs. For example, when the bin size is sufficiently large, all arriving packets fall into one bin and no packet sorting is conducted (BCQ becomes a FIFO scheduler). On the other hand, as we gradually decrease the bin size, BCQ can provide different QoS performance (at greater cost). When the bin size is sufficiently small, BCQ can provide the same end-to-end delay performance as other core-stateless schedulers. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Fall Semester, 2005. / Date of Defense: September 23, 2005. / Quality of Service, Core Stateless, BCQ / Includes bibliographical references. / Zhenhai Duan, Professor Directing Thesis; Xin Yuan, Committee Member; Kartik Gopalan, Committee Member.
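The bin-and-virtual-time idea can be sketched compactly: hash each packet into a bin by its virtual time stamp, drain bins in virtual-time order, and serve FIFO within a bin so no per-packet sorting is needed. This is a minimal illustration of the mechanism described above, not BCQ's actual implementation; the class and parameter names are illustrative.

```python
from collections import deque
import heapq

class BinScheduler:
    """Sketch of a bin-based core-stateless queue: packets go into bins
    keyed by floor(virtual_time / bin_size); bins are drained in
    virtual-time order; packets within a bin are served FIFO."""
    def __init__(self, bin_size):
        self.bin_size = bin_size
        self.bins = {}     # bin index -> FIFO deque of packets
        self.heap = []     # min-heap of active bin indices

    def enqueue(self, packet, virtual_time):
        b = int(virtual_time // self.bin_size)
        if b not in self.bins:
            self.bins[b] = deque()
            heapq.heappush(self.heap, b)
        self.bins[b].append(packet)

    def dequeue(self):
        while self.heap:
            b = self.heap[0]              # earliest active bin
            if self.bins[b]:
                return self.bins[b].popleft()
            heapq.heappop(self.heap)      # bin drained: discard it
            del self.bins[b]
        return None
```

Note how the bin size controls the sorting granularity: with `bin_size=10`, packets stamped 3, 14, 25, 27 come out in bin order (3, then 14, then 25 and 27 FIFO), while a very large bin size puts everything in one bin and the scheduler degenerates to plain FIFO, matching the behavior described above.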
