11

Spectral Methods in Extremal Combinatorics

Filmus, Yuval 09 January 2014 (has links)
Extremal combinatorics studies how large a collection of objects can be if it satisfies a given set of restrictions. Inspired by a classical theorem due to Erdős, Ko and Rado, Simonovits and Sós posed the following problem: determine how large a collection of graphs on the vertex set {1,...,n} can be, if the intersection of any two of them contains a triangle. They conjectured that the largest possible collection, containing 1/8 of all graphs, consists of all graphs containing a fixed triangle (a triangle-star). The first major contribution of this thesis is a confirmation of this conjecture. We prove the Simonovits–Sós conjecture in the following strong form: the only triangle-intersecting families of measure at least 1/8 are triangle-stars (uniqueness), and every triangle-intersecting family of measure 1/8 − ε is O(ε)-close to a triangle-star (stability). In order to prove the stability part of our theorem, we utilize a structure theorem for Boolean functions on {0,1}^m whose Fourier expansion is concentrated on the first t+1 levels, due to Kindler and Safra. The second major contribution of this thesis consists of two analogs of this theorem for Boolean functions on S_m whose Fourier expansion is concentrated on the first two levels. In the same way that the Kindler–Safra theorem is useful for studying triangle-intersecting families, our structure theorems are useful for studying intersecting families of permutations, which are families in which any two permutations agree on the image of at least one point. Using one of our theorems, we give a simple proof of the following result of Ellis, Friedgut and Pilpel: an intersecting family of permutations on S_m of size (1 − ε)(m − 1)! is O(ε)-close to a double coset, a family which consists of all permutations sending some point i to some point j.
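
For reference, the strong form of the theorem described above can be written roughly as follows (a sketch in standard notation, with μ the uniform measure on graphs with vertex set {1,...,n}; the symbols are assumptions of this restatement, not the thesis's exact notation):

    % Sketch of the strong Simonovits–Sós result, restated from the abstract.
    % \mathcal{F} is any triangle-intersecting family of graphs on {1,...,n}.
    \[
      \mu(\mathcal{F}) \le \tfrac{1}{8}, \qquad
      \mu(\mathcal{F}) = \tfrac{1}{8} \iff \mathcal{F}\ \text{is a triangle-star (uniqueness)},
    \]
    \[
      \mu(\mathcal{F}) \ge \tfrac{1}{8} - \varepsilon \;\Longrightarrow\;
      \mu(\mathcal{F} \,\triangle\, \mathcal{T}) = O(\varepsilon)\ \text{for some triangle-star}\ \mathcal{T}\ \text{(stability)}.
    \]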
13

Computational Prediction of Target Genes of MicroRNAs

Radfar, Hossein 01 April 2014 (has links)
MicroRNAs (miRNAs) are a class of short (21-25 nt) non-coding endogenous RNAs that mediate the expression of their direct target genes post-transcriptionally. The goal of this thesis is to identify the target genes of miRNAs using computational methods. The most popular computational target prediction methods rely on sequence-based determinants to predict targets. However, these determinants are neither sufficient nor necessary to identify functional target sites, and they commonly ignore the cellular conditions in which miRNAs interact with their targets in vivo. Since miRNA activity reduces the steady-state abundance of mRNA targets, the main goal of this thesis is to use large-scale gene expression profiles to supplement sequence-based computational miRNA target prediction techniques. We develop two computational miRNA target prediction methods, InMiR and BayMiR; in addition, we study the interaction between miRNAs and lncRNAs using long RNA expression data. InMiR is a computational method that predicts the targets of intronic miRNAs based on the expression profiles of their host genes across a large number of datasets. InMiR can also predict which host genes have expression profiles that are good surrogates for those of their intronic miRNAs. Host genes that InMiR predicts to be bad surrogates contain significantly more miRNA target sites in their 3' UTRs and are significantly more likely to have predicted Pol II-III promoters in their introns. We also develop BayMiR, which scores miRNA-mRNA pairs based on the endogenous footprint of miRNAs on gene expression at a genome-wide scale. BayMiR provides an "endogenous target repression" index that identifies the contribution of each miRNA to repressing a target gene in the presence of other targeting miRNAs. This thesis also addresses the interactions between miRNAs and lncRNAs. Our analysis of the expression abundance of long RNA transcripts (mRNA and lncRNA) shows that the lncRNA target sets of some miRNAs have relatively low abundance in the tissues in which these miRNAs are highly active. We also found that lncRNAs and mRNAs that share many targeting miRNAs are significantly positively correlated, indicating that this set of highly expressed lncRNAs may act as miRNA sponges to promote mRNA regulation.
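
As a rough illustration of the expression-based idea behind these methods (a toy sketch in Python, not the actual InMiR or BayMiR models; the function name and profiles below are hypothetical), a candidate miRNA-target pair can be scored by how strongly the target's abundance anti-correlates with the miRNA's (or, for intronic miRNAs, its host gene's) expression across many samples:

    import numpy as np
    from scipy.stats import pearsonr

    def expression_support(mirna_expr, target_expr):
        """Score a candidate miRNA-target pair from expression profiles.

        A strongly negative correlation across samples is consistent with the
        miRNA repressing the target; a flat or positive correlation argues
        against a functional interaction. This is only a toy surrogate for the
        models described in the abstract.
        """
        r, p_value = pearsonr(mirna_expr, target_expr)
        return -r, p_value  # higher score = stronger evidence of repression

    # Hypothetical profiles over eight tissues/conditions.
    host_gene = np.array([1.0, 5.2, 0.8, 6.1, 0.9, 4.8, 1.1, 5.5])  # surrogate for an intronic miRNA
    candidate = np.array([4.0, 1.1, 3.8, 0.9, 4.2, 1.3, 3.9, 1.0])  # putative target mRNA
    score, p = expression_support(host_gene, candidate)
    print(f"repression score = {score:.2f} (p = {p:.3g})")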
15

First-order Decision-theoretic Planning in Structured Relational Environments

Sanner, Scott 28 July 2008 (has links)
We consider the general framework of first-order decision-theoretic planning in structured relational environments. Most traditional solution approaches to these planning problems ground the relational specification w.r.t. a specific domain instantiation and apply a solution approach directly to the resulting ground Markov decision process (MDP). Unfortunately, the space and time complexity of these solution algorithms scale linearly with the domain size in the best case and exponentially in the worst case. An alternate approach to grounding a relational planning problem is to lift it to a first-order MDP (FOMDP) specification. This FOMDP can then be solved directly, resulting in a domain-independent solution whose space and time complexity either do not scale with domain size or can scale sublinearly in the domain size. However, such generality does not come without its own set of challenges and the first purpose of this thesis is to explore exact and approximate solution techniques for practically solving FOMDPs. The second purpose of this thesis is to extend the FOMDP specification to succinctly capture factored actions and additive rewards while extending the exact and approximate solution techniques to directly exploit this structure. In addition, we provide a proof of correctness of the first-order symbolic dynamic programming approach w.r.t. its well-studied ground MDP counterpart.
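
For context, the ground value-iteration backup that first-order symbolic dynamic programming lifts to the relational level can be sketched as follows (standard MDP notation, assumed here rather than taken from the thesis):

    % Ground MDP Bellman backup; s, a range over ground states and actions,
    % R is the reward function, P the transition model, and \gamma the discount factor.
    \[
      V^{t+1}(s) \;=\; \max_{a} \Big[\, R(s,a) \;+\; \gamma \sum_{s'} P(s' \mid s, a)\, V^{t}(s') \,\Big]
    \]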
17

SQL versus MongoDB from an application development point of view

Ankit, Bajpai January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Doina Caragea / There are many formats in which digital information is stored so that it can be shared and reused by different applications. The web can hardly be called old, yet there is already a great deal of research into better formats and strategies for sharing information. Ten years ago, formats such as XML and CSV were the primary data interchange formats, and they were huge improvements over SGML (Standard Generalized Markup Language). It is no secret that in the last few years there has been a huge transformation in the world of data interchange. The more lightweight, less bandwidth-intensive JSON has taken over from traditional formats such as XML and CSV. Big Data is the next big thing in computer science, and JSON has emerged as a key player in Big Data database technologies. JSON is the preferred format for web-centric, "NoSQL" databases. These databases are intended to accommodate massive scalability and are designed to store data which does not follow any columnar or relational model. Almost all modern programming languages support object-oriented concepts, and most entity modeling is done in the form of objects. JSON stands for JavaScript Object Notation and, as the name suggests, this object-oriented nature helps model entities very naturally, so the exchange of data between the application logic and the database is seamless. The aim of this report is to develop two similar applications, one with traditional SQL as the backend, and the other with a JSON-supporting MongoDB. I build real-life functionality and test the performance of various queries. I also discuss other aspects of databases such as building a Full Text Index (FTI) and search optimization. Finally, I plot graphs to study the trend in execution time of insertion, deletion, join and co-relational queries with and without indexes for the SQL database, and compare them with the execution trend of MongoDB queries.
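
To make the contrast concrete, the same look-up can be written against both backends roughly as follows (a minimal sketch; the "users" table/collection, the sample fields, and the use of sqlite3 and pymongo are illustrative assumptions, not the report's actual schema, and the MongoDB part assumes a server running locally):

    import sqlite3
    from pymongo import MongoClient

    # Relational backend: rows live in fixed columns and are queried with SQL.
    conn = sqlite3.connect("app.db")
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    conn.execute("INSERT INTO users (name, city) VALUES (?, ?)", ("Alice", "Manhattan"))
    rows = conn.execute("SELECT name FROM users WHERE city = ?", ("Manhattan",)).fetchall()

    # Document backend: the entity is stored as a JSON-like document and queried by example.
    client = MongoClient("mongodb://localhost:27017")
    users = client["app"]["users"]
    users.insert_one({"name": "Alice", "city": "Manhattan", "tags": ["student"]})
    docs = list(users.find({"city": "Manhattan"}, {"name": 1, "_id": 0}))

    print(rows, docs)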
18

Android application of Throck mobile

Vidiyala, Akhila January 1900 (has links)
Master of Science / Department of Computing and Information Sciences / Daniel Andresen / The aim of this project is to develop an Android application for managing and organizing activities of various departments in the ThrockMorton building at K-State. Mobile application development is a growing trend in the computer industry. With advancements in mobile technologies and efficient 3G and 4G wireless communications, a number of desktop applications are now becoming available as mobile applications. Android has the leading market share among all smartphone operating systems available, and being open source it gives a lot of room for creative development. There are various discussion forums and official Android development support websites that encourage mobile and tablet application development. The ThrockMobile application provides many features for managing inventory at ThrockMorton at Kansas State University. The features include scanning the barcode of an asset, displaying its details, and editing any of the fields if needed. Access to this application is restricted to those devices whose device ID exists in the database; one can request access to the application through an email client integrated with the application. The look-up feature lets you look up a user, room, IP address or asset. The preferences module lets you enter the details of the server from which data is to be requested. This application has been tested on Android devices of varying screen sizes and Android OS versions. The application serves requests at an average rate of 1.5 sec/request, which is above the industry-average time. I discuss in detail the reasons for this average performance and, as future enhancements, possible solutions.
19

Mechanism Design with Partial Revelation

Hyafil, Nathanael 28 July 2008 (has links)
With the emergence of the Internet as a global structure for communication and interaction, many "business to consumer" and "business to business" applications have migrated online, thus increasing the need for software agents that can act on behalf of people, institutions or companies with private and often conflicting interests. The design of such agents, and of the protocols (i.e., mechanisms) through which they interact, has therefore naturally become an important research theme. Classical mechanism design techniques from the economics literature do not account for the costs entailed by the full revelation of preferences that they require. The aim of this thesis is to investigate how to design mechanisms that only require the revelation of partial preference information and are applicable in any mechanism design context. We call this partial revelation mechanism design. Reducing revelation costs is thus our main concern. With only partial revelation, the designer has some remaining uncertainty over the agents' types, even after the mechanism has been executed. Thus, in general, the outcome chosen will not be optimal with respect to the designer's objective function. This alone raises interesting questions about which (part of the) information should be elicited in order to minimize the degree of sub-optimality incurred by the mechanism. But this sub-optimality of the mechanism's outcome choice function has additional important consequences: most of the results in classical mechanism design that guarantee agents will reveal their types truthfully to the mechanism rely on the fact that the optimal outcome is chosen. We must therefore also investigate whether, and how, appropriate incentives can be maintained in partial revelation mechanisms. We start by presenting our own model for partial revelation mechanism design. Our second contribution is a negative one regarding the quasi-impossibility of implementing partial revelation mechanisms with exact incentive properties. The rest of the thesis shows how this negative result can be bypassed in various settings, depending on the designer's objective (e.g., social welfare, revenue) and the interaction type (sequential or one-shot). Finally, we study how the approximation of the incentive properties can be further improved when necessary, and in the process introduce, and prove the existence of, a new equilibrium concept.
20

Algorithms in 3D Shape Segmentation

Simari, Patricio Dario 23 February 2010 (has links)
Surfaces in 3D are often represented by polygonal meshes and point clouds obtained from 3D modeling tools or acquisition processes such as laser range scanning. While these formats are very flexible and allow the representation of a wide variety of shapes, they are rarely appropriate in their raw form for the range of applications that benefit from their use. Their decomposition into simpler constituent parts is referred to as shape segmentation, and its automation remains a challenging area within computer science. We present and analyze different aspects of shape segmentation. We begin by looking at useful segmentation criteria and present a categorization of current methods according to which type of criteria they address, dividing them into affinity-based, model-fitting, and property-based approaches. We then present two algorithmic contributions in the form of a model-based and a property-based segmentation approach. These share the goal of automatically finding redundancy in a shape, and propose shape representations that leverage this redundancy to achieve descriptive compactness. The first is a method for segmenting a surface into piecewise-ellipsoidal parts, motivated by the fact that most organic objects and many manufactured objects have large curved areas. The second is an algorithm for robustly detecting global and local planar-reflective symmetry, together with a hierarchical segmentation approach based on this detection method. We note within these approaches the variation in segmentations resulting from different criteria, and propose a way to generalize the segmentation problem to heterogeneous criteria. We introduce a framework and relevant algorithms for multi-objective segmentation of 3D shapes, which allow for the incorporation of domain-specific knowledge through multiple objectives, each of which refers to one or more segmentation labels. They can assert properties of an individual part or they can refer to part interrelations. We thus cast the segmentation problem as an optimization minimizing an aggregate objective function which combines all objectives as a weighted sum. We conclude with a summary and discussion of the contributions presented, lessons learned, and a look at the open questions remaining and potential avenues of continued research.
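
The aggregate objective mentioned above can be sketched as follows (the symbols are illustrative, not the thesis's notation: L is a candidate labeling of the shape, the E_i are the individual objectives over parts or part interrelations, and the w_i are their weights):

    % Weighted-sum formulation of multi-objective segmentation, as described in the abstract.
    \[
      L^{*} \;=\; \arg\min_{L} \; \sum_{i} w_i \, E_i(L)
    \]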
