541

A Geometric Approach for Inference on Graphical Models

Lunagomez, Simon January 2009 (has links)
We formulate a novel approach to infer conditional independence models or Markov structure of a multivariate distribution. Specifically, our objective is to place informative prior distributions over graphs (decomposable and unrestricted) and sample efficiently from the induced posterior distribution. We also explore the idea of factorizing according to complete sets of a graph, which implies working with a hypergraph that cannot be retrieved from the graph alone. The key idea we develop in this paper is a parametrization of hypergraphs using the geometry of points in $R^m$. This induces informative priors on graphs from specified priors on finite sets of points. Constructing hypergraphs from finite point sets has been well studied in the fields of computational topology and random geometric graphs. We develop the framework underlying this idea and illustrate its efficacy using simulations. / Dissertation
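The point-set construction cited here can be illustrated with the simplest member of that family, a random geometric graph: sample points in $R^m$ and connect every pair closer than a fixed radius. The sketch below uses uniform sampling and illustrative parameter names; it is not the dissertation's prior on point sets.

```python
import numpy as np

def random_geometric_graph(n, m, radius, seed=0):
    """Sample n points uniformly in [0, 1]^m and connect pairs
    closer than `radius` (standard random-geometric-graph
    construction; the sampling distribution is an assumption)."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(size=(n, m))
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < radius:
                edges.add((i, j))
    return points, edges

points, edges = random_geometric_graph(n=20, m=2, radius=0.3)
```

A prior over graphs is then induced indirectly: any prior over the point configuration (and the radius) pushes forward to a prior over the resulting edge sets.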
542

Essays on Dynamic Demand Estimation

Wang, Yucai Emily January 2011 (has links)
This dissertation consists of three chapters on dynamic demand models of storable goods and their application to taxes imposed on soft drinks. Broadly speaking, the first chapter builds an estimation strategy for dynamic demand models of storable goods that allows for unobservable heterogeneity in households' tastes. The second chapter uses that strategy to study the policy implications of taxes imposed on sugary soft drinks. The last chapter explores and explains the level of pass-through for soda taxes.

More specifically, the first chapter develops techniques for incorporating systematic brand preferences in dynamic demand models of storable goods. Dynamic demand models are important for correctly measuring price elasticities of products that can be stockpiled. However, most of the literature excludes systematic preferences over consumers' brand tastes. This chapter resolves the issue by incorporating random-coefficient Logit models into a dynamic demand framework, thereby allowing for realistic demand substitution patterns. It builds on Hendel and Nevo's 2006 Econometrica paper, in which the authors introduce a model of dynamic demand that flexibly incorporates observable heterogeneity and estimate it via a three-step procedure that separates brand and volume choices. While a powerful tool, this method is tricky to execute, so the chapter also discusses the difficulties implementers may face.

The second chapter predicts the effects of taxes on sugar-sweetened soft drinks (sugar taxes) on both total consumption and the welfare of different types of consumers. It specifies and estimates a structural dynamic demand model of storable goods with rational, forward-looking households, flexibly incorporates persistent heterogeneous consumer preferences, and develops a computationally attractive method for estimating its parameters. Sugar taxes have been proposed at both the national and state level, and passed in three states, as a means of slowing or reversing the growth in obesity and diabetes. To analyze the effects of these policies accurately, this chapter accounts for two specific aspects of soft drinks: storability and differentiation. It compares the results from this model to two benchmarks: a static model with consumer heterogeneity and a dynamic model without households' persistent heterogeneous tastes. It finds that failing to account for dynamics (i.e., storability) overestimates the reduction in consumption, and that failing to account for persistent heterogeneous preferences (i.e., differentiation) both overestimates the reduction in consumption and underestimates the welfare loss. The model and method developed here are readily applicable to many studies involving storable goods, such as analyses of firms' optimal pricing behavior and antitrust policy.

The third and last chapter focuses on the incidence of soda taxes by studying their level of pass-through. It lays out a framework for thinking about the determinants of pass-through and builds theoretical models that examine pass-through under more complex supply structures with multiple manufacturers and retailers. In addition to providing intuition for the models' theoretical predictions, the chapter presents empirical results found in the data along with their implications. / Dissertation
543

Dynamic Scheduling of Open Multiclass Queueing Networks in a Slowly Changing Environment

Chang, Junxia 22 November 2004 (has links)
This thesis investigates the dynamic scheduling of computer communication networks that can be periodically overloaded. Such networks are modelled as multiclass queueing networks in a slowly changing environment. A hierarchical framework is established to search for a suitable scheduling policy for such networks through their connection with stochastic fluid models. In this work, the dynamic scheduling of a specific multiclass stochastic fluid model is studied first. Then, a bridge between the scheduling of stochastic fluid models and that of queueing networks in a changing environment is established. In the multiclass stochastic fluid model, the focus is on a system with two fluid classes and a single server whose capacity can be shared arbitrarily between these two classes. The server may be transiently overloaded and operates under a quality-of-service contract specified by a threshold value for each class. Whenever the fluid level of a class rises above its designated threshold, a penalty cost is incurred by the server. Optimal and asymptotically optimal resource allocation policies are specified for this stochastic fluid model. Afterwards, a connection between the optimization of the queueing networks and that of the stochastic fluid models is established. This connection involves two steps. The first step is to approximate such networks by their corresponding stochastic fluid models with a proper scaling method. The second step is to construct a suitable policy for the queueing network through a successful interpretation of the stochastic fluid model solution, where the interpretation method is provided in this study. The results developed in this thesis facilitate the search for a nearly optimal scheduling policy for queueing networks in a slowly changing environment.
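The threshold idea can be made concrete with a toy allocation rule: serve first the class whose backlog most exceeds its QoS threshold, then give any leftover capacity to the other class. This is a hedged illustration of threshold-based capacity sharing, not the thesis's optimal or asymptotically optimal policy.

```python
def allocate_capacity(levels, thresholds, capacity):
    """Split a single server's capacity between two fluid classes.

    Illustrative heuristic: prioritize the class whose fluid level
    exceeds its penalty threshold by the largest margin.
    """
    excess = [levels[i] - thresholds[i] for i in (0, 1)]
    order = sorted((0, 1), key=lambda i: excess[i], reverse=True)
    alloc = [0.0, 0.0]
    remaining = capacity
    for i in order:
        alloc[i] = min(levels[i], remaining)  # never drain below empty
        remaining -= alloc[i]
    return alloc
```

Under this rule, a class sitting above its threshold absorbs as much capacity as it can use before the other class is served at all.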
544

Probabilistic Analysis and Threshold Investigations of Random Key Pre-distribution based Wireless Sensor Networks

Li, Wei-shuo 23 August 2010 (has links)
In this thesis, we present an analytical study of key distribution schemes for wireless sensor networks. Since wireless sensor networks operate in unreliable environments, many random key pre-distribution based schemes have been developed to enhance security. Most of these schemes need to guarantee the existence of specific properties, such as disjoint secure paths or disjoint secure cliques, to achieve secure cooperation among nodes. Two of the basic questions are as follows: 1. Under what conditions does a large-scale sensor network contain a certain structure? 2. How can one quantitatively analyze its behavior as n grows to infinity? Analyzing such structural or combinatorial problems is complicated in classical wireless network models such as percolation theory or random geometric graphs. In particular, proofs in geometric models often blend stochastic-geometric and combinatorial techniques and are technically challenging. To overcome this, an approximating quasi-random graph is employed to eliminate some properties that are difficult to tackle. The best-known tool for this kind of problem is probably Szemeredi's regularity lemma for embedding. The main difficulty stems from the fact that the above questions involve extremely small probabilities. These probabilities are too small to estimate by means of classical tools from probability theory, so specific counting methods are inevitable.
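A minimal sketch of the random key pre-distribution model under analysis, in the spirit of the classic Eschenauer-Gligor scheme (pool and ring sizes here are illustrative, not the thesis's parameters): each node draws a random key ring from a common pool, and a secure link exists between two nodes exactly when their rings intersect.

```python
import random

def key_rings(num_nodes, pool_size, ring_size, seed=1):
    """Assign each sensor node a random key ring drawn without
    replacement from a shared key pool."""
    rng = random.Random(seed)
    pool = range(pool_size)
    return [frozenset(rng.sample(pool, ring_size)) for _ in range(num_nodes)]

def secure_edges(rings):
    """Two nodes can communicate securely iff their rings share a key."""
    n = len(rings)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rings[i] & rings[j]}
```

The structural questions in the abstract then become questions about this random graph: for which pool and ring sizes does it contain disjoint secure paths or cliques with high probability as the number of nodes grows?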
545

Effectiveness comparison between Concolic and Random Testing

Lai, Yan-shun 31 October 2011 (has links)
In software development today, companies usually maintain their own test systems, because every piece of software contains bugs that can damage a company's property or compromise information security. Test systems help find these bugs, but some bugs reappear even after being fixed. Automatic test systems are effective in this situation: they reduce the time and cost of testing and address the shortcomings of earlier test methods. This paper considers two kinds of automatic testing: concolic testing and random testing. In 2009, some evidence suggested that concolic testing is more effective than random testing, but the demonstration was not sufficient. This paper therefore aims to establish an effectiveness comparison between concolic and random testing.
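A random tester of the kind compared here can be sketched in a few lines: draw random inputs, run the program under test, and check each result against an oracle. The harness and the deliberately buggy example function below are illustrative, not the thesis's actual test system.

```python
import random

def random_test(func, oracle, gen_input, trials=1000, seed=0):
    """Minimal random-testing harness: feed random inputs to `func`
    and collect every input on which the oracle flags a failure."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = gen_input(rng)
        if not oracle(x, func(x)):
            failures.append(x)
    return failures

# Made-up function under test: wrong for inputs in (-10, 0).
buggy_abs = lambda x: x if x > -10 else -x

failures = random_test(buggy_abs,
                       oracle=lambda x, y: y == abs(x),
                       gen_input=lambda rng: rng.randint(-100, 100))
```

Concolic testing, by contrast, mixes concrete execution with symbolic constraint solving to steer inputs toward unexplored branches rather than sampling them blindly.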
546

Fragment Based Protein Active Site Analysis Using Markov Random Field Combinations of Stereochemical Feature-Based Classifications

Pai Karkala, Reetal 2009 May 1900 (has links)
Recent improvements in structural genomics efforts have greatly increased the number of hypothetical proteins in the Protein Data Bank. Several computational methodologies have been developed to determine the function of these proteins but none of these methods have been able to account successfully for the diversity in the sequence and structural conformations observed in proteins that have the same function. An additional complication is the flexibility in both the protein active site and the ligand. In this dissertation, novel approaches to deal with both the ligand flexibility and the diversity in stereochemistry have been proposed. The active site analysis problem is formalized as a classification problem in which, for a given test protein, the goal is to predict the class of ligand most likely to bind the active site based on its stereochemical nature and thereby define its function. Traditional methods that have adapted a similar methodology have struggled to account for the flexibility observed in large ligands. Therefore, I propose a novel fragment-based approach to dealing with larger ligands. The advantage of the fragment-based methodology is that considering the protein-ligand interactions in a piecewise manner does not affect the active site patterns, and it also provides for a way to account for the problems associated with flexible ligands. I also propose two feature-based methodologies to account for the diversity observed in sequences and structural conformations among proteins with the same function. The feature-based methodologies provide detailed descriptions of the active site stereochemistry and are capable of identifying stereochemical patterns within the active site despite the diversity. Finally, I propose a Markov Random Field approach to combine the individual ligand fragment classifications (based on the stereochemical descriptors) into a single multi-fragment ligand class. 
This probabilistic framework combines the information provided by stereochemical features with information about geometric constraints between ligand fragments to make a final ligand class prediction. The feature-based fragment identification methodology had an accuracy of 84% across a diverse set of ligand fragments, and the MRF analysis successfully combined the various ligand fragments (identified by feature-based analysis) into one final ligand based on statistical models of ligand fragment distances. This novel approach to protein active site analysis was additionally tested on 3 proteins with very low sequence and structural similarity to other proteins in the PDB (a challenge for traditional methods), and in each case it successfully identified the cognate ligand. The approach thus addresses the two main issues that limit the accuracy of current automated methodologies in protein function assignment.
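The idea of fusing per-fragment classifier scores with pairwise geometric compatibilities can be illustrated by brute-force MAP inference over a tiny fragment MRF. This is a toy stand-in: the dissertation's actual potentials, distance models, and inference procedure differ.

```python
import itertools

def map_assignment(unary, pairwise):
    """Brute-force MAP over fragment labels.

    unary[i][c]: a fragment classifier's score for fragment i
    taking class c; pairwise[(i, j)][(ci, cj)]: a geometric
    compatibility score between the labels of fragments i and j.
    """
    n = len(unary)
    labels = range(len(unary[0]))
    best, best_score = None, float("-inf")
    for assign in itertools.product(labels, repeat=n):
        score = sum(unary[i][assign[i]] for i in range(n))
        score += sum(p[(assign[i], assign[j])]
                     for (i, j), p in pairwise.items())
        if score > best_score:
            best, best_score = assign, score
    return best
```

Even when the per-fragment scores favor one labeling, a strong pairwise (distance-based) term can pull the joint assignment toward a geometrically consistent alternative, which is exactly the benefit of combining the two information sources.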
547

Capacity Proportional Unstructured Peer-to-Peer Networks

Reddy, Chandan Rama 2009 August 1900 (has links)
Existing methods to utilize capacity-heterogeneity in a P2P system either rely on constructing special overlays with capacity-proportional node degree or use topology adaptation to match a node's capacity with that of its neighbors. In existing P2P networks, which are often characterized by diverse node capacities and high churn, these methods may require large node degree or continuous topology adaptation, potentially making them infeasible due to their high overhead. In this thesis, we propose an unstructured P2P system that attempts to address these issues. We first prove that the overall throughput of search queries in a heterogeneous network is maximized if and only if traffic load through each node is proportional to its capacity. Our proposed system achieves this traffic distribution by biasing search walks using the Metropolis-Hastings algorithm, without requiring any special underlying topology. We then define two saturation metrics for measuring the performance of overlay networks: one for quantifying their ability to support random walks and the second for measuring their potential to handle the overhead caused by churn. Using simulations, we finally compare our proposed method with Gia, an existing system which uses topology adaptation, and find that the former performs better under all studied conditions, both saturation metrics, and such end-to-end parameters as query success rate, latency, and query-hits for various file replication schemes.
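The Metropolis-Hastings biasing named above can be sketched as follows: with a uniform proposal over neighbors, accepting a move with probability min(1, c_v deg(u) / (c_u deg(v))) yields a walk whose stationary distribution is proportional to node capacity. This is a generic sketch of the technique, not the system's exact walk.

```python
import random

def mh_walk_step(node, neighbors, capacity, rng):
    """One Metropolis-Hastings step of a capacity-biased search walk.

    Proposal q(u->v) = 1/deg(u); acceptance min(1, (c_v/deg(v)) /
    (c_u/deg(u))) makes long-run visit frequency proportional to
    capacity, with no special overlay topology required.
    """
    candidate = rng.choice(neighbors[node])
    ratio = (capacity[candidate] * len(neighbors[node])) / (
        capacity[node] * len(neighbors[candidate]))
    return candidate if rng.random() < min(1.0, ratio) else node
```

On a graph with equal degrees, for instance, a node with twice the capacity of its peers should absorb roughly twice their share of walk visits, matching the traffic-proportional-to-capacity condition the thesis proves optimal.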
548

PROBABILISTIC PREDICTION USING EMBEDDED RANDOM PROJECTIONS OF HIGH DIMENSIONAL DATA

Kurwitz, Richard C. 2009 May 1900 (has links)
The explosive growth of digital data collection and processing demands a new approach to the historical engineering methods of data correlation and model creation. A new prediction methodology based on high dimensional data has been developed. Since most high dimensional data resides on a low dimensional manifold, the new methodology performs dimensionality reduction with embedding into a diffusion space that allows optimal distribution along the manifold. The resulting data manifold space is then used to produce a probability density function that uses spatial weighting to influence predictions, i.e., data nearer the query have greater importance than data further away. The methodology also allows data of differing phenomenology (e.g., color, shape, temperature) to be handled by regression or clustering classification. The new methodology is first developed and validated, then applied to common engineering situations, such as critical heat flux prediction and shuttle pitch angle determination. A number of illustrative examples are given, with a significant focus on the objective identification of two-phase flow regimes. It is shown that the new methodology is robust, producing accurate predictions even with a small number of data points in the diffusion space, and flexible in its ability to handle a wide range of engineering problems.
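The spatial weighting described, nearer data counting more toward the prediction, is the familiar kernel-weighted (Nadaraya-Watson) idea. This sketch applies a Gaussian kernel in whatever coordinates are supplied; the dissertation works in the diffusion-embedded space rather than raw coordinates, and its density construction is more elaborate.

```python
import numpy as np

def kernel_predict(X, y, query, bandwidth=1.0):
    """Predict y at `query` as a distance-weighted average of
    training targets: weights decay with squared distance, so
    points near the query dominate the prediction."""
    d2 = np.sum((X - query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return np.sum(w * y) / np.sum(w)
```

The bandwidth controls how sharply influence falls off with distance; in an embedded space, distances reflect position along the manifold rather than raw feature differences.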
549

Built-In Self Test (BIST) for Realistic Delay Defects

Tamilarasan, Karthik Prabhu 2010 December 1900 (has links)
Testing of delay defects is necessary in deep submicron (DSM) technologies. High coverage delay tests produced by automatic test pattern generation (ATPG) can be applied during wafer and package tests, but are difficult to apply during the board test, due to limited chip access. Delay testing at the board level is increasingly important to diagnose failures caused by supply noise or temperature in the board environment. An alternative to ATPG is the built-in self test (BIST). In combination with the insertion of test points, BIST is able to achieve high coverage of stuck-at and transition faults. The quality of BIST patterns on small delay defects is an open question. In this work we analyze the application of BIST to small delay defects using resistive short and open models in order to estimate the coverage and correlate the coverage to traditional delay fault models.
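BIST pattern generators are conventionally built from linear feedback shift registers. The sketch below is a generic Fibonacci LFSR; the width and tap positions are chosen for illustration (a maximal-length 4-bit configuration) and are not taken from the thesis.

```python
def lfsr_patterns(width, taps, seed, count):
    """Generate pseudo-random test patterns with a Fibonacci LFSR.

    Each step XORs the tapped bits into a feedback bit and shifts
    it in; with maximal-length taps the register cycles through all
    2**width - 1 nonzero states before repeating.
    """
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns
```

In a hardware BIST, this register drives scan chains with patterns while a companion signature register compresses the responses; the open question the thesis addresses is how well such pseudo-random patterns cover small delay defects.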
550

MULTILEVEL ANALYSES OF EFFECTS OF VARIATION IN BODY MASS INDEX ON SERUM LIPID CONCENTRATIONS IN MIDDLE-AGED JAPANESE MEN

KONDO, TAKAAKI, KIMATA, AKIKO, YAMAMOTO, KANAMI, UEYAMA, SAYOKO, UEYAMA, JUN, YATSUYA, HIROSHI, TAMAKOSHI, KOJI, HORI, YOKO 02 1900 (has links)
No description available.