51

An Efficient Implementation of an Exponential Random Number Generator in a Field Programmable Gate Array (FPGA)

Gautham, Smitha 29 April 2010 (has links)
Many physical, biological, ecological and behavioral events occur at times and rates that are exponentially distributed. Modeling these systems requires simulators that can accurately generate a large quantity of exponentially distributed random numbers, which is a computationally intensive task. To improve the performance of these simulators, one approach is to move portions of the computationally inefficient simulation tasks from software to custom hardware implemented in Field Programmable Gate Arrays (FPGAs). In this work, we study efficient FPGA implementations of exponentially distributed random number generators to improve simulator performance. Our approach is to generate uniformly distributed random numbers using standard techniques and scale them using the inverse cumulative distribution function (CDF). Scaling is implemented by curve fitting piecewise linear, quadratic, cubic, and higher order functions to solve for the inverse CDF. As the complexity of the scaling function increases (in terms of order and the number of pieces), number accuracy increases and additional FPGA resources (logic cells and block RAMs) are consumed. We analyze these tradeoffs and show how a designer with particular accuracy requirements and FPGA resource constraints can implement an accurate and efficient exponentially distributed random number generator.
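A minimal software sketch of the scaling idea described above (not the authors' FPGA design): uniform variates are passed through the exponential inverse CDF, and a hypothetical piecewise-linear lookup table stands in for the hardware-oriented curve fit. The segment count and rate parameter are illustrative assumptions.

```python
import numpy as np

def exponential_inverse_cdf(u, rate=1.0):
    """Exact scaling: map uniform u in [0, 1) to an exponential variate."""
    return -np.log1p(-u) / rate

def piecewise_linear_inverse_cdf(u, rate=1.0, pieces=64):
    """Hardware-style approximation: a table of linear segments over [0, 1).

    More pieces means a closer fit but a larger lookup table, mirroring the
    accuracy-versus-resource trade-off discussed in the abstract.
    """
    width = 1.0 / pieces
    edges = np.linspace(0.0, 1.0, pieces + 1)[:-1]          # segment start points
    y0 = -np.log1p(-edges) / rate                            # value at segment start
    y1 = -np.log1p(-(edges + 0.999999 * width)) / rate       # value near segment end
    slope = (y1 - y0) / (0.999999 * width)
    idx = np.minimum((u / width).astype(int), pieces - 1)
    return y0[idx] + slope[idx] * (u - edges[idx])

rng = np.random.default_rng(0)
u = rng.random(100_000)
exact = exponential_inverse_cdf(u)
approx = piecewise_linear_inverse_cdf(u)
print("max abs error:", np.abs(exact - approx).max())
```

Increasing `pieces` (or the polynomial order of each segment) plays the role of the accuracy/resource trade-off analyzed in the thesis.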
52

Confidence and Prediction under Covariates and Prior Information / Konfidenz- und Prognoseintervalle unter Kovariaten und Vorinformation

Lurz, Kristina January 2015 (has links) (PDF)
The purpose of confidence and prediction intervals is to provide an interval estimate for an unknown distribution parameter or for the future value of a phenomenon. In many applications, prior knowledge about the distribution parameter is available but rarely used, except in a Bayesian framework. This thesis provides exact frequentist confidence intervals of minimal volume that exploit prior information. The scheme is applied to the parameters of the binomial and the Poisson distribution. The Bayesian approach of obtaining intervals for a distribution parameter in the form of credible intervals is also considered, with particular emphasis on the binomial distribution. One application of interval estimation is auditing, where two-sided intervals of Stringer type are meant to contain the mean of a zero-inflated population. In the context of time series analysis, covariates are expected to improve the prediction of future values. Exponential smoothing with covariates, an extension of the popular forecasting method exponential smoothing, is considered in this thesis. A double-seasonality version of it is applied to forecast hourly electricity load using meteorological covariates. Different kinds of prediction intervals for exponential smoothing with covariates are formulated.
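A rough, self-contained sketch of the general idea of exponential smoothing with a covariate term, not the model developed in the thesis: the level is updated by simple exponential smoothing while a covariate coefficient adjusts the one-step forecast. The temperature series, smoothing constant, and coefficient below are invented for illustration.

```python
import numpy as np

def smooth_with_covariate(y, x, alpha=0.3, beta=0.5):
    """One-step-ahead forecasts: level by simple exponential smoothing,
    plus beta * covariate as an additive adjustment (illustrative only)."""
    level = y[0] - beta * x[0]
    forecasts = []
    for t in range(1, len(y)):
        forecasts.append(level + beta * x[t])                  # forecast for time t
        level = alpha * (y[t] - beta * x[t]) + (1 - alpha) * level
    return np.array(forecasts)

# toy hourly load driven partly by an (invented) temperature covariate
rng = np.random.default_rng(1)
temp = 20 + 5 * np.sin(np.arange(200) / 24 * 2 * np.pi)
load = 100 + 2.0 * temp + rng.normal(0, 3, 200)
pred = smooth_with_covariate(load, temp, alpha=0.3, beta=2.0)
print("mean absolute error:", np.abs(load[1:] - pred).mean())
```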
53

A simulation-based approach to assess the goodness of fit of Exponential Random Graph Models

Li, Yin 11 1900 (has links)
Exponential Random Graph Models (ERGMs) have been developed for fitting social network data at both static and dynamic levels. However, the lack of large-sample asymptotic properties makes it difficult to assess the goodness of fit of these ERGMs. Simulation-based goodness-of-fit plots were proposed by Hunter et al. (2006), comparing structural statistics of the observed network with those of corresponding simulated networks. In this research, we propose an improved approach to assess the goodness of fit of ERGMs. Our method is shown to improve on the existing graphical techniques. We also propose a simulation-based test statistic with which model comparison can be easily achieved. / Biostatistics
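A generic illustration of the simulation-based goodness-of-fit idea in the spirit of Hunter et al. (2006), assuming a simulator for the fitted model is available; here a Bernoulli random graph is used as a stand-in simulator and the triangle count as the structural statistic, both assumptions for the sketch rather than the thesis's procedure.

```python
import numpy as np

def triangle_count(adj):
    """Number of triangles in an undirected graph given a 0/1 adjacency matrix."""
    a3 = np.linalg.matrix_power(adj, 3)
    return int(np.trace(a3) // 6)

def simulate_bernoulli_graph(n, p, rng):
    """Stand-in simulator; a real study would draw networks from the fitted ERGM."""
    upper = rng.random((n, n)) < p
    adj = np.triu(upper, 1).astype(int)
    return adj + adj.T

rng = np.random.default_rng(0)
n, p = 30, 0.10
observed = simulate_bernoulli_graph(n, 0.15, rng)       # pretend this is the data
obs_stat = triangle_count(observed)

sims = np.array([triangle_count(simulate_bernoulli_graph(n, p, rng))
                 for _ in range(500)])

# empirical comparison: where does the observed statistic fall among simulations?
p_value = (np.sum(sims >= obs_stat) + 1) / (len(sims) + 1)
print(f"observed triangles: {obs_stat}, simulated mean: {sims.mean():.1f}, p ~ {p_value:.3f}")
```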
54

Modeling Dynamic Network with Centrality-based Logistic Regression

Kulmatitskiy, Nikolay 09 1900 (has links)
Statistical analysis of network data is an active field of study, in which researchers investigate graph-theoretic concepts and various probability models that explain the behaviour of real networks. This thesis attempts to combine two of these concepts: an exponential random graph and a centrality index. Exponential random graphs comprise the most useful class of probability models for network data. These models often require the assumption of a complex dependence structure, which creates certain difficulties in the estimation of unknown model parameters. However, in the context of dynamic networks the exponential random graph model provides the opportunity to incorporate a complex network structure such as centrality without the usual drawbacks associated with parameter estimation. The thesis employs this idea by proposing probability models that are equivalent to logistic regression models and that can be used to explain the behaviour of both static and dynamic networks.
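A toy sketch of the centrality-based logistic regression idea, not the thesis's model: edge indicators at the next snapshot are regressed on the endpoints' degree centrality at the current snapshot. The network generator and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_snapshots = 40, 5

# toy dynamic network: edges at t+1 are more likely between high-degree nodes at t
adj = (rng.random((n, n)) < 0.08).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T

features, labels = [], []
for _ in range(n_snapshots):
    degree = adj.sum(axis=0) / (n - 1)                    # degree centrality at time t
    logits = -3.0 + 6.0 * np.add.outer(degree, degree)    # invented generating model
    prob = 1 / (1 + np.exp(-logits))
    nxt = (rng.random((n, n)) < prob).astype(int)
    nxt = np.triu(nxt, 1)
    nxt = nxt + nxt.T
    iu = np.triu_indices(n, 1)
    features.append(np.column_stack([degree[iu[0]] + degree[iu[1]]]))
    labels.append(nxt[iu])
    adj = nxt

X = np.vstack(features)
y = np.concatenate(labels)
model = LogisticRegression().fit(X, y)
print("coefficient on summed degree centrality:", model.coef_[0][0])
```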
55

Is Building Construction Approaching the Threshold of Becoming Unsustainable: A Systems Theoretic Exploration towards a Post-Forrester Model for Taming Unsustainable Exponentialoids

Fernandez-Solis, Jose Luciano 15 November 2006 (has links)
The construction industry is a major emissions contributor and a main resource consumer. Because of this, the industry is formulating short- and long-term 'sustainability targets'. The trend points towards an unsustainable future within the next 75 years, due to actual and projected increases in resource consumption and emissions generation in response to global population growth and improving living standards and affluence. There are no reliable studies that predict whether the required reductions in ecological impacts can actually be realized, and if so, on what time scale. In fact, no currently available system representation of the industry can serve as the basis for studying long-term sustainability through the twenty-first century. Hard dynamic systems, based on reductionism, are no longer adequate representations for studying the dynamics of complex systems. A worldview that includes complexity requires foremost a philosophical induction (a theory) of the nature of all the forces that move the industry and a mechanism for understanding how complex forces aggregate and affect growth. The dissertation examines the current understanding of the Theories of Complexity in general, and in building construction, as preparation for a deeper understanding of how sustainability and its opposite, exponential growth (or "exponentialoid"), relate. Guaranteeing sustainability transcends the current arsenal of countermeasures such as LEED, high-performance measures, waste containment, conservation, lessening demand, renewable resourcing, greening of the industry, creation of high-performance buildings, penalizing the polluter, carbon trading, and others… Sustainability is re-framed as the (artificial) force that tames an unsustainable exponentialoid. Sustainable forces are represented by elements of influence acting like vectors that appear to have an identifiable origin, direction and magnitude. A hypothetical example of how the heuristic/theory works is presented, pending future studies (needed to supply the necessary data required for a working model). This is pre-paradigmatic work, using both a novel worldview and a novel method of analysis, and it points to increasingly detailed research to be performed in the future.
56

Application of the Stretched Exponential Production Decline Model to Forecast Production in Shale Gas Reservoirs

Statton, James Cody 2012 May 1900 (has links)
Production forecasting in shale (ultra-low-permeability) gas reservoirs is of great interest due to the advent of multi-stage fracturing and horizontal drilling. The well-known production forecasting model, Arps' hyperbolic decline model, is widely used in industry to forecast shale gas wells. Left unconstrained, the model often overestimates reserves by a great deal. A minimum decline rate is imposed to prevent overestimation of reserves, but with less than ten years of production history available to analyze, an accurate minimum decline rate is currently unknown; an educated guess of a 5% minimum decline is often imposed. Other decline curve models have been proposed with the theoretical advantage of being able to match linear flow followed by a transition to boundary-dominated flow. This thesis investigates the applicability of the Stretched Exponential Production Decline (SEPD) model and compares it to the industry standard, Arps' model with a minimum decline rate. Where possible, we investigate an SEPD type curve. Simulated data are analyzed to show advantages of the SEPD model and to provide a comparison to Arps' model with an imposed minimum decline rate of 5% where the full production history is known. Long-term production behavior is provided by an analytical solution for a homogeneous reservoir with homogeneous hydraulic fractures. Various simulations from short-term linear flow (~1 year) to long-term linear flow (~20 years) show the ability of the models to handle the onset of boundary-dominated flow at various times during the production history. SEPD provides more accurate reserves estimates when linear flow ends at 5 years or earlier. Both models provide sufficient reserves estimates for longer-term linear flow scenarios. Barnett Shale production data demonstrate the ability of the models to forecast field data. Denton and Tarrant County wells are analyzed as groups and individually. SEPD type curves generated with 2004 well groups provide forecasts for wells drilled in subsequent years. This study suggests a type curve is most useful when 24 months or less of production history is available for forecasting. The SEPD model generally provides more conservative forecasts and EUR estimates than Arps' model with a minimum decline rate of 5%.
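A numerical sketch of the two decline models being compared, with illustrative (not field-calibrated) parameters: the SEPD rate q(t) = q_i exp(-(t/tau)^n) and an Arps hyperbolic decline that switches to exponential decline once the instantaneous decline rate reaches a 5% minimum.

```python
import numpy as np

def sepd_rate(t, qi, tau, n):
    """Stretched exponential decline: q(t) = qi * exp(-(t/tau)**n)."""
    return qi * np.exp(-(t / tau) ** n)

def arps_hyperbolic_rate(t, qi, di, b, d_min=0.05):
    """Arps hyperbolic decline, switching to exponential decline once the
    instantaneous decline rate falls to d_min (the common 5%/yr constraint)."""
    q = qi / (1.0 + b * di * t) ** (1.0 / b)
    d_inst = di / (1.0 + b * di * t)               # instantaneous decline rate
    switch = d_inst < d_min
    if np.any(switch):
        t_sw = t[switch][0]
        q_sw = qi / (1.0 + b * di * t_sw) ** (1.0 / b)
        q[switch] = q_sw * np.exp(-d_min * (t[switch] - t_sw))
    return q

t = np.linspace(0.01, 30, 3000)                    # years
dt = t[1] - t[0]
q_sepd = sepd_rate(t, qi=1000.0, tau=2.0, n=0.5)   # illustrative parameters
q_arps = arps_hyperbolic_rate(t, qi=1000.0, di=2.0, b=2.0)

print("30-yr cumulative, SEPD:", round(float(np.sum(q_sepd) * dt), 1))
print("30-yr cumulative, Arps:", round(float(np.sum(q_arps) * dt), 1))
```

With long heavy-tailed hyperbolic declines, the unconstrained Arps curve accumulates far more volume, which is why the minimum-decline switch (or a bounded model such as SEPD) matters for EUR estimates.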
57

A Flux Declination Predication Model for Nanoparticle-Containing Wastewaters Treated by a Simultaneous Electrocoagulation/Electrofiltration Process

Liu, Chun 15 February 2007 (has links)
A flux declination prediction model for nanoparticle-containing wastewaters treated by a simultaneous electrocoagulation/electrofiltration (EC/EF) process was investigated in this study by considering blocked membrane pores, the concentration polarization layer, the cake layer, and the applied electric field strength. As nanotechnology develops, it is being used in many applications; however, its environmental impacts have not been extensively studied. Membrane technology is one of the direct and effective treatment methods for removing nanoparticles from wastewater, but nanoparticle-containing wastewater treated by membrane technology faces the problem of membrane fouling. In this study, oxide chemical mechanical polishing (CMP) wastewater, copper CMP wastewater, and nanosized TiO2-containing wastewater were treated by an EC/EF treatment module. In the EC/EF treatment module, iron, aluminum, and stainless steel were respectively selected as the anode and cathode. Polyvinylidene fluoride (PVDF) membranes with a nominal pore size of 0.1 μm and carbon/Al2O3 tubular inorganic composite membranes with pore sizes ranging from 2 to 10 nm were used in this work. The changes in membrane performance with changes in applied pressure (9.8-19.6 kPa), crossflow velocity (0.3-0.5 m/s), and applied electric field strength (25-233 V/cm) were studied. The simulation results of a modified mathematical model showed that the flux declination was fitted well by an exponential function. Experimental results showed that a higher transmembrane pressure yielded a higher cake concentration and a higher crossflow velocity reached the steady flux more quickly. Overall, the flux declination for nanoparticle-containing wastewaters treated by a simultaneous EC/EF process was properly described by an exponential form. The exponential function can simply show the flux declination of different samples treated by different modules in different situations.
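As a generic illustration of the exponential flux-decline form mentioned above, one might fit J(t) = J_ss + (J_0 - J_ss) exp(-kt) to flux measurements; the specific functional form, units, and synthetic data here are assumptions for the sketch, not the thesis's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def flux_decline(t, j0, j_ss, k):
    """Exponential flux-decline form: steady flux j_ss plus a decaying term."""
    return j_ss + (j0 - j_ss) * np.exp(-k * t)

# synthetic permeate-flux data (L/m^2/h) over time (min), with noise
rng = np.random.default_rng(0)
t = np.linspace(0, 120, 60)
true = flux_decline(t, j0=150.0, j_ss=40.0, k=0.05)
data = true + rng.normal(0, 3, t.size)

params, _ = curve_fit(flux_decline, t, data, p0=[140.0, 30.0, 0.1])
j0_hat, jss_hat, k_hat = params
print(f"fitted J0={j0_hat:.1f}, J_ss={jss_hat:.1f}, k={k_hat:.3f} 1/min")
```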
58

Characterizations of Distributions by Conditional Expectation

Chang, Tao-Wen 19 June 2001 (has links)
In this thesis, first we replace the condition X ≤ y in Huang and Su (2000) by X ≥ y and give necessary and sufficient conditions such that there exists a random variable X satisfying E(g(X) | X ≤ y) = h(y) f(y)/F(y), for all y ∈ C_X, where C_X is the support of X. Next, we investigate necessary and sufficient conditions such that h(y) = E(g(X) | X ≤ y), for a given function h, and extend these results to the bivariate case.
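For reference, the conditional expectation in the abstract unwinds to the standard right-truncated moment identity below (a textbook definition, not a result of the thesis):

```latex
% For X with density f and CDF F, conditioning on {X <= y} gives
\[
  \mathbb{E}\bigl[g(X)\mid X\le y\bigr]
  \;=\; \frac{1}{F(y)}\int_{-\infty}^{y} g(x)\,f(x)\,dx,
  \qquad y\in C_X,
\]
% so a condition of the form E(g(X) | X <= y) = h(y) f(y)/F(y) constrains the
% truncated integral \int_{-\infty}^{y} g(x) f(x) dx to equal h(y) f(y).
```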
59

Constrained relative entropy minimization with applications to multitask learning

Koyejo, Oluwasanmi Oluseye 15 July 2013 (has links)
This dissertation addresses probabilistic inference via relative entropy minimization subject to expectation constraints. A canonical representation of the solution is determined without the requirement for convexity of the constraint set, and is given by members of an exponential family. The use of conjugate priors for relative entropy minimization is proposed, and a class of conjugate prior distributions is introduced. An alternative representation of the solution is provided as members of the prior family when the prior distribution is conjugate. It is shown that the solutions can be found by direct optimization with respect to members of such parametric families. Constrained Bayesian inference is recovered as a special case with a specific choice of constraints induced by observed data. The framework is applied to the development of novel probabilistic models for multitask learning subject to constraints determined by domain expertise. First, a model is developed for multitask learning that jointly learns a low rank weight matrix and the prior covariance structure between different tasks. The multitask learning approach is extended to a class of nonparametric statistical models for transposable data, incorporating side information such as graphs that describe inter-row and inter-column similarity. The resulting model combines a matrix-variate Gaussian process prior with inference subject to nuclear norm expectation constraints. In addition, a novel nonparametric model is proposed for multitask bipartite ranking. The proposed model combines a hierarchical matrix-variate Gaussian process prior with inference subject to ordering constraints and nuclear norm constraints, and is applied to disease gene prioritization. In many of these applications, the solution is found to be unique. Experimental results show substantial performance improvements as compared to strong baseline models. / text
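A small numerical sketch of the core construction on a discrete support (illustrative only, not the dissertation's models): minimizing relative entropy to a prior p subject to an expectation constraint yields an exponential tilting of p, found here by solving for the one-dimensional dual variable.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(10, dtype=float)          # discrete support
p = np.ones(10) / 10                    # prior (reference) distribution
target_mean = 6.0                       # expectation constraint: E_q[x] = 6

def tilted(lam):
    """Exponential-family solution q(x) proportional to p(x) * exp(lam * x)."""
    w = p * np.exp(lam * x)
    return w / w.sum()

# choose the dual variable lam so the expectation constraint holds
lam = brentq(lambda l: tilted(l) @ x - target_mean, -10, 10)
q = tilted(lam)

kl = np.sum(q * np.log(q / p))
print(f"lam={lam:.4f}, E_q[x]={q @ x:.4f}, KL(q||p)={kl:.4f}")
```

The solution is a member of an exponential family built on the prior, which is the canonical representation the abstract refers to; constrained Bayesian inference corresponds to a particular data-induced choice of constraints.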
60

Multiple comparison and selection of location parameters of exponential populations

吳焯基, Ng, Cheuk-key, Allen. January 1990 (has links)
published_or_final_version / Statistics / Doctoral / Doctor of Philosophy
