  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

EMPIRICAL BAYES NONPARAMETRIC DENSITY ESTIMATION OF CROP YIELD DENSITIES: RATING CROP INSURANCE CONTRACTS

Ramadan, Anas 16 September 2011 (has links)
This thesis examines a newly proposed density estimator in order to evaluate its usefulness for government crop insurance programs confronted by the problem of adverse selection. Although the Federal Crop Insurance Corporation (FCIC) offers multiple insurance programs, including the Group Risk Plan (GRP), a more accurate method of estimating actuarially fair premium rates is needed in order to eliminate adverse selection. The Empirical Bayes Nonparametric Kernel Density Estimator (EBNKDE) has shown a substantial efficiency gain in estimating crop yield densities. The objective of this research was to apply EBNKDE empirically by means of a simulated game wherein I assumed the role of a private insurance company in order to test for profit gains from the greater efficiency and accuracy promised by EBNKDE. Employing EBNKDE as well as parametric and nonparametric methods, premium insurance rates for 97 Illinois counties for the years 1991 to 2010 were estimated using corn yield data from 1955 to 2010 taken from the National Agricultural Statistics Service (NASS). The results revealed a substantial efficiency gain from using EBNKDE as opposed to other estimators such as the Normal, the Weibull, and the standard Kernel Density Estimator (KDE). Still, further research using yield data for other crops and from other states would provide greater insight into EBNKDE and its performance in other situations.
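The rating logic described here — estimate a county yield density, then price the expected shortfall below a coverage-level guarantee — can be sketched as follows. The Gaussian KDE with Silverman's bandwidth is the standard baseline the thesis compares against, not the EBNKDE itself (its empirical-Bayes pooling across counties is not reproduced), and the simulated yield series is hypothetical:

```python
import numpy as np

def kde_pdf(yields, grid, bandwidth=None):
    """Plain Gaussian kernel density estimate of a yield density
    (the KDE baseline; EBNKDE's empirical-Bayes pooling across
    counties is not reproduced here)."""
    y = np.asarray(yields, dtype=float)
    if bandwidth is None:
        # Silverman's rule-of-thumb bandwidth
        bandwidth = 1.06 * y.std(ddof=1) * len(y) ** (-1 / 5)
    z = (grid[:, None] - y[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

def fair_premium_rate(yields, coverage=0.75):
    """Actuarially fair premium rate: expected yield shortfall below
    the guarantee, divided by the guarantee (the liability)."""
    y = np.asarray(yields, dtype=float)
    guarantee = coverage * y.mean()
    grid = np.linspace(y.min() - 3 * y.std(), y.max() + 3 * y.std(), 2000)
    pdf = kde_pdf(y, grid)
    dx = grid[1] - grid[0]
    expected_loss = np.sum(np.maximum(guarantee - grid, 0.0) * pdf) * dx
    return expected_loss / guarantee

rng = np.random.default_rng(0)
county_yields = rng.normal(150, 25, size=56)  # hypothetical 56-year series
rate = fair_premium_rate(county_yields)
```

A lower-variance density estimate shifts the tail mass and hence the rate, which is why estimator efficiency translates directly into pricing accuracy in this setting.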
142

Evaluation of the Effects of Canadian Climatic Conditions on Pavement Performance using the Mechanistic Empirical Pavement Design Guide

Saha, Jhuma Unknown Date
No description available.
143

Modeling the hydraulic characteristics of fully developed flow in corrugated steel pipe culverts

Toews, Jonathan Scott 25 September 2012 (has links)
The process of fish migration within rivers and streams is important, especially during the spawning season, which often coincides with peak spring discharges in Manitoba. Current environmental regulations for fish passage through culverts require that the average velocity be limited to the prolonged swimming speed of the fish species present. In order to examine the validity of this approach, physical model results were used to calibrate and test a commercially available Computational Fluid Dynamics (CFD) model. Detailed analysis showed that both the CFD model and the empirical equations used were able to give a better representation of the flow field than the average velocity alone. However, the empirical equations provided a more accurate velocity distribution within the fully developed region. A relationship was then developed to estimate the cumulative percent area below a threshold velocity within CSP culverts, for use as a guideline during the design phase.
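The design-phase guideline described above — the share of the culvert cross-section slower than a fish's threshold speed — can be sketched for an assumed velocity profile. The 1/7 power-law shape below is a textbook stand-in, not the empirical equations fitted in the thesis:

```python
import numpy as np

def percent_area_below(v_avg, v_threshold, radius=1.0, n=400, m=7.0):
    """Fraction of a circular culvert cross-section where local velocity
    is below a threshold, assuming a power-law profile
    v(r) = v_max * (1 - r/R)**(1/m).  The profile shape and exponent
    are illustrative assumptions."""
    # choose v_max so the area-averaged velocity equals v_avg:
    # disc average of (1 - r/R)^(1/m) is 2*m^2 / ((m+1)*(2m+1))
    shape_mean = 2 * m**2 / ((m + 1) * (2 * m + 1))
    v_max = v_avg / shape_mean
    r = np.linspace(0.0, radius, n + 1)
    rc = 0.5 * (r[:-1] + r[1:])               # annulus mid-radii
    dA = np.pi * (r[1:]**2 - r[:-1]**2)       # annulus areas
    v = v_max * (1 - rc / radius) ** (1 / m)
    return dA[v < v_threshold].sum() / (np.pi * radius**2)

# e.g. how much of the section is slower than a prolonged swimming speed
# equal to 80% of the average velocity
frac = percent_area_below(v_avg=1.0, v_threshold=0.8)
```

The point the abstract makes falls out immediately: even when the average velocity exceeds the fish's speed, a nonzero fraction of the section near the wall remains passable.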
144

The extended empirical likelihood

Wu, Fan 04 May 2015 (has links)
The empirical likelihood method introduced by Owen (1988, 1990) is a powerful nonparametric method for statistical inference. It has been one of the most researched methods in statistics in the last twenty-five years and remains a very active area of research today. There is now a large body of literature on the empirical likelihood method, covering its applications in many areas of statistics (Owen, 2001). One important problem affecting the empirical likelihood method is its poor accuracy, especially in small-sample and/or high-dimensional applications. The poor accuracy can be alleviated by using high-order empirical likelihood methods such as the Bartlett-corrected empirical likelihood, but it cannot be completely resolved by high-order asymptotic methods alone. Since the work of Tsao (2004), the impact of the convex hull constraint in the formulation of the empirical likelihood on its finite-sample accuracy has been better understood, and methods have been developed to break this constraint in order to improve the accuracy. Three important methods along this direction are [1] the penalized empirical likelihood of Bartolucci (2007) and Lahiri and Mukhopadhyay (2012), [2] the adjusted empirical likelihood of Chen, Variyath and Abraham (2008), Emerson and Owen (2009), Liu and Chen (2010) and Chen and Huang (2012), and [3] the extended empirical likelihood of Tsao (2013) and Tsao and Wu (2013). The last of these is particularly attractive in that it retains not only the asymptotic properties of the original empirical likelihood but also its important geometric characteristics. In this thesis, we generalize the extended empirical likelihood of Tsao and Wu (2013) to handle inferences in two large classes of one-sample and two-sample problems. In Chapter 2, we generalize the extended empirical likelihood to handle inference for the large class of parameters defined by one-sample estimating equations, which includes the mean as a special case.
In Chapters 3 and 4, we generalize the extended empirical likelihood to handle two-sample problems: in Chapter 3, we study the extended empirical likelihood for the difference between two p-dimensional means; in Chapter 4, we consider the extended empirical likelihood for the difference between two p-dimensional parameters defined by estimating equations. In all cases, we give both the first- and second-order extended empirical likelihood methods and compare them with existing methods. Technically, the two-sample mean problem in Chapter 3 is a special case of the general two-sample problem in Chapter 4. We single out the mean case to form Chapter 3 not only because it is a standalone published work, but also because it naturally leads up to the more difficult two-sample estimating equations problem in Chapter 4. We note that Chapter 2 is the published paper Tsao and Wu (2014) and Chapter 3 is the published paper Wu and Tsao (2014). To comply with the University of Victoria policy regarding the use of published work in theses, and in accordance with copyright agreements between authors and journal publishers, details of these published works are acknowledged at the beginning of those chapters. Chapter 4 is another joint paper, Tsao and Wu (2015), which has been submitted for publication.
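As a reference point for the method being extended, the original empirical likelihood for a scalar mean (Owen 1988) can be sketched directly; the convex hull constraint discussed above appears as the `inf` branch. This is only the original EL — the extended version composes it with an expansion mapping that is not reproduced here:

```python
import numpy as np

def el_log_ratio(x, mu, iters=50):
    """-2 log empirical likelihood ratio for a scalar mean mu.
    The profile weights are w_i = 1 / (n * (1 + lam * (x_i - mu)));
    the Lagrange multiplier lam is found by safeguarded Newton steps."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf        # mu outside the convex hull: EL undefined
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)             # derivative in lam
        gp = -np.sum(z**2 / denom**2)     # always negative
        step = g / gp
        lam_new = lam - step
        # halve the step until all weights stay positive
        while np.any(1.0 + lam_new * z <= 1e-10):
            step *= 0.5
            lam_new = lam - step
        lam = lam_new
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)
stat = el_log_ratio(x, x.mean())   # ratio is 1 (statistic 0) at the mean
```

The statistic is asymptotically chi-squared with one degree of freedom, and the `inf` branch at the hull boundary is exactly the finite-sample defect that the penalized, adjusted, and extended variants cited above are designed to remove.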
145

A Model for Identifying Gentrification in East Nashville, Tennessee

Miller, William Jordan 01 January 2015 (has links)
Gentrification methodologies rarely intersect: analysis of the process has been confined either to in-depth neighborhood case studies or to large-scale empirical investigations. Understanding the timing and extent of gentrification has been limited by this dichotomy. This research attempts to fuse quantitative and qualitative methods to discern the impact of gentrification across census tracts in East Nashville, Tennessee. By employing archival research, field surveys, and census data analysis, this project attempts to comprehend the conditions suitable for gentrification to occur and its subsequent effect on residents and the built environment. A model was generated to determine the relationship between a priori knowledge and empirical indicators of gentrification. Trends emerged across these methods, although gentrification's chaotic and complex nature makes it difficult to pin down.
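A minimal version of the quantitative side — combining tract-level census indicators into a composite score — might look like the following; the indicators and values are invented for illustration, not the thesis's East Nashville data:

```python
from statistics import mean, stdev

# Hypothetical decade-over-decade changes per tract: median income,
# share college-educated, and median rent (all as proportions).
tracts = {
    "tract_A": {"income": 0.42, "college": 0.15, "rent": 0.38},
    "tract_B": {"income": 0.05, "college": 0.02, "rent": 0.10},
    "tract_C": {"income": 0.30, "college": 0.09, "rent": 0.25},
}

def gentrification_index(tracts):
    """Composite index: average of per-indicator z-scores, so a tract
    scores high only when it outpaces its peers across indicators."""
    names = list(tracts)
    indicators = ["income", "college", "rent"]
    zs = {n: [] for n in names}
    for ind in indicators:
        vals = [tracts[n][ind] for n in names]
        mu, sd = mean(vals), stdev(vals)
        for n in names:
            zs[n].append((tracts[n][ind] - mu) / sd)
    return {n: mean(z) for n, z in zs.items()}

index = gentrification_index(tracts)
```

Such an index can then be compared against the qualitative, a priori knowledge from archival research and field surveys, which is the fusion the thesis pursues.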
146

Numerical Investigation of Ship's Continuous-Mode Icebreaking in Level Ice

Tan, Xiang January 2014 (has links)
This thesis is a summary of studies that were carried out as part of candidacy for a PhD degree. The purpose of these studies was to evaluate some factors in ship design that are intended for navigating in ice using numerical simulations. A semi-empirical numerical procedure was developed by combining mathematical models that describe the various elements of the continuous-mode icebreaking process in level ice. The numerical procedure was calibrated and validated using full- and model-scale measurements. The validated numerical model was in turn used to investigate and clarify issues that have not been previously considered.

An icebreaker typically breaks ice by its power, its weight, and a strengthened bow with a low stem angle. The continuous icebreaking process involves heave and pitch motions that may not be negligible. The numerical procedure was formulated to account for all of the possible combinations of motions for six degrees of freedom (DOFs). The effects of the motion(s) for certain DOF(s) were investigated by comparing simulations in which the relevant motion(s) were first constrained and then relieved.

In the continuous-mode icebreaking process, a ship interacts with an icebreaking pattern consisting of a sequence of individual icebreaking events. The interactions among the key characteristics of the icebreaking process, i.e., the icebreaking pattern, ship motions, and ice resistance, were studied using the numerical procedure, in which the ship motions and excitation forces were solved for in the time domain and the ice edge geometry was simultaneously updated.

Observations at various test scales have shown that the crushing pressure arising from the ice–hull interaction depends on the contact area involved. A parametric study was carried out with the numerical procedure to investigate the effect of the contact pressure on icebreaking.

The loading rates associated with the ship's forward speed have been anticipated to play an important role in determining the bending failure loads, in view of the dynamic water flow underneath the ship and the inertia of the ice. The dynamic bending behavior of ice could also explain the speed dependence of the icebreaking resistance component. A dynamic bending failure criterion for ice was derived, incorporated into the numerical procedure, and then validated using full-scale data. The results obtained using the dynamic and static bending failure criteria were compared to each other.

In addition, the effect of the propeller flow on the hull resistance for ships running propeller-first in level ice was investigated by applying information obtained from model tests to the numerical procedure. The thrust deduction in ice was discussed.
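The flavor of such a time-domain procedure can be conveyed with a drastically simplified one-DOF surge model, where net thrust balances a speed-dependent ice resistance. Every coefficient below is invented for illustration; the thesis's actual procedure couples six DOFs with a continuously updated ice-edge geometry:

```python
def simulate_surge(t_end=200.0, dt=0.1, m=5.0e6, thrust=6.0e5,
                   r0=2.0e5, r1=1.5e5):
    """One-DOF surge sketch of continuous-mode icebreaking: forward
    speed v (m/s) evolves under net thrust minus a linearized,
    speed-dependent ice resistance R(v) = r0 + r1*v.  Explicit Euler
    time stepping; m is ship mass in kg, forces in N."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        resistance = r0 + r1 * v
        v += dt * (thrust - resistance) / m
    return v

v_steady = simulate_surge()
# the model relaxes toward the balance speed (thrust - r0) / r1
```

Replacing the linear resistance with terms driven by individual breaking events, and adding the remaining five DOFs, is precisely where the complexity of the actual procedure lies.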
148

Genetic algorithms for cluster optimization

Roberts, Christopher January 2001 (has links)
No description available.
149

Psychology of Ownership and Asset Defense: Why People Value their Personal Information Beyond Privacy

Spiekermann, Sarah, Korunovska, Jana, Bauer, Christine 12 1900 (has links) (PDF)
Analysts, investors and entrepreneurs have long recognized the value of comprehensive user profiles. While there is a market for trading such personal information among companies, the users, who are the actual providers of this information, are not invited to the negotiating table. To date, there is little information on how users value their personal information. In an online survey-based experiment, 1059 Facebook users revealed how much they would be willing to pay to keep their personal information. Our study reveals that as soon as people learn that some third party is interested in their personal information (an asset-consciousness prime), they value their information much more highly than without this prime and begin to defend their asset. Furthermore, we found that people develop a psychology of ownership toward their personal information. In fact, this construct contributes significantly more to information valuation than privacy concerns do. (author's abstract)
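The primed-versus-unprimed comparison at the heart of the experiment reduces to a two-sample test on stated valuations. A sketch with simulated data (not the study's 1059 responses), using Welch's unequal-variance t statistic:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for a two-sample comparison with unequal
    variances, e.g. primed vs. unprimed valuations."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

rng = np.random.default_rng(7)
# skewed, strictly positive valuations are typical for willingness-to-pay
primed = rng.lognormal(mean=3.5, sigma=0.8, size=500)
unprimed = rng.lognormal(mean=3.0, sigma=0.8, size=500)
t = welch_t(primed, unprimed)
```

A large positive t here corresponds to the paper's finding that the asset-consciousness prime inflates valuations; the actual study additionally models psychology of ownership and privacy concerns as predictors.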
150

Understanding Programmers' Working Context by Mining Interaction Histories

Zou, Lijie January 2013 (has links)
Understanding how software developers do their work is an important first step to improving their productivity. Previous research has generally focused either on laboratory experiments or on coarsely-grained industrial case studies; however, studies that seek a fine-grained understanding of industrial programmers working within a realistic context remain limited. In this work, we propose to use interaction histories — that is, finely detailed records of developers' interactions with their IDE — as our main source of information for understanding programmers' work habits. We develop techniques to capture, mine, and analyze interaction histories, and we present two industrial case studies to show how this approach can help to better understand industrial programmers' work at a detailed level: we explore how the basic characteristics of software maintenance task structures can be better understood, how latent dependence between program artifacts can be detected at interaction time, and how patterns of interaction coupling can be identified. We also examine the link between programmer interactions and some of the contextual factors of software development, such as the nature of the task being performed, the design of the software system, and the expertise of the developers. In particular, we explore how task boundaries can be automatically detected from interaction histories, how system design and developer expertise may affect interaction coupling, and whether newcomer and expert developers differ in their interaction history patterns. These findings can help us to better reason about the multidimensional nature of software development, to detect potential problems concerning task, design, expertise, and other contextual factors, and to build smarter tools that exploit the inherent patterns within programmer interactions and provide improved support for task-aware and expertise-aware software development.
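One simple instance of mining task boundaries from interaction histories is idle-gap segmentation: split the event stream wherever the pause between IDE interactions exceeds a threshold. This heuristic is illustrative only, not the boundary-detection technique developed in the thesis:

```python
from datetime import datetime, timedelta

def segment_sessions(events, gap_minutes=15):
    """Split a stream of timestamped interaction events into candidate
    task sessions wherever the idle gap exceeds a threshold."""
    if not events:
        return []
    events = sorted(events)
    gap = timedelta(minutes=gap_minutes)
    sessions, current = [], [events[0]]
    for prev, ev in zip(events, events[1:]):
        if ev - prev > gap:          # long pause: close current session
            sessions.append(current)
            current = []
        current.append(ev)
    sessions.append(current)
    return sessions

t0 = datetime(2013, 4, 1, 9, 0)
events = [t0, t0 + timedelta(minutes=2), t0 + timedelta(minutes=5),
          t0 + timedelta(hours=1),       # long idle gap -> new session
          t0 + timedelta(hours=1, minutes=3)]
sessions = segment_sessions(events)
```

Real interaction histories carry far more signal than timestamps alone (artifact touched, command issued), which is what allows the richer analyses of interaction coupling and expertise described above.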
