141

An empirical power model of a low power mobile platform

Magudilu Vijayaraj, Thejasvi Magudilu 20 September 2013 (has links)
Power is one of the today’s major constraints for both hardware and software design. Thus the need to understand the statistics and distribution of power consumption from a hardware and software perspective is high. Power models satisfy this requirement to a certain extent, by estimating the power consumption for a subset of applications, or by providing a detailed power consumption distribution of a system. Till date, many power models have been proposed for the desktop and mobile platforms. However, most of these models were created based on power measurements performed on the entire system when different microbenchmarks stressing different blocks of the system were run. Then the measured power and the profiled information of the subsystem stressing benchmarks were used to create a regression analysis based model. Here, the power/energy prediction accuracy of the models created in this way, depend on both the method and accuracy of the power measurements and the type of regression used in generating the model. This work tries to eliminate the dependency of the accuracy of the power models on the type of regression analysis used, by performing power measurements at a subsystem granularity. When the power measurement of a single subsystem is obtained while stressing it, one can know the exact power it is consuming, instead of obtaining the power consumption of the entire system - without knowing the power consumption of the subsystem of interest - and depending on the regression analysis to provide the answer. Here we propose a generic method that can be used to create power models of individual subsystems of mobile platforms, and validate the method by presenting an empirical power model of the OMAP4460 based Pandaboard-ES, created using the proposed method. The created model has an average percentage of energy prediction error of just around -2.7% for the entire Pandaboard-ES system.
142

EMPIRICAL BAYES NONPARAMETRIC DENSITY ESTIMATION OF CROP YIELD DENSITIES: RATING CROP INSURANCE CONTRACTS

Ramadan, Anas 16 September 2011 (has links)
This thesis examines a newly proposed density estimator in order to evaluate its usefulness for government crop insurance programs confronted by the problem of adverse selection. While the Federal Crop Insurance Corporation (FCIC) offers multiple insurance programs, including the Group Risk Plan (GRP), a more accurate method of estimating actuarially fair premium rates is needed in order to eliminate adverse selection. The Empirical Bayes Nonparametric Kernel Density Estimator (EBNKDE) has shown a substantial efficiency gain in estimating crop yield densities. The objective of this research was to apply EBNKDE empirically by means of a simulated game in which I assumed the role of a private insurance company in order to test for profit gains from the greater efficiency and accuracy promised by EBNKDE. Employing EBNKDE as well as parametric and nonparametric methods, premium insurance rates for 97 Illinois counties for the years 1991 to 2010 were estimated using corn yield data from 1955 to 2010 taken from the National Agricultural Statistics Service (NASS). The results of this research revealed a substantial efficiency gain from using EBNKDE as opposed to other estimators such as the Normal, the Weibull, and the standard Kernel Density Estimator (KDE). Still, further research using yield data for other crops and other states will provide greater insight into EBNKDE and its performance in other situations.
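The actuarially fair premium rate underlying such contracts is the expected indemnity under the estimated yield density. A minimal sketch of that calculation follows; a plain Gaussian KDE stands in for the empirical-Bayes estimator, and the yield data, bandwidth, and coverage level are all hypothetical:

```python
import random

def kde_sample(yields, bandwidth, rng):
    """Draw one sample from a Gaussian KDE fitted to the yield data:
    pick a data point, then add Gaussian kernel noise."""
    return rng.choice(yields) + rng.gauss(0.0, bandwidth)

def fair_premium(yields, guarantee, bandwidth=5.0, n_draws=20000, seed=1):
    """Monte Carlo estimate of E[max(guarantee - yield, 0)],
    the actuarially fair premium per unit price of the crop."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        y = kde_sample(yields, bandwidth, rng)
        total += max(guarantee - y, 0.0)
    return total / n_draws

# Hypothetical county corn yields (bu/acre) and an 85% coverage level
# relative to a 160 bu/acre expected yield.
county_yields = [120, 135, 150, 155, 160, 162, 170, 175, 180, 185]
premium = fair_premium(county_yields, guarantee=0.85 * 160)
```

A more accurate density estimate shifts this expectation toward the true loss cost, which is exactly what closes the adverse-selection gap.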
143

Evaluation of the Effects of Canadian Climatic Conditions on Pavement Performance using the Mechanistic Empirical Pavement Design Guide

Saha, Jhuma Unknown Date
No description available.
144

Modeling the hydraulic characteristics of fully developed flow in corrugated steel pipe culverts

Toews, Jonathan Scott 25 September 2012 (has links)
The process of fish migration within rivers and streams is important, especially during the spawning season, which often coincides with peak spring discharges in Manitoba. Current environmental regulations for fish passage through culverts require that the average velocity be limited to the prolonged swimming speed of the fish species present. In order to examine the validity of this approach, physical model results were used to calibrate and test a commercially available Computational Fluid Dynamics (CFD) model. Detailed analysis showed that both the CFD models and the empirical equations used were able to give a better representation of the flow field than the average velocity alone. However, the empirical equations provided a more accurate velocity distribution within the fully developed region. A relationship was then developed to estimate the cumulative percent area below a threshold velocity within CSP culverts, for use as a guideline during the design phase.
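The cumulative percent-area idea can be sketched as follows: given point velocities sampled over the culvert cross-section, compute the fraction of the flow area slower than a fish's prolonged swimming speed. Equal-area cells and all sample values here are assumptions for illustration, not the thesis's actual relationship:

```python
def percent_area_below(velocities, threshold):
    """Percent of cross-sectional area with velocity below the
    threshold, assuming each sample represents an equal-area cell."""
    below = sum(1 for v in velocities if v < threshold)
    return 100.0 * below / len(velocities)

# Hypothetical cross-section samples (m/s): slow near the corrugated
# wall, faster in the core of the culvert.
samples = [0.2, 0.3, 0.4, 0.6, 0.8, 1.0, 1.1, 1.2, 1.3, 1.4]
usable = percent_area_below(samples, threshold=0.9)  # fish speed 0.9 m/s
```

Even when the cross-sectional average exceeds a fish's prolonged swimming speed, a substantial fraction of the area near the corrugations may remain passable, which is the motivation for a distribution-based guideline rather than an average-velocity limit.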
145

The extended empirical likelihood

Wu, Fan 04 May 2015 (has links)
The empirical likelihood method introduced by Owen (1988, 1990) is a powerful nonparametric method for statistical inference. It has been one of the most researched methods in statistics in the last twenty-five years and remains a very active area of research today. There is now a large body of literature on the empirical likelihood method which covers its applications in many areas of statistics (Owen, 2001). One important problem affecting the empirical likelihood method is its poor accuracy, especially for small-sample and/or high-dimension applications. The poor accuracy can be alleviated by using high-order empirical likelihood methods such as the Bartlett-corrected empirical likelihood, but it cannot be completely resolved by high-order asymptotic methods alone. Since the work of Tsao (2004), the impact of the convex hull constraint in the formulation of the empirical likelihood on its finite-sample accuracy has been better understood, and methods have been developed to break this constraint in order to improve the accuracy. Three important methods along this direction are [1] the penalized empirical likelihood of Bartolucci (2007) and Lahiri and Mukhopadhyay (2012), [2] the adjusted empirical likelihood of Chen, Variyath and Abraham (2008), Emerson and Owen (2009), Liu and Chen (2010) and Chen and Huang (2012), and [3] the extended empirical likelihood of Tsao (2013) and Tsao and Wu (2013). The latter is particularly attractive in that it retains not only the asymptotic properties of the original empirical likelihood, but also its important geometric characteristics. In this thesis, we generalize the extended empirical likelihood of Tsao and Wu (2013) to handle inferences in two large classes of one-sample and two-sample problems. In Chapter 2, we generalize the extended empirical likelihood to handle inference for the large class of parameters defined by one-sample estimating equations, which includes the mean as a special case.
In Chapters 3 and 4, we generalize the extended empirical likelihood to handle two-sample problems; in Chapter 3, we study the extended empirical likelihood for the difference between two p-dimensional means; in Chapter 4, we consider the extended empirical likelihood for the difference between two p-dimensional parameters defined by estimating equations. In all cases, we give both the first- and second-order extended empirical likelihood methods and compare them with existing methods. Technically, the two-sample mean problem in Chapter 3 is a special case of the general two-sample problem in Chapter 4. We single out the mean case to form Chapter 3 not only because it is a standalone published work, but also because it naturally leads up to the more difficult two-sample estimating equations problem in Chapter 4. We note that Chapter 2 is the published paper Tsao and Wu (2014) and Chapter 3 is the published paper Wu and Tsao (2014). To comply with the University of Victoria policy regarding the use of published work in theses, and in accordance with copyright agreements between authors and journal publishers, details of these published works are acknowledged at the beginning of those chapters. Chapter 4 is another joint paper, Tsao and Wu (2015), which has been submitted for publication. / Graduate / 0463 / fwu@uvic.ca
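For the one-sample mean, the original empirical likelihood ratio reduces to a one-dimensional Lagrange-multiplier equation. The sketch below illustrates Owen's original formulation only (the extension the thesis develops is not implemented here), with hypothetical data; the convex-hull check is exactly the constraint the extended methods are designed to break:

```python
import math

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean of x at mu.

    Solves sum_i d_i / (1 + lam * d_i) = 0 with d_i = x_i - mu for lam
    by bisection, then returns 2 * sum_i log(1 + lam * d_i).
    mu must lie strictly inside the convex hull (min(x), max(x)).
    """
    d = [xi - mu for xi in x]
    if not (min(d) < 0.0 < max(d)):
        raise ValueError("mu outside the convex hull of the data")
    # Positivity of the weights, 1 + lam * d_i > 0, confines lam to an
    # open interval whose endpoints come from the extreme d_i.
    lo = -1.0 / max(d) + 1e-12
    hi = -1.0 / min(d) - 1e-12

    def g(lam):  # monotonically decreasing in lam
        return sum(di / (1.0 + lam * di) for di in d)

    for _ in range(200):  # bisection on g(lam) = 0
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

x = [1.2, 0.7, 2.1, 1.5, 0.9, 1.8]
stat = el_log_ratio(x, mu=sum(x) / len(x))  # zero at the sample mean
```

Asymptotically the statistic follows a chi-squared distribution with one degree of freedom, which is what makes empirical likelihood confidence regions possible without a parametric model.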
146

A Model for Identifying Gentrification in East Nashville, Tennessee

Miller, William Jordan 01 January 2015 (has links)
Gentrification methodologies rarely intersect. Analysis of the process has tended to be confined to either in-depth neighborhood case studies or large-scale empirical investigations, and understanding of the timing and extent of gentrification has been limited by this dichotomy. This research attempts to fuse quantitative and qualitative methods to discern the impact of gentrification across census tracts in East Nashville, Tennessee. By employing archival research, field surveys, and census data analysis, this project attempts to identify the conditions suitable for gentrification to occur and its subsequent effects on residents and the built environment. A model was generated to determine the relationship between a priori knowledge and empirical indicators of gentrification. Trends common to both methods were identified, although gentrification's chaotic and complex nature makes it difficult to pin down.
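One common way to build such empirical indicators is a composite of standardized tract-level changes; the abstract does not give the thesis's actual model specification, so the variables, weights, and numbers below are purely illustrative:

```python
def zscores(values):
    """Standardize tract-level changes to mean 0, standard deviation 1."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def gentrification_index(income_chg, rent_chg, degree_chg):
    """Composite index per tract: the average of standardized changes
    in median income, median rent, and share of degree holders."""
    zs = zip(zscores(income_chg), zscores(rent_chg), zscores(degree_chg))
    return [sum(t) / 3.0 for t in zs]

# Hypothetical percent changes, 2000-2010, for four census tracts.
income = [35.0, 5.0, 12.0, 50.0]
rent   = [40.0, 8.0, 10.0, 45.0]
degree = [20.0, 2.0,  6.0, 25.0]
index = gentrification_index(income, rent, degree)
flagged = [i for i, s in enumerate(index) if s > 0]  # candidate tracts
```

Tracts whose composite score exceeds the city-wide mean become candidates for the qualitative, field-survey side of the analysis.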
147

Numerical Investigation of Ship's Continuous-Mode Icebreaking in Level Ice

Tan, Xiang January 2014 (has links)
This thesis is a summary of studies that were carried out as part of candidacy for a PhD degree. The purpose of these studies was to evaluate, using numerical simulations, some factors in ship design that are intended for navigating in ice. A semi-empirical numerical procedure was developed by combining mathematical models that describe the various elements of the continuous-mode icebreaking process in level ice. The numerical procedure was calibrated and validated using full- and model-scale measurements. The validated numerical model was in turn used to investigate and clarify issues that have not been previously considered.

An icebreaker typically breaks ice by its power, its weight and a strengthened bow with a low stem angle. The continuous icebreaking process involves heave and pitch motions that may not be negligible. The numerical procedure was formulated to account for all of the possible combinations of motions for six degrees of freedom (DOFs). The effects of the motion(s) for certain DOF(s) were investigated by comparing simulations in which the relevant motion(s) were first constrained and then relieved.

In the continuous-mode icebreaking process, a ship interacts with an icebreaking pattern consisting of a sequence of individual icebreaking events. The interactions among the key characteristics of the icebreaking process, i.e., the icebreaking pattern, ship motions, and ice resistance, were studied using the numerical procedure, in which the ship motions and excitation forces were solved for in the time domain and the ice edge geometry was simultaneously updated.

Observations at various test scales have shown that the crushing pressure arising from the ice-hull interaction depends on the contact area involved. A parametric study was carried out on the numerical procedure to investigate the effect of the contact pressure on icebreaking.

The loading rates associated with the ship's forward speed have been anticipated to play an important role in determining the bending failure loads, in view of the dynamic water flow underneath the ship and the inertia of the ice. The dynamic bending behavior of ice could also explain the speed dependence of the icebreaking resistance component. A dynamic bending failure criterion for ice was derived, incorporated into the numerical procedure and then validated using full-scale data. The results obtained using the dynamic and static bending failure criteria were compared to each other.

In addition, the effect of the propeller flow on the hull resistance for ships running propeller-first in level ice was investigated by applying information obtained from model tests to the numerical procedure. The thrust deduction in ice was discussed.
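The time-domain idea, solving the ship motions while updating the resistance at each step, can be illustrated with a deliberately simplified one-DOF surge model. The linearized resistance law and every constant below are invented for illustration and are not the thesis's actual formulation:

```python
def simulate_surge(mass, thrust, r0, r1, v0=0.0, dt=0.1, steps=2000):
    """Explicit-Euler time stepping of m * dv/dt = thrust - (r0 + r1*v):
    constant net thrust against a speed-dependent ice resistance.
    Returns the surge-velocity history (m/s)."""
    v = v0
    history = [v]
    for _ in range(steps):
        resistance = r0 + r1 * v        # placeholder ice-resistance law
        a = (thrust - resistance) / mass
        v += a * dt
        history.append(v)
    return history

# Hypothetical icebreaker: 5000 t displacement, 800 kN net thrust,
# linearized level-ice resistance 200 kN + 300 kN*s/m * v.
hist = simulate_surge(mass=5.0e6, thrust=8.0e5, r0=2.0e5, r1=3.0e5)
v_eq = (8.0e5 - 2.0e5) / 3.0e5          # analytic steady speed: 2.0 m/s
```

The full procedure replaces the placeholder resistance with forces from the individual icebreaking events and updates the broken ice edge geometry every step, which couples the icebreaking pattern back into the motions.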
149

Genetic algorithms for cluster optimization

Roberts, Christopher January 2001 (has links)
No description available.
150

Psychology of Ownership and Asset Defense: Why People Value their Personal Information Beyond Privacy

Spiekermann, Sarah, Korunovska, Jana, Bauer, Christine 12 1900 (has links) (PDF)
Analysts, investors and entrepreneurs have long recognized the value of comprehensive user profiles. While there is a market for trading such personal information among companies, the users, who are actually the providers of this information, are not invited to the negotiating table. To date, there is little information on how users value their personal information. In an online survey-based experiment, 1059 Facebook users revealed how much they would be willing to pay to keep their personal information. Our study reveals that as soon as people learn that some third party is interested in their personal information (an asset-consciousness prime), they value their information to a much higher degree than without this prime and begin to defend their asset. Furthermore, we found that people develop a psychology of ownership toward their personal information. In fact, this construct is a significant contributor to information valuation, much more so than privacy concerns. (author's abstract)
