321

Causal analysis using two-part models : a general framework for specification, estimation and inference

Hao, Zhuang 22 June 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / The two-part model (2PM) is the most widely applied modeling and estimation framework in empirical health economics. By design, the two-part model allows the process governing observation at zero to systematically differ from that which determines non-zero observations. The former is commonly referred to as the extensive margin (EM) and the latter is called the intensive margin (IM). The analytic focus of my dissertation is on the development of a general framework for specifying, estimating and drawing inference regarding causally interpretable (CI) effect parameters in the 2PM context. Our proposed fully parametric 2PM (FP2PM) framework comprises very flexible versions of the EM and IM for both continuous and count-valued outcome models and encompasses all implementations of the 2PM found in the literature. Because our modeling approach is potential outcomes (PO) based, it provides a context for clear definition of targeted counterfactual CI parameters of interest. This PO basis also provides a context for identifying the conditions under which such parameters can be consistently estimated using the observable data (via the appropriately specified data generating process). These conditions also ensure that the estimation results are CI. There is substantial literature on statistical testing for model selection in the 2PM context, yet there has been virtually no attention paid to testing the “one-part” null hypothesis. Within our general modeling and estimation framework, we devise a relatively simple test of that null for both continuous and count-valued outcomes. We illustrate our proposed model, method and testing protocol in the context of estimating price effects on the demand for alcohol.
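For orientation only, the sketch below fits a basic two-part model — a logit for the extensive margin and a log-link gamma GLM for the intensive margin — and computes a crude price effect on the unconditional mean. It uses simulated data and is not the FP2PM framework or the testing protocol developed in the dissertation; all variable names are hypothetical.

```python
# Illustrative two-part model: logit (extensive margin) + log-link gamma GLM
# (intensive margin) on simulated data. Not the dissertation's FP2PM framework.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
price = rng.uniform(1.0, 10.0, n)                 # hypothetical price regressor
X = sm.add_constant(pd.DataFrame({"price": price}))

# Simulated outcome: zero with some probability, positive amount otherwise.
p_any = 1.0 / (1.0 + np.exp(-(1.0 - 0.15 * price)))
any_pos = rng.binomial(1, p_any)
amount = np.exp(2.0 - 0.10 * price + rng.normal(0.0, 0.5, n))
y = any_pos * amount

# Part 1 (extensive margin): Pr(y > 0 | x) via logit.
em = sm.Logit((y > 0).astype(int), X).fit(disp=False)

# Part 2 (intensive margin): E[y | y > 0, x] via gamma GLM with log link.
pos = y > 0
im = sm.GLM(y[pos], X[pos],
            family=sm.families.Gamma(sm.families.links.Log())).fit()

# Unconditional mean combines both margins: E[y|x] = Pr(y>0|x) * E[y|y>0,x].
ey = em.predict(X) * im.predict(X)

# Crude finite-difference "price effect" on E[y|x] (illustration only).
X1 = X.copy()
X1["price"] += 1.0
effect = float((em.predict(X1) * im.predict(X1) - ey).mean())
print("avg. effect of a unit price increase:", round(effect, 3))
```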
322

Modeling Distributions of Test Scores with Mixtures of Beta Distributions

Feng, Jingyu 08 November 2005 (has links) (PDF)
Test score distributions are used to make important instructional decisions about students. The test scores usually do not follow a normal distribution. In some cases, the scores appear to follow a bimodal distribution that can be modeled with a mixture of beta distributions. This bimodality may be due to different levels of students' ability. The purpose of this study was to develop and apply statistical techniques for fitting beta mixtures and detecting bimodality in test score distributions. Maximum likelihood and Bayesian methods were used to estimate the five parameters of the beta mixture distribution for scores in four quizzes in a cell biology class at Brigham Young University. The mixing proportion was examined to draw conclusions about bimodality. We were successful in fitting the beta mixture to the data, but the methods were only partially successful in detecting bimodality.
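A minimal maximum-likelihood sketch of fitting a two-component beta mixture (five parameters: two shape pairs plus a mixing proportion) is shown below. The simulated scores, starting values, and optimizer settings are illustrative assumptions, not the study's quiz data or exact estimation code.

```python
# Illustrative ML fit of a two-component beta mixture to (0, 1) test scores.
# Simulated data, not the BYU quiz scores used in the study.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
scores = np.concatenate([rng.beta(2, 6, 300),    # lower-scoring mode
                         rng.beta(9, 2, 700)])   # higher-scoring mode

def neg_log_lik(params, x):
    a1, b1, a2, b2, pi = params
    mix = pi * stats.beta.pdf(x, a1, b1) + (1.0 - pi) * stats.beta.pdf(x, a2, b2)
    return -np.sum(np.log(mix + 1e-300))

res = optimize.minimize(
    neg_log_lik,
    x0=[1.5, 5.0, 8.0, 3.0, 0.5],                # hypothetical starting values
    args=(scores,),
    bounds=[(0.01, None)] * 4 + [(0.001, 0.999)],
    method="L-BFGS-B",
)
a1, b1, a2, b2, pi = res.x
# A mixing proportion well away from 0 and 1 (with separated components)
# is consistent with bimodality.
print("estimated mixing proportion:", round(pi, 3))
```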
323

A Heuristic Nonlinear Constructive Method for Electric Power Distribution System Reconfiguration

McDermott, Thomas E. 26 April 1998 (has links)
The electric power distribution system usually operates in a radial configuration, with tie switches between circuits to provide alternate feeds. The losses would be minimized if all switches were closed, but this is not done because it complicates the system's protection against overcurrents. Whenever a component fails, some of the switches must be operated to restore power to as many customers as possible. As loads vary with time, switch operations may reduce losses in the system. Both of these are applications for reconfiguration. The problem is combinatorial, which precludes algorithms that guarantee a global optimum. Most existing reconfiguration algorithms fall into two categories. In the first, branch exchange, the system operates in a feasible radial configuration and the algorithm opens and closes candidate switches in pairs. In the second, loop cutting, the system is completely meshed and the algorithm opens candidate switches to reach a feasible radial configuration. Reconfiguration algorithms based on linearized transshipment, neural networks, heuristics, genetic algorithms, and simulated annealing have also been reported, but not widely used. These existing reconfiguration algorithms work with a simplified model of the power system, and they handle voltage and current constraints approximately, if at all. The algorithm described here is a constructive method, using a full nonlinear power system model that accurately handles constraints. The system starts with all switches open and all failed components isolated. An optional network power flow provides a lower bound on the losses. Then the algorithm closes one switch at a time to minimize the increase in a merit figure, which is the real loss divided by the apparent load served. The merit figure increases with each switch closing. This principle, called discrete ascent optimal programming (DAOP), has been applied to other power system problems, including economic dispatch and phase balancing. For reconfiguration, the DAOP method's greedy nature is mitigated with a backtracking algorithm. Approximate screening formulas have also been developed for efficient use with partial load flow solutions. This method's main advantage is the accurate treatment of voltage and current constraints, including the effect of control action. One example taken from the literature shows how the DAOP-based algorithm can reach an optimal solution, while adjusting line voltage regulators to satisfy the voltage constraints. / Ph. D.
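The toy sketch below illustrates only the greedy switch-selection step and the merit figure (real losses divided by apparent load served); the per-switch numbers are invented, the "power flow" is a trivial stand-in for the full nonlinear load flow, and the radiality checks, constraint handling, backtracking, and screening formulas of the dissertation are not modeled.

```python
# Toy sketch of the greedy, constructive switch-closing idea: start with all
# switches open and repeatedly close the switch that keeps the merit figure
# (real losses / apparent load served) lowest. Radiality, voltage/current
# constraints, backtracking, and the real load flow are NOT modeled here.

# Invented per-switch effects: (added real losses in kW, added load served in kVA).
SWITCH_EFFECTS = {
    "S1": (4.0, 400.0),
    "S2": (9.0, 650.0),
    "S3": (3.0, 150.0),
    "S4": (12.0, 500.0),
}

def toy_power_flow(closed):
    """Stand-in for a full nonlinear load flow: total losses and load served."""
    losses = sum(SWITCH_EFFECTS[s][0] for s in closed)
    served = sum(SWITCH_EFFECTS[s][1] for s in closed)
    return losses, served

def greedy_closing_order(switches):
    closed, order, remaining = set(), [], set(switches)
    while remaining:
        best, best_merit = None, float("inf")
        for sw in remaining:
            losses, served = toy_power_flow(closed | {sw})
            merit = losses / served if served else float("inf")
            if merit < best_merit:               # smallest resulting merit figure
                best, best_merit = sw, merit
        closed.add(best)
        remaining.remove(best)
        order.append((best, round(best_merit, 4)))
    return order

# The merit figure is non-decreasing across closings, mirroring the abstract.
print(greedy_closing_order(SWITCH_EFFECTS))
```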
324

Forget the Weights, Who gets the Benefits? How to Bring a Poverty Focus to the Economic Analysis of Projects

Potts, David J. 06 1900 (has links)
No / This paper examines the way in which the distributional impact of projects has been treated in the cost-benefit analysis literature. It is suggested that excessive emphasis has been given to the estimation of distribution weights in the context of single figure measures of project worth and that more attention should be paid to estimation of the distribution effects themselves. If projects really are to have some impact on poverty it is important that some attempt is made to measure what that impact is. Such an attempt requires both systematic measurement of direct income effects as well as the possibility of measuring indirect effects where these are expected to be important. An approach is suggested in which direct measurement of income effects can be adjusted using shadow price estimates to determine indirect income effects. The approach is illustrated with the example of a district heating project in the Republic of Latvia.
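One possible stylized reading of the suggested approach is sketched below: the gap between an item's financial value and its shadow-priced (economic) value is attributed as an indirect income effect to some other group. The line items, conversion factors, and groups are invented for illustration and are not taken from the paper.

```python
# Stylized sketch: the gap between an item's financial value and its
# shadow-priced (economic) value is attributed as an indirect income effect
# to another group (e.g. government via taxes, workers via a wage premium).
# All line items, conversion factors, and groups are invented.

# item: (financial value, conversion factor to shadow prices, group receiving the gap)
items = {
    "fuel purchases":   (-200.0, 0.80, "government"),       # taxes inside the price
    "unskilled labour": (-150.0, 0.60, "poor households"),   # wage above shadow wage
    "heat sales":        (500.0, 1.00, None),
}

direct_flows = {}        # economic (shadow-priced) values of the project's own flows
indirect_effects = {}    # income accruing to other groups via price/shadow-price gaps

for name, (financial, cf, group) in items.items():
    economic = financial * cf
    direct_flows[name] = economic
    if group is not None:
        # What the project pays above the economic cost is income to that group.
        indirect_effects[group] = indirect_effects.get(group, 0.0) + (economic - financial)

print("direct (shadow-priced) flows:", direct_flows)
print("indirect income effects by group:", indirect_effects)
```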
325

Cumulative Sum Control Charts for Censored Reliability Data

Olteanu, Denisa Anca 28 April 2010 (has links)
Companies routinely perform life tests for their products. Typically, these tests involve running a set of products until the units fail. Most often, the data are censored according to different censoring schemes, depending on the particulars of the test. On occasion, tests are stopped at a predetermined time and the units that are yet to fail are suspended. In other instances, the data are collected through periodic inspection and only upper and lower bounds on the lifetimes are recorded. Reliability professionals use a number of non-normal distributions to model the resulting lifetime data with the Weibull distribution being the most frequently used. If one is interested in monitoring the quality and reliability characteristics of such processes, one needs to account for the challenges imposed by the nature of the data. We propose likelihood ratio based cumulative sum (CUSUM) control charts for censored lifetime data with non-normal distributions. We illustrate the development and implementation of the charts, and we evaluate their properties through simulation studies. We address the problem of interval censoring, and we construct a CUSUM chart for censored ordered categorical data, which we illustrate by a case study at Becton Dickinson (BD). We also address the problem of monitoring both of the parameters of the Weibull distribution for processes with right-censored data. / Ph. D.
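A minimal sketch of a likelihood-ratio CUSUM for right-censored Weibull lifetimes is given below. The in-control and out-of-control parameters, Type I censoring time, and control limit are arbitrary illustrative choices rather than the designs and ARL-calibrated limits developed in the dissertation.

```python
# Illustrative likelihood-ratio CUSUM for right-censored Weibull lifetimes.
# Parameters, censoring time, and control limit h are arbitrary sketch values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shape, scale_in, scale_out = 2.0, 100.0, 70.0   # monitored shift: scale decreases
censor_time = 120.0                              # Type I censoring of the life test
h = 5.0                              # control limit (in practice set via ARL studies)

def log_lik(t, censored, scale):
    """Weibull contribution: log-density if failed, log-survival if censored."""
    if censored:
        return stats.weibull_min.logsf(t, shape, scale=scale)
    return stats.weibull_min.logpdf(t, shape, scale=scale)

def cusum_path(times, censored_flags):
    c, path = 0.0, []
    for t, cen in zip(times, censored_flags):
        w = log_lik(t, cen, scale_out) - log_lik(t, cen, scale_in)  # LR increment
        c = max(0.0, c + w)
        path.append(c)
    return path

# Simulate an in-control phase followed by a degraded phase, both censored at 120.
raw = np.concatenate([rng.weibull(shape, 30) * scale_in,
                      rng.weibull(shape, 30) * scale_out])
observed = np.minimum(raw, censor_time)
censored = raw > censor_time

path = cusum_path(observed, censored)
signal = next((i for i, c in enumerate(path) if c > h), None)
print("first signal at observation index:", signal)
```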
326

Siblings and Inheritances: A Phenomenological Study Exploring the Relational Outcomes Following the Inheritance Distribution Process

Fincher, Jayla Eileen 01 July 2016 (has links)
The purpose of this study was to contribute to a more complete understanding of the family inheritance experience by exploring the perspectives of beneficiaries. This qualitative study aimed to describe and discuss how individuals' sibling relationships were impacted following the distribution process of an inheritance that was intended to be equally distributed. Eight individuals participated in semi-structured interviews, with areas of inquiry covering perceptions of challenges and benefits of the distribution process, fairness of the outcome of distribution among siblings, and the impact the process has had on their sibling relationships. The data was analyzed using transcendental phenomenology. Findings suggest families experience varying degrees of conflict during inheritance distributions, but not all conflict was devastating to the relationships following the distributions. Specific relational aspects were identified as contributing to the level of satisfaction with the distribution, which in turn affected the siblings' relationships afterward. Additionally, the handling of conflict and efforts to repair relational strains significantly contribute to relational outcomes following the distribution. The majority of participants reported stronger relationships following the inheritance distribution. The findings provide a foundation for further research to explore beneficiaries' experiences of receiving an inheritance within multi-child families. / Master of Science
327

Algorithmic Distribution of Applied Learning on Big Data

Shukla, Manu 16 October 2020 (has links)
Machine Learning and Graph techniques are complex and challenging to distribute. Generally, they are distributed by modeling the problem much as single-node sequential techniques do, except that the computation is applied to smaller chunks of data and compute and the results are then combined. These techniques focus on stitching together the results from the smaller chunks so that the outcome is as close as possible to the sequential result on the entire data. This approach is not feasible in numerous kernel, matrix, optimization, graph, and other techniques where the algorithm needs access to all the data during execution. In this work, we propose key-value pair based distribution techniques that are widely applicable to statistical machine learning techniques along with matrix, graph, and time series based algorithms. The crucial difference with previously proposed techniques is that all operations are modeled as key-value pair based fine or coarse-grained steps. This allows flexibility in distribution with no compounding error in each step. The distribution is applicable not only in robust disk-based frameworks but also in in-memory based systems without significant changes. Key-value pair based techniques also provide the ability to generate the same result as sequential techniques with no edge or overlap effects in structures such as graphs or matrices to resolve. This thesis focuses on key-value pair based distribution of applied machine learning techniques on a variety of problems. For the first method, key-value pair distribution is used for storytelling at scale. Storytelling connects entities (people, organizations) using their observed relationships to establish meaningful storylines. When performed sequentially, these computations become a bottleneck because the massive number of entities makes space and time complexity untenable. We present DISCRN, or DIstributed Spatio-temporal ConceptseaRch based StorytelliNg, a distributed framework for performing spatio-temporal storytelling. The framework extracts entities from microblogs and event data, and links these entities using a novel ConceptSearch to derive storylines in a distributed fashion utilizing the key-value pair paradigm. Performing these operations at scale allows deeper and broader analysis of storylines. The novel parallelization techniques speed up the generation and filtering of storylines on massive datasets. Experiments with microblog posts such as Twitter data and GDELT (Global Database of Events, Language and Tone) events show the efficiency of the techniques in DISCRN. The second work determines brand perception directly from people's comments in social media. Current techniques for determining brand perception, such as surveys of handpicked users by mail, in person, phone or online, are time-consuming and increasingly inadequate. The proposed DERIV system distills storylines from open data representing direct consumer voice into a brand perception. The framework summarizes the perception of a brand in comparison to peer brands with in-memory key-value pair based distributed algorithms utilizing supervised machine learning techniques. Experiments performed with open data and models built with storylines of known peer brands show the technique to be highly scalable and accurate in capturing brand perception from vast amounts of social data compared to sentiment analysis. The third work performs event categorization and prospect identification in social media. The problem is challenging due to the endless amount of information generated daily.
In our work, we present DISTL, an event processing and prospect identifying platform. It accepts as input a set of storylines (sequences of entities and their relationships) and processes them as follows: (1) uses different algorithms (LDA, SVM, information gain, rule sets) to identify themes from storylines; (2) identifies top locations and times in storylines and combines them with themes to generate events that are meaningful in a specific scenario for categorizing storylines; and (3) extracts top prospects as people and organizations from data elements contained in storylines. The output comprises sets of events in different categories, the storylines under them, and the top prospects identified. DISTL utilizes in-memory key-value pair based distributed processing that scales to high data volumes and categorizes generated storylines in near real-time. The fourth work builds drone flight paths in a distributed manner to survey a large area, taking images to determine vegetation growth over power lines, while allowing for adjustment to terrain and to the number and capabilities of the drones. Drones are increasingly being used to perform risky and labor-intensive aerial tasks cheaply and safely. To ensure operating costs are low and flights autonomous, their flight plans must be pre-built. In existing techniques, drone flight paths are not automatically pre-calculated based on drone capabilities and terrain information. We present details of an automated flight plan builder, DIMPL, that pre-builds flight plans for drones tasked with surveying a large area to photograph electric poles and identify those with hazardous vegetation overgrowth. DIMPL employs a distributed in-memory key-value pair based paradigm to process subregions in parallel and build flight paths in a highly efficient manner. The fifth work highlights scaling graph operations, particularly pruning and joins. Linking topics to specific experts in technical documents and finding connections between experts are crucial for detecting the evolution of emerging topics and the relationships between their influencers in state-of-the-art research. Current techniques that make such connections are limited to similarity measures. Methods based on weights such as TF-IDF and frequency are generally used to identify important topics, and self-joins between topics and experts are used to identify connections between experts. However, such approaches are inadequate for identifying emerging keywords and experts since the most useful terms in technical documents tend to be infrequent and concentrated in just a few documents. This makes connecting experts through joins on large dense graphs challenging. We present DIGDUG, a framework that identifies emerging topics by applying graph operations to technical terms. The framework identifies connections between authors of patents and journal papers by performing joins on connected topics and topics associated with the authors at scale. The problem of scaling the graph operations for topics and experts is solved through dense graph pruning and graph joins, categorized under their own scalable separable dense graph class based on key-value pair distribution. Comparing our graph join and pruning technique against multiple graph and join methods in MapReduce revealed a significant improvement in performance using our approach.
/ Doctor of Philosophy / Distribution of Machine Learning and Graph algorithms is commonly performed by modeling the core algorithm in the same way as the sequential technique except implemented on a distributed framework. This approach is satisfactory in very few cases, such as depth-first search and subgraph enumerations in graphs, k nearest neighbors, and a few additional common methods. These techniques focus on stitching together the results from smaller data or compute chunks so that the outcome is as close as possible to the sequential result on the entire data. This approach is not feasible in numerous kernel, matrix, optimization, graph, and other techniques where the algorithm needs to perform exhaustive computations on all the data during execution. In this work, we propose key-value pair based distribution techniques that are exhaustive and widely applicable to statistical machine learning algorithms along with matrix, graph, and time series based operations. The crucial difference with previously proposed techniques is that all operations are modeled as key-value pair based fine or coarse-grained steps. This allows flexibility in distribution with no compounding error in each step. The distribution is applicable not only in robust disk-based frameworks but also in in-memory based systems without significant changes. Key-value pair based techniques also provide the ability to generate the same result as sequential techniques with no edge or overlap effects in structures such as graphs or matrices to resolve.
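As a generic illustration of the key-value pair paradigm (not the DISCRN, DERIV, DISTL, DIMPL, or DIGDUG implementations), the sketch below expresses a simple entity co-occurrence aggregation as map and reduce steps over key-value pairs; the input records are invented.

```python
# Generic key-value pair sketch: count entity co-occurrences with explicit map
# and reduce steps. Illustrates the paradigm only; not the dissertation's code.
from collections import defaultdict
from itertools import combinations

# Invented input: entities extracted from individual microblog posts / events.
records = [
    ["acme_corp", "city_a", "person_x"],
    ["acme_corp", "person_x"],
    ["city_a", "person_y"],
]

def map_step(record):
    """Emit ((entity_i, entity_j), 1) for every co-occurring entity pair."""
    for a, b in combinations(sorted(set(record)), 2):
        yield (a, b), 1

def reduce_step(pairs):
    """Group by key and sum values, as the shuffle + reduce stages would."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

edges = reduce_step(kv for record in records for kv in map_step(record))
print(edges)   # e.g. {('acme_corp', 'person_x'): 2, ('acme_corp', 'city_a'): 1, ...}
```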
328

A REVISION OF THE TRIMYTINI OF AMERICA NORTH OF MEXICO (COLEOPTERA: TENEBRIONIDAE).

MacLachlan, William Bruce. January 1982 (has links)
No description available.
329

A hybrid method for load, stress and fatigue analysis of drill string screw connectors

Bahai, Hamid R. S. January 1993 (has links)
No description available.
330

Non-congruence of statistical distributions: how different is different?

Applegate, Terry Lee. January 1978 (has links)
Call number: LD2668 .T4 1978 A65 / Master of Science
