141

The Effects of Alternative Contingencies on Instruction Following.

Patti, Nicole 05 1900 (has links)
The purpose of this experiment was to evaluate, using an ABA design, the effects of alternative contingencies on instruction following. Three college students consistently pressed keys 1-5-3 and 4-8-6 in the presence of the written instruction "Press 153" or "Press 486." During condition A, the contingencies for following and not following the instruction were the same: CON FR5 FR5 and CON FR20 FR20. During condition B, the contingencies for following and not following the instruction were different: CON FR20 FR5. For one participant, the schedule of reinforcement was then changed to FR30. The results showed that subjects followed instructions when the schedule of reinforcement was the same for following and for not following the instruction.
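As an illustration of why the discrepant condition B matters, a toy rate-maximization reading of these concurrent fixed-ratio schedules (a sketch of ours, not the thesis's analysis) is consistent with the reported pattern:

```python
# A toy rate-maximization reading of the concurrent fixed-ratio (FR)
# contingencies described above (our sketch, not the thesis's analysis):
# each option delivers one reinforcer per `fr_ratio` responses, so a pure
# rate maximizer simply compares 1/ratio across options.

def rate(fr_ratio: int) -> float:
    """Reinforcers earned per response under an FR schedule."""
    return 1.0 / fr_ratio

def predicted_choice(fr_follow: int, fr_not_follow: int) -> str:
    """Choose the richer option; ties default to instruction following."""
    return "not follow" if rate(fr_not_follow) > rate(fr_follow) else "follow"

conditions = {
    "A: CON FR5 FR5":   (5, 5),
    "A: CON FR20 FR20": (20, 20),
    "B: CON FR20 FR5":  (20, 5),   # following pays off 4x worse here
}
for name, (fr_f, fr_nf) in conditions.items():
    print(f"{name} -> {predicted_choice(fr_f, fr_nf)}")
# Prints "follow" for both A conditions and "not follow" for B,
# consistent with the result that instructions were followed when the
# schedules for following and not following were the same.
```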
142

Bayesian Inference in Large-scale Problems

Johndrow, James Edward January 2016 (has links)
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo (MCMC) algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced-rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data are frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4, we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence-rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically, in simulations and a real data application, that the approximation is highly accurate even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on the strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo, the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset. / Dissertation
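For concreteness, the latent class (PARAFAC) factorization the abstract refers to is typically written as follows; the notation is ours, not the dissertation's:

```latex
% Latent class (PARAFAC) factorization of the probability mass function of
% p categorical variables: conditionally on a latent class h, the variables
% are independent, giving a nonnegative rank-k tensor factorization.
\[
  \Pr(y_1 = c_1, \dots, y_p = c_p)
    = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
  \qquad
  \sum_{h=1}^{k} \nu_h = 1, \qquad \sum_{c} \lambda^{(j)}_{h c} = 1,
\]
% where nu_h are class weights and lambda^{(j)}_{hc} is the probability
% that variable j takes level c in class h. Log-linear models instead
% reduce dimension through sparsity in interaction terms, which is the
% contrast studied in Chapter 2.
```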
143

A model of contextual factors and inter-organizational integration : A Grounded Theory study of two supply chains

Hulthén, Hana January 2013 (has links)
The purpose of this thesis is to contribute to understanding of the effect of organizational context on supply chain integration. One result is a context-based model that can provide support for practitioners regarding what level of integration to establish with suppliers and customers. Since most organizations are dependent on other organizations, there is a need not only for cross-functional integration but also for integration across organizational boundaries. However, in many organizations the level of integration with suppliers and customers is often inappropriate, inefficient, and limited mainly to dyadic integration of order processing and operational scheduling. The existing literature provides only limited insight into the essential circumstances for integration, and the slow growth in the implementation of inter-organizational integration has been attributed primarily to a lack of guidelines for creating business relationships with supply chain partners. In the literature, "the more integration the better performance" solutions have often been presented without consideration of the very complex internal and external organizational environments of the companies involved. During recent years, questions have been raised regarding the nature of integration with suppliers and customers and the extent to which it can be accomplished. Instead of all-encompassing integration, selectivity has been suggested in terms of what level of integration should be applied to each link of the supply chain. The problem for an organization is not to find "one best way"; rather, it is to search for solutions that advance integration and differentiation simultaneously. The preferable level of integration depends on many contextual factors associated with, for example, the focal company, the industry, the competitive environment, and the nature and type of products. However, previous research has focused primarily on single or limited sets of contextual factors and their impact on integration, and the results are often fragmented, leading to multiple frameworks and models. A unifying model providing recommendations on what level of integration to establish with suppliers and customers, considering an organization's specific circumstances, is therefore desirable.

In this study, a large number of contextual factors of integration with suppliers and customers were identified and structured, and the relationship between these factors and the level of integration was clarified. The study is based on the Grounded Theory methodology. To understand the effect of context on the level of integration, two supply chains (triads) from two different industries - medical devices and fast-moving consumer goods - were selected as core samples. Findings are based on an in-depth analysis of qualitative data obtained from fourteen interviews with practitioners such as CEOs, SC managers, sales managers, purchasing managers, and logisticians. Following the Grounded Theory methodology, the analysis of the collected data was conducted in three major rounds divided into six steps, and the results were compared with a theoretical frame of reference.

The main result of this study is a model that describes the relationship between contextual factors and integration activities with suppliers and customers. The findings suggest that the Structural Contingency Theory assumption of a fit between context and integration is applicable also from an inter-organizational perspective. The model can be applied to contextual factors both external and internal to an organization, and it is supplemented by structured lists of identified contextual factors and integration activities. Recalling the notion of fit between the values of contextual factors and the level of integration with suppliers and customers, even low levels of integration can be appropriate as long as they are consistent with the values of the factors representing the organizational context. Furthermore, the model adds to existing models and frameworks in that it can be used as a diagnostic tool: applying it, an organization can evaluate whether its current levels of integration fit the corresponding values of contextual factors, identify misfits between the two, and adjust or re-evaluate the current levels of integration. The model, in combination with the lists of contextual factors and integration activities, can then be used to develop corrective actions in order to regain the desired fit. The intention of this study was to identify and analyze integration of triads in the studied supply chains, commonly known as supply chain integration. However, this scope of integration was not found, which is in line with previous research indicating that triadic integration is rare. To reflect the actual situation more accurately, it is suggested to use the term inter-organizational integration, implying a dyadic scope of integration, rather than supply chain integration.
144

Expatriate adjustment revisited : an exploration of the factors explaining expatriate adjustment in MNCs and UN organizations in Egypt

Khedr, Wessam January 2011 (has links)
This thesis aims to understand the relative influence of institutional, cultural and organizational factors on the adjustment of the United Nations' (UN) and multinational companies' expatriates in Egypt. The research makes a contribution to the field of expatriate research through its application of the institutional lens in examining the factors impacting on adjustment, and through testing a traditional adjustment model in an under-researched host context. As a result of the research, this thesis proposes a new framework for understanding the factors impacting on adjustment, which adopts a contingency perspective and incorporates a stronger focus on institutional determinants and the organisational infrastructure supporting the management of expatriates. The study relies, for its theoretical basis, on certain cultural and organizational factors borrowed from the expatriate literature, in addition to introducing other factors (mainly institutional factors) which have not previously been examined in the literature as predictors of adjustment. The research questions the utility of these organizational, cultural and institutional factors, especially those from traditional models, when applied to relatively new national and organizational contexts: the Egyptian national context and the United Nations organizational context. Both contexts are under-researched areas in the expatriate adjustment literature and in the international human resources management literature in general. The Arab cultural context differs in many respects from the Anglo-Saxon and European contexts that have more traditionally been the subject of research studies, and thus it provides an opportunity for testing the wider application of expatriate models. Equally, the UN is a highly multicultural organisational context with a socio-political mission which is highly distinct from that of the 'for profit' multinational. Both these contextual factors therefore offer fertile ground for the further development of a framework for understanding expatriate adjustment in contemporary times. In addition, the novelty of the context brings to the fore the opportunity to examine the utility of institutional theory as an alternative or complement to cultural theory as a way of understanding the factors influencing expatriate adjustment. In terms of method, the research relies mainly on quantitative data obtained by surveying expatriates in multinational and United Nations organizations working in Egypt; in addition, a qualitative technique (interviews) was used to aid questionnaire development and data contextualization. The results highlight the role of institutional measures in explaining expatriate adjustment. The evidence suggests that the institutional variables provide additional explanatory power beyond that provided by the traditional factors studied. However, the research also demonstrates that the institutional measures do not replace the cultural measures; there is not a substitution effect at work. Rather, we would argue that the institutional lens provides additional understanding and taps into factors not already captured by measures of culture. The research puts forward a contingency model incorporating additional organisational and institutional variables which are often overlooked or underemphasised in traditional organisation-focused models.
145

Contingency-constrained unit commitment with post-contingency corrective recourse

Chen, Richard Li-Yang, Fan, Neng, Pinar, Ali, Watson, Jean-Paul 05 December 2014 (has links)
We consider the problem of minimizing costs in the generation unit commitment problem, a cornerstone in electric power system operations, while enforcing an N-k-ε reliability criterion. This reliability criterion is a generalization of the well-known N-k criterion and dictates that at least a (1 − ε_j) fraction of the total system demand (for j = 1, ..., k) must be met following the failure of j or fewer system components. We refer to this problem as the contingency-constrained unit commitment problem, or CCUC. We present a mixed-integer programming formulation of the CCUC that accounts for both transmission and generation element failures. We propose novel cutting plane algorithms that avoid the need to explicitly consider an exponential number of contingencies. Computational studies are performed on several IEEE test systems and a simplified model of the Western US interconnection network. These studies demonstrate the effectiveness of our proposed methods relative to the current state of the art.
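One way to state this criterion as a constraint family, with notation of ours rather than the paper's:

```latex
% Sketch of the N-k-epsilon reliability requirement as a family of
% constraints over contingencies (notation ours): after any contingency c
% failing j <= k components, post-contingency corrective dispatch must
% serve at least a (1 - epsilon_j) fraction of total system demand.
\[
  \sum_{i \in \mathcal{B}} d_i^{c} \;\ge\; \bigl(1 - \epsilon_{j}\bigr) \sum_{i \in \mathcal{B}} d_i
  \qquad \forall\, c \in \mathcal{C}_j,\; j = 0, 1, \dots, k,
\]
% where B is the set of buses, d_i the nominal demand at bus i, d_i^c the
% demand served after contingency c, and C_j the set of contingencies with
% exactly j failed elements. Setting every epsilon_j = 0 recovers the
% classical N-k criterion; the exponential size of the contingency sets is
% what motivates the paper's cutting plane algorithms.
```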
146

A Comparison of Some Continuity Corrections for the Chi-Squared Test in 3 x 3, 3 x 4, and 3 x 5 Tables

Mullen, Jerry D. (Jerry Davis) 05 1900 (has links)
This study was designed to determine whether chi-squared based tests for independence give reliable estimates (as compared to the exact values provided by Fisher's exact probabilities test) of the probability of a relationship between the variables in 3 X 3, 3 X 4, and 3 X 5 contingency tables when the sample size is 10, 20, or 30. In addition to the classical (uncorrected) chi-squared test, four methods of continuity correction were compared to Fisher's exact probabilities test: Yates' correction, two corrections attributed to Cochran, and Mantel's correction. The study was modeled after a similar comparison conducted on 2 X 2 contingency tables and published by Michael Haber.
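For reference, the uncorrected Pearson statistic and Yates' continuity correction take the following standard forms (the Cochran and Mantel corrections modify the statistic in a similar spirit; their exact formulas vary by source):

```latex
% Pearson's chi-squared statistic for an r x c contingency table, with
% observed counts O_ij and expected counts E_ij = (row_i total)(col_j total)/n:
\[
  \chi^2 = \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{(O_{ij}-E_{ij})^2}{E_{ij}},
  \qquad
  \chi^2_{\mathrm{Yates}} = \sum_{i=1}^{r}\sum_{j=1}^{c}
      \frac{\bigl(\,\lvert O_{ij}-E_{ij}\rvert - \tfrac{1}{2}\,\bigr)^2}{E_{ij}} .
\]
% Yates' correction shrinks each deviation by 1/2 to compensate for
% approximating a discrete sampling distribution with the continuous
% chi-squared distribution; this matters most at small n, which is why
% the study uses sample sizes of 10, 20, and 30.
```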
147

Physician Chief Executive Officers and Hospital Performance: A Contingency Theory Perspective

Patel, Urvashi B. 01 January 2006 (has links)
Years ago it was typical for a physician to serve as a hospital's Chief Executive Officer (CEO). However, with the development of Master of Health Administration, Master of Public Health, and Master of Business Administration programs, hospitals began to move away from this model. Today, however, as hospitals search for innovative ideas to reduce healthcare costs and improve the quality of care, the idea of the physician hospital CEO has returned. Little empirical research is available in the health services literature on the physician hospital CEO. The study aims to examine the relationship between organizational and environmental factors and physician CEOs, and whether or not physician CEOs are associated with improved hospital performance.

The conceptual framework is adapted from Donabedian's structure, process, and outcome perspective, which when applied to the organizational level becomes context-design-performance. The theoretical perspective applied to the conceptual framework to guide the development of hypotheses is contingency theory, which suggests that organizations are most successful when they can adapt their structures to fit their environment.

Data for this study were obtained from multiple sources: the American Hospital Association Annual Survey, the Centers for Medicare and Medicaid Services Hospital Cost Reports, SK&A, the Area Resource File, and the Centers for Medicare and Medicaid Services Hospital Quality Alliance.

Besides descriptive analyses, logistic regression was used to evaluate the relationship between the organizational and environmental hospital characteristics and the presence of a physician CEO. Ordinary least squares regression was used to explore the relationship between physician CEOs and hospital performance.

Results indicate that hospitals in markets with greater physician competition are more likely to have physician CEOs. Hospitals that are affiliated with a system are also more likely to have physician CEOs. The study found that while teaching hospitals and specialty hospitals were associated with placement of physician CEOs, the association was in the opposite direction of what was hypothesized; this may be a result of the small number of both teaching and specialty hospitals in the study sample. The study concluded that having a physician CEO is associated with a hospital's financial outcomes but not with its quality-of-care outcomes.
148

Using Behavioral Incentives to Promote Exercise Compliance in Women with Cocaine Dependence

Islam, Leila 20 August 2013 (has links)
To date, low rates of patient compliance have made it impractical to study whether regular exercise can contribute to positive outcomes in women with substance use disorders (SUD). One robust strategy for promoting and maintaining behavior change is contingency management (CM). CM has been used successfully to reinforce drug abstinence, treatment attendance, and other pro-social behaviors. CM delivers incentives (prizes) contingent upon target behaviors, though it can be expensive; to reduce costs, CM is often delivered with an escalating variable-ratio schedule, first tested by Petry and colleagues (2005). As a Stage Ib behavioral therapies development project (Rounsaville et al., 2001), the primary aim of the present study was to test the use of behavioral incentives (BI) to promote regular physical activity in a residential SUD treatment setting with cocaine-dependent women. The target was physical activity, objectively defined at two levels: 30 minutes of treadmill walking at any pace, and treadmill walking at moderate intensity. Specifically, a pilot RCT compared rates of physical activity over a six-week study period in a sample of N = 17 women with cocaine dependence: n = 10 were randomized to the BI group and n = 7 were placed in the control (C) group. All participants completed a baseline assessment, attended a 45-minute health and fitness education class, and were scheduled in exercise sessions three days/week. Those randomized to BI, however, were eligible to receive incentives three days/week for meeting the target behavior(s). Follow-up assessments occurred at 3 weeks and 6 weeks post-randomization (midpoint and end of intervention), and 4 weeks post-discharge from the residential program. The primary outcome variables (percentage of sessions completed and total time spent in scheduled sessions) were used for effect size estimation, which in turn informed power analyses and sample size calculations for the design of a Stage II RCT. A significant Group effect demonstrated that the BI group spent a significantly greater number of total minutes in scheduled exercise sessions than the C group. This dissertation provided benchmark data on the utility of BI for promoting physical activity in women with cocaine dependence. These promising findings support the use of BI procedures to promote exercise compliance, which will ultimately allow scientists to better develop SUD programs that directly utilize the mental and physical health benefits of physical activity.
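For readers unfamiliar with the escalating prize-draw mechanics of Petry-style CM, the sketch below illustrates the general cost-control idea; the bowl composition, draw rule, and reset rule are illustrative assumptions, not this study's parameters:

```python
# Hypothetical prize-based CM with an escalating draw schedule, in the
# spirit of Petry et al. (2005); bowl composition, draw escalation, and
# reset rule here are illustrative assumptions, not this study's design.
import random

# 100 slips: most draws win nothing or a small prize, keeping costs low.
PRIZE_BOWL = ([("none", 0.0)] * 50 + [("small", 1.0)] * 42 +
              [("large", 20.0)] * 7 + [("jumbo", 100.0)] * 1)

def draws_for_streak(streak: int) -> int:
    """Escalation: one more draw for each consecutive completed session."""
    return streak

def total_prize_cost(sessions, seed=0):
    """Simulate a program; a missed session resets the escalation streak."""
    rng = random.Random(seed)
    streak, cost = 0, 0.0
    for completed in sessions:
        streak = streak + 1 if completed else 0  # reset keeps incentives contingent
        for _ in range(draws_for_streak(streak)):
            _, value = rng.choice(PRIZE_BOWL)
            cost += value
    return cost

# Six weeks at three sessions/week, all completed:
print(total_prize_cost([True] * 18))
```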
149

Současná podoba incentivní terapie u těhotných uživatelek návykových látek ve vybraných adiktologických zařízeních / Current state of contingency management applied on pregnant women - drug users in selected services for drug users

Meruňková, Tereza January 2015 (has links)
Background: Detailed information about the use of incentive therapy (IT) in the Czech Republic is still lacking. At the same time, drug use among women, and especially drug use during pregnancy, is an increasingly topical issue in addictology, and the public remains prejudiced toward pregnant drug users and responds to them with stereotypes. The first attempts to combine IT with this specific group were made by low-threshold facilities such as XTP Sananim in Prague and Centrum U Větrníku in Jihlava. Objective: The aim of this thesis is to map the current use of IT in the facilities that run IT-based programmes for pregnant drug users. Methodology: The data were gathered through semi-structured interviews with respondents who work in low-threshold facilities; concepts from addictology theory were used in the analysis. Results: The interviews showed that the two facilities studied each have a clearly defined IT programme and a structured way of working with their clients, but differ in how clients are rewarded. The main reason for starting to work on an IT basis was the high number of contacts with pregnant drug users, and in each case the initiative came from a single worker in the facility...
150

“Ta forme veille et mes yeux sont ouverts” : crise de fondements du poème et poétique de la contingence chez Mallarmé, Valéry et Reverdy / “Ta forme veille et mes yeux sont ouverts” : poem's foundations crisis and contingency poetics in Mallarmé, Valéry and Reverdy

Monginot, Benoît 14 December 2012 (has links)
Cette thèse essaie d’établir dans un premier temps que Mallarmé, Valéry et Reverdy accomplissent exemplairement la critique théorique des fondations romantiques (à la fois ontologiques et rhétoriques) de la littérature ; que cette critique conduit à une crise de légitimité, qui, sans donner lieu à une rupture violente avec la tradition (à la différence de ce qu’implique par exemple l'ethos dominant des avant-gardes), témoigne cependant d’une lucidité sans concession. On montre alors que cette lucidité s'inscrit dans l'écriture même des poèmes selon une poétique a-théologique et par le réinvestissement de formes (l’allégorie, l’écriture moraliste) susceptibles d'indiquer, contre la considération idolâtre d’une autonomie sans autre de l’œuvre, une transitivité rhétorique et non résolutoire de celle-ci. On explique ainsi que l'écroulement de la métaphysique romantique ouvre la voie, chez ces auteurs, à une reconnaissance de la discursivité de l'œuvre, à sa considération tout à la fois poétique et rhétorique. De cela on conclut à l’éviction d’un paradigme esthétique de la poésie et de ses implications indissociablement politiques et communicationnelles, au profit d’une reconnaissance des contingences rhétorique et factuelle du discours littéraire, celles-ci définissant, sans dogmatisme, une inquiétude humaniste. / This thesis first aims at establishing that Mallarmé, Valéry and Reverdy build up a strong theoretical criticism of Literature’s romantic foundations (which are both ontological and rhetorical); it also points out that this criticism leads to a legitimacy crisis. This crisis, though it does not express itself through a violent break from tradition (unlike the main ethos of the avant-garde), bears witness to an uncompromising sense of lucidity. We then demonstrate that this lucidity shapes the very writing of the poems through a-theological poetics and by reinvesting traditional forms (such as allegory or moralist writing) which can convey, though in a problematic way, a certain rhetorical transitivity. We thus throw light on the fact that the wrecking of romantic metaphysics makes these authors acknowledge the discursive nature of the poem, its poetical and rhetorical aspects. We finally conclude on the overcoming of both the aesthetic paradigm of literature and its political and communicational implications to the benefit of an acknowledgement of factual and rhetorical contingencies of the literary discourse. This defines, without any dogmatism, a humanistic disquiet.
