131

Effects of external contingencies on an actively caring behavior: a field test of intrinsic motivation theory

Boyce, Thomas E. 20 January 2010 (has links)
Reward programs and incentive plans are popular methods of increasing desired behaviors in applied settings. Yet opponents of "carrot and stick" interventions claim these programs are perceived as controlling and, as a result, are counterproductive to people's intrinsic motivation to emit a desired response. The current research studied intrinsic motivation theory in a community setting by combining written commitments with external rewards and manipulating the time at which the reward was delivered (either prior to or subsequent to task completion). It was found that written commitments alone had no effect on the rate at which the target response was emitted. Written commitments combined with contingent rewards increased the rate of responding during intervention, but upon withdrawal, response rates dropped significantly below baseline. Written commitments in combination with non-contingent rewards, offered in advance, increased response rates during intervention and were more effective in maintaining responding after the withdrawal of all contingencies. Additionally, the current research used the Actively Caring (AC) Model (Geller, 1991) in an attempt to predict who would be more likely to emit the AC target response. The model did not successfully predict the rates at which the target response would be emitted. The implications of this research are discussed from the perspectives of behavior analysis, intrinsic motivation theory, and equity theory. Directions for future studies of intrinsic motivation in applied settings are also offered. / Master of Science
132

Meeting Data Sharing Needs of Heterogeneous Distributed Users

Zhan, Zhiyuan 16 January 2007 (has links)
The fast growth of wireless networking and mobile computing devices has enabled us to access information from anywhere at any time. However, varying user needs and system resource constraints are two major heterogeneity factors that pose a challenge to information sharing systems. For instance, when a new information item is produced, different users may have different requirements for when the new value should become visible. The resources that each device can contribute to such information sharing applications also vary. Therefore, how to enable information sharing across computing platforms with varying resources to meet different user demands is an important problem for distributed systems research. In this thesis, we address the heterogeneity challenge faced by such systems. We assume that shared information is encapsulated in distributed objects, and we use object replication to increase system scalability and robustness, which introduces the consistency problem. Many consistency models have been proposed in recent years, but they are either too strong and do not scale well, or too weak to meet many users' requirements. We propose a Mixed Consistency (MC) model as a solution. We introduce an access-constraints-based approach to combine strong and weak consistency models. We also propose an MC protocol that combines existing implementations with minimal modifications. It is designed to tolerate crash failures and slow processes/communication links in the system. We also explore how the heterogeneity challenge can be addressed in the transport layer by developing an agile dissemination protocol. We implement our MC protocol on top of a distributed publish-subscribe middleware, Echo. Finally, we measure the performance of our MC implementation. The results of the experiments are consistent with our expectations. Based on the functionality and performance of mixed consistency protocols, we believe that this model is effective in addressing the heterogeneity of user requirements and available resources in distributed systems.
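As a rough illustration of the general idea behind per-access consistency levels (not the thesis's MC protocol itself), the sketch below tags each read or write as strong or weak: strong accesses go through a single authoritative copy, while weak accesses use a possibly stale local cache that is synchronized lazily. All class and method names here are hypothetical.

```python
from enum import Enum

class Access(Enum):
    STRONG = "strong"   # must see / publish the latest committed value
    WEAK = "weak"       # may tolerate staleness and deferred propagation

class Replica:
    """Toy replica of a shared object whose reads and writes carry a
    consistency level. Strong accesses go through a single authoritative
    copy; weak accesses use a local cache that is synchronized lazily."""

    def __init__(self, authoritative):
        self.authoritative = authoritative      # shared dict standing in for a quorum
        self.local = dict(authoritative)        # possibly stale local cache
        self.pending = {}                       # weak writes not yet propagated

    def read(self, key, level):
        if level is Access.STRONG:
            return self.authoritative.get(key)  # always the latest committed value
        return self.local.get(key)              # may be stale

    def write(self, key, value, level):
        if level is Access.STRONG:
            self.authoritative[key] = value     # committed immediately
            self.local[key] = value
        else:
            self.local[key] = value             # visible locally right away
            self.pending[key] = value           # propagated on the next sync()

    def sync(self):
        self.authoritative.update(self.pending) # push deferred weak writes
        self.pending.clear()
        self.local = dict(self.authoritative)   # pull what others committed

if __name__ == "__main__":
    shared = {"x": 0}
    a, b = Replica(shared), Replica(shared)
    a.write("x", 1, Access.STRONG)
    print(b.read("x", Access.WEAK))     # 0: b's cache has not synced yet
    print(b.read("x", Access.STRONG))   # 1: strong reads bypass the stale cache
    b.sync()
    print(b.read("x", Access.WEAK))     # 1 after propagation
```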
133

Statistická hloubka funkcionálních dat / Statistical Depth for Functional Data

Nagy, Stanislav January 2016 (has links)
Statistical data depth is a nonparametric tool applicable to multivariate datasets in an attempt to generalize quantiles to complex data such as random vectors, random functions, or distributions on manifolds and graphs. The main idea is, for a general multivariate space M, to assign to a point x ∈ M and a probability distribution P on M a number D(x; P) ∈ [0, 1] characterizing how "centrally located" x is with respect to P. A point maximizing D(·; P) is then a generalization of the median to M-valued data, and the locus of points whose depth value is greater than a certain threshold constitutes the inner depth-quantile region corresponding to P. In this work, we focus on data depth designed for infinite-dimensional spaces M and functional data. Initially, a review of depth functionals available in the literature is given. The emphasis of the exposition is put on the unification of these diverse concepts from the theoretical point of view. It is shown that most of the established depths fall into the general framework of projection-driven functionals of either integrated, or infimal type. Based on the proposed methodology, characteristics and theoretical properties of all these depths can be evaluated simultaneously. The first part of the work is devoted to the investigation of these theoretical properties,...
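For readers unfamiliar with the two families mentioned above, the integrated and infimal constructions can, in their simplest (marginal) form, be written schematically as follows; this is a generic form from the depth literature, not necessarily the thesis's exact notation:

\[
D_{\mathrm{int}}(x;P) \;=\; \int_{[0,1]} D_1\!\bigl(x(t);\,P_t\bigr)\,\mathrm{d}\mu(t),
\qquad
D_{\inf}(x;P) \;=\; \inf_{t\in[0,1]} D_1\!\bigl(x(t);\,P_t\bigr),
\]

where \(D_1\) is a finite-dimensional depth, \(P_t\) is the marginal distribution of the random function at \(t\), and \(\mu\) is a weight measure on the domain.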
134

Identifying Unsolvable Instances, Forbidden States and Irrelevant Information in Planning

Ståhlberg, Simon January 2012 (has links)
Planning is a central research area in artificial intelligence, and a lot of effort has gone into constructing more and more efficient planning algorithms. In real-world examples, many problem instances do not have a solution. Hence, there is an obvious need for methods that are capable of identifying unsolvable instances efficiently. It is not possible to efficiently identify all unsolvable instances due to the inherent high complexity of planning, but many unsolvable instances can be identified in polynomial time. We present a number of novel methods for doing this. We adapt the notion of k-consistency (a well-studied concept from constraint satisfaction) for testing unsolvability of planning instances. The idea is to decompose a given problem instance into a number of smaller instances which can be solved in polynomial time. If any of the smaller instances is unsolvable, then the original instance is unsolvable. If all the smaller instances are solvable, then it is possible to extract information which can be used to guide the search. For instance, we introduce the notion of forbidden state patterns, which are partial states that must be avoided by any solution to the problem instance. This can be viewed as the opposite of pattern databases, which give information about states that can lead to a solution. We also introduce the notion of critical sets and show how to identify them. Critical sets describe operators or values which must be used or achieved in any solution; they are a variation on the landmark concept, i.e., operators or values which must be used in every solution. With the help of critical sets we can identify superfluous operators and values. These operators and values can be removed by preprocessing the problem instance to decrease planning time.
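The soundness argument behind decomposition-based unsolvability tests can be illustrated with a toy sketch: if the task restricted (projected) to a small subset of variables already has no plan, then the original task has no plan either, because every real plan induces a plan in the projection. The SAS+-style encoding and all names below are illustrative assumptions, not the thesis's k-consistency construction.

```python
from itertools import combinations

# A toy SAS+-style task: variables with finite domains, operators given as
# (precondition, effect) pairs of partial states, and a partial-state goal.

def project(partial, vars_subset):
    """Restrict a partial state (dict var -> value) to a subset of variables."""
    return {v: val for v, val in partial.items() if v in vars_subset}

def projected_solvable(task, vars_subset):
    """Exhaustive reachability check over the abstract state space induced by vars_subset."""
    init = tuple(sorted(project(task["init"], vars_subset).items()))
    goal = project(task["goal"], vars_subset)
    frontier, seen = [init], {init}
    while frontier:
        state = dict(frontier.pop())
        if all(state.get(v) == val for v, val in goal.items()):
            return True
        for pre, eff in task["operators"]:
            pre_p, eff_p = project(pre, vars_subset), project(eff, vars_subset)
            if all(state.get(v) == val for v, val in pre_p.items()):
                succ = dict(state)
                succ.update(eff_p)
                key = tuple(sorted(succ.items()))
                if key not in seen:
                    seen.add(key)
                    frontier.append(key)
    return False

def provably_unsolvable(task, k=2):
    """Sound but incomplete test: an unsolvable k-variable projection certifies
    that the original task is unsolvable; otherwise the test is inconclusive."""
    return any(not projected_solvable(task, set(subset))
               for subset in combinations(task["variables"], k))

if __name__ == "__main__":
    task = {
        "variables": ["door", "robot"],
        "init": {"door": "locked", "robot": "outside"},
        "goal": {"robot": "inside"},
        # the only operator needs the door open, but no operator opens it
        "operators": [({"door": "open", "robot": "outside"}, {"robot": "inside"})],
    }
    print(provably_unsolvable(task))  # True: a projection has no plan
```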
135

Evaluation of the effect of stabilization time in eventually consistent systems

Svan, Mac January 2010 (has links)
The effect of using the eventual consistency model is evaluated and compared with immediate consistency by increasing the stabilization time in both models and using the immediately consistent system as a baseline. The immediately consistent system performs better when information and decisions are replicated sufficiently fast throughout the system. As the stabilization time increases, the effectiveness of eventual consistency emerges, which is most obvious when time constraints make it difficult to propagate information and decisions. Using a simulation to extract data for evaluation, this research verifies that as the stabilization time between different parts of a system increases, eventual consistency will always outperform the immediately consistent system. This statement is valid in all situations where consistency models are useful. Of secondary importance in the research, adding more decision layers to the eventual consistency model increases the performance output significantly, as swift and less well-calculated decisions can be thwarted and corrected using the second decision layer.
136

Functional Principal Component Analysis for Discretely Observed Functional Data and Sparse Fisher’s Discriminant Analysis with Thresholded Linear Constraints

Wang, Jing 01 December 2016 (has links)
We propose a new method to perform functional principal component analysis (FPCA) for discretely observed functional data by solving successive optimization problems. The new framework can be applied to both regularly and irregularly observed data, and to both dense and sparse data. Our method does not require estimates of the individual sample functions or the covariance functions. Hence, it can be used to analyze functional data with multidimensional arguments (e.g. random surfaces). Furthermore, it can be applied to many processes and models with complicated or nonsmooth covariance functions. In our method, smoothness of eigenfunctions is controlled by directly imposing roughness penalties on eigenfunctions, which makes it more efficient and flexible to tune the smoothness. Efficient algorithms for solving the successive optimization problems are proposed. We provide the existence and characterization of the solutions to the successive optimization problems. The consistency of our method is also proved. Through simulations, we demonstrate that our method performs well in the cases with smooth sample curves, with discontinuous sample curves and nonsmooth covariance, and with sample functions having two-dimensional arguments (random surfaces), respectively. We apply our method to classification problems of retinal pigment epithelial cells in eyes of mice and to longitudinal CD4 count data. In the second part of this dissertation, we propose a sparse Fisher's discriminant analysis method with thresholded linear constraints. Various regularized linear discriminant analysis (LDA) methods have been proposed to address the problems of LDA in high-dimensional settings. Asymptotic optimality has been established for some of these methods when there are only two classes. A difficulty in the asymptotic study of multiclass classification is that for two-class classification the boundary is a hyperplane and an explicit formula for the classification error exists, whereas in the multiclass case the boundary is usually complicated and no explicit formula for the error generally exists. Another difficulty in proving the asymptotic consistency and optimality of sparse Fisher's discriminant analysis is that the covariance matrix is involved in the constraints of the optimization problems for higher-order components, and it is not easy to estimate a general high-dimensional covariance matrix. Thus, we propose a sparse Fisher's discriminant analysis method which avoids the estimation of the covariance matrix, and we provide asymptotic consistency results and the corresponding convergence rates for all components. To prove the asymptotic optimality, we provide an asymptotic upper bound for a general linear classification rule in the multiclass case, which is applied to our method to obtain the asymptotic optimality and the corresponding convergence rate. In the special case of two classes, our method achieves the same or better convergence rates compared to the existing method. The proposed method is applied to multivariate functional data with wavelet transformations.
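One generic way to write the kind of successive, roughness-penalized eigenfunction problem described above is the following schematic form (a classical penalized FPCA criterion in the spirit of the description; the dissertation's actual empirical criterion, which works directly with the discrete observations and avoids covariance estimation, may differ in detail):

\[
\hat{\phi}_k \;=\; \arg\max_{\phi}\;\;
\widehat{\operatorname{Var}}\!\left(\langle X,\phi\rangle\right)
\;-\;\lambda\!\int \!\left(\phi''(t)\right)^{2}\,dt
\qquad \text{subject to } \|\phi\|=1,\ \ \langle\phi,\hat{\phi}_j\rangle=0 \ \ (j<k),
\]

where the roughness penalty \(\lambda\int(\phi'')^2\) directly controls the smoothness of each estimated eigenfunction, and orthogonality to previously estimated eigenfunctions makes the problems successive.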
137

Collaborative Editing of Graphical Network using Eventual Consistency

Hedkvist, Pierre January 2019 (has links)
This thesis compares different approaches to creating a collaborative editing application, such as operational transformation (OT), conflict-free replicated data types (CRDTs), and locking. After a comparison of these methods, an implementation based on CRDTs was carried out. The collaborative graphical network was implemented such that consistency is guaranteed. The implementation uses the 2P2P-Graph, which was extended to support moving nodes, and uses the client-server communication model. The implementation was evaluated through a time-complexity and a space-complexity analysis. The results of the thesis include the comparison of the different methods and the evaluation of the extended 2P2P-Graph.
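For reference, below is a minimal sketch of a 2P2P-Graph CRDT (two two-phase sets, one for vertices and one for edges), with node positions kept in per-node last-writer-wins registers as one plausible way to support moving nodes. This is an assumption-laden illustration, not the thesis's extended design.

```python
class TwoPTwoPGraph:
    """Minimal sketch of a 2P2P-Graph CRDT with an illustrative, hypothetical
    extension for node positions (per-node last-writer-wins registers)."""

    def __init__(self):
        self.va, self.vr = set(), set()   # vertex add / remove (tombstone) sets
        self.ea, self.er = set(), set()   # edge add / remove (tombstone) sets
        self.pos = {}                     # node -> (timestamp, replica_id, (x, y))

    def has_vertex(self, v):
        return v in self.va and v not in self.vr

    def has_edge(self, e):
        u, v = e
        return (self.has_vertex(u) and self.has_vertex(v)
                and e in self.ea and e not in self.er)

    def add_vertex(self, v):
        self.va.add(v)

    def remove_vertex(self, v):
        # precondition: no live edge uses v (as in the standard 2P2P-Graph)
        assert not any(v in e for e in self.ea - self.er)
        self.vr.add(v)

    def add_edge(self, u, v):
        assert self.has_vertex(u) and self.has_vertex(v)
        self.ea.add((u, v))

    def remove_edge(self, u, v):
        self.er.add((u, v))

    def move_vertex(self, v, xy, timestamp, replica_id):
        # last-writer-wins: keep the move with the greater (timestamp, replica_id)
        current = self.pos.get(v)
        if current is None or (timestamp, replica_id) > current[:2]:
            self.pos[v] = (timestamp, replica_id, xy)

    def merge(self, other):
        # state-based merge: union the sets, LWW-merge the positions
        self.va |= other.va; self.vr |= other.vr
        self.ea |= other.ea; self.er |= other.er
        for v, (t, r, xy) in other.pos.items():
            self.move_vertex(v, xy, t, r)
```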
138

Consistency in Clinical Preceptor Field Training for Sonography Students

Daniels, Cathy Herring 01 January 2016 (has links)
Consistency in clinical preceptor training for sonography students is important in assuring equity in sonography student evaluation. Review of a local community college sonography program revealed a gap between the expected roles and responsibilities of clinical preceptorship and what was actually done in the clinical setting. The purpose of this project study was to explore perceptions of graduates and preceptors regarding what constituted best practices in the evaluation of sonography students in the clinical setting. Knowles's theory of active learning provided a framework for understanding the student-preceptor relationship in the evaluation process. Research questions focused on sonography graduates' and clinical preceptors' perceptions of important practices for ensuring consistency and equity in clinical evaluation. A case study design composed of face-to-face interviews with 5 graduates and 5 preceptors at the study community college was used to address the research questions. Sonography graduates were at least 2 years post-graduation; preceptors had at least 1 year with the program and at least 2 years of clinical experience. Interview data were transcribed verbatim and open coded to identify common themes. Four themes were identified: similar definitions of consistency in evaluation, importance of immediate feedback after skills performance, potential favoritism in clinical evaluation, and the need to enforce program policies. Findings were used to design a clinical preceptor training workshop that could provide a better understanding of effective measures to attain consistency and equity in the evaluation process, fostering positive social change by helping prepare sonography students as competent practitioners to address health care needs locally and globally.
139

The normativity of rationality : a defense

Levy, Yair January 2013 (has links)
Rationality is very widely regarded as a normative notion, which underwrites various everyday normative practices of evaluation, criticism, and advice. When some agent behaves irrationally, she is likely to be critically evaluated, and advised to change her ways. Such practices seem to presuppose that agents ought to behave as rationality requires. But some philosophers question this thought. They argue that at least some requirements of rationality cannot be ones that we ought to comply with. This thesis aims to dispel such sceptical doubts over the normativity of rationality; it defends the idea that the requirements of rationality are indeed normative, in the sense that if one is rationally required to F, one ought to F because rationality requires one to F. The normativity of three requirements of practical rationality in particular is the main target for defense in the following pages. They are:

[ENKRASIA] Rationality requires of A that, if A believes she ought to F, then A intends to F.

[MEANS-ENDS] Rationality requires of A that, if A intends to E, and believes that she will not E unless she intends to M, then A intends to M.

[INTENTION CONSISTENCY] Rationality requires of A that, if A intends to F, and believes that she cannot both F and G, then A does not intend to G.

After presenting some of the grounds for scepticism about the normativity of these three requirements in chapter 1, the thesis goes on in chapters 2 & 3 to critically examine several different accounts of why rationality is normative, concluding that they are all unsuccessful; a novel account is called for. An account of this kind is offered over the course of the two following chapters, 4 & 5. Each requirement is shown to be constituted by a certain kind of ought, while at the same time corresponding to a rule of correct reasoning. Chapter 6 is devoted to answering an objection to that account, according to which the rules of reasoning are given by permissions rather than requirements. Chapter 7 offers a digression into a related issue in action theory: it unfavorably explores the idea that reasoning is a factor that can be used to analyse not only rational action, but also intentional action more broadly; the chapter suggests that treating intentional action as irreducible is the more fruitful approach. Finally, chapter 8 summarizes the main conclusions of the thesis and comments on some remaining questions.
140

Mapping parallel graph algorithms to throughput-oriented architectures

McLaughlin, Adam 07 January 2016 (has links)
The stagnant performance of single core processors, the increasing size of data sets, and the variety of structure in information have made the domain of parallel and high-performance computing especially crucial. Graphics Processing Units (GPUs) have recently become an exciting alternative to traditional CPU architectures for applications in this domain. Although GPUs are designed for rendering graphics, research has found that the GPU architecture is well-suited to algorithms that search and analyze unstructured, graph-based data, offering up to an order of magnitude greater memory bandwidth than their CPU counterparts. This thesis focuses on GPU graph analysis from the perspective that algorithms should be efficient on as many classes of graphs as possible, rather than being specialized to a specific class, such as social networks or road networks. Using betweenness centrality, a popular analytic used to find prominent entities of a network, as a motivating example, we show how parallelism, distributed computing, hybrid and on-line algorithms, and dynamic algorithms can all contribute to substantial improvements in the performance and energy efficiency of these computations. We further generalize this approach and provide an abstraction that can be applied to a whole class of graph algorithms that require many simultaneous breadth-first searches. Finally, to show that our findings can be applied in real-world scenarios, we apply these techniques to the problem of verifying that a multiprocessor complies with its memory consistency model.
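As a concrete reference for what "many simultaneous breadth-first searches" means in this setting, below is a sequential sketch of Brandes' betweenness-centrality algorithm for unweighted graphs (one BFS plus a dependency-accumulation pass per source). The thesis's GPU implementations parallelize and optimize this pattern and differ from this plain Python version.

```python
from collections import deque

def betweenness_centrality(graph):
    """Brandes' algorithm for unweighted graphs.
    graph: dict mapping each node to an iterable of its neighbours."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # single-source shortest paths via BFS
        sigma = {v: 0 for v in graph}; sigma[s] = 1      # number of shortest paths
        dist = {v: -1 for v in graph}; dist[s] = 0
        preds = {v: [] for v in graph}
        order = []                                       # nodes in non-decreasing distance
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if dist[w] < 0:                          # first visit
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:               # shortest path reaches w via v
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # dependency accumulation in reverse BFS order
        delta = {v: 0.0 for v in graph}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

if __name__ == "__main__":
    # a path graph a-b-c-d; interior nodes b and c score highest
    g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(betweenness_centrality(g))  # each unordered pair is counted twice (undirected)
```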
