331

Examining the Effects of Site-Selection Criteria for Evaluating the Effectiveness of Traffic Safety Improvement Countermeasures

Kuo, Pei-Fen May 2012 (has links)
The before-after study is still the most popular method used by traffic engineers and transportation safety analysts for evaluating the effects of an intervention. However, this kind of study may be plagued by important methodological limitations that can significantly alter the study outcome, including the regression-to-the-mean (RTM) and site-selection effects. So far, most of the research on these biases has focused on the RTM. Hence, the primary objective of this study is to present a method that can reduce site-selection bias when an entry criterion is used in before-after studies for continuous data (e.g., speed, reaction times) and count data (e.g., number of crashes, number of fatalities). The proposed method provides a way to adjust the Naive estimator using the sample data alone, without relying on data collected from a control group, since finding enough appropriate sites for a control group is much harder in traffic-safety analyses. The proposed method, referred to as the Adjusted method, was compared to methods commonly used in before-after studies. The results showed that, among all methods evaluated, the Naive method is the most severely affected by selection bias. Using the control-group (CG) method, the ANCOVA method, or the empirical Bayes method based on a control group (EBCG) can eliminate site-selection bias, as long as the characteristics of the control group are exactly the same as those of the treatment group. However, control-group data with the same characteristics, drawn from a truncated distribution or sample, may not be available in practice. Moreover, the site-selection bias generated by using a dissimilar control group may be even larger than with the Naive method. The Adjusted method can partially eliminate site-selection bias even when the estimators of the mean, variance, and correlation coefficient of the truncated normal distribution are biased or not known with certainty. In addition, three actual datasets were used to evaluate the accuracy of the Adjusted method in estimating site-selection biases for various types of data with different mean and sample-size values.
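The selection problem the abstract describes is easy to see with truncated-normal moments: sites chosen because their before-period value exceeds an entry criterion will, on average, "improve" even with no treatment. Below is a minimal Python sketch of that effect and of a sample-based adjustment in the spirit of the abstract. The threshold, the simulated speeds, and the use of known population moments (rather than the biased sample estimates the thesis works with) are all assumptions for illustration; this is not the thesis's actual estimator.

```python
import numpy as np
from scipy.stats import norm

def truncated_normal_mean(mu, sigma, c):
    """Mean of X ~ N(mu, sigma^2) conditional on X > c (the selection threshold)."""
    alpha = (c - mu) / sigma
    # Inverse Mills ratio: phi(alpha) / (1 - Phi(alpha))
    return mu + sigma * norm.pdf(alpha) / norm.sf(alpha)

rng = np.random.default_rng(42)
mu, sigma, c = 50.0, 10.0, 60.0   # hypothetical site speeds; entry criterion: before > 60

# Simulate paired before/after observations at many sites with NO true treatment effect.
before = rng.normal(mu, sigma, 100_000)
after = rng.normal(mu, sigma, 100_000)   # same distribution: the true effect is zero
selected = before > c                    # sites picked because they looked bad

naive = (after[selected] - before[selected]).mean()
print(f"Naive estimate:    {naive:+.2f}  (spurious 'improvement' from RTM/selection)")

# Adjusted estimate: add back the expected selection inflation of the before mean,
# computed from the distribution itself rather than from a control group.
inflation = truncated_normal_mean(mu, sigma, c) - mu
adjusted = naive + inflation
print(f"Adjusted estimate: {adjusted:+.2f}  (close to the true effect of zero)")
```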
332

Biobanks and informed consent : An anthropological contribution to medical ethics

Hoeyer, Klaus January 2004 (has links)
Background: 1985 saw the beginnings of a population-based biobank in Västerbotten County, Sweden. In 1999, a start-up genomics company, UmanGenomics, obtained 'all commercial rights' to the biobank. The company introduced an ethics policy, focusing on public oversight and informed consent, which was well received in prestigious journals. Aims: To explore how social anthropology can aid understanding of the challenges posed by the new role of the biobank in Västerbotten, and thus complement more established traditions in the field of medical ethics. An anthropological study of the ethics policy was carried out. Theoretical perspective: Inspired by the anthropology of policy and by social-science perspectives on ethics and morality, the policy was studied at three analytical levels: policymakers (who formulate the policy), policy workers (who implement the policy, primarily nurses who obtain informed consent), and the target group (for whom and on whom the policy is supposed to work: the potential donors to the biobank). Methods: Policymakers, nurses, and potential donors were interviewed, donations were observed, and official documents were analysed in order to mirror the moral problematizations made at the three levels in one another and to study the practical implications of the policy. To strengthen the reliability of the findings, two surveys were conducted: one among the general population and one among donors. Results: The qualitative studies show that policymakers distinguish between blood and data differently from potential donors. Informed consent seems more important to policymakers than to potential donors, who are more concerned about political implications at a societal level. Among respondents to the general-population survey, a majority (66.8%) accepted surrogate decisions by Research Ethics Committees; a minority (4%) stated informed consent as a principal concern; and genetic research based on biobank material was generally accepted (71%). Among respondents to the donor survey, 65% knew they had consented to donate a blood sample, 32% knew they could withdraw their consent, 6% were dissatisfied with the information they had received, and 85% accepted surrogate decisions by Research Ethics Committees. Discussion: The ethics policy constitutes a particular naming and framing of moral problems in biobank-based research, one which overemphasises the need for informed consent and underemphasises other concerns of potential donors. This embodies a political transformation in which access to stored blood and medical information is negotiated in ethical terms, while it also has unacknowledged political implications. In particular, the relations between authorities and citizens in the Swedish welfare state are apparently transforming: from mutual obligation to individual contracts. Conclusion: Anthropology contributes to medical ethics an increased awareness of the practical implications of particular research-ethics initiatives. This awareness promotes appreciation of the political implications of ethics policies and raises new issues for further consideration.
333

Nursing students' experiences of supervision during clinical placement (VFU): an empirical study

Petrušić, Minnie, Åberg, Christèl January 2009 (has links)
Background: The education for becoming a registered nurse in Sweden includes compulsory clinical practice. The clinical education is an essential part of the students' personal development. According to the regulations for nurses, a registered nurse has a responsibility to supervise nursing students during their clinical placement. Aim: The aim of the study was to describe nursing students' experiences of supervision during clinical education. Method: The study was an empirical examination based on interviews with seven nursing students. Results: The results showed that nursing students wanted to be seen and treated as colleagues, but not used as labour. Students believed that a supervisor needed knowledge of how students learn and how to teach them about the coming role as a nurse. The major themes that appeared were: having time for the student, promoting good interaction, and respecting the student's situation.
334

Empirical Evaluations of Semantic Aspects in Software Development

Blom, Martin January 2006 (has links)
This thesis presents empirical research in the field of software development, with a focus on handling semantic aspects. There is a general lack of empirical data in the field of software development, which makes it difficult for industry to choose an appropriate method for its particular needs and difficult to convey academic results to the industrial world. This thesis tries to remedy the problem by presenting a number of empirical evaluations of some common approaches to semantics handling. The evaluations produced some interesting results, but their main contribution is an addition to the body of knowledge on how to perform empirical evaluations in software development. The evaluations presented in this thesis include a between-groups controlled experiment, industrial case studies, and a full factorial design controlled experiment. The factorial design seems the most promising approach when the number of factors that need to be controlled is high and the number of available test subjects is low: a factorial design can evaluate more than one factor at a time and hence gauge the effects of different factors on the output, as illustrated in the sketch below. Another contribution of the thesis is a method for handling semantic aspects in an industrial setting. A background investigation concludes that there seems to be a gap between what academia proposes and how industry handles semantics in the development process; the proposed method aims at bridging this gap. It is based on academic results but has reduced formalism to better suit industrial needs, and it is applicable in an industrial setting without interfering too much with the normal way of working, yet provides important benefits. This method is evaluated in the empirical studies along with other methods for handling semantics. In the area of semantics handling, further contributions of the thesis include a taxonomy of semantic-handling methods as well as an improved understanding of the relation between semantic errors and the concept of contracts as a means of avoiding and handling these errors.
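To make the factorial-design point concrete: in a 2^k design, every factor's main effect and every interaction can be estimated from the same set of runs, which is why it economises on test subjects. The Python sketch below works through a hypothetical 2x2 example; the factors, levels, and response values are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical 2x2 full factorial: factor A = development method, factor B =
# developer experience; response y = defects found per hour. Coded levels -1/+1;
# each entry is the mean response for one cell of the design.
A = np.array([-1, -1, +1, +1])
B = np.array([-1, +1, -1, +1])
y = np.array([3.1, 4.0, 5.2, 7.3])   # made-up cell means for illustration

# In a 2^k design, a factor's main effect is the difference between the mean
# response at its high and low levels; the interaction uses the product A*B.
effect_A = y[A == +1].mean() - y[A == -1].mean()
effect_B = y[B == +1].mean() - y[B == -1].mean()
effect_AB = y[A * B == +1].mean() - y[A * B == -1].mean()

print(f"Main effect A (method):     {effect_A:+.2f}")
print(f"Main effect B (experience): {effect_B:+.2f}")
print(f"Interaction A x B:          {effect_AB:+.2f}")
```

All three estimates come from the same four cells, whereas one-factor-at-a-time experimentation would need separate groups per factor and could not detect the interaction at all.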
335

A Manifestation of Model-Code Duality: Facilitating the Representation of State Machines in the Umple Model-Oriented Programming Language

Badreldin, Omar 18 April 2012 (has links)
This thesis presents research to build and evaluate the embedding of a textual form of state machines into high-level programming languages. The work entailed adding state machine syntax and code generation to the Umple model-oriented programming technology. The added concepts include states, transitions, actions, and composite states as found in the Unified Modeling Language (UML). This approach allows software developers to take advantage of the modeling abstractions in their textual environments, without sacrificing the added value of visual modeling. Our efforts in developing state machines in Umple followed a test-driven approach to ensure high quality and usability of the technology. We have also developed a syntax-directed editor for Umple, similar to those available for other high-level programming languages. We conducted a grounded theory study of Umple users and used the findings iteratively to guide our experimental development. Finally, we conducted a controlled experiment to evaluate the effectiveness of our approach. By enhancing the code to be almost as expressive as the model, we further support model-code duality: the notion that model and code are two faces of the same coin. Systems can and should be equally well specified textually and diagrammatically; such duality will benefit modelers and coders alike. Our work suggests that code enhanced with state machine modeling abstractions is semantically equivalent to visual state machine models. The flow of the thesis is as follows: the research hypothesis and questions are presented in "Chapter 1: Introduction", and the background is explored in "Chapter 2: Background". "Chapter 3: Syntax and semantics of simple state machines" and "Chapter 4: Syntax and semantics of composite state machines" investigate simple and composite state machines in Umple, respectively. "Chapter 5: Implementation of composite state machines" presents the approach we adopt for implementing composite state machines, which avoids an explosion in the amount of generated code. From this point on, the thesis presents empirical work: a grounded theory study in "Chapter 6: A grounded theory study of Umple", followed by a controlled experiment in "Chapter 7: Experimentation". These two chapters constitute our validation and evaluation of the Umple research. Related and future work is presented in "Chapter 8: Related work".
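To illustrate what a state machine embedded directly in program text buys the developer, here is a small Python sketch of the idea, using a hypothetical garage-door example. This is not Umple syntax: in Umple the transition table below would be declarative model text inside a class, from which equivalent (and visualisable) code is generated.

```python
# A minimal sketch of the "state machine embedded in code" idea: states and
# transitions are first-class, readable structure rather than scattered if/else.

class GarageDoor:
    # (current state, event) -> next state; the textual analogue of a UML diagram.
    TRANSITIONS = {
        ("Closed", "pressButton"): "Opening",
        ("Opening", "complete"): "Open",
        ("Open", "pressButton"): "Closing",
        ("Closing", "complete"): "Closed",
    }

    def __init__(self):
        self.state = "Closed"

    def fire(self, event: str) -> None:
        """Take the transition if one is defined; ignore the event otherwise."""
        next_state = self.TRANSITIONS.get((self.state, event))
        if next_state is not None:
            self.state = next_state

door = GarageDoor()
door.fire("pressButton")
door.fire("complete")
print(door.state)  # -> "Open"
```

Because the transition table is data, the same text can drive execution and be rendered as a diagram, which is the model-code duality the thesis argues for.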
336

Manufacturing Strategy, Capabilities and Performance

Hallgren, Mattias January 2007 (has links)
This dissertation addresses the topic of manufacturing strategy, especially the manufacturing capabilities and operational performance of manufacturing plants. Manufacturing strategy research aims at providing a structured decision-making approach for improving the economics of manufacturing and making companies more competitive. The overall objective of this thesis is to investigate how manufacturing companies use different manufacturing practices, or bundles of practices, to develop certain sets of capabilities, with the ultimate goal of supporting the market requirements. The thesis aims to increase the understanding of the role of operations management and its immediate impact on manufacturing performance. Following the overall research objective, three areas are identified as being of particular interest: (i) the relationships among different dimensions of operational performance; (ii) the way different performance dimensions are affected by manufacturing practices or bundles of practices; and (iii) whether there are contingencies that may help explain the relationships between dimensions of manufacturing capabilities, or the effects of manufacturing practices or bundles of practices on operational performance. The empirical elements of this thesis use data from the High Performance Manufacturing (HPM) project, an international study of manufacturing plants involving seven countries and three industries. The research contributes several insights to the research area of manufacturing strategy and to practitioners in manufacturing operations. The thesis develops measurements for, and tests the effects of, several manufacturing practices on operational performance; the results are intended to provide guidance for decision making in manufacturing companies. The most prominent implication for researchers is the manifestation of the customer order decoupling point as an important contingency variable to consider when studying manufacturing operations.
337

Uncertainty and Information Processing

Frost, Robert E., III 01 December 2011 (has links)
The purpose of these two studies was to examine two factors that may influence the effects of uncertainty on information processing: the positioning of uncertainty relative to a target of judgment, and the degree to which uncertainty signals active goal conflict. In the first study, 145 participants (mean age 19.51) were induced with uncertainty either before or after receiving information about a target accused of illegal behavior. The results demonstrated that uncertainty before the information produced higher guilt judgments of the target and uncertainty after the information produced lower guilt judgments, but only in a subset of conditions. The second study, with 121 participants (mean age 19.58), primed participants with one of two different goals, induced an uncertainty threat that either was or was not relevant to the primed goal, and asked participants to make judgments based on information about the target, as in Study 1. The results revealed that for women, but not for men, the uncertainty threat produced stronger guilt judgments when the uncertainty was relevant to the primed goal. Together, these results indicate that both the positioning and the goal relevance of uncertainty may affect its influence on information processing.
338

Design of an Aging Estimation Block for a Battery Management System (BMS)

Khalid, Areeb January 2013 (has links)
No description available.
339

On the Maintenance Costs of Formal Software Requirements Specification Written in the Software Cost Reduction and in the Real-time Unified Modeling Language Notations

Kwan, Irwin January 2005 (has links)
A formal specification language used during the requirements phase can reduce errors and rework, but formal specifications are regarded as expensive to maintain, discouraging their adoption. This work presents a single-subject experiment that explores the costs of modifying specifications written in two different languages: a tabular notation, Software Cost Reduction (SCR), and a state-of-the-practice notation, the Real-time Unified Modeling Language (UML). The study records the person-hours required to write each specification, the number of defects made during each specification effort, and the time spent repairing these defects. Two different problems, a Bidirectional Formatter (BDF) and a Bicycle Computer (BC), are specified in order to balance the learning effect of specifying the same problem twice with different specification languages. During the experiment, an updated feature for each problem is sent to the subject, and each specification is modified to reflect the changes.

The results show that the costs of modifying a specification are highly dependent on both the problem and the language used. There is no evidence that a tabular notation is easier to modify than a state-of-the-practice notation.

A side effect of the experiment indicates a strong learning effect, independent of the language: in the BDF problem, the second specification effort required more time but resulted in a better-quality specification than the first; in the BC problem, the second effort required less time and resulted in a specification of the same quality as the first.

This work also demonstrates that single-subject experiments can add important information to the growing body of empirical data about the use of formal requirements specifications in software development.
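For readers unfamiliar with the tabular style being compared: an SCR specification expresses behaviour as tables, such as mode-transition tables mapping a current mode and a triggering event (written @T(condition) in SCR, meaning "condition becomes true") to a new mode. The Python sketch below mimics that shape for a bicycle computer with invented modes and events; it illustrates the tabular form only and is not the specification used in the study.

```python
# Invented modes and events for a bicycle computer, in the spirit of an SCR
# mode-transition table; not the study's actual SCR specification.
MODE_TRANSITIONS = {
    ("Idle",         "@T(wheel_pulse)"): "Riding",
    ("Riding",       "@T(timeout)"):     "Idle",
    ("Riding",       "@T(button)"):      "ShowDistance",
    ("ShowDistance", "@T(button)"):      "Riding",
}

def next_mode(mode: str, event: str) -> str:
    """Return the new mode, or stay put if the table has no entry. Checking
    such tables for completeness is the kind of analysis the tabular form
    makes mechanical."""
    return MODE_TRANSITIONS.get((mode, event), mode)

print(next_mode("Idle", "@T(wheel_pulse)"))  # -> "Riding"
```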
340

Bootstrap and Empirical Likelihood-based Semi-parametric Inference for the Difference between Two Partial AUCs

Huang, Xin 17 July 2008 (has links)
With new tests being developed and marketed, the comparison of the diagnostic accuracy of two continuous-scale diagnostic tests is of great importance. Comparing the partial areas under the receiver operating characteristic curves (pAUC) is an effective method for evaluating the accuracy of two diagnostic tests. In this thesis, we study semi-parametric inference for the difference between two pAUCs. A normal approximation for the distribution of the difference between two pAUCs is derived. The empirical likelihood ratio for the difference between two pAUCs is defined, and its asymptotic distribution is shown to be a scaled chi-square distribution. Bootstrap- and empirical-likelihood-based inferential methods for the difference are proposed, and five confidence intervals for the difference between two pAUCs are constructed. Simulation studies are conducted to compare the finite-sample performance of these intervals, and a real example is used as an application of our recommended intervals.
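As a rough illustration of the quantities involved, the Python sketch below computes an empirical pAUC over a restricted false-positive range and a percentile-bootstrap confidence interval for the difference between two tests. The score distributions, the FPR cut-off of 0.2, and the unpaired resampling are all assumptions made for the example; the thesis's normal-approximation and empirical-likelihood intervals are more refined than this.

```python
import numpy as np

def pauc(scores_pos, scores_neg, fpr_hi=0.2, grid=200):
    """Empirical partial AUC over false-positive rates [0, fpr_hi],
    estimated by sweeping decision thresholds."""
    fprs = np.linspace(0.0, fpr_hi, grid)
    # The threshold achieving FPR f is the (1 - f) quantile of negative scores.
    thresholds = np.quantile(scores_neg, 1.0 - fprs)
    tprs = (scores_pos[:, None] > thresholds[None, :]).mean(axis=0)
    return np.trapz(tprs, fprs)

rng = np.random.default_rng(0)
# Hypothetical diagnostic scores: diseased vs. healthy subjects under two tests.
pos1, neg1 = rng.normal(1.2, 1, 150), rng.normal(0, 1, 150)
pos2, neg2 = rng.normal(0.8, 1, 150), rng.normal(0, 1, 150)

diff = pauc(pos1, neg1) - pauc(pos2, neg2)

def resample(x):
    return rng.choice(x, size=len(x), replace=True)

# Percentile bootstrap for the pAUC difference, resampling within each group.
boots = [pauc(resample(pos1), resample(neg1)) - pauc(resample(pos2), resample(neg2))
         for _ in range(2000)]
lo, hi = np.quantile(boots, [0.025, 0.975])
print(f"pAUC difference: {diff:.4f}, 95% bootstrap CI: [{lo:.4f}, {hi:.4f}]")
```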
