181

The Effectiveness of Quality Efforts in the Portuguese Business Culture: An Empirical Investigation

Correia, Elisabete, Lisboa, João, Yasin, Mahmoud 01 June 2003 (has links)
This study empirically examines the impact of quality effort orientation on the financial performance of certified Portuguese firms. The results of factor analysis revealed four quality efforts orientation factors. The results of cluster analysis revealed the existence of three distinct groups of firms with regard to quality efforts orientation and performance. The analysis of variance results revealed that firms with a customer-focused quality efforts orientation tend to outperform firms utilising other quality efforts orientations with regard to net profit after taxes.
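The pipeline this abstract describes (factor analysis, then clustering, then analysis of variance) can be illustrated with a minimal sketch. The data are synthetic and the use of scikit-learn/SciPy is an assumption for illustration, not the authors' actual implementation.

```python
# Illustrative sketch only: synthetic data, not the study's survey instrument.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
survey_items = rng.normal(size=(120, 12))   # hypothetical firms x quality-effort items
net_profit = rng.normal(size=120)           # hypothetical performance measure

# 1. Extract four quality-effort orientation factors.
factor_scores = FactorAnalysis(n_components=4, random_state=0).fit_transform(survey_items)

# 2. Cluster firms into three groups on their factor scores.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factor_scores)

# 3. One-way ANOVA of performance across the three clusters.
f_stat, p_value = f_oneway(*[net_profit[clusters == k] for k in range(3)])
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```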
182

Gully Morphology, Hillslope Erosion, and Precipitation Characteristics in the Appalachian Valley and Ridge Province, Southeastern USA

Luffman, Ingrid E., Nandi, Arpita, Spiegel, Tim 01 October 2015 (has links)
This study investigates gully erosion on an east Tennessee hillslope in a humid subtropical climate. The study area is deeply gullied in Ultisols (Acrisol, according to the World Reference Base for Soil), with thirty years of undisturbed erosional history and no efforts to correct or halt the erosion. The objectives are (1) to examine how different gully morphologies (channel, sidewall, and interfluve) behave in response to precipitation-driven erosion, and (2) to identify an appropriate temporal scale at which precipitation-driven erosion can be measured to improve soil loss prediction. Precipitation parameters (total accumulation, duration, average intensity, maximum intensity) extracted from data collected at an on-site weather station were statistically correlated with erosion data. Erosion data were collected from erosion pins installed in four gully systems at 78 locations spanning three different morphological settings: interfluves, channels, and sidewalls. Kruskal-Wallis non-parametric tests and Mann-Whitney U-tests indicated that different morphological settings within the gully system responded differently to precipitation (p<0.00). For channels and sidewalls, regression models relating erosion and precipitation parameters retained antecedent precipitation and precipitation accumulation or duration (R2=0.50, p<0.00 for channels; R2=0.28, p<0.00 for sidewalls), but precipitation intensity variables were not retained in the models. For interfluves, less than 20% of the variability in erosion data could be explained by precipitation parameters. Precipitation duration and accumulation (including antecedent precipitation accumulation) were more important than precipitation intensity in initiating and propagating erosion in this geomorphic and climatic setting, but other factors including mass wasting and eolian erosion are likely contributors to erosion. High correlation coefficients between aggregate precipitation parameters and erosion indicate that a suitable temporal scale to relate precipitation to soil erosion is the synoptic time-scale. This scale captures natural precipitation cycles and corresponding measurable soil erosion.
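As a rough illustration of the statistical workflow described here (Kruskal-Wallis and Mann-Whitney U comparisons across morphological settings, then regression of erosion on precipitation parameters), the sketch below uses invented column names and synthetic data; it is not the study's dataset or code.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal, mannwhitneyu
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "erosion_mm": rng.normal(2, 1, 234),
    "setting": np.repeat(["channel", "sidewall", "interfluve"], 78),
    "precip_accum_mm": rng.gamma(2, 10, 234),
    "antecedent_mm": rng.gamma(2, 5, 234),
})

# Do the three morphological settings respond differently?
by_setting = [g["erosion_mm"].to_numpy() for _, g in df.groupby("setting")]
print(kruskal(*by_setting))
print(mannwhitneyu(by_setting[0], by_setting[1]))  # one example pairwise test

# Regression of channel erosion on accumulation and antecedent precipitation.
channel = df[df["setting"] == "channel"]
X = sm.add_constant(channel[["precip_accum_mm", "antecedent_mm"]])
fit = sm.OLS(channel["erosion_mm"], X).fit()
print(fit.rsquared, fit.pvalues)
```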
183

New Strategies of Antifungal Therapy in Hematopoietic Stem Cell Transplant Recipients and Patients With Hematological Malignancies

Leather, Helen, Wingard, John R. 01 September 2006 (has links)
Invasive fungal infections (IFIs) are associated with considerable morbidity and mortality among high-risk individuals. Outcomes for IFI historically have been suboptimal and associated with a high mortality rate; hence, global prophylaxis strategies have been applied to at-risk populations. Among certain populations, fluconazole prophylaxis has reduced systemic and superficial infections caused by Candida species. Newer azoles are currently being evaluated as prophylaxis and have the potential to provide protection against mould pathogens that are more troublesome to treat once they occur. Global prophylaxis strategies have the shortcoming of subjecting many patients to therapy they ultimately will not need. Targeted prophylaxis has the advantage of treating only patients at highest risk, identified using some parameter of greater host susceptibility. Prophylaxis strategies are most suitable in patients at the highest risk for IFI. For patient groups whose risk is somewhat lower, or when suspicion of IFI arises in patients already receiving prophylaxis, empirical antifungal therapy is often employed following a predefined period of fever. Again, this approach subjects many non-infected patients to unnecessary and toxic therapy. A more refined approach, such as presumptive or pre-emptive therapy, whereby treatment is initiated only upon positive identification of a surrogate marker of infection in combination with clinical and radiological signs, will subject fewer patients to toxic and expensive treatments.
184

An empirical investigation on modern code review focus areas

Jiang, Zhiyu, Ma, Bowen January 2020 (has links)
Background: In a long-lived project, an effective code review process is key to ensuring the long-term quality of the code base. As software grows, formal code inspections, despite their benefits, become too time-consuming and labour-intensive for larger projects, so more and more of industry performs modern code review to increase program quality. Only a few papers have studied the relationship between code reviewers and code review quality, so the relationships among code review, code complexity, and reviewers need to be explored: which parts of the code reviewers pay more attention to, and how much effort reviewing them takes. Knowing this allows code reviews to be conducted more effectively. Objectives: The objective of our study is to investigate whether code complexity relates to how software developers review code in terms of code review length, review frequency, review text quality, and reviewer sentiment, and whether the reviewer's experience has an impact on code review quality, in order to find a suitable way to conduct code review for code of different complexity. Methods: We conduct an exploratory case study. The case and unit of analysis is the open-source project Cassandra. We extract data from the Cassandra Jira (a proprietary issue-tracking product): the reviewer's name, review content, review time, reviewer's comments, reviewer sentiment, comment length, and the reviewed Java file. We then use CodeMR, which applies coupling and code complexity metrics, to calculate the complexity of each file, and a text analysis API to score reviewer sentiment. After collecting these data we use SPSS for statistical analysis to find whether there are relationships between code complexity and these factors. In addition, we hold a workshop and send out questionnaires to collect further input from Cassandra developers. Results: The results show that code review frequency is related to code complexity: complex code requires more review. Reviewer sentiment is also related to code complexity: sentiment towards complex code is more often positive or negative rather than neutral. Code review text quality is related to the reviewer's experience: experienced reviewers leave higher-quality comments than novice reviewers. On the other hand, code review length and review text quality are not related to code complexity. Conclusions: According to the results, more complex code is reviewed more frequently, and reviewers' emotions are clearer when reviewing more complex code. Training experienced reviewers is also necessary, because the results show that experienced reviewers review code with higher quality. From the questionnaire, we know developers believe that more complex code needs more iterations of code review and that experienced reviewers have a positive effect on code review, which gives guidance on how to do code review based on different levels of code complexity.
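A hedged sketch of the kind of correlation analysis the study ran in SPSS, reproduced here with SciPy; the columns and values are invented for illustration and do not come from the Cassandra data.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
reviews = pd.DataFrame({
    "file_complexity": rng.gamma(3, 2, 200),   # e.g. a CodeMR-style complexity score
    "review_frequency": rng.poisson(4, 200),   # review rounds per file
    "comment_length": rng.poisson(40, 200),    # characters per review comment
})

# Rank correlation between complexity and each review measure.
for col in ("review_frequency", "comment_length"):
    rho, p = spearmanr(reviews["file_complexity"], reviews[col])
    print(f"{col}: rho = {rho:.2f}, p = {p:.3f}")
```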
185

Empirical analysis of wireless sensor networks

Gupta, Ashish 10 September 2010 (has links)
Wireless sensor networks are collections of wireless nodes deployed to monitor certain phenomena of interest. Once a node takes measurements, it transmits them to a base station over a wireless channel. The base station collects data from all the nodes and performs further analysis. To save energy, it is often useful to build clusters, with the head of each cluster communicating with the base station. Initially, we perform a simulation analysis of Zigbee networks in which a few nodes are more powerful than the others. The results show that in mobile heterogeneous sensor networks, due to the orphaning phenomenon and the high cost of route discovery and maintenance, performance degrades with respect to homogeneous networks. The core of this thesis is an empirical analysis of sensor networks. Due to their resource constraints, low-power wireless sensor networks face several technical challenges.
Many protocols work well on simulators but do not behave as expected in actual deployments. For example, sensors physically placed at the top of a heap experience free-space propagation, while sensors at the bottom of the heap face sharp channel fading. In this thesis, we show the impact of asymmetric links on the wireless sensor network topology and that the link quality between sensors varies continually. We propose two ways to improve the performance of Link Quality Indicator (LQI) based algorithms in real sensor networks with asymmetric links. In the first, the network has no choice but to include some sensors that can transmit over larger distances and become cluster heads; the number of cluster heads can be given by a Matérn hard-core process. In the second, we propose HybridLQI, which improves the performance of LQI-based algorithms without adding any overhead to the network. Later, we apply theoretical clustering approaches to real-world sensor networks, deploying the Matérn hard-core process and the Max-Min cluster formation heuristic on real Tmote nodes in sparse as well as highly dense networks. Empirical results show that clustering based on the Matérn hard-core process outperforms Max-Min cluster formation in terms of memory requirements, ease of implementation, and the number of messages needed for clustering. Finally, using absorbing Markov chains and measurements, we study the performance of load-balancing techniques in real sensor networks.
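The Matérn hard-core selection mentioned above can be sketched as follows, assuming the type-II thinning variant; the field size, radius, and node count are arbitrary, and this is an illustration of the thinning rule rather than the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
positions = rng.uniform(0, 100, size=(200, 2))  # sensor positions in a 100 m x 100 m field
marks = rng.uniform(size=200)                   # independent random marks
radius = 15.0                                   # hard-core radius (assumed value)

# A node becomes a cluster head only if no neighbour within the hard-core
# radius carries a smaller mark (Matérn type-II thinning).
heads = []
for i in range(len(positions)):
    dists = np.linalg.norm(positions - positions[i], axis=1)
    neighbours = (dists < radius) & (np.arange(len(positions)) != i)
    if not np.any(marks[neighbours] < marks[i]):
        heads.append(i)

print(f"{len(heads)} cluster heads selected out of {len(positions)} nodes")
```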
186

Extreme Value Distribution in Hydrology

Chen, Bill (Tzeng-Lwen) 01 May 1980 (has links)
The problems encountered when empirical fit is used as the sole criterion for choosing a distribution to represent annual flood data are discussed. Some theoretical direction is needed for this choice. Extreme value theory is established as a viable tool for analyzing annual flood data. Extreme value distributions have been used in previous analyses of flood data. However, no systematic investigation of the theory has previously been applied. Properties of the extreme value distributions are examined. The most appropriate distribution for flood data has not previously been fit to such data. The fit of the chosen extreme value distribution compares favorably with that of the Pearson and log Pearson Type III distributions.
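To make the abstract concrete, here is a minimal sketch of fitting a generalised extreme value (GEV) distribution to synthetic annual peak flows and reading off a 100-year return level; the numbers are invented and the SciPy-based fit is only an illustration of the approach, not the thesis analysis.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
annual_peaks = genextreme.rvs(c=-0.1, loc=500, scale=120, size=60, random_state=rng)

shape, loc, scale = genextreme.fit(annual_peaks)                 # maximum-likelihood fit
q100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)  # 100-year flood estimate
print(f"shape={shape:.2f}, loc={loc:.0f}, scale={scale:.0f}, Q100={q100:.0f}")
```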
187

A Comprehensive Safety Analysis of Diverging Diamond Interchanges

Lloyd, Holly 01 May 2016 (has links)
As the population grows and the travel demands increase, alternative interchange designs are becoming increasingly popular. The diverging diamond interchange is one alternative design that has been implemented in the United States. This design can accommodate higher flow and unbalanced flow as well as improve safety at the interchange. As the diverging diamond interchange is increasingly considered as a possible solution to problematic interchange locations, it is imperative to investigate the safety effects of this interchange configuration. This report describes the selection of a comparison group of urban diamond interchanges, crash data collection, calibration of functions used to estimate the predicted crash rate in the before and after periods and the Empirical Bayes before and after analysis technique used to determine the safety effectiveness of the diverging diamond interchanges in Utah. A discussion of pedestrian and cyclist safety is also included. The analysis results demonstrated statistically significant decreases in crashes at most of the locations studied. This analysis can be used by UDOT and other transportation agencies as they consider the implementation of the diverging diamond interchanges in the future.
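A simplified sketch of the Empirical Bayes before-after computation described above: the SPF-predicted crash count is weighted against the observed count using the SPF overdispersion parameter, projected to the after period, and compared with what was actually observed. All numbers are invented, and the variance-based correction used in a full analysis is omitted.

```python
# Before-period inputs (hypothetical totals for one interchange)
predicted_before = 18.6   # crashes predicted by a calibrated SPF over the before period
observed_before = 27.0    # crashes actually observed in the before period
k = 0.35                  # overdispersion parameter of the SPF (assumed)

# EB weight and expected crashes in the before period.
w = 1.0 / (1.0 + k * predicted_before)
expected_before = w * predicted_before + (1.0 - w) * observed_before

# Project to the after period via the ratio of SPF predictions, then compare
# against the crashes observed after the diverging diamond conversion.
predicted_after = 15.3
observed_after = 12.0
expected_after = expected_before * (predicted_after / predicted_before)
theta = observed_after / expected_after   # index of effectiveness (<1 means fewer crashes)
print(f"expected without treatment = {expected_after:.1f}, theta = {theta:.2f}")
```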
188

A Comparison of Rational Versus Empirical Methods in the Prediction of Psychotherapy Outcome

Spielmans, Glen I. 01 May 2004 (has links)
Several systems have been designed to monitor psychotherapy outcome, in which feedback is generated based on how a client's rate of progress compares to an expected level of progress. Clients who progress at a much lesser rate than the average client are referred to as signal-alarm cases. Recent studies have shown that providing feedback to therapists based on comparing their clients' progress to a set of rational, clinically derived algorithms has enhanced outcomes for clients predicted to show poor treatment outcomes. Should another method of predicting psychotherapy outcome emerge as more accurate than the rational method, this method would likely be more useful than the rational method in enhancing psychotherapy outcomes. The present study compared the rational algorithms to those generated by an empirical prediction method generated through hierarchical linear modeling. The sample consisted of 299 clients seen at a university counseling center and a psychology training clinic. The empirical method was significantly more accurate in predicting outcome than was the rational method. Clients predicted to show poor treatment outcome by the empirical method showed, on average, very little positive change. There was no difference between the methods in the ability to accurately forecast reliable worsening during treatment. The rational method resulted in a high percentage of false alarms, that is, clients who were predicted to show poor treatment response but in fact showed a positive treatment outcome. The empirical method generated significantly fewer false alarms than did the rational method. The empirical method was generally accurate in its predictions of treatment success, whereas the rational method was somewhat less accurate in predicting positive outcomes. Suggestions for future research in psychotherapy quality management are discussed.
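As a small illustration of the accuracy comparison described above (not the study's data or models), this sketch tabulates the false-alarm rate of two hypothetical flagging methods, i.e. the share of flagged clients who in fact ended treatment with a positive outcome.

```python
import numpy as np

rng = np.random.default_rng(5)
poor_outcome = rng.random(299) < 0.08                     # true poor outcomes (synthetic)
rational_flag = poor_outcome | (rng.random(299) < 0.30)   # hypothetical method that flags liberally
empirical_flag = poor_outcome | (rng.random(299) < 0.05)  # hypothetical method that flags fewer clients

def false_alarm_rate(flagged, bad):
    """Share of flagged clients who actually had a positive outcome."""
    return float(np.mean(~bad[flagged])) if flagged.any() else 0.0

print("rational :", round(false_alarm_rate(rational_flag, poor_outcome), 2))
print("empirical:", round(false_alarm_rate(empirical_flag, poor_outcome), 2))
```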
189

Factors Affecting the Design and Use of Reusable Components

Anguswamy, Reghu 31 July 2013 (has links)
Designing software components for future reuse has been an important area in software engineering. A software system developed with reusable components follows a "with" reuse process while a component designed to be reused in other systems follows a "for" reuse process. This dissertation explores the factors affecting design for reuse and design with reusable components through empirical studies. The studies involve Java components implementing a particular algorithm, a stemming algorithm that is widely used in the conflation domain. The method and empirical approach are general and independent of the programming language. Such studies may be extended to other types of components, for example, components implementing data structures such as stacks, queues, etc. Design for reuse: In this thesis, the first study was conducted analyzing one-use and equivalent reusable components for the overhead in terms of component size, effort required, number of parameters, and productivity. Reusable components were significantly larger than their equivalent one-use components and had significantly more parameters. The effort required for the reusable components was higher than for one-use components. The productivity of the developers was significantly lower for the reusable components compared to the one-use components. Also, during the development of reusable components, the subjects spent more time on writing code than designing the components, but not significantly so. A ranking of the design principles by frequency of use is also reported. A content analysis performed on the feedback is also reported and the reasons for using and not using the reuse design principles are identified. A correlation analysis showed that the reuse design principles were, in general, used independently of each other. Design with reuse: Through another empirical study, the effects of the size of a component and of the reuse design principles used in building it on the ease of reuse were analyzed. It was observed that the higher the complexity, the lower the ease of reuse, but the correlation is not significant. When considered independently, four of the reuse design principles (well-defined interface; clarity and understandability; generality; and separate concepts from content) significantly increased the ease of reuse, while commonality and variability analysis significantly decreased the ease of reuse, and documentation did not have a significant impact on the ease of reuse. Experience in the programming language had no significant relationship with the reusability of components. Experience in software engineering and software reuse showed a relationship with reusability but the effect size was small. Testing components before integrating them into a system was found to have no relationship with the reusability of components. A content analysis of the feedback is presented identifying the challenges of components that were not easy to reuse. Features that make a component easily reusable were also identified. The Mahalanobis-Taguchi Strategy (MTS) was employed to develop a model based on Mahalanobis Distance to identify the factors that can detect if a component is easy to reuse or not. The identified factors within the model are: size of a component, a set of reuse design principles (well-defined interface, clarity and understandability, commonality and variability analysis, and generality), and component testing. / Ph. D.
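In the spirit of the Mahalanobis-Taguchi Strategy mentioned above, the sketch below computes Mahalanobis distances of candidate components from a reference group of easy-to-reuse components and flags distant ones as likely hard to reuse. The features, values, and threshold are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

rng = np.random.default_rng(6)
# Columns: component size, interface clarity score, generality score, tests passed (fraction)
easy_group = rng.normal([200, 4.0, 4.0, 1.0], [50, 0.5, 0.5, 0.1], size=(40, 4))
candidates = rng.normal([350, 3.0, 3.2, 0.8], [80, 0.8, 0.7, 0.3], size=(10, 4))

mean = easy_group.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(easy_group, rowvar=False))

def mahalanobis_sq(x):
    """Squared Mahalanobis distance from the 'easy to reuse' reference group."""
    d = x - mean
    return float(d @ cov_inv @ d)

distances = np.array([mahalanobis_sq(c) for c in candidates])
print("flagged as hard to reuse:", np.where(distances > 9.0)[0])  # threshold is arbitrary
```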
190

Modeling longitudinal data with interval censored anchoring events

Chu, Chenghao 01 March 2018 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In many longitudinal studies, the time scales upon which we assess the primary outcomes are anchored by pre-specified events. However, these anchoring events are often not observable, and they are randomly distributed with unknown distribution. Without direct observations of the anchoring events, the time scale used for analysis is not available, and analysts are unable to use traditional longitudinal models to describe the temporal changes as desired. Existing methods often make either ad hoc or strong assumptions on the anchoring events, which are unverifiable and prone to biased estimation and invalid inference. Although the anchoring events cannot be observed directly, researchers can often ascertain an interval that includes them, i.e., the anchoring events are interval censored. In this research, we proposed a two-stage method to fit commonly used longitudinal models with interval-censored anchoring events. In the first stage, we obtain an estimate of the anchoring events' distribution by a nonparametric method using the interval-censored data; in the second stage, we obtain the parameter estimates as stochastic functionals of the estimated distribution. The construction of the stochastic functional depends on the model setting. In this research, we considered two types of models. The first model was distribution-free, in which no parametric assumption was made on the distribution of the error term. The second model was likelihood based, which extended the classic mixed-effects models to the situation in which the origin of the time scale for analysis is interval censored. For the purpose of large-sample statistical inference in both models, we studied the asymptotic properties of the proposed functional estimator using empirical process theory. Theoretically, our method provided a general approach to studying semiparametric maximum pseudo-likelihood estimators in similar data situations. Finite-sample performance of the proposed method was examined through simulation study. Efficient algorithms for computing the parameter estimates were provided. We applied the proposed method to a real data analysis and obtained new findings that could not be obtained using traditional mixed-effects models. / 2 years
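A deliberately simplified sketch of the two-stage plug-in idea described above, not the estimator developed in the thesis: stage one forms a crude discrete estimate of the anchoring-time distribution from the censoring intervals; stage two re-anchors each subject's time scale at the conditional mean anchoring time and fits a simple linear model on the shifted scale. All data and names are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
grid = np.linspace(0, 10, 41)                  # candidate anchoring times
true_anchor = rng.uniform(2, 8, n)             # unobserved anchoring events
left, right = np.floor(true_anchor), np.floor(true_anchor) + 1.0  # censoring intervals
obs_time = rng.uniform(0, 10, (n, 5))          # observation times on the study clock
y = 1.5 * (obs_time - true_anchor[:, None]) + rng.normal(0, 1, (n, 5))

# Stage 1: crude nonparametric estimate of the anchoring distribution on the grid,
# spreading each subject's mass uniformly over its censoring interval.
in_interval = ((grid >= left[:, None]) & (grid <= right[:, None])).astype(float)
in_interval /= in_interval.sum(axis=1, keepdims=True)
f_hat = in_interval.mean(axis=0)

# Stage 2: plug the estimate in, re-anchor at each subject's conditional mean
# anchoring time, and fit ordinary least squares on time-since-anchor.
cond = in_interval * f_hat
cond /= cond.sum(axis=1, keepdims=True)
anchor_hat = (cond * grid).sum(axis=1)
t_anchored = (obs_time - anchor_hat[:, None]).ravel()
X = np.column_stack([np.ones(t_anchored.size), t_anchored])
beta = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
print("estimated slope (true value 1.5):", round(beta[1], 2))
```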
