  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Localized quality of service routing algorithms for communication networks : the development and performance evaluation of some new localized approaches to providing quality of service routing in flat and hierarchical topologies for computer networks

Alzahrani, Ahmed S. January 2009 (has links)
Quality of Service (QoS) routing is considered one of the major components of the QoS framework in communication networks. The concept of QoS routing has emerged from the fact that routers direct traffic from source to destination depending on data types, network constraints and requirements in order to achieve network performance efficiency. It has been introduced to administer, monitor and improve the performance of computer networks. Many QoS routing algorithms are used to maximize network performance by balancing traffic distributed over multiple paths. The major QoS metrics include bandwidth, delay, jitter, cost and loss probability, which are used to capture end users' requirements, optimize network resource usage and balance traffic load. The majority of existing QoS routing algorithms require the maintenance of global network state information and use it to make routing decisions. The global QoS network state needs to be exchanged periodically among routers, since the efficiency of a routing algorithm depends on the accuracy of link-state information. However, most QoS routing algorithms suffer from scalability problems because of the high communication overhead and the high computation effort associated with maintaining and distributing the global state information to each node in the network. The goal of this thesis is to contribute to enhancing the scalability of QoS routing algorithms. Motivated by this, the thesis focuses on localized QoS routing, which is proposed to achieve QoS guarantees while overcoming the problems of using global network state information: the high communication overhead caused by frequent state information updates, the inaccuracy of link-state information for large QoS state update intervals, and route oscillation due to stale views of the state information. Using such an approach, the source node makes its own routing decisions based on information that is local to each node in the path.
Localized QoS routing does not need the global network state to be exchanged among network nodes, because each node infers the network state locally and thereby avoids all the problems associated with global state, such as high communication and processing overheads and oscillating behaviour. In localized QoS routing, each source node is required first to determine a set of candidate paths to each possible destination. In this thesis we have developed localized QoS routing algorithms that select a path based on its quality to satisfy the connection requirements. In the first part of the thesis, a localized routing algorithm has been developed that relies on the average residual bandwidth that each path can support to make routing decisions. In the second part, we have developed a localized delay-based QoS routing (DBR) algorithm, which relies on the delay constraint that each path satisfies to make routing decisions. We also modify credit-based routing (CBR) so that it uses delay instead of bandwidth. Finally, we have developed a localized QoS routing algorithm for routing in two levels of a hierarchical network, which relies on residual bandwidth to make routing decisions in a hierarchical network such as the Internet. We have compared the performance of the proposed localized routing algorithms with other localized and global QoS routing algorithms under different ranges of workloads, system parameters and network topologies. Simulation results indicate that the proposed algorithms indeed outperform algorithms based on the schemes that currently operate on the Internet, even for small link-state update intervals. The proposed algorithms also reduce the routing overhead significantly and utilize network resources efficiently.
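The credit-based flavour of localized routing described above can be sketched as follows. This is an illustrative simplification: the path names, credit bounds and proportional selection rule are assumptions for the sketch, not the thesis's actual CBR algorithm.

```python
import random

def select_path(candidate_paths, credits):
    """Pick a candidate path with probability proportional to its
    accumulated credits (a hypothetical, simplified selection rule)."""
    total = sum(credits[p] for p in candidate_paths)
    r = random.uniform(0, total)
    acc = 0.0
    for p in candidate_paths:
        acc += credits[p]
        if r <= acc:
            return p
    return candidate_paths[-1]

def update_credits(credits, path, accepted, max_credit=10, min_credit=1):
    """Reward a path whose connection was admitted, penalise one whose
    connection was rejected -- no global link-state exchange required."""
    if accepted:
        credits[path] = min(credits[path] + 1, max_credit)
    else:
        credits[path] = max(credits[path] - 1, min_credit)
```

Each source node keeps such a credit table purely from its own admission outcomes, which is what lets the scheme avoid periodic global state updates.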
182

Personality and Rater Leniency: Comparison of Broad and Narrow Measures of Conscientiousness and Agreeableness

Grahek, Myranda 05 1900 (has links)
Performance appraisal ratings provide the basis for numerous employment decisions, including retention, promotion, and salary increases. Thus, understanding the factors affecting the accuracy of these ratings is important to organizations and employees. Leniency, one rater error, is a tendency to assign higher ratings in appraisal than is warranted by actual performance. This study examined how the personality factors Agreeableness and Conscientiousness relate to rater leniency, and whether narrower facets of personality account for more variance in rater leniency than the broad factors. The study used undergraduates' (n = 226) evaluations of instructor performance to test the hypotheses. In addition to the personality variables, students' social desirability tendency and attitudes toward the instructor were predicted to be related to rater leniency. Partial support for the study's hypotheses was found. The Agreeableness factor and three of the corresponding facets (Trust, Altruism and Tender-Mindedness) were positively related to rater leniency as predicted. The hypotheses that the Conscientiousness factor and three of the corresponding facets (Order, Dutifulness, and Deliberation) would be negatively related to rater leniency were not supported. In the current sample the single narrow facet Altruism accounted for more variance in rater leniency than the broad Agreeableness factor. While social desirability did not account for a significant amount of variance in rater leniency, attitude toward the instructor was found to have a significant positive relationship, accounting for the largest amount of variance in rater leniency.
183

Performance Analysis of Wireless Networks with QoS Adaptations

Dash, Trivikram 08 1900 (has links)
The explosive demand for multimedia and fast transmission of continuous media on wireless networks means the simultaneous existence of traffic requiring different qualities of service (QoS). In this thesis, several efficient algorithms have been developed that offer several levels of QoS to the end user. We first look at a request TDMA/CDMA protocol for supporting wireless multimedia traffic, where CDMA is laid over TDMA. We then look at a hybrid push-pull algorithm for wireless networks and present a generalized performance analysis of the proposed protocol. Some of the QoS factors considered include customer retrial rates due to user impatience and system timeouts, and different levels of priority and weights for mobile hosts. We have also looked at how customer impatience and system timeouts affect the QoS provided by several queuing and scheduling schemes, such as FIFO, priority, and weighted fair queuing, and at the application of the stretch-optimal algorithm to scheduling.
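The effect of customer impatience on a FIFO discipline can be sketched with a toy single-server model. The reneging rule and the parameters are illustrative assumptions for the sketch, not the thesis's actual queueing analysis.

```python
def simulate_fifo_with_impatience(arrivals, service_time, patience):
    """Toy FIFO queue with a single server and deterministic service:
    a job that would wait longer than `patience` reneges (gives up).
    Returns (served, reneged) counts."""
    served = reneged = 0
    server_free_at = 0.0
    for arrival in sorted(arrivals):
        start = max(arrival, server_free_at)
        if start - arrival > patience:
            reneged += 1                     # waited too long: customer leaves
        else:
            served += 1
            server_free_at = start + service_time
    return served, reneged

# Four simultaneous arrivals, 2-unit service, 3-unit patience:
# the third and fourth customers would wait 4 units and renege.
result = simulate_fifo_with_impatience([0, 0, 0, 0], 2, 3)
```

Shrinking `patience` models harsher timeouts; swapping the iteration order for a priority key would turn the same skeleton into a priority discipline.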
184

Direct-connect performance evaluation of a valveless pulse detonation engine

Wittmers, Nicole K. 12 1900 (has links)
Approved for public release, distribution is unlimited / Operational characteristics of a valveless pulse detonation engine system were characterized by experimental measurements of thrust, fuel flow, and internal gas dynamics. The multi-cycle detonation experiments were performed on an axisymmetric engine geometry operating on ethylene/air mixtures. The detonation diffraction process from a small 'initiator' combustor to a larger-diameter main combustor in a continuous airflow configuration was evaluated during multi-cycle operation of a pulse detonation engine and was found to be very successful at initiating combustion of the secondary fuel/air mixture at high frequencies. The configuration was used to demonstrate the benefit of generating an overdriven detonation condition near the diffraction plane for enhanced transmission into the larger combustor. Results have shown that the addition of optical sensors, such as tunable diode lasers, to provide fuel profile data is invaluable for producing high-fidelity performance results. The performance results demonstrated the ability of the valveless pulse detonation engine to run at efficiencies similar to valved pulse detonation engine geometries, and it may be a low-cost alternative to conventional air-breathing propulsion systems. / Funded By: N00014OWR20226. / Lieutenant, United States Navy
185

Smart Interventions for Effective Medication Adherence

Singh, Neetu 18 July 2016 (has links)
In this research we present a model for medication adherence from an information systems and technologies (IS/IT) perspective. Information technology applications for healthcare have the potential to improve the cost-effectiveness, quality and accessibility of healthcare. To date, measurement of patient medication adherence and use of interventions to improve adherence are rare in routine clinical practice. An IS/IT perspective helps in leveraging technology advancements to develop a health IT system for effectively measuring medication adherence and administering interventions. The majority of medication adherence studies have focused on average medication adherence, the ratio of the number of doses consumed to the number of doses prescribed. For this measure it does not matter in which order or pattern patients consume the doses: patients with enormously diverse dosing behavior can achieve the same average level of medication adherence. The same average with very different dosing patterns raises the possibility that patterns of adherence affect the effectiveness of medication adherence. We propose that medication adherence research should utilize effective medication adherence (EMA), derived by including both the pattern and the average medication adherence for a patient. Using a design science research (DSR) approach, we have developed a model as an artifact for smart interventions. We have leveraged behavior change techniques (BCTs), based on behavior change theories, to design smart interventions. Because the system must satisfy real-time requirements, we also draw on hierarchical control system theory and a reference model architecture (RMA). The benefit of this design is that an intervention can be administered dynamically on a need basis. A key distinction from existing systems is that the developed model leverages a probabilistic measure instead of a static schedule.
We have evaluated and validated the model using formal proofs and reviews by domain experts. The research adds to the IS knowledge base by providing theory-based smart interventions that leverage BCTs and RMA to improve medication adherence. It introduces EMA as a measurement of medication adherence for healthcare systems. Smart interventions based on EMA can further reduce healthcare costs by improving prescription outcomes.
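The distinction between average adherence and a pattern-sensitive measure can be illustrated with a small sketch. The longest-gap statistic below is one illustrative pattern feature, not the thesis's actual EMA definition:

```python
def average_adherence(taken, prescribed):
    """Average adherence: doses consumed / doses prescribed."""
    return sum(taken) / prescribed

def longest_gap(taken):
    """Longest run of consecutive missed doses -- one simple way to
    capture the dosing *pattern* that the average ignores."""
    gap = longest = 0
    for dose in taken:
        gap = 0 if dose else gap + 1
        longest = max(longest, gap)
    return longest

# Two patients, identical 50% average adherence, very different patterns:
alternating = [1, 0, 1, 0, 1, 0]   # never misses two doses in a row
clustered   = [1, 1, 1, 0, 0, 0]   # a three-day lapse
```

Both lists score 0.5 on the average measure, yet their longest gaps are 1 and 3 doses respectively, which is exactly the information a pattern-aware measure such as EMA is meant to retain.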
186

Issues of Complex Hierarchical Data and Multilevel Analysis : Applications in Empirical Economics

Karlsson, Joel January 2012 (has links)
This thesis consists of four individual essays and an introduction chapter. The essays are in the field of multilevel analysis of economic data. The first essay estimates capitalisation effects of farm attributes, with a particular focus on single farm payments (SFP), into the price of farms. Using a sample of Swedish farm transactions from all across the country, the results from a spatial multiple-membership model suggest that the local effect of SFP on farm prices is negative, while there is a positive between-region effect. The second essay investigates the extent to which differences in the probability of exiting part-time unemployment for a full-time job can be accounted for by spatial contextual factors and individual characteristics. To incorporate contextual effects correctly, a multilevel analysis was applied to explore whether contextual factors account for differences in the probability of transition to full-time employment between individuals with different characteristics. The results indicate that there is a contextual effect and that there are some spatial spill-over effects from neighbouring municipalities. The third essay investigates the determinants of educational attainment for third-generation immigrants and natives in Sweden. Using a mixed-effects model that includes unobserved family heterogeneity, applied to linked register data, the main result is that the effect of parents' educational attainment is mainly due to the between-parental-education effect of family income. The fourth and last essay presents a new robust strategy for performance evaluation in the case of panel data, based on routinely collected variables or indicators. The suggested strategy applies a cross-classified mixed-effects model. The strategy is implemented in two illustrative empirical examples, and its robustness is investigated in a Monte Carlo study.
187

The Contrast-Inertia Model and the Updating of Attributions in Performance Evaluation

Atkinson, Sue Andrews 12 1900 (has links)
The two problems which motivate this research concern the role of managerial accounting information in performance evaluation. The first problem is that the processing of accounting information by individual managers may deviate from a normative (Bayesian) pattern. Second, managers' use of accounting information in performance appraisal may contribute to conflict between superiors and subordinates. In this research, I applied the contrast-inertia model (C-IM) and attribution theory (AT) to predict how accounting information affects managers' beliefs about the causes for observed performance. The C-IM describes how new evidence is incorporated into opinions. Application of the C-IM leads to the prediction that information order may influence managers' opinions. Attribution theory is concerned with how people use information to assign causality, especially for success or failure. Together, the C-IM and AT imply that causal beliefs of superiors and subordinates diverge when they assimilate accounting information. Three experiments were performed with manufacturing managers as subjects. Most of the subjects were middle-level production managers from Texas manufacturing plants. The subjects used accounting information in revising their beliefs about causes for performance problems. In the experiments, the manipulated factors were the order of information, subject role (superior or subordinate), and the position of different types of information. The experimental results were analyzed by repeated measures analyses of variance, in which the dependent variable was an opinion or the change in an opinion over a series of evidence items. The experimental results indicate that the order of mixed positive and negative information affects beliefs in performance evaluation. For mixed evidence, there was significant divergence of opinions between superiors and subordinates. The results provide little evidence that superior and subordinate roles bias the belief updating process. 
The experiments show that belief revision in performance evaluation deviates from the normative standard, and that the use of accounting information may cause divergence of opinions between superiors and subordinates.
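The normative (Bayesian) benchmark against which such order effects are judged can be sketched as follows. The two-cause framing and the likelihood numbers are illustrative assumptions; the point is that Bayes' rule yields the same posterior regardless of evidence order, so any order effect is a departure from the normative pattern:

```python
def bayes_update(prior, p_e_given_cause, p_e_given_other):
    """One step of normative belief revision about a suspected cause."""
    num = prior * p_e_given_cause
    return num / (num + (1 - prior) * p_e_given_other)

def revise(prior, evidence):
    """Fold a sequence of (P(e|cause), P(e|other)) likelihood pairs
    into the prior, one evidence item at a time."""
    belief = prior
    for p1, p0 in evidence:
        belief = bayes_update(belief, p1, p0)
    return belief

# Mixed evidence: one item favouring the suspected cause, one against it.
evidence = [(0.8, 0.3), (0.2, 0.7)]
forward = revise(0.5, evidence)
backward = revise(0.5, list(reversed(evidence)))
# forward == backward under Bayes; human judges showing recency or
# primacy effects deviate from this order-invariant standard.
```
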
188

Evaluating building energy performance : a lifecycle risk management methodology

Doylend, Nicholas January 2015 (has links)
There is widespread acceptance of the need to reduce energy consumption within the built environment. Despite this, there are often large discrepancies between the energy performance aspirations and the operational reality of modern buildings. The application of existing mitigation measures appears to be piecemeal and lacks a whole-system approach to the problem. This Engineering Doctorate aims to identify common reasons for performance discrepancies and to develop a methodology for risk mitigation. Existing literature was reviewed in detail to identify individual factors contributing to the risk of a building failing to meet performance aspirations. The risk factors thus identified were assembled into a taxonomy that forms the basis of a methodology for identifying and evaluating performance risk. A detailed case study was used to investigate performance at whole-building and sub-system levels. A probabilistic approach to estimating system energy consumption was also developed to provide a simple and workable improvement to industry best practice. Analysis of monitoring data revealed that, even after accounting for the absence of unregulated loads in the design estimates, annual operational energy consumption was over twice the design figure. A significant part of this discrepancy was due to the space heating sub-system, which used more than four times its estimated energy consumption, and the domestic hot water sub-system, which used more than twice its estimate. These discrepancies were the result of whole-system lifecycle risk factors ranging from design decisions and construction project management to occupant behaviour and staff training. Application of the probabilistic technique to the estimate of domestic hot water consumption revealed that the observed discrepancies could be predicted given the uncertainties in the design assumptions. The risk taxonomy was used to identify factors present in the results of the qualitative case study evaluation.
This work has built on practical building evaluation techniques to develop a new way of evaluating both the uncertainty in energy performance estimates and the presence of lifecycle performance risks. These techniques form a risk management methodology that can be applied usefully throughout the project lifecycle.
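A probabilistic estimate of this kind can be sketched with a simple Monte Carlo simulation. All distributions and figures below are illustrative assumptions, not the case study's actual design inputs; the point is that propagating uncertainty in the design assumptions yields a range of plausible consumption rather than a single figure:

```python
import random

def hot_water_energy_mc(n_samples=10000, seed=1):
    """Monte Carlo sketch of annual hot-water energy use (kWh/year).
    Occupancy, per-person demand, temperature rise and efficiency are
    sampled from assumed distributions; returns the 5th and 95th
    percentiles of the resulting estimate."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        occupants = max(rng.gauss(200, 20), 0)    # building occupancy
        litres_pp = max(rng.gauss(15, 5), 0)      # litres/person/day
        temp_rise = rng.gauss(50, 5)              # temperature rise, K
        efficiency = rng.uniform(0.6, 0.9)        # system efficiency
        # 4.186 kJ/(kg.K); convert kJ/day to kWh/year (365 / 3600)
        kj_per_day = occupants * litres_pp * 4.186 * temp_rise / efficiency
        samples.append(kj_per_day * 365 / 3600)
    samples.sort()
    return samples[int(0.05 * n_samples)], samples[int(0.95 * n_samples)]
```

Comparing a measured figure against the resulting 5th-95th percentile band shows whether an observed "discrepancy" was in fact predictable from the spread of the design assumptions.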
189

Improve the Performance and Scalability of RAID-6 Systems Using Erasure Codes

Wu, Chentao 15 November 2012 (has links)
RAID-6 uses erasure codes to tolerate the concurrent failure of any two disks, providing a higher level of reliability. Among its many implementations, one class of codes called Maximum Distance Separable (MDS) codes aims to offer data protection against disk failures with optimal storage efficiency. Typical MDS codes include horizontal and vertical codes. However, because of the limitations of the horizontal parity or diagonal/anti-diagonal parities used in MDS codes, existing RAID-6 systems suffer from several important performance and scalability problems, such as low write performance, unbalanced I/O, and high migration cost in the scaling process. To address these problems, in this dissertation we design techniques for high-performance and scalable RAID-6 systems. These include high-performance, load-balancing erasure codes (H-Code and HDP Code) and a Stripe-based Data Migration (SDM) scheme. We also propose a flexible MDS Scaling Framework (MDS-Frame), which can integrate H-Code, HDP Code and the SDM scheme together. Detailed evaluation results are also given in this dissertation.
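The horizontal parity that such codes build on can be sketched in a few lines. Note this shows only the single (P) parity and single-failure recovery: tolerating two concurrent failures, as RAID-6 requires, needs a second independent parity (diagonal parity in RDP-style codes, or Galois-field arithmetic in Reed-Solomon variants), which is omitted here for brevity:

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks: the horizontal (P) parity."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Encode a stripe of three data blocks: P = d0 ^ d1 ^ d2.
data = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
parity = xor_blocks(data)

# If d1 is lost, it is the XOR of the parity and the surviving blocks.
recovered = xor_blocks([parity, data[0], data[2]])
```

Because every write to a data block must also update the parity block(s), parity placement determines the write and I/O balance, which is where codes like H-Code and HDP Code differ.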
190

Exploration of Erasure-Coded Storage Systems for High Performance, Reliability, and Inter-operability

Subedi, Pradeep 01 January 2016 (has links)
The unprecedented growth of data, together with the use of low-cost commodity drives in local disk-based storage systems and remote cloud-based servers, has increased the risk of data loss and the overall user-perceived system latency. To guarantee high reliability, replication has been the most popular choice for decades because of its simplicity of data management. With the high volume of data being generated every day, however, the storage cost of replication is very high, and it is no longer a viable approach. Erasure coding is another approach to adding redundancy in storage systems, which provides high reliability at a fraction of the cost of replication. However, the choice of erasure code affects the storage efficiency, reliability, and overall system performance. At the same time, performance and interoperability are adversely affected by slower device components and by complex central management systems and operations. To address the problems encountered in the various layers of an erasure-coded storage system, in this dissertation we explore the different aspects of storage and design several techniques to improve reliability, performance, and interoperability. These techniques range from a comprehensive evaluation of erasure codes and the application of erasure codes to a highly reliable, high-performance SSD system, to the design of new erasure coding and caching schemes for the Hadoop Distributed File System, one of the central management systems for distributed storage. Detailed evaluation and results are also provided in this dissertation.
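The cost argument above reduces to simple arithmetic: an erasure code can match or exceed the fault tolerance of replication at a fraction of the raw storage. The (10, 4) parameters below are a common illustrative choice, not necessarily the codes studied here:

```python
def replication_overhead(copies):
    """Raw bytes stored per logical byte under n-way replication."""
    return float(copies)

def erasure_overhead(k, m):
    """Raw bytes stored per logical byte for a (k, m) erasure code:
    k data fragments plus m parity fragments per stripe."""
    return (k + m) / k

# 3-way replication: 3.0x raw storage, tolerates loss of any 2 drives.
# A (10, 4) code:    1.4x raw storage, tolerates loss of any 4 fragments.
rep_cost = replication_overhead(3)
ec_cost = erasure_overhead(10, 4)
```

The trade-off is not free: reconstructing a lost fragment under erasure coding requires reading k surviving fragments rather than one replica, which is one reason code choice affects overall system performance.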
