  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Differential Reactions to Men's and Women's Counterproductive Work Behavior

Way, Jason Donovan 01 January 2011 (has links)
The purpose of this study was to examine the effect that employee gender might have on performance ratings. Specifically, it was hypothesized that negative performance episodes, such as aggressive behavior, might have less of an effect on performance ratings for males than for females, because males are stereotyped as being more aggressive. Additional hypotheses examined how different types of negative performance affected perceptions that the employee was behaving according to their gender ideal, and whether people judged male and female aggressiveness differently. To this end, 134 undergraduate students participated in a 2 x 3 design experiment in which they read about a hypothetical server in a restaurant who had committed various negative behaviors at work. The results were, for the most part, not significant. The exception was that there were some slight group differences in how well the employees in the various conditions fit their gender ideal.
252

Delta Encoding Based Methods to Reduce the Size of Smartphone Application Updates

Samteladze, Nikolai 01 January 2013 (has links)
In 2012 the two biggest smartphone application markets - the Google Play store and the Apple App Store - each had close to 700 thousand applications, with approximately 2 billion downloads every month. The introduction of new features and the correction of bugs and security vulnerabilities make it usual for mobile application developers to release a new version of an application every month. Combined with the great popularity of smartphones, this leads to approximately 400 PB of annual traffic generated by app updates in U.S. wireless networks alone. Being partially transmitted through cellular networks, mobile application update traffic accounts for up to 20% of the annual cellular traffic in the U.S. This thesis presents delta encoding based techniques that significantly reduce update traffic by transferring only the changes (or patches) between two versions of an application. Such network bandwidth reduction enables savings for smartphone users, mobile operators, and the data centers that serve app updates. Two Android application update methods - called DELTA and DELTA++ - were developed, implemented, and evaluated. Both methods use delta encoding to transfer only the changes between application versions. DELTA++ improves on DELTA by exploiting the internal structure of APK packages, which are used to distribute Android applications. An APK file can be seen as a compressed archive of all the files contained in the application. The DELTA++ algorithm unpackages the APK and computes differences between the decompressed application files, which allows it to produce much smaller patches. Our experimental results show that DELTA++ reduces app update size by 77% on average. DELTA++ patches are half the size of those produced by the Google Smart Application Update method, which is currently used in the Google Play store. This reduction has a trade-off: the increased complexity of the generated patches makes the patch deployment process more involved. Consequently, more time has to be spent applying the received patch on the smartphone. Such a delay can be considered acceptable because application updating is a delay-tolerant process and smartphone users do not need an update immediately after its release. In order to estimate how much can be saved with DELTA++, a study of Android smartphone users was conducted. The results show that if DELTA++ is used in Google Play instead of the Google Smart Application Update method, then 32 PB, or 1.7% of annual cellular traffic in the U.S., can be saved every year. The Apple App Store currently does not use any method based on delta encoding to reduce application update traffic. Using methods similar to DELTA++ in the App Store could further increase the savings to up to 12% of yearly cellular traffic in the U.S., which equals more than $2 billion in cost savings a year.
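The core idea is easy to illustrate. Below is a minimal, hedged sketch of delta encoding for an app update, assuming a byte-level diff over a decompressed file (the patch format, file contents, and sizes are illustrative, not the DELTA/DELTA++ formats from the thesis): only the regions that changed between versions are transmitted, and the device reconstructs the new version from the old file plus the patch.

```python
# Illustrative sketch of delta encoding for app updates (not the thesis's
# DELTA/DELTA++ implementation): ship only the differences between the old
# and new versions of a (decompressed) file, then rebuild the new version
# on the device from the old file plus the patch.
from difflib import SequenceMatcher

def make_patch(old: bytes, new: bytes) -> list:
    """Return a list of patch ops; 'copy' ranges reference bytes already on the device."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))        # reuse bytes the phone already has
        else:
            ops.append(("insert", new[j1:j2]))  # transmit only the new bytes
    return ops

def apply_patch(old: bytes, ops: list) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            out += old[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

# Hypothetical old/new application contents for illustration only.
old_v = b"app-code v1.0 " * 100 + b"settings=A"
new_v = b"app-code v1.0 " * 100 + b"settings=B plus a small new feature"
patch = make_patch(old_v, new_v)
assert apply_patch(old_v, patch) == new_v
sent = sum(len(op[1]) for op in patch if op[0] == "insert")
print(f"full update: {len(new_v)} B, patch payload: {sent} B")
```

In the same spirit, DELTA++'s gain comes from diffing the decompressed contents of the APK rather than the compressed archive, at the cost of more work on the device to apply the patch.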
253

Packet Coalescing and Server Substitution for Energy-Proportional Operation of Network Links and Data Servers

Mostowfi, Mehrgan 01 January 2013 (has links)
Electricity generation for Information and Communications Technology (ICT) contributes over 2% of the human-generated CO2 to the atmosphere. Energy costs are rapidly becoming the major operational expense for ICT and may soon dwarf capital expenses as software and hardware continue to drop in price. In this dissertation, three new approaches to achieving energy-proportional operation of network links and data servers are explored. Ethernet is the dominant wireline communications technology for Internet connectivity. IEEE 802.3az Energy Efficient Ethernet (EEE) describes a Low Power Idle (LPI) mechanism for allowing Ethernet links to sleep. A method of coalescing packets to consolidate link idle periods is investigated. It is shown that packet coalescing can result in almost fully energy-proportional behavior of an Ethernet link. Simulation is done at both the queuing and protocol levels for a range of traffic models and system configurations. Analytical modeling is used to gain deeper general insight into packet coalescing. The architecture of a hybrid web server based on two platforms - a low-power (ARM-based) and a high-power (Pentium-based) - can be used to achieve step-wise energy-proportional operation and maintain headroom for peak loads. A new method based on Gratuitous ARP for switching between two mirrored platforms is developed, prototyped, and evaluated. Experimental results show that for up to 50 requests per minute, a hybrid server where the Master platform is a 2012 server-grade desktop PC can sleep for 50% of the time with no increase in response time. HTTP can be used for redirection in space; a new method for precise redirection in time is proposed and used to schedule requests to the high-power server in a hybrid server. The scheduling method is modeled as a single-server queue with vacations, where the vacation duration is fixed and the service distribution is directly a function of the request load. This approach is well suited for delay-tolerant applications such as application updates and file back-up. Energy-proportional operation is shown to be achievable in a prototype system. A first-order estimation with conservative assumptions on the adoption rate of the methods proposed and studied here shows that these methods can collectively enable energy savings on the order of hundreds of millions of dollars in the U.S. annually.
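As a rough illustration of the coalescing idea, the hedged sketch below simulates an EEE link at the queuing level. All parameter values (coalescing timer, packet budget, wake-up overhead, traffic rate) are assumptions for illustration, not figures from the dissertation: packets arriving while the link sleeps are buffered until a timer expires or a count threshold is reached, so many short idle gaps are consolidated into long low-power-idle periods.

```python
# Hedged queuing-level sketch of packet coalescing for Energy Efficient Ethernet.
# Packets arriving while the link sleeps are held until a coalescing timer
# expires or a count threshold is hit; the link then wakes, drains the burst,
# and returns to sleep, consolidating idle time into long LPI periods.
import random

def coalescing_sleep_fraction(rate_pps, timer_s, max_pkts, service_s,
                              wake_s=3e-6, n_pkts=200_000, seed=1):
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_pkts):
        t += rng.expovariate(rate_pps)          # Poisson packet arrivals
        arrivals.append(t)
    awake_time = 0.0
    i = 0
    while i < len(arrivals):
        burst_start = arrivals[i]                # first packet of a coalesced burst
        j = i
        while (j < len(arrivals) and j - i + 1 <= max_pkts
               and arrivals[j] - burst_start <= timer_s):
            j += 1
        drain = wake_s + (j - i) * service_s     # wake transition + serve the burst
        awake_time += drain
        i = j
    total = arrivals[-1] + drain
    return 1.0 - awake_time / total              # fraction of time in low-power idle

# Example: ~10% utilisation of a 1 Gb/s link carrying 1250-byte packets,
# with a 1 ms coalescing timer and a 128-packet budget (assumed values).
print(f"sleep fraction ~ {coalescing_sleep_fraction(10_000, 1e-3, 128, 10e-6):.2f}")
```

The sketch ignores packets that arrive while the burst is being drained, which is one of the simplifications a protocol-level simulation would remove.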
254

Self-Assembly Kinetics of Microscale Components: A Parametric Evaluation

Carballo, Jose Miguel 01 January 2015 (has links)
The goal of the present work is to develop and evaluate a parametric model of a basic microscale Self-Assembly (SA) interaction that provides scaling predictions of process rates as a function of key process variables. At the microscale, assembly by “grasp and release” is generally challenging. Recent research efforts have proposed adapting nanoscale self-assembly (SA) processes to the microscale. SA offers the potential for reduced equipment cost and increased throughput by harnessing attractive forces (most commonly, capillary) to spontaneously assemble components. However, there are challenges in implementing microscale SA as a commercial process, and the lack of design tools prevents simple process optimization. Previous efforts have each characterized a specific aspect of the SA process, but existing microscale SA models do not characterize the inter-component interactions: they simplify the outcome of SA interactions to an experimentally derived value specific to a particular configuration, instead of evaluating that outcome as a function of component-level parameters (such as speed, geometry, bonding energy, and direction). The present study parameterizes the outcome of interactions and evaluates the effect of key parameters, closing a gap in existing microscale SA models and adding a key piece towards a complete design tool for general microscale SA process modeling. First, this work proposes a simple model for the probability of assembly of basic SA interactions. A basic SA interaction is defined as the event in which a single part arrives on an assembly site. The model describes the probability of assembly as a function of kinetic energy, binding energy, orientation, and incidence angle for the component and the assembly site. Second, an experimental SA system was designed and implemented to create individual SA interactions while controlling process parameters independently. SA experiments measured the outcome of SA interactions while studying the independent effects of each parameter. As a first step towards a complete scaling model, experiments were performed to evaluate the effects of part geometry and part travel direction under low kinetic energy conditions. Experimental results show minimal dependence of assembly yield on the incidence angle of the parts, and significant effects induced by changes in part geometry. These results indicate that SA could be modeled as an energy-based process because path-dependence effects are small. Assembly probability is linearly related to orientation probability; the proportionality constant is based on the area fraction of the sites with an amplification factor, which accounts for the ability of capillary forces to align parts with only very small areas of contact when they have low kinetic energy. The results provide unprecedented insight into SA interactions. The present study is a key step towards completing a basic model of a general SA process, and its outcome can complement existing SA process models to create a complete design tool for microscale SA systems. In addition to the SA experiments, Monte Carlo simulations of experimental part-site interactions were conducted. These simulations confirmed that a major contributor to experimental variation is the stochastic nature of SA interactions combined with the limited sample size of the experiments. Furthermore, the simulations serve as a tool for defining an optimum sampling strategy to minimize the uncertainty in future SA experiments.
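A hedged Monte Carlo sketch of a single part-site interaction is given below; the assembly rule (the part must land over the site's bonding area and carry less kinetic energy than the capillary binding energy) and all numerical values are illustrative assumptions, not the fitted model from the dissertation. Repeating small-sample runs also shows the kind of experiment-to-experiment spread that the thesis attributes to stochastic variation and limited sample size.

```python
# Hedged Monte Carlo sketch of microscale self-assembly interactions.
# The assembly rule (land over the bonding area AND kinetic energy below the
# binding energy) and all parameter values are illustrative assumptions.
import random

def simulate_yield(n_trials, site_area_fraction, binding_energy_J,
                   mean_kinetic_energy_J, seed=0):
    rng = random.Random(seed)
    assembled = 0
    for _ in range(n_trials):
        oriented = rng.random() < site_area_fraction        # random landing position
        e_k = rng.expovariate(1.0 / mean_kinetic_energy_J)  # random arrival energy
        if oriented and e_k < binding_energy_J:
            assembled += 1
    return assembled / n_trials

# Repeat small-sample "experiments" to see the spread caused purely by the
# stochastic nature of the interactions and the limited sample size.
yields = [simulate_yield(50, 0.3, 5e-12, 1e-12, seed=s) for s in range(20)]
print(f"mean yield {sum(yields)/len(yields):.2f}, "
      f"min {min(yields):.2f}, max {max(yields):.2f} over 20 runs of 50 trials")
```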
255

Design and evaluation of new search paradigms and power management for peer-to-peer file sharing

Perera, Graciela 01 June 2007 (has links)
Current estimates are that more than nine million PCs in the U.S. are part of peer-to-peer (P2P) file sharing overlay networks on the Internet. These P2P hosts generate about 20% of the traffic on the Internet and consume about 7.8 TWh/yr, equal to $630 million per year. File search in a P2P network is based on a wasteful paradigm of broadcasting query messages. Reducing P2P overhead traffic to reduce bandwidth waste, and enabling power management to reduce electricity usage, are clearly of great interest. In this dissertation, two new search paradigms with reduced overhead traffic are investigated. The new Targeted Search method uses statistics from previous searches to target future searches. Targeted Search is shown to reduce query overhead traffic compared to the broadcast-based search used by Gnutella. The new Broadcast Updates with Local Look-up Search (BULLS) protocol enables new capabilities, including power management, and reduces overhead traffic by enabling a local look-up of shared files. BULLS hosts periodically broadcast changes in their list of shared files and build a table of the files shared by all other hosts. Power management in P2P networks is studied as an application of the minimum set cover problem. A reduction in overall energy consumption is achieved by powering down hosts whose shared files are all fully shared (or covered) by other hosts. A new set cover heuristic -- called the Random Map Out (RMO) algorithm -- is introduced and compared to the well-known Greedy heuristic. The algorithms are evaluated for minimum set cover size and computational complexity (number of comparisons). The RMO algorithm requires significantly fewer comparisons than Greedy and still achieves a set cover size within a few percent of that of Greedy. Additionally, the RMO algorithm can be distributed and independently executed by each host with reduced complexity per host, whereas the Greedy heuristic does not become less complex by being distributed. With RMO there is a non-zero probability of a given file being "lost" (not in the set cover). The probability of this event is modeled, and numerical results show that the probability of a file being lost is practically insignificant.
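The set-cover view of power management can be sketched as follows. The abstract does not spell out the RMO algorithm, so the version below is an assumption: hosts are visited in a random order and a host is "mapped out" (marked for powering down) only if every file it shares is still covered by the remaining hosts; the Greedy baseline is the standard set cover heuristic. Note that this sequential sketch never loses a file; the small loss probability discussed above arises when hosts run the decision independently in a distributed fashion.

```python
# Hedged sketch of set-cover-based power management for P2P file sharing.
# The "Random Map Out" variant below is an assumed interpretation of RMO,
# not the algorithm as specified in the dissertation.
import random

def greedy_cover(hosts: dict) -> set:
    """hosts: host_id -> set of shared file ids. Returns hosts kept powered on."""
    universe = set().union(*hosts.values())
    covered, keep = set(), set()
    while covered != universe:
        best = max(hosts, key=lambda h: len(hosts[h] - covered))
        keep.add(best)
        covered |= hosts[best]
    return keep

def rmo_cover(hosts: dict, seed=0) -> set:
    """Visit hosts in random order; power a host down only if its files stay covered."""
    keep = set(hosts)
    order = list(hosts)
    random.Random(seed).shuffle(order)
    for h in order:
        others = set().union(*(hosts[k] for k in keep if k != h)) if len(keep) > 1 else set()
        if hosts[h] <= others:     # all of h's files remain available elsewhere
            keep.discard(h)        # "map out": candidate for powering down
    return keep

rng = random.Random(42)
hosts = {f"h{i}": {rng.randrange(40) for _ in range(6)} for i in range(30)}
print("Greedy keeps", len(greedy_cover(hosts)), "of", len(hosts), "hosts;",
      "RMO keeps", len(rmo_cover(hosts)))
```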
256

Design and evaluation of new power management methods to reduce direct and induced energy use of the internet

Gunaratne, Priyanga Chamara 01 June 2006 (has links)
The amount of electricity consumed by devices connected to the Internet in the U.S. has rapidly increased and now amounts to over 2% of total electricity used, which is about 74 TWh/yr, costing over $6 billion annually. This energy use can be categorized as direct and induced. Much of this energy powers idle links, switches, and network-connected hosts and is thus wasted. This dissertation contains the first-ever investigation into the energy efficiency of Ethernet networks. A method for matching Ethernet link data rate with link utilization, called Adaptive Link Rate (ALR), is designed and evaluated. ALR consists of a mechanism to change the link data rate and a policy to determine when to change the data rate. The focus of this dissertation is on the analysis and simulation evaluation of two ALR policies. The simplest ALR policy uses output buffer thresholds to determine when to change the data rate. This policy is modeled using a Markov chain; a specific challenge was modeling a state-dependent service rate queue with rate transitions only at service completion. This policy was shown to be unstable in some cases, and an improved policy based on explicit utilization measurement was investigated. This more complex policy was evaluated using simulation. A synthetic traffic generator was developed to create realistic synthetic network traffic traces for the simulation evaluation. Finally, an improved method for detecting long idle periods using quantile estimation was investigated. Characterization of network traffic showed that proxying by a low-power device for a high-power device is feasible. A prototype proxy for a Web server was developed and studied. To maintain TCP connections during a host's sleep periods, a new split TCP connection method was designed. The split connection method was prototyped and shown to be invisible to a telnet session. This research has contributed to the formation of an IEEE 802.3 Energy Efficient Ethernet study group. It is thus very likely that ALR will become a standard and will achieve industry implementation and widespread deployment. This will result in energy savings of hundreds of millions of dollars per year in the U.S. alone.
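The simplest ALR policy can be sketched as a dual-threshold rule. The threshold values, rates, traffic pattern, and the fluid-style buffer update below are illustrative assumptions, not the Markov model analyzed in the dissertation: the link falls back to the low rate when the output buffer drains below a low threshold and returns to the high rate when the buffer grows past a high threshold.

```python
# Hedged sketch of a dual-threshold Adaptive Link Rate (ALR) policy.
# All numbers and the simple fluid buffer update are illustrative assumptions.
import random

def alr_step(buffer_bytes, arrival_bytes, rate_bps, low_rate_bps, high_rate_bps,
             q_low, q_high, dt):
    """Advance the link one time step; return (new_buffer, new_rate)."""
    served = rate_bps / 8 * dt
    buffer_bytes = max(0.0, buffer_bytes + arrival_bytes - served)
    if rate_bps == high_rate_bps and buffer_bytes < q_low:
        rate_bps = low_rate_bps          # link nearly idle: drop rate to save energy
    elif rate_bps == low_rate_bps and buffer_bytes > q_high:
        rate_bps = high_rate_bps         # queue building up: restore full capacity
    return buffer_bytes, rate_bps

rng = random.Random(7)
buf, rate, low_steps = 0.0, 1_000_000_000, 0
for _ in range(100_000):
    # Bursty arrivals: occasionally a burst of up to 11 full-size frames (assumed).
    burst = 1500 * rng.randrange(0, 12) if rng.random() < 0.05 else 0
    buf, rate = alr_step(buf, burst, rate, 100_000_000, 1_000_000_000,
                         q_low=3_000, q_high=10_000, dt=1e-4)
    low_steps += rate == 100_000_000
print(f"time spent at 100 Mb/s: {low_steps / 100_000:.0%}")
```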
257

Towards a Low Latency Internet: Understanding and Solutions

Rajiullah, Mohammad January 2015 (has links)
Networking research and development have historically focused on increasing network throughput and path resource utilization, which particularly helped bulk applications such as file transfer and video streaming. Recent over-provisioning in the core of the Internet has facilitated the use of interactive applications like interactive web browsing, audio/video conferencing, multi-player online gaming and financial trading applications. Although bulk applications rely on transferring data as fast as the network permits, interactive applications consume rather little bandwidth, depending instead on low latency. Recently, there has been increasing interest in reducing latency in networking research, as the responsiveness of interactive applications directly influences the quality of experience. To appreciate the significance of latency-sensitive applications for today's Internet, we need to understand their traffic pattern and quantify their prevalence. In this thesis, we quantify the proportion of potentially latency-sensitive traffic and its development over time. Next, we show that the flow start-up mechanism in the Internet is a major source of latency for a growing proportion of traffic, as network links get faster. The loss recovery mechanism in the transport protocol is another major source of latency. To improve the performance of latency-sensitive applications, we propose and evaluate several modifications to TCP. We also investigate the possibility of prioritization at the transport layer to improve loss recovery. The idea is to trade reliability for timeliness. We particularly examine the applicability of PR-SCTP with a focus on event logging. In our evaluation, the performance of PR-SCTP is largely influenced by small messages. We analyze the inefficiency in detail and propose several solutions. We particularly implement and evaluate one solution that utilizes the Non-Renegable Selective Acknowledgments (NR-SACKs) mechanism, which has been proposed for standardization in the IETF. According to the results, PR-SCTP with NR-SACKs significantly improves application performance in terms of low latency as compared to SCTP and TCP. / Interactive applications such as web browsing, audio/video conferencing, multi-player online gaming and financial trading applications do not benefit (much) from more bandwidth. Instead, they depend on low latency. Latency is a key determinant of user experience. An increasing concern for reducing latency is therefore currently being observed among the networking research community and industry. In this thesis, we quantify the proportion of potentially latency-sensitive traffic and its development over time. Next, we show that the flow start-up mechanism in the Internet is a major source of latency for a growing proportion of traffic, as network links get faster. The loss recovery mechanism in the transport protocol is another major source of latency. To improve the performance of latency-sensitive applications, we propose and evaluate several modifications to TCP. We also investigate the possibility of prioritization at the transport layer to improve loss recovery. The idea is to trade reliability for timeliness. We particularly examine the applicability of PR-SCTP with a focus on event logging. In our evaluation, the performance of PR-SCTP is largely influenced by small messages. We analyze the inefficiency in detail and propose several solutions. We particularly implement and evaluate one solution that utilizes the Non-Renegable Selective Acknowledgments (NR-SACKs) mechanism, which has been proposed for standardization in the IETF. According to the results, PR-SCTP with NR-SACKs significantly improves application performance in terms of low latency as compared to SCTP and TCP.
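The "trade reliability for timeliness" idea can be sketched conceptually; the class below is a simplified model inspired by PR-SCTP's timed reliability, not the protocol machinery itself, and the lifetime value and method names are illustrative. Each message carries a lifetime, and when loss recovery kicks in, messages whose lifetime has expired are abandoned rather than retransmitted, so fresh messages (such as new log events) are not delayed behind stale ones.

```python
# Hedged conceptual sketch of partially reliable delivery: abandon expired
# messages at loss-recovery time instead of retransmitting them. This is an
# illustrative model, not an implementation of PR-SCTP.
import time
from collections import deque

class PartiallyReliableSender:
    def __init__(self, lifetime_s: float):
        self.lifetime = lifetime_s
        self.unacked = deque()            # (seq, enqueue_time, payload)
        self.seq = 0

    def send(self, payload: bytes) -> int:
        self.unacked.append((self.seq, time.monotonic(), payload))
        self.seq += 1
        return self.seq - 1

    def on_loss_detected(self):
        """Split outstanding messages into those worth retransmitting and those abandoned."""
        now = time.monotonic()
        retransmit, abandoned = [], []
        for seq, t0, _payload in self.unacked:
            if now - t0 > self.lifetime:
                abandoned.append(seq)     # receiver would be told to skip these
            else:
                retransmit.append(seq)
        keep = set(retransmit)
        self.unacked = deque(m for m in self.unacked if m[0] in keep)
        return retransmit, abandoned
```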
258

A performance map framework for maximizing soldier performance

McFarland, Kyle Alan 12 July 2011 (has links)
Soldiers in the United States Army operate under uniquely demanding conditions with increasingly high performance expectations. Modern missions, including counter-insurgency operations in Iraq and Afghanistan, are complex operations, and the Army expects this complexity to continue to increase. These conditions affect Soldier performance in combat. Despite spending billions of dollars to provide Soldiers with better equipment to meet the demands of the modern battlefield, the U.S. Army has dedicated comparatively few resources to measuring and improving individual Soldier performance in real time. As a result, the Army does not objectively measure a Soldier’s performance at any point in their active duty career. The objective of this report is to demonstrate the utility and feasibility of monitoring Soldier performance in real time by means of visual 3D performance maps supported by a Bayesian network model of Soldier performance. This work draws on techniques developed at the University of Texas’ Robotics Research Group for increasing performance in electro-mechanical systems. Humans and electro-mechanical systems are both complex and demonstrate non-linear performance trends which are often ignored by simplified analytical models. Therefore, applying empirical Bayesian models with visual presentation of data in 3D performance maps enables rapid understanding of the important performance parameters for a specific Soldier. The performance maps can easily portray areas of non-linear performance that should be avoided or exploited, while presenting levels of uncertainty regarding the assessments, thus empowering the individual to make informed decisions regarding control and allocation of resources. The present work demonstrates the utility of visual performance maps by structuring 19 relatively mature 3D performance maps based on published empirical research data and analytical models related to human performance. Based on a broad review of the literature, the present research evaluated 10 potential physiological indicators, termed biomarkers, that correlate with human responses to a select set of stressors, referred to as impact parameters. The 10 evaluated impact parameters affect various components of Soldier performance. The present research evaluated how well these relationships are documented in the existing literature with regard to 9 general Soldier performance measures. Identifying the research-supported relationships from biomarkers to impact parameters to Soldier performance measures resulted in a preliminary Bayesian Soldier Performance Model, from which it is possible to create 70 distinct 3D performance maps. Based on the quality of the relationships identified in the reviewed literature and a contemporary evaluation of existing sensor technology for the related biomarkers, the present research assessed 26 of the potential 70 performance maps as being achievable in the near term. Continuing development of the Soldier Performance Model (SPM) as proposed in this report has the potential to increase Soldier performance while simultaneously improving Soldier well-being, reducing risk of physical and mental injury, and reducing downstream treatment cost.
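To make the notion of a 3D performance map concrete, the hedged sketch below builds one as a grid of predicted performance (with an uncertainty band) over two impact parameters; the chosen parameters, the functional form, and all numbers are illustrative assumptions, not the report's Bayesian Soldier Performance Model.

```python
# Hedged sketch of a "3D performance map": predicted performance with an
# uncertainty band over a grid of two impact parameters. The parameters and
# functional form are illustrative assumptions only.
import math

def performance_estimate(hours_awake, heat_index_c):
    """Toy model: a 0-1 performance score that declines with fatigue and heat."""
    fatigue = math.exp(-max(0.0, hours_awake - 16) / 24)
    heat = 1.0 / (1.0 + math.exp((heat_index_c - 38) / 3))
    mean = 0.95 * fatigue * heat
    uncertainty = 0.05 + 0.10 * (1 - fatigue * heat)   # wider bands far from baseline
    return mean, uncertainty

# The two impact-parameter axes plus the predicted score form the 3D map.
grid = [(h, t, *performance_estimate(h, t))
        for h in range(8, 41, 8) for t in range(25, 46, 5)]
for h, t, mean, unc in grid[:5]:
    print(f"{h:2d} h awake, heat index {t} C -> score {mean:.2f} +/- {unc:.2f}")
```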
259

Numerical and statistical approaches for model checking of stochastic processes

Djafri, Hilal 19 June 2012 (has links) (PDF)
We propose in this thesis several contributions related to the quantitative verification of systems. This discipline aims to evaluate functional and performance properties of a system. Such verification requires two ingredients: a formal model to represent the system and a temporal logic to express the desired property. The evaluation is then done with a statistical or numerical method. The spatial complexity of numerical methods, which is proportional to the size of the state space of the model, makes them impractical when the state space is very large. The method of stochastic comparison with censored Markov chains is one of the methods that reduce memory requirements by restricting the analysis to a subset of the states of the original Markov chain. In this thesis we provide new bounds that depend on the available information about the chain. We introduce a new quantitative temporal logic named Hybrid Automata Stochastic Logic (HASL) for the verification of discrete event stochastic processes (DESP). HASL employs Linear Hybrid Automata (LHA) to select prefixes of relevant execution paths of a DESP. LHA allow rather elaborate information to be collected on the fly during path selection, providing the user with a powerful means of expressing sophisticated measures. In essence, HASL provides a unifying verification framework in which temporal reasoning is naturally blended with elaborate reward-based analysis. We have also developed COSMOS, a tool that implements statistical verification of HASL formulas over stochastic Petri nets. Flexible manufacturing systems (FMS) have often been modeled with Petri nets; however, the modeler must have a good knowledge of this formalism. To facilitate such modeling, we propose an application-oriented compositional modeling methodology that does not require any knowledge of Petri nets on the part of the modeler.
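Statistical verification of the kind COSMOS performs can be sketched as Monte Carlo estimation over simulated paths. The model and property below (a simple birth-death chain and a level-crossing property) are illustrative assumptions, not a HASL formula or a COSMOS model: each simulated path yields a Boolean verdict, and the satisfaction probability is estimated with a confidence interval.

```python
# Hedged sketch of statistical model checking: estimate the probability that
# a random path of a stochastic model satisfies a property, with a confidence
# interval. The birth-death model and the property "reach level K by time T"
# are illustrative assumptions.
import math
import random

def path_satisfies(lam, mu, K, T, rng):
    """Simulate one path of a birth-death chain; True if level K is reached by time T."""
    t, n = 0.0, 0
    while t < T:
        rate = lam + (mu if n > 0 else 0.0)
        t += rng.expovariate(rate)
        if t >= T:
            return False
        if rng.random() < lam / rate:
            n += 1
            if n >= K:
                return True
        else:
            n -= 1
    return False

def estimate(lam, mu, K, T, n_paths=20_000, seed=3):
    rng = random.Random(seed)
    hits = sum(path_satisfies(lam, mu, K, T, rng) for _ in range(n_paths))
    p = hits / n_paths
    half = 1.96 * math.sqrt(p * (1 - p) / n_paths)   # 95% normal-approximation interval
    return p, half

p, half = estimate(lam=0.8, mu=1.0, K=10, T=50.0)
print(f"P(reach level 10 by t=50) ~ {p:.3f} +/- {half:.3f}")
```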
260

An investigation into the school and classroom factors that contribute to learners' performing poorly in Grade 4 in a primary school in KwaZulu-Natal.

Khoza, Ntombizonke Irene. January 2007 (has links)
This study was undertaken to investigate the school and classroom factors that contribute / Thesis (M.Ed.) - University of KwaZulu-Natal, Pietermaritzburg, 2007.
