• About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world.
51

An empirical approach to automated performance management for elastic n-tier applications in computing clouds

Malkowski, Simon J. 03 April 2012 (has links)
Achieving a high degree of efficiency is non-trivial when managing the performance of large web-facing applications such as e-commerce websites and social networks. While computing clouds have been touted as a good solution for elastic applications, many significant technological challenges still have to be addressed in order to leverage the full potential of this new computing paradigm. In this dissertation I argue that the automation of elastic n-tier application performance management in computing clouds presents novel challenges to classical system performance management methodology that can be successfully addressed through a systematic empirical approach. I present strong evidence in support of my thesis in a framework of three incremental building blocks: Experimental Analysis of Elastic System Scalability and Consolidation, Modeling and Detection of Non-trivial Performance Phenomena in Elastic Systems, and Automated Control and Configuration Planning of Elastic Systems. More concretely, I first provide a proof of concept for the feasibility of large-scale experimental database system performance analyses, and illustrate several complex performance phenomena based on the gathered scalability and consolidation data. Second, I extend these initial results to a proof of concept for automating bottleneck detection based on statistical analysis and an abstract definition of multi-bottlenecks. Third, I build a performance control system that manages elastic n-tier applications efficiently with respect to complex performance phenomena such as multi-bottlenecks. This control system provides a proof of concept for automated online performance management based on empirical data.
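The abstract's notion of multi-bottlenecks (saturation that alternates among resources rather than residing in any single tier) can be illustrated with a small sketch. This is an editor's illustration, not the dissertation's code: the 95% saturation threshold, the 80% coverage rule, and the window data are all invented for the example.

```python
# Classify each monitoring window as saturated per resource, then check
# whether saturation alternates among resources (a multi-bottleneck)
# rather than residing in a single one. Thresholds are illustrative.

SATURATION = 0.95  # utilization above which a window counts as saturated

def saturated_windows(util):
    """Return the set of window indices where a resource is saturated."""
    return {i for i, u in enumerate(util) if u >= SATURATION}

def classify(utilizations):
    """utilizations: dict resource -> list of per-window utilization in [0, 1].
    Returns ('single', resource), ('multi', [resources]), or ('none', None)."""
    n = len(next(iter(utilizations.values())))
    sat = {r: saturated_windows(u) for r, u in utilizations.items()}
    hot = {r: s for r, s in sat.items() if s}
    if not hot:
        return ('none', None)
    # A single bottleneck saturates most windows on its own.
    for r, s in hot.items():
        if len(s) >= 0.8 * n:
            return ('single', r)
    # Multi-bottleneck: no resource dominates, but together the
    # saturated resources cover most of the observation period.
    covered = set().union(*hot.values())
    if len(hot) > 1 and len(covered) >= 0.8 * n:
        return ('multi', sorted(hot))
    return ('none', None)

cpu_db  = [0.99, 0.97, 0.30, 0.25, 0.98, 0.20, 0.96, 0.31, 0.99, 0.28]
cpu_app = [0.35, 0.30, 0.99, 0.98, 0.33, 0.97, 0.30, 0.99, 0.25, 0.96]
kind, where = classify({'db_cpu': cpu_db, 'app_cpu': cpu_app})
print(kind, where)  # alternating saturation -> a multi-bottleneck
```

Neither tier is saturated for even half the windows on its own, yet some tier is saturated in every window, which is exactly the pattern that average utilization numbers hide.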
52

Iron content of meat - determining the iron content in the meat of various animal species

Westphal, Karsten, Klose, Ralf, Golze, Manfred 08 December 2009 (has links) (PDF)
The report presents the results of analyses of meat for iron content. 308 samples of pork and about 300 meat samples from cattle, bison, aurochs, buffalo, sheep, goat, rabbit, wild boar, roe deer, red deer, and pheasant were examined. The Saxon results confirm studies from other German states and document the sharp decline in the iron content of pork, which averaged 4.1 mg/kg fresh matter (FM); 30 years ago, the iron content was still 18 mg/kg - 25 mg/kg. As expected, the meat of species with so-called red meat, such as cattle, sheep, buffalo, bison, aurochs, roe deer, and red deer, shows a high iron content (17 mg/kg FM - 33 mg/kg FM). Color measurements showed that the iron content depends on the lightness and the redness of the meat: the darker or the more intense the red tone of the meat, the higher the iron content.
53

SOQPSK with LDPC: Spending Bandwidth to Buy Link Margin

Hill, Terry, Uetrecht, Jim October 2013 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Over the past decade, SOQPSK has been widely adopted by the flight test community, and low-density parity-check (LDPC) codes are now in widespread use in many applications. This paper defines the waveform and presents the bit error rate (BER) performance of SOQPSK coupled with a rate-2/3 LDPC code. The scheme described here expands the transmission bandwidth by approximately 56% (which is still 22% less than the legacy PCM/FM modulation) in exchange for improving link margin by over 10 dB at BER = 10⁻⁶.
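The quoted figures can be sanity-checked with back-of-the-envelope arithmetic. Only the 56% and 22% come from the paper; attributing the gap above 50% to frame/sync overhead, and taking legacy PCM/FM to occupy roughly twice the bandwidth of uncoded SOQPSK, are this sketch's assumptions.

```python
# A rate-2/3 code sends 3 channel bits for every 2 information bits,
# so coding alone expands bandwidth by 1/(2/3) - 1 = 50%. The paper
# quotes ~56% total; the difference is presumably sync/frame overhead
# (an assumption here). PCM/FM at ~2x the bandwidth of uncoded SOQPSK
# is also an assumption, but it reproduces the paper's "22% less".

code_rate = 2 / 3
expansion_from_code = 1 / code_rate - 1   # 0.5 -> 50% from coding alone
total_expansion = 0.56                    # figure quoted in the paper
overhead = (1 + total_expansion) / (1 + expansion_from_code) - 1

pcm_fm_bandwidth = 2.0                    # relative to uncoded SOQPSK (assumed)
vs_pcm_fm = (1 + total_expansion) / pcm_fm_bandwidth

print(f"coding alone: {expansion_from_code:.0%}")
print(f"implied extra overhead: {overhead:.1%}")
print(f"fraction of PCM/FM bandwidth: {vs_pcm_fm:.0%}")  # i.e. 22% less
```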
54

ARTM CPM Receiver/Demodulator Performance: An Update

Temple, Kip October 2013 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / Since the waveform was first developed by the Advanced Range Telemetry (ARTM) program and adopted by the Range Commanders Council Telemetry Group (RCC/TG), receiver/demodulators for the ARTM Continuous Phase Modulation (CPM) waveform have undergone continued development by several hardware vendors to improve performance in terms of phase noise, detection performance, and resynchronization time. Baseline results were first presented at the International Telemetry Conference (ITC) 2003, when hardware supporting this waveform (at the time called ARTM Tier II) first became available. This paper reexamines the state-of-the-art performance of ARTM CPM receiver/demodulators available in the marketplace today.
55

A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications

Wang, Qingyang 21 September 2015 (has links)
An essential requirement of cloud computing and data centers is to achieve good performance and high utilization simultaneously for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, transient bottlenecks can cause a long-tail response time distribution that spans 2 to 3 orders of magnitude, from tens of milliseconds to tens of seconds, due to queuing-effect propagation and amplification caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that cause overall performance degradation even when all system resources are far from saturated (e.g., below 50% utilization).
By combining fine-grained monitoring tools and a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
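The case for fine-grained monitoring can be seen in miniature: a resource that is fully saturated in short bursts looks far from saturated when the same trace is averaged over coarse windows. The trace below is synthetic and for illustration only, not data from the dissertation.

```python
# The same utilization trace viewed at two granularities: 50 ms
# samples reveal full saturation, while 1 s averages hide it.

def mean(xs):
    return sum(xs) / len(xs)

def windows(trace, size):
    """Average the trace over non-overlapping windows of `size` samples."""
    return [mean(trace[i:i + size]) for i in range(0, len(trace), size)]

# 50 ms samples: a 200 ms burst of 100% CPU inside each otherwise idle second
trace = ([1.0] * 4 + [0.1] * 16) * 5   # 5 seconds of samples

fine = windows(trace, 1)     # 50 ms view: bursts visible
coarse = windows(trace, 20)  # 1 s view: bursts averaged away

print(max(fine), round(max(coarse), 2))  # 1.0 vs 0.28
```

At the coarse granularity the machine never appears more than 28% utilized, even though it is fully saturated for 20% of every second; this is precisely how a transient bottleneck hides below the 50% average utilization mentioned above.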
56

Check, Connect, and Expect in a Self-Contained Setting for Elementary Students with Emotional and Behavioral Disorders

McDaniel, Sara C, Houchins, David E, Jolivette, Kristine, Steed, Elizabeth, Gagne, Phil, Henrich, Chris 11 August 2011 (has links)
Check, Connect, Expect (CCE) is a secondary-tier behavioral intervention for at-risk students who require targeted behavioral support in addition to school-wide positive behavioral interventions and supports. A full-time coach in the CCE intervention provided behavioral supports, including daily check-in and check-out procedures as well as targeted social skills instruction. This study extended CCE to a self-contained elementary school for students with emotional and behavioral disorders. Twenty-two students participated in the 17-week study, which involved a four-week baseline phase followed by a 13-week intervention phase. The following research questions were addressed: (a) How did CCE affect student behavior? (b) How did CCE affect student weekly academic engagement? (c) How did CCE affect student weekly math calculation and oral reading fluency growth? (d) How did severity of behavior predict student response to CCE? (e) How did the function maintaining the behavior predict student response to CCE? (f) How did relationship strength with the coach predict student response to CCE? (g) How socially valid was CCE for teachers, paraprofessionals, and students? Two growth curve models were used to analyze the academic and behavioral data. Overall, students displayed significant behavioral growth during the intervention phase and positive growth in the areas of academic engagement and achievement. Severity of behavior, behavioral function, and relationship strength were not significant predictors of student response to the CCE intervention. Future directions, limitations, and implications for practice are discussed.
57

Transforming Medical Imaging Applications into Collaborative PACS-based Telemedical Systems

Maani, Rouzbeh 13 October 2010 (has links)
Many medical imaging applications have been developed, but many of them do not support collaboration and are not remotely accessible (i.e., they do not support telemedicine). Medical imaging applications are not practical for use in clinical workflows unless they can communicate with the Picture Archiving and Communication System (PACS). This thesis presents an approach based on a three-tier architecture and provides several components for transforming medical imaging applications into collaborative, PACS-based telemedical systems. A novel method is presented to support PACS connectivity: it uses the Digital Imaging and Communications in Medicine (DICOM) protocol and reduces transmission time through a combination of parallelism and compression. Experimental results show up to 1.63× speedup over Local Area Networks (LANs) and up to 16.34× speedup over Wide Area Networks (WANs) compared to the current method of medical data transmission.
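The parallelism-plus-compression idea can be sketched in a few lines. This is a hypothetical illustration, not the thesis's implementation: the chunk size and worker count are invented, and the sketch only compresses and measures; the real system speaks the DICOM protocol, which this does not.

```python
# Split a bulk payload into chunks, compress the chunks in parallel
# threads (zlib releases the GIL during compression, so threads give
# real concurrency), and measure the size reduction.

import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024  # illustrative chunk size

def compress_parallel(payload: bytes, workers: int = 4) -> list:
    """Compress `payload` chunk-by-chunk using a thread pool."""
    chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: zlib.compress(c, 6), chunks))

payload = b"\x00\x01" * (512 * 1024)   # 1 MiB stand-in for pixel data
compressed = compress_parallel(payload)
ratio = sum(map(len, compressed)) / len(payload)
print(f"compressed to {ratio:.1%} of original size")
```

Per-chunk compression is what makes the two techniques compose: each chunk can be compressed, sent, and decompressed independently, so compression latency overlaps with transmission on the other connections.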
58

An automated approach to create, manage and analyze large-scale experiments for elastic n-tier applications in clouds

Jayasinghe, Indika D. 20 September 2013 (has links)
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in a public cloud, because many variables, such as virtualization, can impact the application's performance. By collecting significant performance data through experimental study, the cloud's complexity, particularly as it relates to performance, can be revealed. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution, and data processing. In spite of these complexities, we argue that a promising approach to addressing these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments. Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates the process of reliable application testing. In our approach, we have automated three key activities associated with the experiment measurement process: create, manage, and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource-monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena.
In our approach, a user provides the experiment configuration file, so at the end, the user merely receives the results while the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document that is the intermediate representation, and XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has enabled us to identify non-trivial performance phenomena that would not have been possible otherwise.
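The compiler-style architecture described above can be sketched with standard-library tools. This is an editor's illustration of the idea, not the framework's code: the element names, the naming scheme, and the emitted script format are all invented, and `xml.etree.ElementTree` stands in for the XSLT the abstract actually names.

```python
# Serial transformative stages, each rewriting an XML intermediate
# representation; a final stage emits code (here, a deployment script).

import xml.etree.ElementTree as ET

CONFIG = """
<experiment name="demo-scaleout">
  <node role="web"/>
  <node role="app" count="2"/>
  <node role="db"/>
</experiment>
"""

def expand_counts(root):
    """Stage 1: replicate <node count="N"/> into N concrete nodes."""
    for node in list(root.findall('node')):
        n = int(node.get('count', '1'))
        if n > 1:
            idx = list(root).index(node)
            root.remove(node)
            for i in range(n):
                root.insert(idx + i, ET.Element('node', role=node.get('role')))
    return root

def assign_hosts(root):
    """Stage 2: attach a hostname to every node (naming scheme invented)."""
    for i, node in enumerate(root.findall('node')):
        node.set('host', f"{node.get('role')}-{i}.cloud.local")
    return root

def emit_script(root):
    """Stage 3: generate a deployment script from the enriched IR."""
    return "\n".join(f"deploy --role {n.get('role')} --host {n.get('host')}"
                     for n in root.findall('node'))

ir = ET.fromstring(CONFIG)
for stage in (expand_counts, assign_hosts):
    ir = stage(ir)
print(emit_script(ir))
```

Each stage consumes and produces the same XML representation, so stages can be added, reordered, or tested in isolation, which is the property the abstract's compiler analogy is after.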
59

IMPROVING FARM MANAGEMENT DECISIONS BY ANALYZING PRODUCTION EXPENDITURE ALLOCATIONS AND FARM PERFORMANCE STANDING

Osborne, William A 01 January 2013 (has links)
This study examines the potential effects of categorical increases in production expenditures on farm income performance according to farm standing. The objective is to expose differences in anticipated net farm income returns from production expenditure investments and the optimal expense allocation strategy for each performance level. Studying farm performance through segregation, using a two-tier analysis and quantile regression, acknowledges the possibility that managerial strategy can differ based on managerial ability. The outcomes are useful to farm managers because they offer more prescriptive results and interpretations than other farm performance studies. The findings show that, as managerial proficiency increases, so does a manager's ability to extract higher returns from additional expenditures in certain input categories. Additionally, better managers are able to produce higher returns from more investment sources than their lower-performing peers. Overall, the results and interpretations point to the importance of farm management ability as the key input for improving farm performance.
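The quantile-regression machinery the study relies on can be shown in miniature through the pinball (quantile) loss, whose minimizer over a sample is the corresponding empirical quantile. The income figures below are illustrative, not the study's data.

```python
# Pinball loss: asymmetric absolute error. Minimizing it with q = 0.5
# recovers the median; with q = 0.9, the 90th percentile. Quantile
# regression generalizes this by fitting covariates under this loss,
# which is how a study can estimate returns for top vs. bottom farms.

def pinball(q, y, pred):
    """Average pinball (quantile) loss of a constant prediction `pred`."""
    return sum(q * (v - pred) if v >= pred else (1 - q) * (pred - v)
               for v in y) / len(y)

incomes = [12, 15, 22, 30, 41, 55, 70, 88, 95, 120]

def best_constant(q, y):
    """Scan the sample for the constant minimizing the pinball loss."""
    return min(y, key=lambda c: pinball(q, y, c))

print(best_constant(0.5, incomes))  # tracks the median
print(best_constant(0.9, incomes))  # tracks the 90th percentile
```

Fitting separate models at low and high quantiles is what lets the study report different expenditure-return relationships for lower- and higher-performing farms instead of a single average effect.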
60

HOW SCHOOL GENERATED FUNDING REINFORCES A TWO-TIER EDUCATION SYSTEM IN ONTARIO

Pizzoferrato, Sherell 23 May 2014 (has links)
This thesis examines School Generated Funding (SGF) within the Toronto District School Board (TDSB) to determine whether SGF is reinforcing a two-tier education system. Five sources of data were analyzed: the SGF record of the TDSB from 2008-2009, the preliminary school budget from 2010-2011, EQAO test results from 2008 to 2009, the Learning Opportunity Index (LOI) from 2009, and three socio-economic status factors (income, education, and occupation) from the Toronto Wards Profiles. Using the SGF record, twenty "green" schools (those that raised the most SGF, amounting to $4,043,837) were compared against twenty "red" schools (those that raised the least, amounting to $109,885) across the five sources of data. Two recommendations are made: that SGF be capped at a median amount throughout the TDSB, and that extra funding be placed in a fund for the TDSB to disburse to schools that need it.
