51

Benchmarking dairy information using interactive visualization for dairy farm decision making

Boda, Gayatri. January 2005 (has links)
The main goal of this research was to explore the use of benchmarking in the dairy industry. This includes descriptions of the various sectors in North America where benchmarking has been used successfully, either on a continuous basis or in research. Benchmarking methods that are currently used in the Quebec dairy industry are examined. An improvement to such methods is proposed through the use of visualization, coupled with interactivity, with a focus on adaptability and usage. The advantages of such an interactive tool are discussed in light of on-farm decision making, and a further use of visual slider applications is described for exploring parameters of known economic importance.
52

An automated approach to create, manage and analyze large- scale experiments for elastic n-tier application in clouds

Jayasinghe, Indika D. 20 September 2013 (has links)
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet, the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in public computing clouds, because many variables, such as virtualization, can impact the application's performance. By collecting significant performance data through experimental study, the cloud's complexity, particularly as it relates to performance, can be revealed. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution and data processing. In spite of these associated complexities, we argue that a promising approach for addressing these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments. Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates the process of reliable application testing. In our approach, we have automated three key activities associated with the experiment measurement process: create, manage and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. In our approach, a user provides the experiment configuration file, so at the end the user merely receives the results while the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document that serves as the intermediate representation, and XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has enabled us to identify non-trivial performance phenomena that would not have been possible to observe otherwise.
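
As a concrete illustration of the code-generation pipeline described in this abstract, the following is a minimal sketch, not the thesis framework's actual code: a hypothetical XML experiment configuration serves as the intermediate representation, and an XSLT stylesheet (applied here via Python's lxml package) emits deployment commands. All element names, attributes and command names are invented for illustration.

```python
# Minimal sketch of compiler-style code generation: XML config -> XSLT -> script text.
# Element/attribute names and the generated commands are hypothetical; requires lxml.
from lxml import etree

EXPERIMENT_XML = """
<experiment name="rubis-scaleout">
  <node role="web" count="2" image="ubuntu-20.04"/>
  <node role="db"  count="1" image="ubuntu-20.04"/>
  <workload users="500" duration="600"/>
</experiment>
"""

DEPLOY_XSLT = """
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/experiment">
    <xsl:for-each select="node">
      <xsl:text>provision --role </xsl:text><xsl:value-of select="@role"/>
      <xsl:text> --count </xsl:text><xsl:value-of select="@count"/>
      <xsl:text> --image </xsl:text><xsl:value-of select="@image"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
    <xsl:text>run-workload --users </xsl:text><xsl:value-of select="workload/@users"/>
    <xsl:text> --duration </xsl:text><xsl:value-of select="workload/@duration"/>
  </xsl:template>
</xsl:stylesheet>
"""

config = etree.fromstring(EXPERIMENT_XML)               # the XML intermediate representation
transform = etree.XSLT(etree.fromstring(DEPLOY_XSLT))   # one transformative stage
print(str(transform(config)))                           # generated deployment commands
```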
53

Numerical benchmarking of a coarse-mesh transport (COMET) method for medical physics applications

Blackburn, Megan Satterfield 02 July 2009 (has links)
Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology by the Computational Reactor and Medical Physics Group and has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local solutions. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For response function calculations, Legendre polynomial expansions are necessary in space and in the polar and azimuthal angles. An initial sensitivity study was done to determine the best expansion orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Three more clinically relevant problems were developed from patient CT scans. Different coarse-mesh sizes and incident energies were tested. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo calculations with EGSnrc. In most cases, the COMET solutions showed reasonably good agreement with the Monte Carlo reference solutions. It was found that better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Recommendations were made for future development of COMET and the numerical benchmarks.
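
The response-function idea described in this abstract can be illustrated with a heavily simplified one-dimensional sketch (not the COMET code itself): each coarse mesh is summarized by a small precomputed operator that maps incoming partial currents on its faces to outgoing currents and deposited energy, the coupled local problems are iterated to convergence, and the dose is then assembled from the converged currents. All numerical values below are hypothetical.

```python
# Toy 1-D response-function (response-matrix) sketch; all coefficients are invented.
import numpy as np

# Precomputed "response functions" for two mesh types: R maps incoming partial currents
# on the (left, right) faces to outgoing currents; d maps them to deposited energy.
RESPONSES = {
    "water": {"R": np.array([[0.05, 0.60], [0.60, 0.05]]), "d": np.array([0.30, 0.30])},
    "bone":  {"R": np.array([[0.10, 0.40], [0.40, 0.10]]), "d": np.array([0.45, 0.45])},
}

def solve(mesh_types, source_left=1.0, tol=1e-10, max_iter=500):
    """Fixed-point iteration on interface partial currents for a row of coarse meshes."""
    n = len(mesh_types)
    right_going = np.zeros(n + 1)   # current crossing interface i from left to right
    left_going = np.zeros(n + 1)    # current crossing interface i from right to left
    for _ in range(max_iter):
        prev = (right_going.copy(), left_going.copy())
        right_going[0], left_going[n] = source_left, 0.0    # beam on the left, vacuum on the right
        for i, m in enumerate(mesh_types):                  # local problems coupled through shared faces
            incoming = np.array([right_going[i], left_going[i + 1]])
            outgoing = RESPONSES[m]["R"] @ incoming
            left_going[i], right_going[i + 1] = outgoing    # reflected left, transmitted right
        if max(np.abs(right_going - prev[0]).max(), np.abs(left_going - prev[1]).max()) < tol:
            break
    # Energy deposited in each mesh, from its converged incoming currents.
    return [RESPONSES[m]["d"] @ np.array([right_going[i], left_going[i + 1]])
            for i, m in enumerate(mesh_types)]

print(solve(["water", "water", "bone", "water"]))
```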
54

Modelling construction durations for public housing projects in Hong Kong

Chan, Wai-ming, January 1998 (has links)
Thesis (Ph. D.)--University of Hong Kong, 1998. / Includes bibliographical references (leaves 320-341).
55

Benchmarking in the South African tool and die manufacturing industry

Malherbe, D.C. 03 1900 (has links)
Thesis (MScEng (Industrial Engineering))--University of Stellenbosch, 2007. / The supply of manufactured products depends on tool, die and mould (TDM) manufacturing. The TDM industry provides the machines, tools and equipment necessary to produce most manufactured components. The TDM industry is a high value-adding constituent in the supply of manufactured products, being at the heart of component manufacturing and forming the backbone of the manufacturing sector. Unfortunately, the South African TDM industry experienced a steady economic decline during the last decade, which had a negative effect on the domestic manufacturing industry. The South African government realised the evident need to restructure and develop the TDM industry, and this research forms part of government initiatives to increase the global competitiveness of the South African TDM industry. The South African TDM industry lacks the capacity to meet local demand. This study determines shortfalls and the need for improvement by comparing the South African industry against its global counterparts. A benchmarking methodology is developed to identify improvement plans for individual tool rooms and for the industry as a whole. Recommendations for the domestic industry are provided through conclusions drawn from the study. The benchmarking methodology can be applied to an industry or to an individual concern. A pilot implementation of the methodology was performed in three specific tool rooms. This thesis analyses the South African TDM industry in its entirety and provides recommendations to improve competitiveness.
56

An investigation to establish whether the implementation of a structured total quality management system would add value to the South African Brewery, East London depot

Herman, Dane January 2004 (has links)
No matter how receptive or unreceptive an individual or a company is to the arrival of the age of technology, this phenomenon of change cannot be overlooked in the competitive global village. Companies must respond and change if they wish to survive into the next century. The results and testing of the hypotheses in chapters four and five clearly indicate that there is a need for a structured Total Quality Management (TQM) system at the East London Depot of South African Breweries (SAB). The vision of the company states that it aims to be the “benchmark of South African industry and the brewing world”. The mission of the company states that it wishes to provide its consumers with the finest quality malt beverages, brewed and marketed by world-class people in a socially responsible and innovative manner. Two of the company's core values are customer service and consumer focus, and innovation and quality (http://Beernet). Taking the aforementioned as a guideline, it is evident that although there is a need for a structured TQM system at the East London Depot, there are key focus areas to concentrate on for the program to be successful. The main focus area will be to change the perception of the staff with regard to doing things right the first time. Customer service is very important and should be understood by all. A competent person should be tasked with the implementation of the system. This person should then manage the system and ensure that staff training and the maintenance of documented procedures are adhered to. A TQM committee must also be established in order to perform the necessary audits. As mentioned in chapter one, the aim of the depot management is to improve on its current national ranking, and a structured TQM program would make a significant contribution towards achieving that goal. With this in mind, this paper aims to investigate the feasibility of establishing a TQM system at the East London Depot of SAB.
57

Benchmarking a neural network forecaster against statistical measures

Herman, Hilde 16 September 2014 (has links)
M.Ing. (Mechanical Engineering) / The combination of non-linear signal processing and financial market forecasting is a relatively new field of research. This dissertation concerns the forecasting of shares quoted on the Johannesburg Stock Exchange using artificial neural networks, and does so by comparing neural network results with established statistical results. The rise or fall of the share price is predicted, as well as buy, sell and hold signals, and the results are compared with those of a time-series model and the Moving Average Convergence Divergence (MACD) indicator. The dissertation shows that artificial neural networks predicted the rise or fall of the share price with less error than the statistical models and yielded the highest profit when forecasting buy, sell and hold signals for a particular share.
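
For reference, the following is a minimal sketch of the MACD baseline mentioned in this abstract, using the conventional 12/26/9 exponential moving average parameters (the dissertation's exact configuration may differ) and synthetic price data.

```python
# MACD baseline sketch: buy/sell signals from MACD crossing its signal line.
# Prices are synthetic; requires numpy and pandas.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 300)))   # synthetic daily closing prices

ema_fast = prices.ewm(span=12, adjust=False).mean()
ema_slow = prices.ewm(span=26, adjust=False).mean()
macd = ema_fast - ema_slow                                    # MACD line
signal = macd.ewm(span=9, adjust=False).mean()                # signal line

crossed_up = (macd > signal) & (macd.shift(1) <= signal.shift(1))    # buy signals
crossed_down = (macd < signal) & (macd.shift(1) >= signal.shift(1))  # sell signals
print("first buy days:", list(prices[crossed_up].index)[:5])
print("first sell days:", list(prices[crossed_down].index)[:5])
```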
58

Perception of educators towards the implementation of an integrated quality management system in the White Hazy Circuit : Mpumalanga Province

Malepe, Jabulile Ivonne January 2017 (has links)
Thesis (M. Dev.) -- University of Limpopo, 2017. / The purpose of this study was to assess and reflect on the perceptions of educators towards the implementation of an Integrated Quality Management System (IQMS) in the White Hazy Circuit, Mpumalanga Province. The IQMS is a clear reaction to the autocratic mode of evaluation that operated during the Apartheid era and is a major shift from the old paradigm of external evaluators. The new paradigm calls for joint collaboration between schools, districts and the supervisory unit, with the main aim of enhancing the quality of education in South Africa. The researcher used a quantitative method for this study. In its quantitative approach, the study set out to describe, analyse and interpret the data to determine whether there were any negative perceptions among White Hazy educators about IQMS implementation. The reason for using this approach was to obtain detailed descriptions of the information required for the study; secondly, the results are based on an objective method that can be expressed in a specific management technology. Data collection was done using structured questionnaires (N = 97). Respondents at all post levels in the teaching profession participated. The participants were selected randomly to give equal participation to respondents from the 16 selected schools, and they came from both primary and secondary schools. The study highlighted that compliance, lack of training, shifting of responsibility to one another and insufficient time for development play a significant role in IQMS implementation. The findings revealed that the IQMS policy is implemented in schools; however, there is a lack of school-based teacher development at some schools and no consistency in the implementation, which affected teacher performance and learner performance. The results that arose from the study suggested the following recommendations: quarterly training to enhance service delivery, reviewing the current policy to cover trends of contemporary conditions in schools, and capacitating school management teams to become effective in IQMS implementation.
59

Community Benchmarks: An Analysis of Performance Measurements in Urban Planning Management

Daluddung, Susan Joan 01 January 2005 (has links)
New public management practices in the U.S. call for governmental accountability, performance measures and benchmarks. Community benchmarks research provides a basis for current information and further research for planners and educators in the urban planning profession. A benchmark is simply a standard for performance or a targeted level of service delivery to which the city aspires. Community benchmarks, as defined by the researcher, are tied to an adopted community plan. Community plans take many shapes, including the General or Comprehensive Plan, the city's budget document, or a variety of strategic planning documents. The intent of the study was to complete research and survey mid-size cities to determine common performance practices for urban planning management. The sample population was 381 cities selected from the National League of Cities, and a database was created. The intent was to create a composite of key quantitative variables strongly related to the benchmark cities program. Additional terminal research was conducted from 2000 to 2004 to supplement the survey results. Case studies of several selected cities were conducted in order to determine the application of community benchmarks.
