261

Study of the Skincalm filling process at Aspen Pharmacare applying some Six Sigma principles

Marx, Johannes January 2005 (has links)
Aspen Pharmacare is listed on the Johannesburg Securities Exchange South Africa (JSE) and is Africa’s largest pharmaceutical manufacturer. The company is a major supplier of branded pharmaceutical and healthcare products to the local and selected international markets. For decades, Aspen has manufactured a basket of affordable, quality and effective products for the ethical, generic, over-the-counter (OTC) and personal care markets. Aspen is also the leading supplier of generic medicines to the public sector, providing comprehensive coverage of the products on the Essential Drug List. Aspen continues to deliver on its commitment to social responsibility in respect of diseases such as HIV/AIDS, tuberculosis and malaria; in August 2003 Aspen developed Africa’s first generic anti-retroviral drug, namely Aspen-Stavudine. Aspen’s manufacturing facilities are based in Port Elizabeth (PE) and East London, and the company has recently completed an Oral Solid Dosage (OSD) manufacturing facility worth approximately R150 million in PE. The Group manufactures approximately 20 tons of product daily and in excess of 400 tons of solid dosage pharmaceuticals, which equates to more than 2 billion tablets. In addition, more than 3 million litres of liquid pharmaceuticals and over 200 tons of pharmaceutical creams and ointments are produced per year [1]. Aspen excels at delivering quality products and services, exceeding customer expectations and complying with international standards, in an environment that cultivates technical expertise and innovation. Following this philosophy through to the shop floor means that there are always initiatives in continuous production improvement; one of the improvement projects introduced is called Six Sigma. Ten members of staff, selected from different fields of expertise in the company, were trained in Six Sigma, and the knowledge gained from the two-week training course was applied to different areas in the factory using Six Sigma principles. This dissertation focuses on the study undertaken in one of these production areas, namely the filling process of the ointments and creams at the Aspen Port Elizabeth facility.
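A minimal sketch of the kind of process-capability arithmetic used when Six Sigma principles are applied to a filling operation; this is not taken from the dissertation, and the fill weights and specification limits below are hypothetical:

```python
# Illustrative only: process capability (Cp, Cpk) and approximate DPMO
# for a filling operation, assuming normally distributed fill weights.
import numpy as np
from scipy.stats import norm

def capability(fill_weights, lsl, usl):
    """Return Cp, Cpk and the expected defects per million opportunities."""
    mu, sigma = np.mean(fill_weights), np.std(fill_weights, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    # Expected fraction outside specification for a normal process
    p_defect = norm.cdf(lsl, mu, sigma) + norm.sf(usl, mu, sigma)
    return cp, cpk, p_defect * 1e6

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    weights = rng.normal(loc=50.2, scale=0.4, size=500)  # hypothetical fill weight, g
    print(capability(weights, lsl=49.0, usl=51.0))
```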
262

A flexible vehicle measurement system for modern automobile production

Lichtenberg, Thilo Unknown Date (has links)
To stay competitive and to be able to sell high-class products in modern automobile production, it is absolutely necessary to check the quality standard of a manufactured vehicle. The quality standard of a completely assembled car is normally checked through a complex measurement strategy while the vehicle is in actual series production, which is an immensely time- and money-consuming process. Furthermore, measurement systems are fixed in a certain position, so flexible measurement of a produced vehicle is very difficult to realize. This project presents a measurement system, compliant with all quality guidelines, with which it is possible to measure any mounted component of a completely assembled vehicle wherever and whenever required. For the first time it is possible to measure the quality and dimensional standard of a vehicle from the first body-in-white prototype assembled in production up to the completely assembled vehicle delivered to the customer. The result of this project is a measurement system that consists of a hardware tool and a specially programmed software add-on. The complete system can easily be carried to the vehicle that must be analysed, which offers many advantages. Furthermore, it is possible to use this technology for the whole Volkswagen Company, including the other brands such as Audi, Skoda and Seat.
263

The components of a quality assurance program for smaller hospitals

Finnie, Carol Jean January 1985 (has links)
The components of a quality assurance program for smaller hospitals in British Columbia have been defined. These components were defined by comparing the normative standards identified in the literature with the results of a survey of administrators. Sixteen administrators of predominantly acute-care, accredited, 20-50-bed hospitals in B.C. were surveyed; twelve of these administrators were surveyed twice. A new requirement for accreditation, called the Quality Assurance Standard (1985), was introduced by the Canadian Council on Hospital Accreditation (C.C.H.A.). This Standard required that quality assurance (QA) programs be established in every department or service in the hospital, but it does not give a clear description of the QA functions for each individual department in a smaller hospital. An important and relevant list of specific functions for a QA program was identified at various C.C.H.A. seminars held across Canada in late 1983 and early 1984. The literature review indicated that a number of controversial issues affect the implementation of the QA Standard; in spite of the many methodological problems associated with quality measurement and assurance, most hospitals will adopt a quality assurance model. The first survey asked the administrators to define the purpose, goals and objectives of a QA program and to determine the QA functions for four areas: hospital board, dietary, nursing and pharmacy. Administrators were also asked to identify who in the hospital is primarily responsible for the overall QA program and for the QA program in the four areas; the problems and benefits encountered when trying to implement a QA program; and their opinion of the new QA requirements for accreditation. The second survey asked the administrators to assign a priority to the functions identified in Round I. The empirical findings were then compared with the normative standards and, with some exceptions, were consistent with them. The empirical findings show that there are problems related to implementing a QA program, but at the same time there are a number of benefits related to the program. The priority ratings of the functions indicated areas of high or low importance to the administrator; these ratings are likely to be useful for planning when alternatives must be considered during this time of fiscal restraint. Government policies, along with the strong voluntary support of accreditation programs, make it vitally important that suitable models for implementing QA are developed. The Doll model is suggested as a basis for implementing QA, and further areas for research are presented. / Medicine, Faculty of / Population and Public Health (SPPH), School of / Graduate
264

Topographic characterization for DEM error modelling

Xiao, Yanni 05 1900 (has links)
Digital Elevation Models (DEMs) have been in use for more than three decades and have become a major component of geographic information processing. The intensive use of DEMs has given rise to many accuracy investigations. The accuracy estimate is usually given in the form of a global measure such as root-mean-square error (RMSE), mostly from a producer's point of view. Seldom are the errors described in terms of their spatial distribution or how the resolution of the DEM interacts with the variability of terrain. There is a wide range of topographic variation present in different terrain surfaces. Thus, in defining the accuracy of a DEM, one needs ultimately to know the global and local characteristics of the terrain and how the resolution interacts with them. In this thesis, DEMs of various resolutions (i.e., 10 arc-minutes, 5 arc-minutes, 2 km, 1 km, and 50 m) in the study area (Prince George, British Columbia) were compared to each other and their mismatches were examined. Based on the preliminary test results, some observations were made regarding the relations among the spatial distribution of DEM errors, DEM resolution and the roughness of terrain. A hypothesis was proposed that knowledge of the landscape characteristics might provide some insights into the nature of the inherent error (or uncertainty) in a DEM. To test this statistically, the global characteristics of the study area surfaces were first examined by measures such as grain and those derived from spectral analysis, nested analysis of variance and fractal analysis of DEMs. Some important scale breaks were identified for each surface, and this information on the surface global characteristics was then used to guide the selection of the moving window sizes for the extraction of the local roughness measures. The spatial variation and complexity of the various study area surfaces were characterized by means of seven local geomorphometric parameters. The local measures were extracted from DEMs with different resolutions and using different moving window sizes. Multivariate cluster analysis was then used for automated terrain classification, in which relatively homogeneous terrain types at different scale levels were identified. Several different variable groups were used in the cluster analysis, and the different classification results were compared to each other and interpreted in relation to each roughness measure. Finally, the correlations between the DEM errors and each of the local roughness measures were examined, and the variation of DEM errors within the terrain clusters resulting from the multivariate classifications was statistically evaluated. The effectiveness of using different moving window sizes for the extraction of the local measures and the appropriateness of different variable groups for terrain classification were also evaluated. The major conclusion of this study is that knowledge of topographic characteristics does provide some insights into the nature of the inherent error (or uncertainty) in a DEM and can be useful for DEM error modelling. The measures of topographic complexity are related to the observed patterns of discrepancy between DEMs of differing resolution, but there are variations from case to case. Several patterns can be identified in terms of the relation between DEM errors and the roughness of terrain. First of all, the DEM errors (or elevation differences) do show certain consistent correlations with each of the various local roughness variables.
With most variables, the general pattern is that the higher the roughness measure, the more points with higher absolute elevation differences (i.e., a horn-shaped scatter of points indicating heteroscedasticity). Further statistical test results indicate that the various DEM errors in the study area do show significant variation between the clusters resulting from terrain classifications based on different variable groups and window sizes. Cluster analysis was considered successful in grouping the areas according to their overall roughness and useful in DEM error modelling. In general, the rougher the cluster, the larger the DEM error (measured with either the standard deviation of the elevation differences or the mean of the absolute elevation differences in each cluster). However, some of the total variation of the DEM errors could not be accounted for by the cluster structure derived from the multivariate classification; this could be attributed to random errors inherent in any of the DEMs and to errors introduced in the interpolation process. Another conclusion is that the multivariate approach to the classification of topographic surfaces for DEM error modelling is not necessarily more successful than using only a single roughness measure to characterize the overall roughness of terrain. When comparing the DEM error modelling results for surfaces with different global characteristics, the size of the moving window used in geomorphometric parameter extraction also has a certain impact on the modelling results. This shows that some understanding of the global characteristics of the surface is useful in selecting appropriate/optimal window sizes for the extraction of local measures for DEM error modelling. Finally, directions for further research are suggested. / Arts, Faculty of / Geography, Department of / Graduate
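As a rough illustration of the kind of analysis described above (this is not the author's code; the roughness measure, window size and synthetic grids are assumptions), local roughness can be computed as the standard deviation of elevation within a moving window and correlated with the elevation differences between two co-registered DEMs:

```python
# Illustrative sketch: moving-window roughness vs. DEM discrepancy.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import spearmanr

def local_roughness(dem, window=5):
    """Local roughness as the standard deviation of elevation inside a
    square moving window (one of several possible geomorphometric measures)."""
    mean = uniform_filter(dem, size=window)
    mean_sq = uniform_filter(dem * dem, size=window)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    return np.sqrt(var)

def error_vs_roughness(dem_fine, dem_coarse_resampled, window=5):
    """Spearman correlation between absolute elevation differences
    (used here as a proxy for DEM error) and local roughness."""
    diff = np.abs(dem_fine - dem_coarse_resampled)
    rough = local_roughness(dem_fine, window)
    rho, p = spearmanr(rough.ravel(), diff.ravel())
    return rho, p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fine = np.cumsum(rng.normal(size=(200, 200)), axis=0)    # synthetic terrain
    coarse = fine + rng.normal(scale=0.5, size=fine.shape)   # synthetic second DEM
    print(error_vs_roughness(fine, coarse, window=9))
```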
265

Service quality measurement for non-executive directors in public entities

Van Wyk, M.F. 12 September 2012 (has links)
D.Comm. / In commercial corporations shareholders, at least in theory, evaluate the performance of the boards they have appointed. Such evaluation is mainly based on the financial performance of the entity. Public (state-funded) entities have only the state as shareholder, and the performance of their boards is not evaluated by the taxpayers who ultimately pay the directors' fees. The term "public entity" refers to 20 corporations, with an annual turnover in excess of R55 billion, which are substantially tax-funded or are awarded a market monopoly in terms of legislation by parliament. Although these public entities are regularly criticised by the press, the academic literature reports neither an assessment of the quality of governance by their non-executive directors nor any instrument to use in such an assessment. The aim of this study was to measure the expectations and perceptions of executives in public entities about their non-executive boards' corporate governance service. This began with a literature analysis, firstly to define what constitutes "proper" corporate governance and secondly to find a recognised methodology to use in the development of an assessment instrument. It was found that two main corporate governance models are generally recognised, namely the United Kingdom model and the German model. The United Kingdom model advocates a single board comprising both executive and non-executive directors, while the German model has a supervisory board of non-executive directors overseeing the activities of an executive management board. It was further found that, contrary to King's (1994) recommendation to use unitary boards, the 20 listed public entities all had supervisory boards as advocated in the German model. A procedure advocated by Churchill (1979:65-72), in his paradigm for developing measures of marketing constructs, had proved very successful in the development, in the United States of America, of an instrument named SERVQUAL, which was applied in the general service arena where a paying client evaluates a service. Churchill's method was therefore used in this study to develop an instrument called ECGSI to measure the quality of governance of listed public entities' non-executive boards. The opinions of executives attending board meetings, e.g. to make presentations, were used both to develop ECGSI and to measure the quality of the non-executive directors' service.
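For illustration only, the gap-score logic underlying SERVQUAL-style instruments such as the ECGSI described above scores service quality as perception minus expectation for each questionnaire item; the respondent data below are invented:

```python
# Illustrative gap-score calculation (hypothetical data, not from the study).
import numpy as np

# Rows = respondents (executives), columns = questionnaire items (1-7 scale)
expectations = np.array([[6, 7, 6, 7],
                         [7, 7, 6, 6],
                         [6, 6, 7, 7]], dtype=float)
perceptions = np.array([[5, 6, 4, 6],
                        [6, 5, 5, 6],
                        [5, 6, 5, 5]], dtype=float)

gaps = perceptions - expectations   # a negative gap means the service falls short
item_gap = gaps.mean(axis=0)        # average gap per item
overall_gap = gaps.mean()           # unweighted overall quality score
print(item_gap, overall_gap)
```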
266

Discrete random feedback models in industrial quality control

Bishop, Albert B. January 1957 (has links)
No description available.
267

Application of quality control and other statistical methods to the precision wood industry

Rhodes, Raymond C. 17 March 2010 (has links)
Investigations were conducted of the statistical aspects of basic research, engineering development, and economic problems pertinent to the Lane Company of Altavista, Virginia, a cedar chest manufacturer. Estimates were made of the quality level and variability of various manufacturing operations, e.g., the veneer slicer, gang saws, hot plate press, planers, sanders, top panel inspection, and finish inspection. Statistical quality control procedures were established at the points in the processes most feasible for and responsive to their application. A thorough study was made of available data on chests returned by consumers because of open corners. The percentage of returned chests was related to differences in case size and to differences in the predicted equilibrium moisture content of wood in the plant during manufacture. These relationships were presented as a basis for determining the months of the year during which it will be economically profitable to use 3-ply construction for chests of various sizes as a protective action against returned chests. An experiment was designed to estimate the effects of high-humidity conditions on the rupture of the corners of cedar chests having different panel constructions, corner constructions, and glue treatments; a proposed design with an outline of the analysis was presented. Some thought was also directed to the measurement of the moisture content of cedar wood. It was proposed that a combination of oven-dry and electrometric methods, rather than an extraction-distillation method alone, might be employed to estimate the true moisture content more precisely under industrial conditions. / Master of Science
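As an illustration of the statistical quality control procedures mentioned above (not reproduced from the thesis; the subgroup data and measured dimension are hypothetical), Shewhart X-bar and R control limits for a monitored dimension can be computed as follows:

```python
# Illustrative Shewhart X-bar and R chart limits for subgroups of size 5.
import numpy as np

# Standard control-chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """subgroups: 2-D array with one row per subgroup of measurements."""
    subgroups = np.asarray(subgroups, dtype=float)
    xbar = subgroups.mean(axis=1)                        # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {
        "xbar_chart": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r_chart": (D3 * rbar, rbar, D4 * rbar),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(loc=25.0, scale=0.05, size=(20, 5))  # e.g. panel thickness, mm
    print(xbar_r_limits(data))
```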
268

Quality Control Recommendations for Structural Interventions on Historic Properties

Holland, Michele M. 29 September 2006 (has links)
This thesis presents recommendations for controlling quality in structural interventions on historic properties. Recognizing that establishing quality in the early stages of an intervention can set the standard of quality for an entire project, these recommendations are for the first phase of an intervention, the Pre-Construction Phase. To create these recommendations, first a literature review of past and present intervention methods is conducted. After breaking down the Pre-Construction Phase first into a series of steps, and then each step into a series of details, a standard of quality is established for each detail. The available methods for conducting each detail are then analyzed. Using the literature review and the established standards of quality, recommendations are made as to which method is most appropriate for a given project. These recommendations are applied to two case studies, the structural interventions of Boykin's Tavern and Fallingwater. Finally, conclusions on the use of the proposed quality control recommendations are drawn, and suggestions are given for further work in this field. / Master of Science
269

Near Infrared Investigation of Polypropylene-Clay Nanocomposites for Further Quality Control Purposes-Opportunities and Limitations

Witschnigg, A., Laske, S., Holzer, C., Patel, Rajnikant, Khan, Atif H., Benkreira, Hadj, Coates, Philip D. 31 August 2015 (has links)
Polymer nanocomposites are usually characterized using various methods, such as small angle X-ray diffraction (XRD) or transmission electron microscopy, to gain insights into the morphology of the material. The disadvantages of these common characterization methods are that they are expensive and time consuming in terms of sample preparation and testing. In this work, near infrared (NIR) spectroscopy is used to characterize nanocomposites produced using a unique twin-screw mini-mixer, which is able to replicate, at ~25 g scale, the same mixing quality as in larger scale twin screw extruders. We correlated the results of X-ray diffraction, transmission electron microscopy, G′ and G″ from rotational rheology, Young’s modulus, and tensile strength with those of NIR spectroscopy. Our work has demonstrated that NIR technology is suitable for quantitative characterization of such properties. Furthermore, the results are very promising given that the NIR probe can be installed in a nanocomposite-processing twin screw extruder to measure inline and in real time, and could be used to help optimize the compounding process for increased quality, consistency, and enhanced product properties.
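A sketch of the sort of chemometric calibration commonly used to relate NIR spectra to a reference property such as Young’s modulus; this is not the authors' procedure, and the spectra, property values and number of latent variables below are placeholders:

```python
# Illustrative PLS calibration of NIR spectra against a reference property.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_samples, n_wavelengths = 40, 200
spectra = rng.normal(size=(n_samples, n_wavelengths))            # placeholder absorbances
modulus = spectra[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)

pls = PLSRegression(n_components=5)
predicted = cross_val_predict(pls, spectra, modulus, cv=5).ravel()
rmsecv = np.sqrt(np.mean((predicted - modulus) ** 2))             # cross-validated error
print(f"RMSECV: {rmsecv:.3f}")
```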
270

A systematic, experimental methodology for design optimization

Ritchie, Paul Andrew, 1960- January 1988 (has links)
Much attention has been directed at off-line quality control techniques in recent literature. This study is a refinement of and an enhancement to one technique, the Taguchi Method, for determining the optimum setting of design parameters in a product or process. In place of the signal-to-noise ratio, the mean square error (MSE) for each quality characteristic of interest is used. Polynomial models describing mean response and variance are fit to the observed data using statistical methods. The settings for the design parameters are determined by minimizing a statistical model. The model uses a multicriterion objective consisting of the MSE for each quality characteristic of interest. Minimum bias central composite designs are used during the data collection step to determine the settings of the parameters where observations are to be taken. Included is the development of minimum bias designs for various cases. A detailed example is given.
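A minimal sketch of the MSE-based parameter design described in this abstract (not the author's implementation; the response data, polynomial orders and use of a single design parameter are assumptions). For one quality characteristic, MSE(x) = (mean(x) − target)² + variance(x); with several characteristics the multicriterion objective would combine the individual MSEs, e.g. as a weighted sum:

```python
# Illustrative sketch: fit polynomial models for mean and variance,
# then choose the design-parameter setting that minimizes the MSE.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical observed responses at several settings of one design parameter x
x_obs = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y_mean = np.array([9.2, 9.8, 10.1, 10.6, 11.3])    # mean response per setting
y_var = np.array([0.40, 0.22, 0.15, 0.20, 0.35])   # response variance per setting
target = 10.0

mean_poly = np.polynomial.Polynomial.fit(x_obs, y_mean, deg=2)
var_poly = np.polynomial.Polynomial.fit(x_obs, y_var, deg=2)

def mse(x):
    """Mean square error about the target: squared bias plus (non-negative) variance."""
    return (mean_poly(x) - target) ** 2 + max(var_poly(x), 0.0)

result = minimize_scalar(mse, bounds=(-1.0, 1.0), method="bounded")
print(f"optimal setting x* = {result.x:.3f}, MSE = {result.fun:.4f}")
```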
