About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
461

The effectiveness of the project management life cycle in Eskom Limpopo Operating Unit

Baloyi, Gidion January 2015
Thesis (MBA) -- University of Limpopo, 2018 / South Africa is a developing state, and the role of state-owned entities in encouraging economic growth, mitigating unemployment and eradicating poverty is unavoidable. Project management, from an engineering development perspective and as an industrial discipline, has been investigated and published on for decades; the subject could be considered mature, as recent publications on project management bring little new knowledge to light, particularly with regard to Eskom. This mini-dissertation studies the most significant critical success factors for effective project management under different departmental conditions within Eskom, where projects are used daily to achieve company goals. In recent years researchers have become increasingly interested in factors that may affect project management effectiveness and the success of projects; however, there is little research showing how effectively projects are managed in a business organisational context such as Eskom. This study aims to partly fill this gap by presenting results from a case study and surveys of Eskom as an organisation practising project management. It also investigates the effectiveness of project management in terms of Eskom's divisional structures, technical competency, project leadership ability and the characteristics of an effective project manager. In managing projects it is important to know how to handle both the tools and the people, and to achieve a balance between the two. Experience, especially in the management of change, was perceived to be a significant factor in project success.
462

Adaptive multi-objective operating room planning with stochastic demand and case times

Gunna, Vivek Reddy 01 January 2017
The operating room (OR) accounts for most hospital admissions and is one of the most cost- and work-intensive areas in the hospital. Recent trends show a parallel increase in expenditure and waiting time, so improving OR planning, particularly with regard to utilization and service level, has become essential. Significant challenges in OR planning are the high variation in demand and in the processing times of surgical specialties, the trade-off between objectives, and long-term control of OR performance. Our model produces OR configurations at the strategic level of OR planning that balance the trade-off between utilization and service level while accounting for variation in both demand and processing times of surgical specialties. An adaptive control scheme is proposed to help OR managers keep OR performance within prescribed control limits. The model is validated using a simulation based on demand and processing-time data for surgical services at University of Kentucky Health Care.
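To illustrate the utilization/service-level trade-off the model addresses, here is a minimal Monte Carlo sketch. It is not the thesis's model: the Poisson demand, truncated-normal case times, and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_day(n_rooms, minutes_per_room=480,
                 mean_demand=12, mean_case=120, case_sd=40):
    """Estimate one day's OR utilization and service level.
    Distributions and parameters are illustrative assumptions."""
    n_cases = rng.poisson(mean_demand)
    durations = np.clip(rng.normal(mean_case, case_sd, n_cases), 30, None)
    capacity = n_rooms * minutes_per_room
    used, scheduled = 0.0, 0
    for d in np.sort(durations):          # greedy fill until capacity runs out
        if used + d <= capacity:
            used += d
            scheduled += 1
    utilization = used / capacity
    service_level = scheduled / n_cases if n_cases else 1.0
    return utilization, service_level

days = [simulate_day(n_rooms=4) for _ in range(10_000)]
util, svc = np.mean(days, axis=0)
print(f"avg utilization {util:.2%}, avg service level {svc:.2%}")
```

Sweeping n_rooms makes the trade-off visible: adding rooms raises the service level but lowers utilization, which is the balance the proposed adaptive control scheme is meant to manage.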
463

Analysis of a coordination framework for mapping coarse-grain applications to distributed systems

Schaefer, Linda Ruth 01 January 1991
A paradigm is presented for the parallelization of coarse-grain engineering and scientific applications. The coordination framework provides structure and an organizational strategy for a parallel solution in a distributed environment. Three categories of primitives which define the coordination framework are presented: structural, transformational, and operational. The prototype of the paradigm presented in this thesis is the first step towards a programming development tool that will allow non-specialist programmers to parallelize existing sequential solutions through the distribution, synchronization and collection of tasks. The distributed-control, multidimensional-pipeline characteristics of the paradigm provide advantages which include load balancing through the use of self-directed workers, a simplified communication scheme ideally suited for infrequent task interaction, a simple programmer interface, and the ability of the programmer to reuse existing code. Results for the parallelization of SPICE3C1 in a distributed system of fifteen SUN 3 workstations with one fileserver demonstrate linear speedup with slopes ranging from 0.7 to 0.9. A high-level abstraction of the system is presented in the form of a closed, single-class queuing network model. Using the Mean Value Analysis solution technique from queuing network theory, an expression for total execution time is obtained and shown to be consistent with the well-known Amdahl's Law. Our expression is in fact a refinement of Amdahl's Law which realistically captures the limitations of the system: we show that the portion of time spent executing serial code which cannot be enhanced by parallelization is a function of N, the number of workers in the system. Experiments reveal the critical nature of the communication scheme and the synchronization of the paradigm. Investigation of the synchronization center indicates that as N increases, visitations to the center increase and degrade system performance. Experimental data provides the information needed to characterize the impact of visitations on the performance of the system, and this characterization provides a mechanism for optimizing the speedup of an application. It is shown that the model replicates the system and predicts speedup over an extended range of processors, task count, and task size.
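The refinement of Amdahl's Law can be illustrated with a toy model in which the serial fraction grows with N; the linear growth used below is an illustrative assumption, not the expression derived in the thesis.

```python
def amdahl_speedup(n, serial_fraction):
    """Classical Amdahl's Law with a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def refined_speedup(n, s0=0.02, sync_cost=0.005):
    """Toy refinement: the effective serial fraction grows with N,
    modelling increased visitations to the synchronization center.
    The linear form s(N) = s0 + sync_cost*N is assumed for illustration."""
    return amdahl_speedup(n, min(s0 + sync_cost * n, 1.0))

for n in (1, 5, 10, 15, 30):
    print(n, round(amdahl_speedup(n, 0.02), 2), round(refined_speedup(n), 2))
```

The classical curve keeps climbing with N, while the refined one flattens and eventually degrades, matching the observed effect of synchronization-center visitations.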
464

Lunatik: operating system kernel scripting with Lua

Vieira Neto, Lourival Pereira 26 October 2011
There is a design approach to improving operating system flexibility, called extensible operating systems, which holds that operating systems must allow extensions in order to meet new requirements. There is also a design approach in application development which holds that complex systems should allow users to write scripts so that they can make their own configuration decisions at run time. Following these two design approaches, we have built an infrastructure that allows users to dynamically load and run Lua scripts inside operating system kernels, improving their flexibility. In this thesis we present Lunatik, our Lua-based kernel scripting subsystem, and show a real usage scenario in dynamically scaling CPU frequency and voltage. Lunatik is currently implemented for both NetBSD and Linux.
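To give a flavour of the policy logic such a kernel script can express, here is a user-space sketch in Python; actual Lunatik scripts are written in Lua and run inside the kernel, and the thresholds and frequency steps below are illustrative assumptions.

```python
def next_frequency(load, freqs, current):
    """Pick the next CPU frequency step from the measured load (0.0-1.0).
    Thresholds are illustrative assumptions, not Lunatik defaults."""
    i = freqs.index(current)
    if load > 0.8 and i + 1 < len(freqs):
        return freqs[i + 1]      # scale up under heavy load
    if load < 0.2 and i > 0:
        return freqs[i - 1]      # scale down when mostly idle
    return current               # otherwise hold

freqs_mhz = [800, 1600, 2400, 3200]
freq = freqs_mhz[0]
for load in (0.1, 0.9, 0.95, 0.5, 0.05):
    freq = next_frequency(load, freqs_mhz, freq)
    print(f"load={load:.2f} -> {freq} MHz")
```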
465

Cost-effective, computer-aided analytical performance evaluation of chromosomal microarrays for clinical laboratories

Goodman, Corey William 01 July 2012
Many disorders found in humans are caused by abnormalities in DNA. Genetic testing of DNA provides a way for clinicians to identify disease-causing mutations in patients. Once patients with potentially disease-causing mutations are identified, they can be enrolled in treatment or preventative programs to improve their long-term quality of life. Array-based comparative genomic hybridization (aCGH) provides a high-resolution, genome-wide method for detecting chromosomal abnormalities. Using computer software, chromosome abnormalities, or copy number variations (CNVs), can be identified from aCGH data. The development of a software tool to analyze the performance of CGH microarrays is of great benefit to clinical laboratories, and calibration of the parameters used in aCGH software tools can maximize the performance of these arrays in a clinical setting. According to the American College of Medical Genetics, the validation of a clinical chromosomal microarray platform should be performed by testing a large number (200-300) of well-characterized cases, each with unique CNVs located throughout the genome. Because of the Clinical Laboratory Improvement Amendments of 1988 and the lack of an FDA-approved whole-genome chromosomal microarray platform, the ultimate responsibility for validating the performance characteristics of this technology falls to the clinical laboratory performing the testing. To facilitate this task, we have established a computational analytical validation procedure for CGH microarrays that is comprehensive, efficient, and low cost. This validation uses a higher-resolution microarray to validate a lower-resolution microarray with a receiver operating characteristic (ROC)-based analysis. From the results we are able to estimate an optimal log2 threshold range for determining the presence or absence (calling) of CNVs.
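A minimal sketch of the ROC-based threshold estimation on synthetic data follows; the score distributions and the Youden-index criterion for picking the operating point are assumptions for illustration, since the abstract does not specify them.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(42)

# Synthetic per-probe |log2 ratio| scores: 0 = no CNV, 1 = true CNV,
# where truth would come from the validated higher-resolution array.
truth = np.r_[np.zeros(900), np.ones(100)]
scores = np.r_[np.abs(rng.normal(0.00, 0.15, 900)),   # noise around log2 = 0
               np.abs(rng.normal(0.55, 0.20, 100))]   # single-copy gain/loss

fpr, tpr, thresholds = roc_curve(truth, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Youden's J picks the threshold maximizing sensitivity + specificity - 1.
best = thresholds[np.argmax(tpr - fpr)]
print(f"estimated |log2 ratio| calling threshold ~ {best:.2f}")
```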
466

Application of support vector machines and neural networks in digital mammography: a comparative study

Candade, Nivedita V 28 October 2004
Microcalcification (MC) detection is an important component of breast cancer diagnosis; however, visual analysis of mammograms is a difficult task for radiologists. Computer Aided Diagnosis (CAD) technology helps in identifying lesions and assists the radiologist in making the final decision. This work is part of a CAD project carried out at the Imaging Science Research Division (ISRD), Digital Medical Imaging Program, Moffitt Cancer Research Center, Tampa, FL. A CAD system had been previously developed to perform the following tasks: (a) pre-processing, (b) segmentation and (c) feature extraction of mammogram images. Ten features covering the spatial and morphological domains were extracted from the mammograms, and the samples were classified as microcalcification (MC) or false alarm (false-positive microcalcification, FP) based on a binary truth file obtained from a radiologist's initial investigation. The main focus of this work was two-fold: (a) to analyze these features, select the most significant among them and study their impact on classification accuracy, and (b) to implement and compare two machine-learning algorithms, Neural Networks (NNs) and Support Vector Machines (SVMs), and evaluate their performance with these features. The NN was based on the Standard Back Propagation (SBP) algorithm. The SVM was implemented using polynomial, linear and Radial Basis Function (RBF) kernels. A detailed statistical analysis of the input features was performed. Feature selection was done using the Stepwise Forward Selection (SFS) method. Training and testing of the classifiers were carried out using various training methods. Classifier evaluation was first performed with all ten features in the model; subsequently, only the features chosen by SFS were used, to study their effect on classifier performance. Accuracy assessment was done to evaluate classifier performance. Detailed statistical analysis showed that the dataset exhibited poor discrimination between classes and posed a very difficult pattern recognition problem. The SVM performed better than the NN in most cases, especially on unseen data. No significant improvement in classifier performance was noted with feature selection, although with SFS the NN showed improved performance on unseen data. The training time taken by the SVM was several orders of magnitude less than that of the NN. Classifiers were compared on the basis of their accuracy and parameters such as sensitivity and specificity. Free-response Receiver Operating Characteristic (FROC) curves were used to evaluate classifier performance. The highest accuracy observed was about 93% on training data and 76% on testing data, with the SVM using Leave-One-Out (LOO) Cross Validation (CV) training; sensitivity was 81% and 46% on training and testing data respectively for a threshold of 0.7. The NN trained using the 'single test' method showed the highest accuracy of 86% on training data and 70% on testing data, with respective sensitivities of 84% and 50%; the threshold in this case was -0.2. FROC analyses nonetheless showed the overall superiority of the SVM, especially on unseen data. Both spatial and morphological domain features were significant in our model. Features were selected based on their significance in the model; however, when tested with the NN and SVM, this feature selection procedure did not show significant improvement in classifier performance.
It was interesting to note that the model with interactions between these selected variables showed excellent testing sensitivity with the NN classifier (about 81%). Recent research has shown that SVMs outperform NNs in classification tasks, with distinct advantages such as better generalization, faster learning, the ability to find a global optimum, and the ability to deal with linearly non-separable data. Thus, though NNs are more widely known and used, SVMs are expected to gain popularity in practical applications. Our findings show that the SVM outperforms the NN; however, its performance depends largely on the nature of the data used.
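A minimal sketch of the comparison methodology follows, using synthetic stand-in features (the study's mammogram data is not public) and illustrative hyperparameters rather than those of the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the ten spatial/morphological features,
# labelled MC vs. false positive; class_sep is kept low to mimic
# the poor class discrimination reported above.
X, y = make_classification(n_samples=120, n_features=10, n_informative=5,
                           class_sep=0.5, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                 random_state=0))

# Leave-one-out cross-validation, as in the study's LOO training.
for name, clf in [("SVM (RBF)", svm), ("NN (backprop MLP)", nn)]:
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: LOO accuracy = {acc:.2%}")
```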
467

Theatre wear must be worn beyond this point : a hermeneutic ethnographic exploration of operating room nursing

Bull, Rosalind Margaret. January 2002 (PDF)
"September 2002" Includes bibliographical references (leaves 301-318)
468

Turbo-equalization for QAM constellations

Petit, Paul January 2002
While the focus of this work is on turbo equalization, it also examines equalization techniques including MMSE linear and DFE equalizers, and precoding. The losses and capacity associated with the ISI channel are examined as well. Iterative decoding of concatenated codes is briefly reviewed, and the MAP algorithm is explained.
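For context, here is a minimal sketch of an MMSE linear equalizer for an ISI channel, designed from training symbols by least squares; the channel taps, equalizer length, delay, and SNR are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 3-tap ISI channel and 4-QAM (QPSK) symbols.
h = np.array([0.8, 0.45, 0.2])
n_sym, n_taps, delay, snr_db = 5000, 11, 6, 20
syms = (rng.choice([-1.0, 1.0], n_sym)
        + 1j * rng.choice([-1.0, 1.0], n_sym)) / np.sqrt(2)

noise_var = 10 ** (-snr_db / 10)
rx = np.convolve(syms, h)[:n_sym]
rx += np.sqrt(noise_var / 2) * (rng.standard_normal(n_sym)
                                + 1j * rng.standard_normal(n_sym))

# Sample-MMSE (Wiener) design: each row holds a window of received
# samples; the target is the transmitted symbol at the chosen delay.
rows = np.arange(n_taps, n_sym)
A = np.array([rx[i - n_taps:i][::-1] for i in rows])
w, *_ = np.linalg.lstsq(A, syms[rows - delay], rcond=None)

mse = np.mean(np.abs(A @ w - syms[rows - delay]) ** 2)
print(f"post-equalization MSE: {mse:.4f} (noise variance {noise_var:.4f})")
```

In a turbo equalizer this filter would sit in a loop with the decoder, exchanging soft information, rather than being applied once.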
469

Formal memory models for verifying C systems code

Tuch, Harvey, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008
Systems code is almost universally written in the C programming language or a variant. C has a very low level of type and memory abstraction, and formal reasoning about C systems code requires a memory model that can capture the semantics of C pointers and types. At the same time, proof-based verification demands abstraction, in particular from the aliasing and frame problems. In this thesis, we study the mechanisation of a series of models, from semantic to separation logic, for achieving this abstraction when performing interactive theorem-prover based verification of C systems code in higher-order logic. We avoid common oversimplifications and correctly deal with C's model of programming language values and the heap, while developing the ability to reason abstractly and efficiently. We validate our work by demonstrating that the models are applicable to real, security- and safety-critical code by formally verifying the memory allocator of the L4 microkernel. All formalisations and proofs have been developed and machine-checked in the Isabelle/HOL theorem prover.
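The abstraction from the frame problem mentioned above is what separation logic's frame rule provides; in its standard formulation, local reasoning about a command c on a heap fragment lifts to any larger heap:

```latex
\[
\frac{\{P\}\; c\; \{Q\}}
     {\{P \ast R\}\; c\; \{Q \ast R\}}
\qquad \text{provided } c \text{ modifies no variables free in } R
\]
```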
470

Programmer-friendly and efficient distributed shared memory integrated into a distributed operating system.

Silcock, Jackie, mikewood@deakin.edu.au January 1998
Distributed Shared Memory (DSM) provides programmers with a shared memory environment in systems where memory is not physically shared. Clusters of Workstations (COWs), an often untapped source of computing power, are characterised by a very low cost/performance ratio. Combining COWs with DSM provides an environment in which the programmer can use the well-known approaches and methods of programming for physically shared memory systems, and in which parallel processing can make full use of the computing power and cost advantages of the COW. The aim of this research is to synthesise and develop a distributed shared memory system as an integral part of an operating system, in order to provide application programmers with a convenient environment in which parallel applications can be developed and executed easily, efficiently and transparently. Furthermore, in order to satisfy our challenging design requirements, we want to demonstrate that the operating system into which the DSM system is integrated should be a distributed operating system. This thesis reports a study into the synthesis of a DSM system within a microkernel and client-server based distributed operating system, using both strict and weak consistency models with write-invalidate and write-update based approaches for consistency maintenance. It also reports a unique automatic initialisation system which allows the programmer to start the parallel execution of a group of processes with a single library call; the number and location of these processes are determined by the operating system based on system load information. The proposed DSM system takes a novel approach in that it provides programmers with a complete programming environment in which they can easily develop and run their code, or indeed run existing shared memory code. A set of demanding DSM system design requirements is presented, along with the incentives for placing the DSM system within a distributed operating system, and in particular in the memory management server. The new DSM system is built around an event-driven set of cooperating and distributed entities, and a detailed description of the events, and the reactions to them, that make up the operation of the DSM system is presented. This is followed by a pseudocode form of the detailed design of the main modules and of the primitives used in the proposed DSM system. Quantitative results of performance tests and qualitative results showing the ease of programming and use of the RHODOS DSM system are reported. A study of five different applications is given, together with the results of tests carried out on these applications and a discussion of those results. A discussion of how RHODOS' DSM allows programmers to write shared memory code in an easy-to-use and familiar environment, and a comparative evaluation of RHODOS DSM against other DSM systems, is presented. In particular, the ease of use and transparency of the DSM system are demonstrated by the ease with which a moderately inexperienced undergraduate programmer was able to convert, write and run applications for testing the DSM system. Furthermore, tests performed using physically shared memory show that the latter is indistinguishable from distributed shared memory; this is further evidence that the DSM system is fully transparent.
This study clearly demonstrates that the aim of the research has been achieved: it is possible to develop a programmer-friendly and efficient DSM system fully integrated within a distributed operating system. It is clear from this research that a DSM system integrated into a client-server and microkernel based distributed operating system makes shared memory operations transparent and almost completely removes programmer involvement beyond the classical activities needed to deal with shared memory. The conclusion can be drawn that DSM, when implemented within a client-server and microkernel based distributed operating system, is one of the most encouraging approaches to parallel processing, since it guarantees performance improvements with minimal programmer involvement.
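As a rough analogue of the programming model described above (a single call that starts a group of processes over shared memory), here is a sketch using Python's multiprocessing; the RHODOS DSM API is not given in the abstract, so all names here are hypothetical.

```python
import multiprocessing as mp

def worker(shared, lock, rank):
    """Each worker updates its slot in the logically shared region."""
    with lock:
        shared[rank] = rank * rank   # trivial per-worker computation

def run_parallel(n_workers=4):
    """Analogue of a single library call that launches a process group
    over a shared region (hypothetical; not the RHODOS DSM API, where
    the OS also chooses process count and placement from load data)."""
    shared = mp.Array('d', n_workers)   # shared array standing in for a DSM region
    lock = mp.Lock()
    procs = [mp.Process(target=worker, args=(shared, lock, r))
             for r in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(shared)

if __name__ == "__main__":
    print(run_parallel())   # [0.0, 1.0, 4.0, 9.0]
```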
