591

Assessing value-based management implementation in a mining company / Jaco Cronje Maree

Maree, Jaco Cronje January 2014 (has links)
Value-based management (VBM) is a business philosophy and management system for competing effectively in the global marketplace, based upon the inherent value, dignity and empowerment of each person, particularly each employee, customer and supplier. Even where VBM has been implemented in an organisation, the difficulty remains that employees do not necessarily understand the mechanics of VBM or the role of their own actions in value creation. The main objective of this study is to determine whether the employees of a mining organisation in which VBM has been implemented understand its basic concepts, and whether VBM has been successfully implemented within that organisation. Data were collected using a standardised questionnaire distributed among the mining company's employees. The results of the study indicated that VBM was, in general, successfully implemented, but that lower-level employees are less informed about certain aspects of VBM. / MBA, North-West University, Potchefstroom Campus, 2014
592

Effective development of haptic devices using a model-based and simulation-driven design approach

Ahmad, Aftab January 2014 (has links)
Virtual reality (VR)-based surgical simulators using haptic devices can increase the effectiveness of surgical training for surgeons performing procedures in hard tissue, such as bone or tooth milling. The realism of virtual surgery through a surgical simulator depends largely on the precision and reliability of the haptic device, which renders the interaction with the virtual model. The quality of the perceived sensation (force/torque) depends on the design of the haptic device, which presents a complex design space because of its multiple conflicting functional and performance requirements. These requirements include high stiffness, a large workspace, high manipulability, small inertia, low friction, high transparency, and cost constraints. This thesis proposes a design methodology to improve the realism of force/torque feedback from the VR-based surgical simulator while fulfilling end-user requirements. The main contributions of this thesis are:
1. The development of a model-based and simulation-driven design methodology, in which one starts from an abstract, top-level model that is extended via stepwise refinements and design space exploration into a detailed and integrated systems model that can be physically realized.
2. A methodology for creating an analytical and compact model of the quasi-static stiffness of a haptic device, which considers the stiffness of the actuation systems, flexible links and passive joints.
3. A robust design optimization approach to find the optimal numerical values for a set of design parameters that maximize the kinematic, dynamic and kinetostatic performance of a 6-degrees-of-freedom (DOF) haptic device, while minimizing its sensitivity to variations in manufacturing tolerances and cost, and satisfying the allowed variations in the performance indices.
4. A cost-effective approach to force/torque feedback control, using force/torque estimated through recursive least squares estimation (an illustrative sketch of such an estimator follows this list).
5. A model-based control strategy to increase the transparency and fidelity of force/torque feedback from the device by compensating for the natural dynamics of the device, friction in the joints, gravity of the platform, and elastic deformations.
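The sketch below is a textbook recursive least squares (RLS) estimator with a forgetting factor, of the kind named in contribution 4. It is illustrative only and not the thesis's implementation: the linear regression model y_k = phi_k^T * theta, the class and parameter names, and the toy data are assumptions made here for demonstration.

# Minimal RLS sketch (assumed, illustrative; Python with NumPy)
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_params, forgetting=0.99):
        self.theta = np.zeros(n_params)     # current parameter estimate
        self.P = np.eye(n_params) * 1e3     # covariance; large = very uncertain
        self.lam = forgetting               # forgetting factor (<= 1)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        P_phi = self.P @ phi
        gain = P_phi / (self.lam + phi @ P_phi)                # gain vector
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, P_phi)) / self.lam   # covariance update
        return self.theta

# Usage: estimate slope and offset of y = 2x + 1 from streaming samples.
rls = RecursiveLeastSquares(2)
for x, y in [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.05)]:
    theta = rls.update([x, 1.0], y)
print(theta)   # approximately [2.0, 1.0]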
593

Anthrapyrazole cysteinyl peptides as inhibitors of AP-1 transcription factor binding

Tran, Phuong My January 1998 (has links)
Synthesis of peptides anchored to DNA by intercalating chromophores can incorporate the design principle of the naturally occurring peptide-based antibiotics. This work is concerned with the synthesis of DNA-anchored cysteinyl peptides designed to be potentially nucleotide-sequence specific, with possible affinity for the AP-1 transcription site. Previous work has shown that anthraquinones and anthrapyrazoles (APZs) substituted with cationic side groups are excellent DNA intercalating agents. In this work a series of APZ analogues has been synthesised and coupled onto the amino terminus of varying peptide sequences. Three derivatives of APZs were prepared, namely 2-, 2,5- and 2,7-substituted. Eight short polypeptides (see below), all varying slightly in sequence but all containing the KCR motif (with one exception, where a Cys was replaced with Ser), were combined with the APZ chromophore to give a series of intercalator-peptide molecules. Peptides were synthesised using the Fmoc strategy on a solid-phase peptide synthesizer (SPPS). The peptides were then isolated by reversed-phase HPLC using a water:acetonitrile gradient. Characterisation of the peptides was carried out by matrix-assisted laser desorption ionisation (MALDI) mass spectrometry and two-dimensional NMR (i.e. COSY and NOESY). Anthraquinone-linked peptide ligands were also synthesised using similar synthetic routes, and tested for their activity. Coupling of the two components was achieved via activation of the carboxylic acid group using PyBOP or via formation of a reactive aziridinium ion. All intercalator-peptides prepared were examined for their DNA binding properties. The methods included the effect of intercalator-peptides on the thermal denaturation of DNA and the competitive displacement of ethidium monitored by fluorimetry. It was shown that the APZ binds to DNA by intercalation. The peptides prepared were: H2N-A-K-C-R-A-CO2H; H2N-A-K-C-R-A-CONH2; H2N-A-K-S-R-A-CONH2; H2N-A-K-C-R-N-A-CONH2; H2N-A-K-C-R-K-A-CONH2; H2N-A-K-C-R-N-R-A-CONH2; H2N-A-K-C-R-K-R-A-CONH2; H2N-A-A-K-C-R-A-A-CONH2. The biological activities of the intercalator-peptides were then investigated using an electrophoretic mobility shift assay (EMSA), making use of cell nuclear extracts rich in AP-1 and also c-Jun homodimer recombinant proteins. It was shown that most of the intercalator-peptides were capable of inhibiting the AP-1 (Fos/Jun) heterodimer protein from binding to the AP-1 DNA consensus sequence. Importantly, the intercalator-peptides showed superior activity over the intercalator or peptide moieties alone. The order of binding affinity was intercalator-peptide > intercalator >> peptide.
594

A real-time simulation-based optimisation environment for industrial scheduling

Frantzén, Marcus January 2013 (has links)
In order to cope with the challenges in industry today, such as changes in product diversity and production volume, manufacturing companies are forced to react more flexibly and swiftly. Furthermore, in order to survive in an ever-changing market, they also need to be highly competitive by achieving near-optimal efficiency in their operations. Production scheduling is vital to the success of manufacturing systems in industry today, because the near-optimal allocation of resources is essential to remaining highly competitive. The overall aim of this study is the advancement of research in manufacturing scheduling through the exploration of more effective approaches to address complex, real-world manufacturing flow shop problems. The methodology used in the thesis is in essence a combination of systems engineering, algorithmic design and empirical experiments using real-world scenarios and data. In particular, it proposes a new, web services-based, industrial scheduling system framework, called the OPTIMISE Scheduling System (OSS), for solving real-world complex scheduling problems. OSS, as implemented on top of a generic web services-based simulation-based optimisation (SBO) platform called OPTIMISE, can support near-optimal and real-time production scheduling in a distributed and parallel computing environment. Discrete-event simulation (DES) is used to represent and flexibly cope with complex scheduling problems without making the unrealistic assumptions that are the major limitation of existing scheduling methods proposed in the literature. At the same time, the research has gone beyond existing studies of simulation-based scheduling applications, because OSS has been implemented in a real-world industrial environment at an automotive manufacturer, so that qualitative evaluations and quantitative comparisons of scheduling methods and algorithms can be made within the same framework. Furthermore, in order to adapt to and handle many different types of real-world scheduling problems, a new hybrid meta-heuristic scheduling algorithm that combines priority dispatching rules and genetic encoding is proposed. This combination is demonstrated to be able to handle a wider range of problems, or a current scheduling problem that changes over time, as the flexibility requirements of the real world demand. The novel hybrid genetic representation has been shown to be effective through evaluation on the real-world scheduling problem using real-world data.
595

An agent-based approach for improving the performance of distributed business processes in maritime port community

Abdul-Mageed, Loay January 2012 (has links)
In recent years, the concept of “port community” has been adopted by the maritime transport industry in order to achieve a higher degree of coordination and cooperation amongst organizations involved in the transfer of goods through the port area. The business processes of the port community supply chain form a complicated process which involves several process steps, multiple actors, and numerous information exchanges. One of the widely used applications of ICT in ports is the Port Community System (PCS), which is implemented in ports in order to reduce paperwork and to facilitate the information flow related to port operations and cargo clearance. However, existing PCSs offer limited functionality for managing and coordinating the material, financial, and information flows within the port community supply chain. This research programme addresses the use of agent technology to introduce business process management functionalities, which are vital for port communities, with the aim of enhancing the performance of the port community supply chain. The investigation begins with an examination of the current state from both a business perspective and a technical perspective. The business perspective focuses on understanding the nature of the port community, its main characteristics, and its problems. Accordingly, a number of requirements are identified as essential amendments to information systems in seaports. The technical perspective, on the other hand, focuses on technologies that are suitable for solving problems in business process management within port communities. The research focuses on three technologies: workflow technology, agent technology, and service orientation. An analysis of information systems across port communities enables an examination of current PCSs with regard to their coordination and workflow management capabilities. The most important finding of this analysis is that the performance of the business processes, and in particular the performance of the port community supply chain, is not in the scope of the examined PCSs. Accordingly, the Agent-Based Middleware for Port Community Management (ABMPCM) is proposed as an approach for providing essential functionalities that would facilitate collaborative planning and business process management. As a core component of the ABMPCM, the Collaborative Planning Facility (CPF) is described in further detail. A CPF prototype has been developed as an agent-based system for the domain of inland transport of containers, to demonstrate its practical effectiveness. To evaluate the practical application of the CPF, a simulation environment is introduced. The evaluation started with the definition of a multi-agent simulation framework for the port community supply chain; a prototype was then implemented and employed for the evaluation of the CPF. The results of the simulation experiments demonstrate that the agent-based approach effectively enhances the performance of business processes in the port community.
596

Pedestrian flow measurement using image processing techniques

Zhang, Xiaowei January 1999 (has links)
No description available.
597

Incorporating farmers' knowledge in the planning of interdisciplinary research and extension

Joshi, Laxman January 1997 (has links)
No description available.
598

Introducing corpus-based rules and algorithms in a rule-based machine translation system

Dugast, Loic January 2013 (has links)
Machine translation offers the challenge of automatically translating a text from one natural language into another. Statistical methods, originating from the field of information theory, have proved to be a major breakthrough in the field of machine translation. Prior to this paradigm, many systems had been developed following a rule-based approach, that is, a system based on a linguistic description of the languages involved and of how translation occurs in the mind of the (human) translator. Statistical models, on the contrary, use empirical means and may work with very few linguistic hypotheses about language and about translation as performed by humans. This has implications for rule-based translation systems, in terms of software architecture and the nature of the rules, which are manually written and lack any statistical features. In view of such diverging paradigms, we can imagine trying to combine both in a hybrid system. In the present work, we start by examining the state of the art of both rule-based and statistical systems. We restrict the rule-based approach to transfer-based systems. We compare the rule-based and statistical paradigms in terms of global translation quality and give a qualitative analysis of their respective specific errors. We also introduce initial black-box hybrid models that confirm there is an expected gain in combining the two approaches. Motivated by the qualitative analysis, we focus our study and experiments on lexical phrasal rules. We propose a setup that allows such resources to be extracted from corpora. Going one step further in the integration of rule-based and statistical approaches, we then examine how to combine the extracted rules with decoding modules that allow for a corpus-based handling of ambiguity. This leads to the final delivery of this work: a rule-based system for which we can learn non-deterministic rules from corpora, and whose decoder can be optimised on a tuning set in the same domain.
599

Automatic skeleton-driven performance optimizations for transactional memory

Wanderley Goes, Luis Fabricio January 2012 (has links)
The recent shift toward multi-core chips has pushed the burden of extracting performance onto the programmer. In fact, programmers now have to be able to uncover more coarse-grain parallelism with every new generation of processors, or the performance of their applications will remain roughly the same or even degrade. Unfortunately, parallel programming is still hard and error prone. This has driven the development of many new parallel programming models that aim to make this process efficient. This thesis first combines the skeleton-based and transactional memory programming models in a new framework, called OpenSkel, in order to improve the performance and programmability of parallel applications. This framework provides a single skeleton that allows the implementation of transactional worklist applications. Skeleton or pattern-based programming allows parallel programs to be expressed as specialized instances of generic communication and computation patterns. This leaves the programmer with only the implementation of the particular operations required to solve the problem at hand. Thus, this approach simplifies parallel programming by eliminating some of its major challenges, namely thread communication, scheduling and orchestration. However, the application programmer still has to correctly synchronize threads to avoid data races. This commonly requires the use of locks to guarantee atomic access to shared data. In particular, lock programming is vulnerable to deadlocks and also limits coarse-grain parallelism by blocking threads that could potentially be executed in parallel. Transactional Memory (TM) thus emerges as an attractive alternative model to simplify parallel programming by removing the burden of handling data races explicitly. This model allows programmers to write parallel code as transactions, which are then guaranteed by the runtime system to execute atomically and in isolation regardless of eventual data races. TM programming thus frees the application from deadlocks and enables the exploitation of coarse-grain parallelism when transactions do not conflict very often. Nevertheless, thread management and orchestration are left to the application programmer. Fortunately, this can be naturally handled by a skeleton framework, which makes the combination of skeleton-based and transactional programming a natural step to improve programmability, since the two models complement each other. In fact, this combination releases the application programmer from dealing with thread management and data races, and also inherits the performance improvements of both models. In addition, a skeleton framework is amenable to skeleton-driven performance optimizations that exploit the application pattern and system information. This thesis therefore also presents a set of pattern-oriented optimizations that are automatically selected and applied to a significant subset of transactional memory applications that share a common pattern called the worklist. These optimizations exploit knowledge about the worklist pattern and the TM nature of the applications to avoid transaction conflicts, prefetch data, reduce contention, and so on. Using a novel autotuning mechanism, OpenSkel dynamically selects the most suitable set of these pattern-oriented performance optimizations for each application and adjusts them accordingly.
Experimental results on a subset of five applications from the STAMP benchmark suite show that the proposed autotuning mechanism can come within 2%, on average, of a static oracle on a 16-core UMA (Uniform Memory Access) platform and surpass it by 7%, on average, on a 32-core NUMA (Non-Uniform Memory Access) platform. Finally, this thesis also investigates skeleton-driven system-oriented performance optimizations such as thread mapping and memory page allocation. To accommodate these optimizations, the OpenSkel system and the autotuning mechanism are extended. Experimental results on the same subset of five STAMP applications show that the OpenSkel framework, with the extended autotuning mechanism driving both pattern- and system-oriented optimizations, can achieve performance improvements of up to 88%, with an average of 46%, over a baseline version on a 16-core UMA platform, and up to 162%, with an average of 91%, on a 32-core NUMA platform.
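To make the worklist/TM combination described above concrete, the following is a minimal, self-contained sketch written for this listing; it is not the OpenSkel framework or its API, and all class and function names are assumptions. Worker threads drain a shared worklist and apply a user-supplied operation as an optimistic transaction that is retried whenever a conflicting commit by another thread is detected.

# Minimal transactional-worklist sketch (assumed, illustrative; Python)
import queue
import threading

class VersionedCell:
    # Shared datum with a version counter used for optimistic conflict checks.
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.commit_lock = threading.Lock()

def run_transaction(cell, compute):
    # Read optimistically, compute, then commit only if nobody else committed first.
    while True:
        seen_version = cell.version
        seen_value = cell.value                  # optimistic read
        new_value = compute(seen_value)          # user work; may be re-executed
        with cell.commit_lock:                   # short critical section at commit
            if cell.version == seen_version:
                cell.value = new_value
                cell.version += 1
                return                           # committed
        # conflict detected: another thread committed first, so retry

def worklist_skeleton(items, operation, shared, n_threads=4):
    # The caller supplies only `operation`; threading and retries are handled here.
    work = queue.Queue()
    for item in items:
        work.put(item)

    def worker():
        while True:
            try:
                item = work.get_nowait()
            except queue.Empty:
                return
            run_transaction(shared, lambda value: operation(value, item))

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Usage: sum work items into a shared accumulator with no locks in user code.
total = VersionedCell(0)
worklist_skeleton(range(100), lambda acc, item: acc + item, total)
print(total.value)   # 4950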
600

Modeling long-term variability and change of soil moisture and groundwater level - from catchment to global scale

Verrot, Lucile January 2016 (has links)
The water stored in and flowing through the subsurface is fundamental for sustaining human activities and needs, feeding water and its constituents to surface water bodies and supporting the functioning of their ecosystems. Quantifying the changes that affect subsurface water is crucial for our understanding of its dynamics and of the changes driven by climate change and other changes in the landscape, such as changes in land use and water use. It is inherently difficult to directly measure soil moisture and groundwater levels over large spatial scales and long time periods. Models are therefore needed to capture the soil moisture and groundwater level dynamics over such large spatiotemporal scales. This thesis develops a modeling framework that allows for long-term catchment-scale screening of soil moisture and groundwater level changes. The novelty of this development resides in an explicit link drawn between catchment-scale hydroclimatic and soil hydraulic conditions, using observed runoff data as an approximation of the soil water flux and accounting for the effects of snow storage-melting dynamics on that flux. Both past and future relative changes can be assessed with this modeling framework, with future change projections based on common climate model outputs. By direct model-observation comparison, the thesis shows that the developed modeling framework can reproduce the temporal variability of large-scale changes in soil water storage, as obtained from the GRACE satellite product, for most of the 25 large study catchments around the world. When compared with locally measured soil water content and groundwater levels in 10 U.S. catchments, the modeling approach also reproduces reasonably well the relative seasonal fluctuations around long-term average values. The developed modeling framework is further used to project soil moisture changes due to expected future climate change for 81 catchments around the world. The future soil moisture changes depend on the considered radiative forcing scenario (RCP) but are overall large for the occurrence frequency of dry and wet events and for the inter-annual variability of seasonal soil moisture. These changes tend to be larger for dry events and the dry season, respectively, than for the corresponding wet quantities, indicating increased drought risk in some parts of the world.
