31

Resource constrained step scheduling of project tasks

Eygelaar, Anton Burger (2008)
Thesis (MScEng (Civil Engineering))--University of Stellenbosch, 2008. Presented in partial fulfilment of the requirements for the degree of Master of Science in Civil Engineering at the University of Stellenbosch.
ABSTRACT: The logical scheduling of activities in an engineering project currently relies heavily on the experience and intuition of the persons responsible for the schedule. In large projects the complexity of the schedule far exceeds the capacity of human intuition, and systematic techniques are required to compute a consistent sequence of activities. In this study a simple model of the engineering process is described. Based on certain specified relationships between components of the model, a consistent sequence of activities is determined in the form of a logical step schedule. The problem of resource constraints receives special attention: engineering projects are often executed with limited resources, and determining the impact of such restrictions on the logical step schedule is important. This study investigates activity-shifting strategies to find a near-optimal sequence of activities that guarantees consistent evolution of deliverables while resolving resource conflicts within the context of logical step schedules.
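As a rough illustration of the activity-shifting idea described above, the sketch below schedules a handful of tasks under a single renewable resource, delaying each task until its resource conflict resolves. The task data, capacity, and priority order are all hypothetical, and the serial scheme shown is a generic heuristic, not the thesis's actual method.

```python
# Minimal sketch of priority-rule scheduling under one renewable
# resource. All task data and the priority order are hypothetical.

tasks = {
    # id: (duration, resource_demand, predecessors)
    "A": (3, 2, []),
    "B": (2, 3, ["A"]),
    "C": (4, 2, ["A"]),
    "D": (2, 2, ["B", "C"]),
}
CAPACITY = 4

finish = {}
usage = {}  # time step -> resource units in use

def feasible(start, dur, demand):
    return all(usage.get(t, 0) + demand <= CAPACITY
               for t in range(start, start + dur))

# Serial generation scheme: place each task at the earliest time that
# respects both precedence and capacity, shifting it later whenever a
# resource conflict is found.
for tid, (dur, demand, preds) in tasks.items():  # dict order = priority list
    start = max((finish[p] for p in preds), default=0)
    while not feasible(start, dur, demand):
        start += 1  # shift the activity until the conflict resolves
    for t in range(start, start + dur):
        usage[t] = usage.get(t, 0) + demand
    finish[tid] = start + dur

for tid, end in finish.items():
    print(f"{tid}: start {end - tasks[tid][0]}, finish {end}")
```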
32

Spatio-temporal logic for the analysis of biochemical models

Banks, Christopher Jon (January 2015)
Process algebra, formal specification, and model checking are all well-studied techniques in the analysis of concurrent computer systems. More recently these techniques have been applied to the analysis of biochemical systems, which, at an abstract level, have patterns of behaviour similar to concurrent processes. Process algebraic models and temporal logic specifications, along with their associated model-checking techniques, have been used to analyse biochemical systems. In this thesis we develop a spatio-temporal logic, the Logic of Behaviour in Context (LBC), for the analysis of biochemical models. That is, we define and study the application of a formal specification language which expresses not only temporal properties of biochemical models but spatial, or contextual, properties as well. The logic can be used to express, or specify, the behaviour of a model when it is placed into the context of another model. We also explore the types of properties which can be expressed in LBC, various algorithms for model checking LBC (each an improvement on the last), the implementation of the computational tools to support model checking LBC, and a case study on the analysis of models of post-translational biochemical oscillators using LBC. We show that a number of interesting and useful properties can be expressed in LBC, and that it is possible to express highly useful properties of real models in the biochemistry domain with practical application. Statements in LBC can be thought of as expressing computational experiments which can be performed automatically by means of the model checker. Indeed, many of these computational experiments can be higher-order, meaning that one succinct and precise specification in LBC can represent a number of experiments which the model checker executes automatically.
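To give a flavour of what a behaviour-in-context specification checks, the toy sketch below evaluates simple "eventually" and "globally" properties over a simulated one-species time course, then re-checks them after composing the model with an extra inflow as a crude stand-in for placing it in the context of another model. The model, thresholds, and context operation are hypothetical simplifications, not LBC's actual syntax or semantics.

```python
# Toy temporal-property checking over a simulated time course, loosely
# in the spirit of LBC specifications. Everything here is a
# hypothetical simplification of the thesis's semantics.

def simulate(decay, inflow, steps=50):
    """Trivial one-species model: x is driven toward inflow / decay."""
    x, trace = 0.0, []
    for _ in range(steps):
        x += inflow - decay * x
        trace.append(x)
    return trace

def eventually(trace, pred):
    return any(pred(x) for x in trace)

def globally(trace, pred):
    return all(pred(x) for x in trace)

# "Behaviour in context": re-run the model composed with an extra
# inflow, standing in for placing it in the context of another model.
base = simulate(decay=0.2, inflow=1.0)          # settles near 5.0
in_context = simulate(decay=0.2, inflow=1.5)    # settles near 7.5

print(globally(base, lambda x: x < 6.0))              # True in isolation
print(eventually(base, lambda x: x > 7.0))            # False in isolation
print(eventually(in_context, lambda x: x > 7.0))      # True in context
```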
33

An Investigation of Traditionally-Aged College Students' Perceptions of the Community of Inquiry

Vignare, Karen Kraus (1 January 2012)
Online learning courses are taken by nearly 31% of college students (Allen & Seaman, 2011). The majority of those enrolled in online learning are graduate and non-traditional undergraduate students. Survey data from multiple sources show a growing number of traditional students enrolling in online courses or online-only universities, yet there is a lack of information about younger college students enrolling in online courses and attending online-only institutions. Without more research on the perceptions of this population, it is difficult to design an effective online learning environment. The Community of Inquiry (CoI) framework has been used as a process model that defines, describes, and measures the tasks supporting online learning. Its core elements are three overlapping presences (teaching, social, and cognitive) and the interrelationships among them. Through more than a decade of work on the framework, a methodology and survey instrument emerged for studying the potential and effectiveness of online learning. Will younger college students enrolling today perceive the CoI framework and the presences from the model the same way that non-traditional students have in the last decade? Most CoI studies sample non-traditional adults aged 25 and older. American Public University System (APUS) is a for-profit, online-only institution that publishes research studies contributing to the growing number of CoI studies available. In the last four years APUS has provided large samples to validate the CoI model and investigate how CoI relates to retention and course design. The limited purpose of this research is to determine whether the CoI framework and its current results are applicable to a select group of traditional students aged 21 and under who enroll only in online courses at APUS. In an exploratory study using statistical tests including a factor analysis, the first sample (n=2,019) consisted of responses to the CoI questionnaire from students aged 21 and under, and the second sample (n=125,039) consisted of responses from students older than 21. The samples were compared to determine whether there was any significant difference between the perceptions of non-traditional and younger college students on the CoI model. Results indicated that the comparative means of the two populations are highly correlated at .924, but with a p value of .000 at the 95% confidence interval the two populations are different. A factor analysis yielded a three-factor solution for both samples, with very similar total variance explained: the three factors accounted for 77.16% of the total for students 21 and under and 74.17% for the older students. The factor analysis results from the younger students also show that each item from the questionnaire is associated with the appropriate factor, corresponding to previously validated research on the CoI model. The results continue to support the validity of the CoI model, but the differences between the populations are significant. The significance tests are useful but may not be as meaningful as the factor analysis, given the size of the samples. This research adds to the body of knowledge on the CoI model, a dominant theory describing what learners perceive in an online environment, and informs understanding of the CoI model as it applies to younger college learners' perceptions of an effective online learning environment.
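A minimal sketch of the kind of comparison the study reports is shown below: decompose each group's survey responses and compare the variance captured by the first three components. The data here is random noise (so the numbers are meaningless), and a principal-component decomposition of the correlation matrix stands in for the study's factor analysis.

```python
# Hedged sketch of comparing variance explained by three components in
# two age groups. Data is synthetic noise; a PCA-style decomposition of
# the correlation matrix stands in for the study's factor analysis.
import numpy as np

rng = np.random.default_rng(0)

def variance_explained(responses, k=3):
    corr = np.corrcoef(responses, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return eigvals[:k].sum() / eigvals.sum()

# Hypothetical 34-item Likert (1-5) responses for the two samples.
younger = rng.integers(1, 6, size=(2019, 34))
older = rng.integers(1, 6, size=(5000, 34))

print(f"younger, 3 components: {variance_explained(younger):.2%}")
print(f"older,   3 components: {variance_explained(older):.2%}")
```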
34

Development of process models for the production planning in petroleum refineries

Guerra Fernández, Omar José (3 December 2009)
For many years, production planning problems in petroleum refineries have been addressed using the linear programming (LP) technique, which is based on linear process models (linearizations of nonlinear process behavior at a particular set of operating conditions). However, linear process models are not a good representation of refinery operations, since refinery processes involve both physical operations (phase separations, blending of intermediate streams, etc.) and chemical operations (cracking reactions, hydrotreating reactions, etc.) that are characterized by their nonlinear nature. As a consequence, the operating plans produced by LP-based production planning models are operationally unreliable. On the other hand, recent advances in computer hardware and in the mathematical tools (optimization algorithms) used to solve production planning problems allow the implementation of nonlinear process models in production planning models. This work deals with the formulation of process models that are suitable for petroleum refinery planning: good approximations of actual operation at low computational cost.
Two processes were studied: crude distillation and fluid catalytic cracking (FCC). An empirical model for crude distillation units was developed and successfully validated using rigorous simulations carried out in HYSYS®, and an empirical model for an FCC unit was developed and successfully validated against a rigorous FCC simulator from PETROBRAS. These empirical models overcome the limitations of both the linear and the nonlinear empirical models for crude distillation and FCC units previously developed by other authors. Subsequently, the empirical models were implemented in single-period production planning models for two refineries: a small-scale refinery (with two process units) and a medium-scale refinery (with seven process units). The planning models resulted in two nonlinear programming problems that were solved using three different solvers (CONOPT 3, IPOPT and MINOS) available in the GAMS (General Algebraic Modeling System) platform. Each solver solved the production planning models successfully in less than one second of CPU time.
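As a toy analogue of the nonlinear planning problems the work solves in GAMS, the sketch below maximizes profit for a hypothetical unit with a nonlinear yield model, using SciPy's SLSQP solver in place of CONOPT/IPOPT/MINOS. The yields, prices, and bounds are invented for illustration.

```python
# Minimal sketch of a nonlinear refinery planning problem, posed with
# SciPy instead of GAMS. The two-product unit, its yield curves, and
# all prices are hypothetical.
from scipy.optimize import minimize

PRICE_GASOLINE, PRICE_DIESEL, COST_CRUDE = 90.0, 80.0, 60.0

def neg_profit(x):
    crude, severity = x
    # Nonlinear yield model: gasoline yield rises with severity while
    # diesel yield falls, mimicking an FCC-style trade-off.
    gasoline = crude * (0.30 + 0.25 * severity - 0.10 * severity**2)
    diesel = crude * (0.45 - 0.20 * severity)
    return -(PRICE_GASOLINE * gasoline + PRICE_DIESEL * diesel
             - COST_CRUDE * crude)

res = minimize(
    neg_profit,
    x0=[50.0, 0.5],
    bounds=[(0.0, 100.0), (0.0, 1.0)],  # crude throughput, unit severity
    method="SLSQP",
)
crude, severity = res.x
print(f"throughput={crude:.1f}, severity={severity:.2f}, "
      f"profit={-res.fun:.0f}")
```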
35

Online flood forecasting in fast responding catchments on the basis of a synthesis of artificial neural networks and process models

Cullmann, Johannes (3 April 2007)
A detailed and comprehensive description of the state of the art in flood forecasting opens this work; advantages and shortcomings of currently available methods are identified and discussed. One important aspect concerns the most pressing weak point of today's forecasting systems: the representation of all the fundamentally different, event-specific patterns of flood formation with a single set of model parameters. The study proposes an alternative that overcomes this restriction by taking the different process characteristics of flood events into account via a dynamic parameterisation strategy. Other fundamental shortcomings of current approaches particularly restrict the potential for real-time flash flood forecasting, namely the considerable computational requirements and the rather cumbersome operation of reliable, physically based hydrologic models. The new PAI-OFF methodology (Process Modelling and Artificial Intelligence for Online Flood Forecasting) addresses these problems and offers a way out of the general dilemma. It combines the reliability and predictive power of physically based hydrologic models with the operational advantages of artificial intelligence: extremely low computation times, robustness, and straightforward operation. These qualities allow flash floods in small catchments to be predicted from precipitation forecasts, while the minimal computational requirements open the way for online Monte Carlo analysis of forecast uncertainty. The study encompasses a detailed analysis of hydrological modelling and a problem-specific artificial intelligence approach in the form of artificial neural networks, which together build the PAI-OFF methodology. The synthesis of process modelling and artificial neural networks is achieved by a special training procedure that optimizes the network according to the patterns of possible catchment reaction to rainstorms. This information is provided by a physically based catchment model, freeing the artificial neural network from its constriction to the range of observed data, the classical reason for the unsatisfactory predictive power of net-based approaches. Instead, the PAI-OFF net learns to portray the dominant process controls of flood formation in the considered catchment, allowing reliable predictive performance. The work closes with an exemplary forecast of the 2002 flood in a 1700 km² East German watershed.
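The sketch below illustrates the PAI-OFF training idea in miniature: generate catchment responses with a (here trivially simple) process model, then train a neural network on those synthetic patterns so it is not confined to the range of observed events. The runoff formula, data ranges, and network size are hypothetical stand-ins, not the thesis's actual models.

```python
# Rough sketch of training a neural network on synthetic responses
# produced by a process model, in the spirit of PAI-OFF. The runoff
# formula and data ranges are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def process_model(rain, soil_moisture):
    """Toy runoff generator standing in for the physically based model."""
    return 0.8 * rain * soil_moisture + 0.1 * rain**2

# Sample the space of possible rainstorm / catchment-state patterns.
rain = rng.uniform(0, 50, 5000)
moisture = rng.uniform(0.1, 1.0, 5000)
X = np.column_stack([rain, moisture])
y = process_model(rain, moisture)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000,
                   random_state=0)
net.fit(X, y)

# The trained net now answers almost instantly, which is what online
# forecasting and Monte Carlo uncertainty analysis require.
print(net.predict([[30.0, 0.7]]), process_model(30.0, 0.7))
```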
36

An approach to business process management at a higher education institution / Maria Elizabeth Nel

Nel, Maria Elizabeth (January 2009)
Thesis (M.B.A.)--North-West University, Potchefstroom Campus, 2010.
38

A systematic framework of recovering process patterns from project enactment data as inputs to software process improvement

Huo, Ming (Computer Science & Engineering, Faculty of Engineering, UNSW, January 2009)
The study of the software development process is a relatively new but rapidly growing research area. This development process, also called 'the software life cycle' or 'the software process', is the methodology used throughout the industry for the planning, design, implementation, testing and maintenance that take place during the creation of a software product. Over the years a variety of process models have been developed, and from the numerous models now available, project managers need validation of the choices they have made for a software development model they believe will provide the best results. The quality software sought by project managers can be enhanced by improving the development process through which it is delivered. Well-tested, reliable evidence is needed to assist project managers in choosing and planning a superior software process, as well as in improving the adopted process. While some guidelines for software process validation and improvement exist, such as CMMI, quantitative evidence is scarce, and it sometimes cannot be obtained from high-level processes that refer to a planned process model, such as a waterfall model. Furthermore, there has been little analysis of low-level processes: the actions by which a development team follows a high-level software process model to develop a software product. We describe these low-level processes as project enactment. Normally there is a gap between the high-level software process and the project enactment; to improve the software development process, this gap needs to be identified, measured and analyzed. In this dissertation, we propose an approach that examines the deviation between a planned process model and the project enactment of that plan. We measure the discrepancy from two aspects: consistency and inconsistency. The analytical results of the proposed approach, which include both qualitative and quantitative data, provide powerful and precise evidence for tailoring, planning and selecting any software process model. The approach is composed of four major phases: 1) representation of the planned process model, 2) pre-processing of the low-level process data, 3) process mining, and 4) analysis and comparison of the recovered process model and the planned process model. We evaluate the approach in three case studies: a small, a medium, and a large-sized project obtained from an industrial software development organization. The appropriate low-level process data was collected and the approach applied to each project individually. For each case study we then performed a detailed analysis of the inconsistencies that surfaced, as well as the consistencies, between the plan and the enactment models. The analysis of inconsistencies revealed that several 'agile' practices were introduced during development even though the planned process model was initially based on 'ISO-12207' rather than an 'agile' method. In addition, the analysis identifies patterns in the process that are frequently repeated. The outcome of the case studies shows that the approach is applicable to a range of software projects, and the conclusions confirm that it can be used to enhance the entire software development process, including tailoring and assessment.
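A minimal sketch of the plan-versus-enactment comparison (phase 4) might score each enacted trace by how far it can be replayed against the planned activity order, as below. The planned model and event logs are hypothetical, and real process mining uses richer models than a plain sequence.

```python
# Hedged sketch of scoring enacted traces against a planned activity
# order. The plan and logs are hypothetical; a real recovered process
# model would be richer than a simple sequence.

planned = ["requirements", "design", "code", "test", "release"]

enacted_logs = [
    ["requirements", "design", "code", "test", "release"],
    ["requirements", "code", "design", "code", "test", "release"],
    ["requirements", "design", "code", "code", "test", "code", "release"],
]

def consistency(trace, plan):
    """Fraction of trace events that advance the plan in order."""
    i, hits = 0, 0
    for event in trace:
        if i < len(plan) and event == plan[i]:
            hits += 1
            i += 1
    return hits / len(trace)

for trace in enacted_logs:
    print(f"{consistency(trace, planned):.2f}  {trace}")
```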
39

Towards a reusable process model structure for higher education institutions

Van der Merwe, Aletta Johanna (30 June 2005)
One of the tools used during re-engineering of an environment is the process model. The identification of process models within an institution is a difficult and tedious task, and a recurring problem is that process model structures are identified for one specific project and not stored for future reuse. The ideal for institutions is to reuse process model structures. This study focused on generic structures within the higher education application domain. The hypothesis was that a generic educational process model structure for higher education institutions can be established; that a process management flow procedure can be used to manage the flow within an educational process model; and that an educational process model can be stored and reused in re-engineering efforts. The study was divided into three research questions: the first focused on the identification of generic process model structures, the second on the usability of the process model structures within a re-engineering effort, and the last on the preservation of a process model structure. For the first research question, three institutions were used for data collection, and a requirements elicitation procedure was developed for this purpose; the derived structure was confirmed at a fourth institution. For the second research question, an ordinal measurement was defined to measure the usefulness of the process model structures in a re-engineering effort, and a re-engineering procedure for the application domain, called the process management flow procedure, was developed and used in a re-engineering effort at one institution. Lastly, for the third research question, the abstraction of the process model structure was investigated, as well as the feasibility of implementing the process model structures physically using existing repository software. The conclusion after investigating the three research questions was that the hypothesis was confirmed: there is indeed a set of process model structures within higher education institutions that is generic, preservable and reusable in a re-engineering effort. / Computing / Ph. D. (Computer Science)
