561 |
Introducing a rule-based architecture for workflow systems in retail supply chain management. Li, Sheng. January 2012 (has links)
Problem: As global IT competition intensifies, business organizations seeking to maximize profit and market competitiveness urgently need high-performance workflow systems to improve efficiency. However, current workflow systems embed fixed business rules that users cannot easily adjust, so the systems cannot be adapted to changed requirements, which leads to high business-management cost and low efficiency. It is therefore highly desirable for workflow system users, especially retail supply chain companies, to employ a new type of system that end users themselves can adjust when required. Solution: A rule-based workflow system architecture for managing retail supply chain business processes is recommended. In this architecture, the business rules are separated from the system logic and managed by users via a friendly interface. Compared with traditional workflow systems and similar information systems, rule-based workflow systems can greatly enhance system efficiency and lower maintenance cost, and their use can greatly improve the efficiency of retail supply chain business process management. Methods: Two main research problems and four sub-problems guide the research, which is divided into a theoretical part and an empirical part. The theoretical part discusses rule-base establishment and the rule-based workflow system architecture. The empirical part covers data analysis and prototype design using both quantitative and qualitative data collection, and attempts to verify the theories proposed in the theoretical part. Based on both parts, solutions to the research questions are sought. In general, this thesis aims to provide references for future research on rule-based workflow system management in retail supply chain management, as well as for the practical use of rule-based systems in the retail supply chain field, including system development and maintenance, especially for systems with complex and changeable business processes. Most importantly, some solutions are offered to the challenges of retail supply chain management. / Program: Magisterutbildning i informatik
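To illustrate the kind of rule/logic separation this abstract describes, the following minimal Python sketch keeps routing rules as editable data outside the workflow engine; the rule format, attribute names and thresholds are invented for illustration and are not the architecture proposed in the thesis.

    # Minimal sketch of separating business rules from workflow logic.
    # Rule format and attribute names are illustrative assumptions only.

    # Rules live in data (editable by end users), not in code.
    routing_rules = [
        {"if": {"order_total_gte": 10000}, "then": "manager_approval"},
        {"if": {"supplier_rating_lt": 3},  "then": "quality_review"},
        {"if": {},                         "then": "auto_approve"},  # default rule
    ]

    def next_step(order):
        """Pick the first rule whose conditions match the order."""
        for rule in routing_rules:
            cond = rule["if"]
            if "order_total_gte" in cond and order["total"] < cond["order_total_gte"]:
                continue
            if "supplier_rating_lt" in cond and order["supplier_rating"] >= cond["supplier_rating_lt"]:
                continue
            return rule["then"]
        return None

    # Example: changing a threshold means editing the rule data, not the engine.
    print(next_step({"total": 12000, "supplier_rating": 4}))  # manager_approval

The point of the sketch is simply that adapting the workflow to a changed requirement becomes a data edit rather than a code change.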
|
562 |
Um novo processo para refatoração de bancos de dados. / A new process to database refactoring. Márcia Beatriz Pereira Domingues. 15 May 2014 (has links)
Database design and maintenance is an important challenge, given the frequent requirement changes requested by users. To keep up with these changes, the database schema undergoes structural modifications that often harm performance and query design, such as unnecessary relationships, primary or foreign keys tightly coupled to the domain, obsolete attributes, and inappropriate attribute types. The literature on Agile Methods for software development proposes the use of refactorings to evolve the database schema when requirements change. A refactoring is a simple change that improves the design without altering the semantics of the data model or adding new functionality. This thesis presents a new process for applying refactorings to the database schema. The process is defined by a set of tasks intended to execute refactorings in a controlled, secure and automated way, improving the design of the schema and allowing the DBA to know the impact of each executed refactoring on database performance. The BPMN notation was used to represent and execute the tasks of the process. As a case study, a relational database used by a web-based information system for precision agriculture was adopted; this system must run large queries to plot graphs of georeferenced information.
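As an illustration of the behavior-preserving schema change the abstract refers to, here is a hedged Python/SQLite sketch of one classic refactoring (renaming a column and keeping old readers working through a view); the table and column names are invented, SQLite 3.25 or later is assumed, and this is not the process defined in the thesis.

    # Sketch of a single, semantics-preserving database refactoring:
    # rename a column, then expose the old name through a view during a
    # transition period. Table/column names are hypothetical examples.
    # Assumes SQLite >= 3.25 for ALTER TABLE ... RENAME COLUMN.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, fname TEXT)")
    conn.execute("INSERT INTO customer (fname) VALUES ('Ada')")

    # Refactoring: improve the design (clearer column name) without
    # changing what the data means.
    conn.execute("ALTER TABLE customer RENAME COLUMN fname TO first_name")

    # Keep existing queries working while applications migrate.
    conn.execute("CREATE VIEW customer_legacy AS "
                 "SELECT id, first_name AS fname FROM customer")

    print(conn.execute("SELECT fname FROM customer_legacy").fetchall())  # [('Ada',)]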
|
563 |
Mapeamento dos processos de geração e uso das informações clínicas na radiologia médica a partir da análise do fluxo informacional / Mapping of the processes of generation and use of clinical information in Medical Radiology through information flow analysis. Joeli Espirito Santo da Rocha. 28 September 2012 (has links)
This qualitative, descriptive study maps the processes of generating and using clinical information in Medical Radiology, taking as its reference the Information Cycle studied in Information Science, in order to identify metadata that can be used in teaching files for Radiology education. The methodology has four steps: 1) study of the information cycle, to survey the approaches of authors who systematize the processes of generating and using information; 2) description of the workflow of a Medical Radiology department, based on IHE documents, to identify the different actors in that environment; 3) analysis of the information-cycle processes within the Radiology department workflow, to understand how information is generated and used in the field; and 4) comparative analysis between the metadata specified for the Radiology department workflow (based on the DICOM standard) and the metadata used in the MyPACs and Auntminnie medical image databases, in order to assemble a set of clinical-practice information for use in teaching files. By studying the Radiology department workflow, the information generated and used at each step was mapped, resulting in a set of metadata drawn from clinical practice that can be used to create teaching files. The work concludes that studying the processes of generating and using clinical information, together with other topics, can help in devising methodologies for organizing information, so as to enable information retrieval across applications used in different contexts within Medical Radiology.
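A minimal sketch of the kind of metadata mapping discussed above: selected DICOM-style attributes are carried into a teaching-file record while direct identifiers are dropped. The chosen fields and the anonymization rule are illustrative assumptions, not the metadata set derived in the thesis.

    # Sketch of mapping clinical metadata (DICOM-style attributes) into a
    # teaching-file record. Field choices and the anonymization rule are
    # illustrative assumptions.

    def to_teaching_file(dicom_attrs):
        """Keep pedagogically useful fields, drop direct patient identifiers."""
        keep = ["Modality", "BodyPartExamined", "StudyDescription",
                "PatientAge", "PatientSex"]
        record = {k: dicom_attrs.get(k) for k in keep}
        record["Diagnosis"] = None  # to be filled in by the teaching radiologist
        return record

    example = {
        "PatientName": "DOE^JANE",   # identifier: excluded from the teaching file
        "Modality": "CR",
        "BodyPartExamined": "CHEST",
        "StudyDescription": "Chest X-ray, PA view",
        "PatientAge": "054Y",
        "PatientSex": "F",
    }
    print(to_teaching_file(example))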
|
564 |
Towards effective and efficient temporal verification in grid workflow systems. Chen, Jinjun, n/a. January 2007 (has links)
In grid architecture, a grid workflow system is a type of high-level grid middleware
which aims to support large-scale sophisticated scientific or business processes in a
variety of complex e-science or e-business applications such as climate modelling,
disaster recovery, medical surgery, high energy physics, international stock market
modelling and so on. Such sophisticated processes often contain hundreds of
thousands of computation or data intensive activities and take a long time to
complete. In reality, they are normally time constrained. Correspondingly, temporal
constraints are enforced when they are modelled or redesigned as grid workflow
specifications at build-time. The main types of temporal constraints include upper
bound, lower bound and fixed-time. Then, temporal verification would be conducted
so that we can identify any temporal violations and handle them in time.
Conventional temporal verification research and practice have presented some
basic concepts and approaches. However, they have not paid sufficient attention to
overall temporal verification effectiveness and efficiency. In the context of grid
economy, any resources used for executing grid workflows must be paid for. Therefore,
resources should mainly be spent on executing the grid workflow itself rather than on
temporal verification. Poor temporal verification effectiveness or efficiency would
cause more resources diverted to temporal verification. Hence, temporal verification
effectiveness and efficiency become a prominent issue and deserve an in-depth
investigation.
This thesis systematically investigates the limitations of conventional temporal
verification in terms of temporal verification effectiveness and efficiency. The
detailed analysis of temporal verification effectiveness and efficiency is conducted
for each step of a temporal verification cycle. There are four steps in total: Step 1 -
defining temporal consistency; Step 2 - assigning temporal constraints; Step 3 -
selecting appropriate checkpoints; and Step 4 - verifying temporal constraints.
Based on the investigation and analysis, we propose some new concepts and develop
a set of innovative methods and algorithms towards more effective and efficient
temporal verification. Comparisons, quantitative evaluations and/or mathematical
proofs are also presented at each step of the temporal verification cycle. These
demonstrate that our new concepts, innovative methods and algorithms can
significantly improve overall temporal verification effectiveness and efficiency.
Specifically, in Step 1, we analyse the limitations of two temporal consistency
states defined by conventional verification work. We then propose four
new states for better temporal verification effectiveness. In Step 2, we analyse
the necessity of a number of temporal constraints in terms of temporal verification
effectiveness. Then we design a novel algorithm for assigning a series of fine-grained
temporal constraints within a few user-set coarse-grained ones. In Step 3, we
discuss the problem of existing representative checkpoint selection strategies in
terms of temporal verification effectiveness and efficiency. The problem is that they
often ignore some necessary checkpoints and/or select some unnecessary ones. To
solve this problem, we develop an innovative strategy and corresponding algorithms
which only select sufficient and necessary checkpoints. In Step 4, we investigate a
phenomenon which is ignored by existing temporal verification work, i.e. temporal
dependency. Temporal dependency means temporal constraints are often dependent
on each other in terms of their verification. We analyse its impact on overall
temporal verification effectiveness and efficiency. Based on this, we develop some
novel temporal verification algorithms which can significantly improve overall
temporal verification effectiveness and efficiency. Finally, we present an extension
of our research covering the handling of temporal verification results, since these
results are based on our four new temporal consistency states.
The major contributions of this research are that we have provided a set of new
concepts, innovative methods and algorithms for temporal verification in grid
workflow systems. With these, we can significantly improve overall temporal
verification effectiveness and efficiency. This would eventually improve the overall
performance and usability of grid workflow systems because temporal verification
can be viewed as a service or function of grid workflow systems. Consequently, by
deploying the new concepts, innovative methods and algorithms, grid workflow
systems would be able to better support large-scale sophisticated scientific and
business processes in complex e-science and e-business applications in the context
of grid economy.
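A minimal sketch of what checking one upper-bound temporal constraint at a checkpoint can look like, using estimated and actual activity durations; it collapses the thesis's multiple consistency states into a single pass/fail test, and the activity data are invented.

    # Sketch of verifying one upper-bound temporal constraint at a checkpoint:
    # time already consumed plus the estimated remaining duration must not
    # exceed the constraint. Activity names and durations are invented.

    activities = [  # (name, estimated duration, actual duration or None if not run yet)
        ("ingest",   10, 12),
        ("simulate", 40, 47),   # checkpoint: this activity ran over its estimate
        ("analyse",  25, None),
        ("publish",   5, None),
    ]
    upper_bound = 90  # maximum allowed total duration for this workflow segment

    consumed  = sum(actual for _, _, actual in activities if actual is not None)
    remaining = sum(est for _, est, actual in activities if actual is None)

    if consumed + remaining <= upper_bound:
        print("upper-bound constraint still satisfiable:", consumed + remaining, "<=", upper_bound)
    else:
        print("temporal violation likely:", consumed + remaining, ">", upper_bound)

In the thesis's terms, a more refined check would distinguish several consistency states rather than a single yes/no answer, and would only be run at well-chosen checkpoints.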
|
565 |
應用同步選擇網路在派翠網路之分析 / Application of SNC (Synchronized Choice Net) to the analysis of Petri nets. 巫亮宏. Unknown Date (has links)
Well-behaved SNC covers well-behaved nets in various classes of FC (free-choice) nets and is not included in AC (asymmetric choice). An SNC allows internal choices and concurrency and hence is powerful for modeling. Any SNC is bounded and its liveness conditions are simple. An integrated algorithm has been presented for verifying that a net is an SNC and checking its liveness with polynomial time complexity. Scholars often need to verify properties of nets appearing in the literature. Verification by CAD tool is less desirable than verification by hand because of the extra effort required to input the model and to learn the tool. We propose to manually search for the maximum SNC component and then locate bad siphons in an incremental manner. We then apply Lautenbach's Marking Condition (MC) for liveness to verify the liveness property. However, there are two drawbacks associated with this MC. First, it guarantees only deadlock-freeness, not necessarily liveness; we have identified the structural cause for this and developed corresponding liveness conditions. Second, a net may be live even if the MC is not satisfied; we have identified the structural cause for this as well. The MC has been readjusted based on our proposed new theory.
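To make the siphon-location step above concrete, here is a small brute-force sketch that enumerates siphons of a tiny, invented Petri net; it is not an SNC-specific algorithm and does not reproduce the verification procedure of the thesis.

    # Brute-force siphon search on a tiny, invented Petri net.
    # A siphon is a nonempty place set S whose preset is contained in its
    # postset: every transition putting tokens into S also consumes from S.
    from itertools import combinations

    # transition -> (input places, output places)
    net = {
        "t1": ({"p1"}, {"p2"}),
        "t2": ({"p2"}, {"p3"}),
        "t3": ({"p3"}, {"p1"}),
        "t4": ({"p3"}, {"p4"}),
    }
    places = {"p1", "p2", "p3", "p4"}

    def is_siphon(S):
        pre  = {t for t, (_, outs) in net.items() if outs & S}  # transitions feeding S
        post = {t for t, (ins, _)  in net.items() if ins & S}   # transitions fed by S
        return bool(S) and pre <= post

    siphons = [set(S) for r in range(1, len(places) + 1)
               for S in combinations(sorted(places), r) if is_siphon(set(S))]
    print(siphons)  # e.g. {'p1', 'p2', 'p3'} is a siphon of this net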
|
566 |
Information categories and editorial processes in multiple channel publishing. Sabelström Möller, Kristina. January 2001 (has links)
No description available.
|
567 |
CAD Tools for DNA Micro-Array Design, Manufacture and Application. Hundewale, Nisar. 04 December 2006 (has links)
Motivation: As the human genome project progresses and several microbial and eukaryotic genomes become available, numerous biotechnological processes have recently attracted an increasing number of biologists, bioengineers and computer scientists. Biotechnological processes profoundly involve the production and analysis of high-throughput experimental data. Numerous sequence libraries of DNA and protein structures for a large number of micro-organisms, together with a variety of other databases related to biology and chemistry, are available. For example, microarray technology, a novel biotechnology, promises to monitor the whole genome at once, so that researchers can study the whole genome at the global level and obtain a better picture of the expression of millions of genes simultaneously. Today it is widely used in many fields: disease diagnosis, gene classification, gene regulatory networks, and drug discovery. Designing an organism-specific microarray and analysing the experimental data require combining heterogeneous computational tools that usually differ in data format, such as GeneMark for ORF extraction, Promide for DNA probe selection, Chip for probe placement on the microarray chip, BLAST for sequence comparison, MEGA for phylogenetic analysis, and ClustalX for multiple alignments. Solution: Surprisingly, despite the huge research effort invested in DNA array applications, very few works are devoted to computer-aided optimization of DNA array design and manufacturing. Current design practices are dominated by ad-hoc heuristics incorporated in proprietary tools with unknown suboptimality. This will soon become a bottleneck for the new generation of high-density arrays, such as the ones currently being designed at Perlegen [109]. The goal of the accomplished research was to develop highly scalable tools, with predictable runtime and quality, for cost-effective, computer-aided design and manufacturing of DNA probe arrays. We illustrate the utility of our approach with the concrete example of combining microarray design tools for Herpes B virus DNA data.
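As a loose illustration of one small step in such a pipeline, the sketch below screens candidate probes by GC content and a crude self-complementarity check; the thresholds and sequences are invented and it does not reproduce Promide, GeneMark, or the tools developed in the thesis.

    # Screening candidate probes by GC content and a rough hairpin check.
    # Thresholds and sequences are invented for illustration.

    def gc_content(seq):
        return (seq.count("G") + seq.count("C")) / len(seq)

    def reverse_complement(seq):
        comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
        return "".join(comp[b] for b in reversed(seq))

    def acceptable_probe(seq, gc_range=(0.40, 0.60), max_hairpin=6):
        """Keep probes with moderate GC and no long self-complementary stretch."""
        if not gc_range[0] <= gc_content(seq) <= gc_range[1]:
            return False
        rc = reverse_complement(seq)
        hairpin = any(seq[i:i + max_hairpin] in rc
                      for i in range(len(seq) - max_hairpin + 1))
        return not hairpin

    candidates = ["ATGCGTACGTTAGCATGCGT", "GGGGGGCCCCCCGGGGGGCC", "ATATATATATATATATATAT"]
    print([p for p in candidates if acceptable_probe(p)])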
|
568 |
Exception prediction in workflow management. Leong, Iok Fai. January 2010 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
|
569 |
Process-mediated Planning of AEC Projects through Structured Dialogues. Verheij, Johan Michiel. 18 November 2005 (has links)
Project planning in the Architecture, Engineering and Construction (AEC) industry at present relies heavily on individual skills, experience and improvisation. In an attempt to increase predictability and efficiency, and to improve knowledge retention across projects, this thesis proposes a more systematic approach to project planning. It does so by introducing the notion of a meta-process model that embodies and cultivates the logic and intelligence of incremental and collaborative planning activities in a given domain. Planning tasks are encoded and enforced as a set of structured dialogues between project partners. To make this possible, a taxonomy extension to current workflow modeling technology is introduced. The concept of the chosen approach can thus be classified as process mediation through structured dialogues. It is applied to the particular example case of Design-Build project delivery for which a detailed workflow model was created. This model serves as a partial instantiation of the larger Project Management Body Of Knowledge, an abstract framework put forward by the US Project Management Institute. A prototype system architecture is devised as an extension to an existing collaborative virtual environment developed in the European e-HUBs research project. This experimental Web-based platform supports the enactment of workflows that are expressed in the standardized syntax of the neutral process definition language XPDL. The functional richness of the structured dialogue extensions is demonstrated through a dialogue management prototype developed as a separate MS Access database application.
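A minimal sketch of what a "structured dialogue" step could look like in code: a planning question addressed to a role, a fixed set of permitted responses, and a response-dependent next step. The roles, prompts and step names are invented and do not reflect the thesis's Design-Build model or the e-HUBs platform.

    # Sketch of a structured-dialogue workflow step: only permitted responses
    # are accepted, and the chosen response determines the next step.
    # Roles, prompts and step names are hypothetical.

    dialogues = {
        "propose_schedule": {
            "initiator": "general_contractor",
            "responder": "owner",
            "prompt": "Accept the proposed milestone schedule?",
            "responses": {"accept": "finalize_plan",
                          "revise": "propose_schedule",
                          "reject": "escalate_to_steering_board"},
        },
    }

    def run_step(step, answer):
        d = dialogues[step]
        if answer not in d["responses"]:
            raise ValueError(f"'{answer}' is not a permitted response for {step}")
        return d["responses"][answer]   # name of the next dialogue step

    print(run_step("propose_schedule", "revise"))  # loops back until accepted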
|
570 |
Implementation Of Concurrent Constraint Transaction Logic And Its User Interface. Altunyuva, Fethi. 01 September 2006 (has links) (PDF)
This thesis implements a logical formalism framework called Concurrent Constraint Transaction Logic (abbr. CCTR), which was defined for modeling and scheduling workflows under resource allocation and cost constraints, and develops an extensible and flexible graphical user interface for the framework. CCTR extends Concurrent Transaction Logic and integrates it with Constraint Logic Programming to find a correct scheduling of tasks that involves resource and cost constraints. The developed system, which integrates the Prolog and Java platforms, is designed to serve as the basic environment for enterprise applications that involve CCTR-based workflows and schedulers. The full implementation described in this thesis clearly illustrates that CCTR can be used as a workflow scheduler that handles not only temporal and causal constraints but also resource and cost constraints.
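A hedged sketch of the scheduling problem CCTR addresses, written in plain Python rather than the thesis's Prolog/CLP setting: find a task ordering that respects causal dependencies while staying within resource and cost limits; the task data and limits are invented.

    # Brute-force search for a task ordering that respects prerequisites,
    # a per-step resource limit, and a total cost budget. The thesis solves
    # this class of problem with constraint logic programming; data invented.
    from itertools import permutations

    tasks = {  # name: (cost, resource units needed, prerequisites)
        "design": (3, 2, set()),
        "build":  (5, 3, {"design"}),
        "test":   (2, 2, {"build"}),
        "deploy": (1, 1, {"test"}),
    }
    max_resource_per_step, max_total_cost = 3, 12

    def feasible(order):
        done, total = set(), 0
        for t in order:
            cost, res, prereq = tasks[t]
            if not prereq <= done or res > max_resource_per_step:
                return False
            done.add(t)
            total += cost
        return total <= max_total_cost

    schedules = [order for order in permutations(tasks) if feasible(order)]
    print(schedules[0] if schedules else "no feasible schedule")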
|