521. Fume Cover Flash Chromatography system : The design of a Fume Cover. Horn, Alexander; Schenk, Hannes. January 2023
In laboratory work with Biotage's Selekt Enkel chromatography system, it is common for operators to be exposed to harmful, bad-smelling solvent gases. Biotage asked the students to solve these issues. The task of this thesis work was to propose the best possible design of a fume cover that removes, or at least reduces, this exposure. To gain the necessary knowledge, data-gathering methods such as workshops with staff and an extensive literature study of solvents and materials helped establish a sound design framework. Three concepts were built as 3D models, and Pugh's iterative improvement method was then used to produce an even better concept than the previous three. The chosen concept was iteratively improved for manufacturing and tested in an airflow experiment with carbon dioxide ice to determine the optimal design. The conclusion is that it is possible to design a fume cover that reduces solvent gas exposure, but further testing with light solvent gases and redesign will be required.
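Pugh's method, mentioned above, is essentially a weighted decision matrix scored against a datum concept. A minimal sketch of the idea follows; the criteria, weights, and scores are hypothetical illustrations, not values from the thesis.

```python
# Minimal Pugh decision-matrix sketch: score design concepts against a
# datum (reference) concept on weighted criteria. All names and numbers
# here are hypothetical illustrations, not values from the thesis.

criteria = {                 # criterion -> weight
    "gas containment": 3,
    "ease of manufacture": 2,
    "operator access": 2,
    "cost": 1,
}

# +1 = better than datum, 0 = same, -1 = worse
concepts = {
    "concept A": {"gas containment": 1, "ease of manufacture": 0, "operator access": -1, "cost": 0},
    "concept B": {"gas containment": 1, "ease of manufacture": -1, "operator access": 1, "cost": -1},
    "concept C": {"gas containment": 0, "ease of manufacture": 1, "operator access": 0, "cost": 1},
}

def pugh_score(ratings):
    return sum(criteria[c] * r for c, r in ratings.items())

best = max(concepts, key=lambda name: pugh_score(concepts[name]))
for name, ratings in concepts.items():
    print(f"{name}: {pugh_score(ratings):+d}")
print("selected datum for next iteration:", best)
```

In Pugh's iterative variant, the winning concept becomes the new datum and the scoring repeats, which matches the abstract's account of producing "an even better concept than the previous three."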
522. Developments of Infrared Thermography-based Tools to Assist Surgical Procedures and Workflow. Unger, Michael. 03 February 2023
No description available.
523. Att mäta Flow på Byggarbetsplatser : En undersökning av olika metoder att fånga arbetstillfredsställelse / To measure Flow on Construction sites : An investigation of different methods to capture work satisfaction. Siltala, Minna. January 2022
The construction industry was undergoing rapid change at the time of data collection, not least through digitalization and efficiency improvements. Despite this, there was very little research on how employees in the construction industry experienced the effect of change on their job satisfaction. A concept that considers the individual in relation to their job satisfaction is the flow concept developed by Csíkszentmihályi. The concept comprises eight aspects that should be fulfilled as well as possible in order to experience what Csíkszentmihályi calls "flow", for example "balance between ability and skill", "clear goals", "immediate feedback", and "sense of control". The better these aspects are fulfilled, the greater the chance of experiencing flow, and therefore of good job satisfaction. This thesis examined job satisfaction through the flow concept using three methods (surveys, observations, and interviews) and compared them in order to recommend which method or methods best capture job satisfaction, so that the construction industry can continue to implement major changes while increasing the job satisfaction of its employees. Each method had strengths and weaknesses: for example, surveys and observations allowed participant anonymity, while interviews answered the questions "how" and "why", opening a dialogue about the experience. In collaboration with a construction project at Skanska, NA3, and a research project at Luleå University of Technology, the thesis examined how employees experienced job satisfaction during change. The sample, predetermined by the scope of the collaboration between Skanska and LTU, consisted of the supervisors and the planning engineer in an ongoing construction project that was scheduled to receive a new digital tool during the data collection period. Data collection was carried out between weeks 5 and 16 of 2020; observations and surveys were conducted weekly, while interviews were held at the end of the period, during week 16. Overall, the results showed a high chance of experiencing flow and therefore good job satisfaction. The new digital tool was delayed, so at first little changed in the participants' daily work, but during data collection a sudden downward trend was captured at the same time as the world was hit by Covid-19. Covid-19 itself had no direct connection to the project, but it brought hurried digital changes in the work, such as sudden remote work and online meetings, as well as sick leave, difficulties in assembling a sufficient workforce, delayed deliveries, and considerable anxiety and stress linked to the project and its progress but also to each person's life and health. The flow concept and the methods used thus worked for capturing flow and job satisfaction on the project in question. In summary, surveys were considered the most reliable method for examining flow. Surveys in combination with interviews make it possible to follow and analyze the impact on job satisfaction over time and then to ask follow-up questions when needed to gain a broader understanding. Further research is needed on how well the measured job satisfaction corresponds to reality.
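Weekly surveys like those described lend themselves to simple trend analysis. The following is a hypothetical sketch of aggregating flow-survey scores and flagging a downward trend; the rating scale, data, and threshold are invented for illustration and are not the thesis's actual instrument.

```python
# Hypothetical sketch: average weekly ratings of Csíkszentmihályi's flow
# aspects (1-7 scale assumed) and flag a downward trend. Dimensions and
# data are illustrative only, not the thesis's actual survey results.

weekly_responses = {             # week -> per-respondent mean flow scores
    5: [5.8, 6.1, 5.5],
    6: [5.9, 6.0, 5.7],
    12: [4.9, 5.1, 4.6],         # hypothetical dip as disruption hits
    16: [4.4, 4.8, 4.2],
}

def weekly_mean(scores):
    return sum(scores) / len(scores)

means = {week: weekly_mean(s) for week, s in weekly_responses.items()}
weeks = sorted(means)
for earlier, later in zip(weeks, weeks[1:]):
    if means[later] < means[earlier] - 0.5:      # crude trend threshold
        print(f"flow dropped between week {earlier} and week {later}: "
              f"{means[earlier]:.2f} -> {means[later]:.2f}")
```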
524. Development of a Fast and Accurate Mutation Assay in Human Cell Lines. Robeson, Kalen Z. 01 May 2017
No description available.
525. Generalized Empirical Bayes: Theory, Methodology, and Applications. Fletcher, Douglas. January 2019
The two key issues of modern Bayesian statistics are: (i) establishing a principled approach for distilling a statistical prior distribution that is consistent with the given data from an initial believable scientific prior; and (ii) developing a consolidated Bayes-frequentist data analysis workflow that is more effective than either of the two separately. In this thesis, we propose generalized empirical Bayes as a new framework for exploring these fundamental questions, along with a wide range of applications spanning fields as diverse as clinical trials, metrology, insurance, medicine, and ecology. Our research marks a significant step towards bridging the "gap" between the Bayesian and frequentist schools of thought that has plagued statisticians for over 250 years. Chapters 1 and 2, based on Mukhopadhyay and Fletcher (2018), introduce the core theory and methods of our proposed generalized empirical Bayes (gEB) framework, which solves a long-standing puzzle of modern Bayes originally posed by Herbert Robbins (1980). One of the main contributions of this research is to introduce and study a new class of nonparametric priors DS(G, m) that allows exploratory Bayesian modeling. At a practical level, the major advantages of our proposal are: (i) computational ease (it does not require Markov chain Monte Carlo (MCMC), variational methods, or any other sophisticated computational techniques); (ii) simplicity and interpretability of the underlying theoretical framework, which is general enough to include almost all commonly encountered models; and (iii) easy integration with mainstream Bayesian analysis that makes it readily applicable to a wide range of problems. Connections with other Bayesian cultures are also presented. Chapter 3 approaches the topic of measurement uncertainty from a new angle by introducing the foundations of nonparametric meta-analysis, with applications to real data examples from astronomy, physics, and medicine. Chapter 4 discusses further extensions and applications of our theory to distributed big data modeling and the missing species problem. The dissertation concludes by highlighting two important areas of future work: a full Bayesian implementation workflow and potential applications in cybersecurity. / Statistics
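For context, the classical empirical Bayes idea that frameworks of this kind generalize can be illustrated with Robbins' well-known formula for the Poisson compound decision problem. This standard textbook example is added here for illustration and is not drawn from the thesis itself.

```latex
% Robbins' (1956) classical empirical Bayes estimate, Poisson case:
% x_i | \theta_i ~ Poisson(\theta_i), with \theta_i ~ G unknown.
% The Bayes posterior mean depends only on the marginal pmf
% f(x) = \int e^{-\theta}\theta^x/x!\, dG(\theta):
\[
  \mathbb{E}[\theta \mid x]
  = \frac{\int \theta\, e^{-\theta}\theta^{x}/x!\; dG(\theta)}
         {\int e^{-\theta}\theta^{x}/x!\; dG(\theta)}
  = (x+1)\,\frac{f(x+1)}{f(x)},
\]
% so substituting empirical frequencies \hat{f} for f yields the
% empirical Bayes rule \hat{\theta}(x) = (x+1)\hat{f}(x+1)/\hat{f}(x),
% computed directly from the data without eliciting G.
```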
526. Continuously Extensible Information Systems: Extending the 5S Framework by Integrating UX and Workflows. Chandrasekar, Prashant. 11 June 2021
In Virginia Tech's Digital Library Research Laboratory, we support subject-matter experts (SMEs) in their pursuit of research goals. Their goals include everything from data collection to analysis to reporting, and their research commonly involves analysis of an extensive collection of data such as tweets or web pages. Without support -- such as by our lab, developers, or data analysts/scientists -- they would undertake the data analysis themselves, using available analytical tools, frameworks, and languages. To extract and produce the information needed to achieve their goals, the researchers/users would then need to know which sequences of functions or algorithms to run with such tools, after considering all of their extensive functionality. Our research addresses these problems directly by designing a system that lowers these information barriers. Our approach is broken down into three parts. In the first two parts, we introduce a system that supports discovery of both information and supporting services. In the first part, we describe a methodology that incorporates User eXperience (UX) research into the process of workflow design. Through the methodology, we capture (a) the different user roles and goals, (b) how user goals break down into tasks and sub-tasks, and (c) the functions and services required to solve each (sub-)task. In the second part, we identify and describe key components of the infrastructure implementation. This implementation captures the various goal/task/service associations in a manner that supports information inquiry of two types: (1) given an information goal as query, what is the workflow to derive this information? and (2) given a data resource, what information can we derive using this data resource as input? We demonstrate both parts of the approach, describing how we teach and apply the methodology, with three case studies. In the third part of this research, we rely on formalisms used in describing digital libraries to explain the components that make up the information system. The formal description serves as a guide for developing information systems that generate workflows to support SME information needs. We also specifically describe an information system meant to support information goals that relate to Twitter data. / Doctor of Philosophy / In Virginia Tech's Digital Library Research Laboratory, we support subject-matter experts (SMEs) in their pursuit of research goals. This includes everything from data collection to analysis to reporting. Their research commonly involves analysis of an extensive collection of data such as tweets or web pages. Without support -- such as by our lab, developers, or data analysts/scientists -- they would undertake the data analysis themselves, using available analytical tools, frameworks, and languages. To extract and produce the information needed to achieve their goals, the researchers/users would then need to know which sequences of functions or algorithms to run with such tools, after considering all of their extensive functionality. Further, as more algorithms are discovered and datasets grow larger, the information processing effort becomes more and more complicated. Our research aims to lower these barriers through a methodology that integrates the full life cycle, including the activities carried out by User eXperience (UX), analysis, development, and implementation experts.
We devise a three-part approach. The first two parts concern building a system that supports discovery of both information and supporting services. First, we describe the methodology that introduces UX research into the process of workflow design. Second, we identify and describe key components of the infrastructure implementation. We demonstrate both parts of the approach, describing how we teach and apply the methodology, with three case studies. In the third part of this research, we extend formalisms used in describing digital libraries to encompass the components that make up our new type of extensible information system.
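A minimal sketch of the two inquiry types such a goal/task/service registry can answer; the goals, resources, and service names below are hypothetical placeholders, not the dissertation's actual components.

```python
# Hypothetical goal/task/service registry supporting the two inquiries:
# (1) given an information goal, which workflow (service sequence) derives it?
# (2) given a data resource, which goals can be derived from it?

workflows = {
    # goal -> (required input resource, ordered services to run)
    "top hashtags":      ("tweet collection", ["clean_text", "extract_hashtags", "rank_by_count"]),
    "sentiment by week": ("tweet collection", ["clean_text", "score_sentiment", "bucket_by_week"]),
    "page word cloud":   ("web page crawl",   ["strip_html", "tokenize", "rank_by_count"]),
}

def workflow_for(goal):
    """Inquiry type 1: derive the workflow for a stated information goal."""
    resource, services = workflows[goal]
    return f"input: {resource}; run: {' -> '.join(services)}"

def goals_from(resource):
    """Inquiry type 2: list goals derivable from a given data resource."""
    return [g for g, (r, _) in workflows.items() if r == resource]

print(workflow_for("top hashtags"))
print(goals_from("tweet collection"))   # ['top hashtags', 'sentiment by week']
```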
527. Exploiting Heterogeneity in Distributed Software Frameworks. Kumaraswamy Ravindranathan, Krishnaraj. 08 January 2016
The objective of this thesis is to address the challenges faced in sustaining efficient, high-performance, and scalable Distributed Software Frameworks (DSFs), such as MapReduce, Hadoop, Dryad, and Pregel, for supporting data-intensive scientific and enterprise applications on emerging heterogeneous compute, storage, and network infrastructure. Large DSF deployments in the cloud continue to grow both in size and number, given that DSFs are cost-effective and easy to deploy. DSFs are becoming heterogeneous through the use of advanced hardware technologies and regular system upgrades. For instance, low-cost, power-efficient clusters that combine traditional servers with specialized resources such as FPGAs, GPUs, PowerPC-, MIPS-, and ARM-based embedded devices, and high-end server-on-chip solutions will drive future DSF infrastructure. Similarly, high-throughput DSF storage is trending towards hybrid and tiered approaches that use large in-memory buffers, SSDs, etc., in addition to disks. However, the schedulers and resource managers of these DSFs assume the underlying hardware to be homogeneous. Another problem faced by evolving applications is that they are typically complex workflows comprising different kernels. The kernels can be diverse, e.g., compute-intensive processing followed by data-intensive visualization, and each kernel has a different affinity for different hardware. Because DSFs do not understand the heterogeneity of the underlying hardware architecture and of the applications, existing resource managers cannot ensure an appropriate resource-application match for better performance and resource usage.
In this dissertation, we design, implement, and evaluate DerbyhatS, an application-characteristics-aware resource manager for DSFs, which predicts the performance of an application under different hardware configurations and dynamically manages compute and storage resources according to the application's needs. We adopt a quantitative approach: we first study the detailed behavior of various Hadoop applications running on different hardware configurations, and then propose application-attuned dynamic system management to improve the resource-application match. We re-design the Hadoop Distributed File System (HDFS) into a multi-tiered storage system that seamlessly integrates heterogeneous storage technologies into HDFS. We also propose data placement and retrieval policies to improve the utilization of storage devices based on characteristics such as I/O throughput and capacity. The DerbyhatS workflow scheduler is application-attuned and consists of two components: phi-Sched coupled with epsilon-Sched manages compute heterogeneity, while DUX coupled with AptStore manages the storage substrate to exploit heterogeneity. DerbyhatS will help realize the full potential of the emerging infrastructure for DSFs, e.g., cloud data centers, by offering many advantages over the state of the art through application-attuned, dynamic heterogeneous resource management. / Ph. D.
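A minimal sketch of characteristic-aware tiered placement of the kind described; the tier names, numbers, and policy below are hypothetical and not DerbyhatS's or AptStore's actual algorithm.

```python
# Hypothetical tier-placement sketch: route a file to the storage tier
# whose characteristics (throughput vs. capacity) best match its expected
# access pattern. Tiers and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    throughput_mb_s: int      # sequential I/O throughput
    capacity_gb: int
    used_gb: int = 0

tiers = [
    Tier("ram-buffer", 6000, 128),
    Tier("ssd",        1000, 2048),
    Tier("disk",        150, 32768),
]

def place(file_gb, hot):
    """Hot (frequently read) data goes to the fastest tier with room;
    cold data goes to the tier with the most free capacity."""
    if hot:
        order = sorted(tiers, key=lambda t: -t.throughput_mb_s)
    else:
        order = sorted(tiers, key=lambda t: -(t.capacity_gb - t.used_gb))
    for t in order:
        if t.used_gb + file_gb <= t.capacity_gb:
            t.used_gb += file_gb
            return t.name
    raise RuntimeError("no tier has capacity")

print(place(64, hot=True))     # -> ram-buffer
print(place(500, hot=False))   # -> disk
```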
528. Extraction automatique de protocoles de communication pour la composition de services Web / Automatic extraction of communication protocols for web services composition. Musaraj, Kreshnik. 13 December 2010
Business process management, service-oriented architectures, and their reverse engineering rely heavily on mining business process models and Web service business protocols from log files. Model extraction and mining aim at the (re)discovery of the behavior of a running model implementation using solely its interaction and activity traces, with no a priori information on the target model. Our preliminary study shows that: (i) only a minority of interaction data is recorded by process- and service-aware architectures; (ii) a limited number of methods achieve model extraction without knowledge of either positive process and protocol instances or the information needed to infer them; and (iii) existing approaches rely on restrictive assumptions that only a fraction of real-world Web services satisfy. Enabling the extraction of these interaction models from activity logs under realistic hypotheses necessitates: (i) approaches that abstract away the business context, allowing extended and generic usage; and (ii) tools for assessing the mining result through implementation of the process and service life cycle. Moreover, since interaction logs are often incomplete and uncertain and contain errors, the mining approaches proposed in this work must be able to handle these imperfections properly. We propose a set of mathematical models that encompass the different aspects of process and protocol mining. The extraction approaches we present, drawn from linear algebra, allow us to extract the business protocol while merging the classic process mining stages. In addition, our protocol representation, based on time series of flow density variations, makes it possible to recover the temporal order of execution of events and messages in the process. We also propose the concept of proper timeouts to identify timed transitions, and provide a method for extracting them despite their being invisible in logs. Finally, we present a multitask framework aimed at supporting all steps of the process workflow and business protocol life cycle, from design to optimization. The approaches presented in this manuscript have been implemented in prototype tools and experimentally validated on scalable datasets and real-world process and Web service models. The discovered business protocols can then be used to perform a multitude of tasks in an organization or enterprise.
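The time-series representation can be illustrated with a toy sketch: bucket logged messages into windows, compute a per-message-type flow density, and order message types by where their density peaks. The log format and message names below are hypothetical, not the thesis's datasets or its actual algebra.

```python
# Hypothetical sketch: recover a likely temporal order of message types
# from an event log by comparing when each type's flow density peaks.
from collections import defaultdict

# (timestamp_seconds, message_type) pairs: an illustrative toy log
log = [(0.1, "login"), (0.4, "login"), (1.2, "query"), (1.5, "query"),
       (1.9, "query"), (3.0, "result"), (3.2, "result"), (4.1, "logout")]

WINDOW = 1.0                     # seconds per density bucket
density = defaultdict(lambda: defaultdict(int))   # type -> bucket -> count
for t, msg in log:
    density[msg][int(t // WINDOW)] += 1

def peak_bucket(msg):
    """Bucket in which this message type's flow density is highest."""
    buckets = density[msg]
    return max(buckets, key=buckets.get)

ordered = sorted(density, key=peak_bucket)
print(ordered)                   # -> ['login', 'query', 'result', 'logout']
```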
529. Fluxo de dados em redes de Petri coloridas e em grafos orientados a atores / Dataflow in colored Petri nets and in actors-oriented workflow graphs. Borges, Grace Anne Pontes. 11 September 2008
Three decades ago, corporate information systems were designed to support the execution of individual tasks. Today, these systems also need to manage an organization's workflows and business processes. The information systems of scientific communities of physicists, astronomers, biologists, geologists, and others differ from those of corporate environments in several ways: repetitive procedures (such as re-running the same experiment); processing of raw data into results suitable for publication; and coordination of experiments across different hardware and software environments. Because of the different characteristics of the two environments, corporate and scientific, existing tools and formalisms prioritize either control flow or dataflow. There are situations, however, in which data-transfer control and task control flow must be handled simultaneously. This work aims to characterize and delimit the representation and control of dataflow in business processes and scientific workflows. To this end, two tools are compared: CPN Tools and KEPLER, based respectively on the formalisms of colored Petri nets and actor-oriented workflow graphs. The comparison is carried out through implementations of practical cases, using dataflow patterns as the basis for comparing the tools.
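The key idea of colored Petri nets, that tokens carry data values and transitions transform them, can be sketched minimally as follows; the places, token values, and guard are illustrative and unrelated to the CPN Tools or KEPLER case studies.

```python
# Hypothetical colored Petri net fragment: tokens are typed data values
# ("colors"), and firing a transition consumes input tokens that satisfy
# its guard and produces transformed output tokens, so the dataflow is
# explicit in the token values themselves.

places = {
    "raw":       [("sample-1", 42.0), ("sample-2", 17.5)],  # (id, reading)
    "processed": [],
}

def fire_process(guard=lambda tok: tok[1] > 20.0):
    """Fire the 'process' transition once: move a token satisfying the
    guard from 'raw' to 'processed', transforming its value."""
    for tok in places["raw"]:
        if guard(tok):
            places["raw"].remove(tok)
            places["processed"].append((tok[0], tok[1] * 2))  # transform
            return True
    return False         # transition not enabled for any token binding

while fire_process():
    pass
print(places)   # sample-1 is processed; sample-2 stays (guard fails)
```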
530. Método de avaliação do modelo de processos de negócio do EKD / Assessment method of business process model of EKD. Pádua, Silvia Inês Dallavalle de. 03 December 2004
Companies today need systems that adapt quickly to constant changes in the business environment, and to guarantee that systems fulfill their purpose, developers must have a deeper understanding of the organization, its objectives, goals, and market strategies. The main problem in developing software systems has been the difficulty of obtaining information about the application domain. This difficulty led to the emergence of enterprise modeling techniques, a valuable activity for understanding the business environment. EKD (Enterprise Knowledge Development) is a methodology that provides a systematic and controlled way to analyze, understand, develop, and document an organization. Unfortunately, it lacks a well-defined syntax and semantics, which hinders more complex analyses of its models. As a result, the EKD business process model can be ambiguous and hard to analyze, especially for more complex systems, and it is not possible to verify the consistency and completeness of the model. In this work, these problems are studied under an approach based on Petri nets. The formalism of Petri nets makes them an important modeling technique for representing processes; moreover, Petri nets allow each step of an operation to be traced without ambiguity, and they have efficient analysis methods that guarantee a model is free of errors. This work therefore develops an assessment method for the EKD business process model (MPN-EKD), through which it is possible to check the model for construction errors and deadlocks. The method can be applied to models aimed at information system development or workflow control, and can also be used to study work strategies and to simulate workflows.
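A minimal sketch of the kind of verification such a Petri-net-based method enables: enumerate the reachability graph of a small place/transition net and flag dead markings. The toy net below is invented for illustration and is not an actual MPN-EKD model.

```python
# Hypothetical sketch: exhaustive reachability analysis of a small
# place/transition net to detect deadlocks (reachable markings with no
# enabled transition that are not the intended final marking).

transitions = {
    # name: (consume: place->count, produce: place->count)
    "approve": ({"draft": 1}, {"approved": 1}),
    "reject":  ({"draft": 1}, {"rework": 1}),
    "revise":  ({"rework": 1}, {"draft": 1}),
    "archive": ({"approved": 1}, {"done": 1}),
}
initial = {"draft": 1, "rework": 0, "approved": 0, "done": 0}
final   = {"draft": 0, "rework": 0, "approved": 0, "done": 1}

def enabled(marking, consume):
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

seen, stack, deadlocks = set(), [initial], []
while stack:                      # depth-first walk of the state space
    m = stack.pop()
    key = tuple(sorted(m.items()))
    if key in seen:
        continue
    seen.add(key)
    successors = [fire(m, c, p) for c, p in transitions.values() if enabled(m, c)]
    if not successors and m != final:
        deadlocks.append(m)       # dead marking that is not the goal
    stack.extend(successors)

print("reachable markings:", len(seen))
print("deadlocks:", deadlocks or "none")
```

The same enumeration also answers consistency questions such as whether the final marking is reachable at all, which is the flavor of check the abstract attributes to Petri-net analysis of EKD process models.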