61 |
Techniques and Tools for Mining Pre-Deployment Testing Data. Chan, Brian. 17 September 2009 (has links)
Pre-deployment field testing is the process of testing software to uncover unforeseen problems before it is released to the market. It is commonly conducted by recruiting users to experiment with the software in as natural a setting as possible. Information about the software is then sent to the developers as logs. Log data helps developers fix bugs and better understand user behavior so they can refine functionality to meet user needs. More importantly, logs record specific problems as well as call traces that developers can use to trace their origins. However, developers typically focus their analysis on post-deployment data such as bug reports and CVS data, which has the disadvantage that software is released before it can be optimized. Therefore, more techniques are needed to harness field testing data and reduce post-deployment problems.
We propose techniques for processing log data generated by users in order to resolve problems in an application before its deployment. We introduce a metric system to predict the user-perceived quality of the software if it were released to the market in its current state. We also provide visualization techniques that can identify the state of problems and patterns of problem interaction with users, providing insight for solving the problems. The visualization techniques can also be extended to determine the point of origin of a problem, so it can be resolved more efficiently. Additionally, we devise a method to determine the priority of reported problems.
The techniques were evaluated in case studies on mobile software applications. The metric results showed a strong ability to predict the number of reported bugs in the software after its release. The visualization techniques uncovered problem patterns that gave developers insight into the relationship between problems and users. Our analysis of the characteristics of problems determined the highest-priority problems and their distribution among users. / Thesis (Master, Electrical & Computer Engineering) -- Queen's University, 2009.
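As a rough illustration of the kind of log-derived metric the abstract describes, the sketch below scores each field-test user by the density of distinct error messages in their logs; the log format, field names, and scoring rule are assumptions for illustration, not the thesis's actual metric.

```python
import re
from collections import defaultdict

# Hypothetical log format: "<timestamp> <user_id> <level> <message>"
LOG_LINE = re.compile(r"^(\S+)\s+(\S+)\s+(ERROR|WARN|INFO)\s+(.*)$")

def error_density(log_lines):
    """Crude proxy for user-perceived quality: fraction of a user's log
    events that are distinct errors (assumed metric, for illustration)."""
    errors_by_user = defaultdict(set)
    events_by_user = defaultdict(int)
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        _, user, level, msg = m.groups()
        events_by_user[user] += 1
        if level == "ERROR":
            errors_by_user[user].add(msg)
    return {u: len(errors_by_user[u]) / events_by_user[u]
            for u in events_by_user}

sample = [
    "2009-09-01T10:00 u1 INFO app started",
    "2009-09-01T10:05 u1 ERROR null pointer in MediaPlayer.play",
    "2009-09-01T10:06 u2 INFO app started",
]
print(error_density(sample))  # {'u1': 0.5, 'u2': 0.0}
```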
|
63 |
Reliability and Maintenance of Medical Devices. Taghipour, Sharareh. 31 August 2011 (links)
For decades, reliability engineering techniques have been successfully applied in many industries to improve the performance of equipment maintenance management. Numerous inspection and optimization models have been developed and are widely used to achieve maintenance excellence, i.e., the balance of performance, risk, resources, and cost required to reach an optimal solution. However, the application of these techniques and models to medical devices is new. Hospitals, because they operate a large number of different devices, can benefit significantly if optimization techniques are used properly in their equipment management processes. Most research on reliability engineering for medical equipment considers the devices in their design or manufacturing stage and suggests techniques to improve reliability; to date, the best maintenance strategies for medical equipment in its operating context have not been considered.
We aim to address this gap and propose methods to improve current maintenance strategies in the healthcare industry. More specifically, we first identify or propose the criteria that are important for assessing the criticality of medical devices, and propose a model for prioritizing medical equipment for maintenance decisions. The model is a novel application of multi-criteria decision-making methodology to prioritize medical devices in a hospital according to their criticality. Devices with a high level of criticality should be included in the hospital's maintenance management program.
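For illustration only, a weighted-sum score is one of the simplest multi-criteria schemes that could rank devices by criticality; the criteria, weights, and device scores below are invented, not taken from the thesis.

```python
# Illustrative weighted-sum criticality scoring. The thesis applies a
# multi-criteria decision-making method; everything below is assumed data.
CRITERIA_WEIGHTS = {          # assumed weights, summing to 1
    "mission_criticality": 0.40,
    "failure_consequence": 0.30,
    "utilization": 0.15,
    "age": 0.15,
}

devices = {   # hypothetical 0-10 scores per criterion
    "infusion pump": {"mission_criticality": 9, "failure_consequence": 9,
                      "utilization": 8, "age": 5},
    "patient monitor": {"mission_criticality": 8, "failure_consequence": 7,
                        "utilization": 9, "age": 3},
    "exam lamp": {"mission_criticality": 2, "failure_consequence": 2,
                  "utilization": 6, "age": 7},
}

def criticality(scores):
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

ranked = sorted(devices, key=lambda d: criticality(devices[d]), reverse=True)
for d in ranked:
    print(f"{d}: {criticality(devices[d]):.2f}")
```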
Then, we propose a method to statistically analyze maintenance data for complex medical devices with censoring and missing information. We present a classification of failure types and establish policies for analyzing data at different levels of the device. Moreover, a new method for trend analysis of censored failure data is proposed. A novel feature of this work is that it considers dependent failure histories which are censored by inspection intervals. Trend analysis of this type of data has not been discussed in the literature.
Finally, we introduce some assumptions based on the results of the analysis, and develop several new models to find the optimal inspection interval for a system subject to hard and soft failures. Hard failures are instantaneously revealed and fixed. Soft failures are only rectified at inspections; they do not halt the system, although they reduce its performance or productivity. The models are constructed for two main cases: periodic inspections alone, and periodic plus opportunistic inspections. All numerical examples and case studies presented in the dissertation are adapted from maintenance data received from a Canadian hospital.
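As a worked illustration of the periodic-inspection case, the sketch below uses one of the simplest textbook variants: soft failures arrive at a Poisson rate and accrue a penalty for the time they remain undetected until the next inspection, which yields a closed-form optimal interval. The cost figures and failure rate are assumptions; the thesis's models are more elaborate than this.

```python
import math

def soft_failure_cost_rate(tau, c_i, c_d, lam):
    """Expected cost per unit time with inspections every tau:
    each inspection costs c_i; soft failures arrive at Poisson rate lam
    and incur penalty c_d per unit of undetected time (on average tau/2
    elapses before the next inspection reveals a failure)."""
    return c_i / tau + c_d * lam * tau / 2.0

def optimal_interval(c_i, c_d, lam):
    # Setting dC/dtau = 0 gives the closed-form minimizer.
    return math.sqrt(2.0 * c_i / (c_d * lam))

c_i, c_d, lam = 200.0, 50.0, 0.1   # assumed costs and failure rate
tau_star = optimal_interval(c_i, c_d, lam)
print(f"optimal interval: {tau_star:.1f} time units, "
      f"cost rate: {soft_failure_cost_rate(tau_star, c_i, c_d, lam):.2f}")
```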
|
65 |
Plant-wide Performance Monitoring and Controller Prioritization. Pareek, Samidh. Unknown Date (links)
Plant-wide performance monitoring has generated a lot of interest in the control engineering community. The idea is to judge the performance of a plant as a whole rather than looking at the performance of individual controllers. Data-based methods are currently used to generate a variety of statistical performance indices to help judge the performance of production units and control assets. However, so much information can be overwhelming when it lacks precise diagnostic content. Powerful computing and data storage capabilities have enabled industries to store huge amounts of data, and commercial performance monitoring software from vendors such as Honeywell, Matrikon, and ExperTune typically uses this data to generate huge amounts of information. The problem of data overload has in this way turned into a problem of information overload. This work focuses on developing methods that reconcile these various statistical measures of performance and generate useful diagnostic measures for optimizing the process performance of a unit/plant. These methods are also able to identify the relative importance of controllers according to how they affect the performance of the unit/plant under consideration. / Process Control
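One naive way to reconcile loop-level statistics with plant-level performance, sketched below on synthetic data, is to rank control loops by the correlation of their error signals with a plant-wide KPI deviation. This simple stand-in is an assumption for illustration, not the method developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Synthetic control-error series for three loops (hypothetical tag names).
loops = {
    "FC-101": rng.normal(0, 1.0, n),
    "TC-202": rng.normal(0, 0.5, n),
    "PC-303": rng.normal(0, 0.8, n),
}
# Plant KPI deviation driven mostly by FC-101, plus noise.
kpi = 0.9 * loops["FC-101"] + 0.2 * loops["PC-303"] + rng.normal(0, 0.3, n)

# Rank loops by |correlation| between loop error and plant KPI deviation.
ranking = sorted(loops,
                 key=lambda name: abs(np.corrcoef(loops[name], kpi)[0, 1]),
                 reverse=True)
for name in ranking:
    r = np.corrcoef(loops[name], kpi)[0, 1]
    print(f"{name}: |r| = {abs(r):.2f}")
```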
|
66 |
The soft time constraint : studies of project extension within an aid agency. Krohwinkel-Karlsson, Anna. January 2008
Diss. Stockholm : Handelshögskolan i Stockholm, 2008
|
67 |
Association Based Prioritization of Genes. January 2011 (links)
Genes have widely different pertinences to the etiology and pathology of diseases. Thus, they can be ranked according to their disease significance on a genomic scale, which is the subject of gene prioritization. Given a set of genes known to be related to a disease, it is reasonable to use them as a basis to determine the significance of other candidate genes, which are then ranked by the association they exhibit with the given set of known genes. Experimental and computational data of various kinds have different reliability and relevance to a disease under study. This work presents a gene prioritization method based on integrated biological networks that incorporates and models the various levels of relevance and reliability of diverse sources. The method is shown to achieve significantly higher performance than two well-known gene prioritization algorithms. Essentially no bias in performance was seen when it was applied to diseases of diverse etiology, e.g., monogenic, polygenic, and cancer. The method was highly stable and robust against significant levels of noise in the data.
Biological networks are often sparse, which can impede the operation of association-based gene prioritization algorithms such as the one presented here from a computational perspective. As a potential approach to overcome this limitation, we explore the value that transcription factor binding sites can have in elucidating suitable targets. Transcription factors are needed for the expression of most genes, especially in higher organisms, and hence genes can be associated via their genetic regulatory properties. While each transcription factor recognizes specific DNA sequence patterns, such patterns are mostly unknown for many transcription factors. Even those that are known are inconsistently reported in the literature, implying a potentially high level of inaccuracy. We developed computational methods for the prediction and improvement of transcription factor binding patterns. Tests performed on the improvement method by employing synthetic patterns under various conditions showed that the method is very robust and the patterns produced invariably converge to nearly identical series of patterns. Preliminary tests were conducted to incorporate knowledge from transcription factor binding sites into our network-based model for prioritization, with encouraging results.
To validate these approaches in a disease-specific context, we built a schizophrenia-specific network based on the inferred associations and performed a comprehensive prioritization of human genes with respect to the disease. These results are expected to be validated empirically, but computational validation using known targets is very positive. / Dissertation/Thesis / Ph.D. Computer Science 2011
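A common network-propagation scheme for association-based gene prioritization is a random walk with restart from the known disease genes. The minimal sketch below uses a toy network and illustrates only the general idea; the thesis's integrated model additionally weights data sources by relevance and reliability.

```python
import numpy as np

def random_walk_with_restart(adj, seeds, restart=0.3, tol=1e-8):
    """Rank genes by the steady-state visit probability of a walk that
    restarts at the known disease genes with probability `restart`."""
    # Column-normalize the adjacency matrix into a transition matrix.
    col_sums = adj.sum(axis=0)
    W = adj / np.where(col_sums == 0, 1, col_sums)
    p0 = np.zeros(adj.shape[0])
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 5-gene network; genes 0 and 1 are the known disease genes.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(adj, seeds=[0, 1])
print(np.argsort(-scores))  # candidate genes ranked by association
```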
|
68 |
Self-learning algorithms applied in Continuous Integration system. Tummala, Akhil. January 2018
Context: Continuous Integration (CI) is a software development practice in which developers integrate code into a shared repository; an automated system then verifies the code and runs automated test cases to find integration errors. For this research, Ericsson's CI system is used. The tests performed in CI are regression tests. Based on their time scopes, the regression test suites are categorized into hourly and daily test suites. The hourly test is performed on all the commits made in a day, whereas the daily test is performed at night on the latest build that passed the hourly test. Here, the hourly and daily test suites are static, and the hourly test suite is a subset of the daily test suite. Since the daily test is performed at the end of the day, its results are obtained only the next day, delaying feedback to developers about integration errors. To mitigate this problem, this research investigates the possibility of creating a learning model and integrating it into the CI system, which could then create a dynamic hourly test suite for faster feedback.
Objectives: This research aims to find a suitable machine learning algorithm for the CI system and to investigate the feasibility of creating self-learning test machinery. This goal is achieved by examining the CI system and finding out what type of data is required for creating a learning model that prioritizes test cases. Once the necessary data is obtained, the selected algorithms are evaluated to find the most suitable learning algorithm, and it is then investigated whether the created learning model can be integrated into the CI workflow.
Methods: An experiment is conducted to evaluate the learning algorithms, using data provided by Ericsson AB, Gothenburg. The dataset consists of the daily test information and the test case results. The algorithms evaluated are Naïve Bayes, support vector machines, and decision trees. The evaluation is done by performing leave-one-out cross-validation, with performance measured by prediction accuracy. After obtaining the accuracies, the algorithms are compared to find the most suitable one for the CI system.
Results: The experiment showed that support vector machines outperformed the Naïve Bayes and decision tree algorithms. However, due to challenges in the current CI system, it is not feasible to integrate the created learning model into CI. The primary challenge is that a test case failure cannot be mapped to its respective commit (one cannot find which commit made the test case fail), because the daily test is performed on the latest build, which combines all the commits made that day. Another challenge is low data storage, which gives rise to problems such as the curse of dimensionality and class imbalance.
Conclusions: This research identified a suitable learning algorithm for creating self-learning test machinery, as well as the challenges involved in integrating the model into CI. Based on the experiment results, support vector machines have higher prediction accuracy in test case result classification than Naïve Bayes and decision trees.
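A minimal sketch of the evaluation described under Methods, using scikit-learn on synthetic data (the real dataset is Ericsson's, and the feature set here is an assumption): leave-one-out cross-validation comparing Naïve Bayes, a support vector machine, and a decision tree by prediction accuracy.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in: each row is a test case's feature vector
# (e.g., recent failure history); the label is pass/fail.
X = rng.normal(size=(120, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 120) > 0).astype(int)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(max_depth=4),
}
loo = LeaveOneOut()
for name, model in models.items():
    # Each fold holds out one sample; the mean 0/1 score is LOO accuracy.
    acc = cross_val_score(model, X, y, cv=loo).mean()
    print(f"{name}: LOO accuracy = {acc:.3f}")
```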
|
69 |
Proposta de um modelo para reduzir o GAP entre a definição e a execução de estratégias corporativas. Carvalho, Haroldo Blini de. January 2007 (links)
Despite the enormous time and energy that goes into strategy development, many companies have little to show for their efforts. Research by the consultancy Marakon Associates (2004) suggests that, on average, only about 60% of a company's financial achievements result from its strategic planning. This research focuses primarily on incorporating strategy, obtained from the balanced scorecard strategy map, into the company's operations and projects. Its overall objective is to evaluate the proposed model, which prioritizes and balances projects in accordance with the planned strategies; this can minimize the gap between strategy formulation and execution. The model comprises various processes and evaluation methods, drawing on theory in corporate strategic management and portfolio management. It was applied in a telecommunications company in Brazil (referred to as company X for confidentiality reasons), but is feasible in other companies with similar characteristics, given some customization. The strength of this model is that it takes in all dimensions of the company's strategy, such as learning and growth, the customer perspective, and internal processes, rather than only the financial perspective. In contrast, its weakness lies mainly in its heavy dependency on the quality of the information produced by third parties for analyzing, prioritizing, and balancing the portfolio. Due to this inherent characteristic, a few balanced scorecard indicator targets differed from the realized values. Using the proposed model, the conclusion is drawn that its results are very consistent and in line with the strategy planned in the balanced scorecard.
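As an illustration of prioritizing and balancing a project portfolio against the balanced scorecard perspectives, the sketch below greedily funds projects by strategic alignment per unit cost under a budget constraint; the projects, weights, and selection rule are assumptions, not the thesis's model.

```python
# Hypothetical projects scored against the four balanced scorecard
# perspectives (financial, customer, internal processes, learning & growth).
WEIGHTS = (0.25, 0.25, 0.25, 0.25)   # assumed equal strategic weights

projects = [  # (name, cost, per-perspective scores 0-10)
    ("CRM rollout",        400, (6, 9, 5, 4)),
    ("Billing automation", 250, (8, 4, 9, 3)),
    ("Staff training",     100, (3, 4, 5, 9)),
    ("Network upgrade",    500, (7, 6, 8, 2)),
]

def alignment(scores):
    return sum(w * s for w, s in zip(WEIGHTS, scores))

budget = 800
# Greedy: fund projects with the best alignment per unit cost first.
selected, spent = [], 0
for name, cost, scores in sorted(projects,
                                 key=lambda p: alignment(p[2]) / p[1],
                                 reverse=True):
    if spent + cost <= budget:
        selected.append(name)
        spent += cost
print(selected, f"spent {spent} of {budget}")
```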
|
70 |
Sistemática para avaliação e priorização de opções de investimento aplicada ao franchising. Silveira, Fernando Mynarski. January 2017 (links)
Franchising has grown significantly in Brazil over the last two decades; the main regulatory milestone came with the Franchise Law 8,955 of 1994. This growth is expressed both in the number of franchises installed and in the diversity of their segments. Accordingly, a recurring issue is which franchise a prospective franchisee should choose from the range of options the market puts at its disposal. To help solve this problem, the present work proposes a systematic approach based on a multicriteria method and simulation. First, the criteria guiding the choice of franchises are identified, both in the literature and through information collected in fieldwork. Then, based on these criteria, economic and financial analyses are carried out that produce two rankings: one derived from a multicriteria decision method, and the other from a profitability-risk evaluation performed through simulation. Considering the similarities and differences between these two rankings, a prospective franchisee could, following this structured approach, choose the most attractive franchise. That is the main practical contribution; the main academic contribution is to fill gaps in the literature, chiefly by treating franchising in conjunction with statistical and mathematical methods.
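The profitability-risk ranking via simulation could look like the sketch below, which Monte Carlo-simulates the net present value of each franchise option and ranks by a risk-adjusted figure; the cash-flow parameters and the ranking rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical franchise options: (initial investment, mean monthly
# cash flow, cash-flow volatility). Values are invented for illustration.
franchises = {
    "Franchise A": (200_000, 9_000, 3_000),
    "Franchise B": (120_000, 5_500, 1_200),
    "Franchise C": (300_000, 14_000, 7_000),
}

def simulate_npv(invest, mu, sigma, months=60, rate=0.01, n_sims=10_000):
    """Monte Carlo NPV over a 5-year horizon with monthly discounting."""
    flows = rng.normal(mu, sigma, size=(n_sims, months))
    discount = (1 + rate) ** -np.arange(1, months + 1)
    return flows @ discount - invest

# Rank by a simple risk-adjusted figure: mean NPV minus one std deviation.
results = {name: simulate_npv(*params) for name, params in franchises.items()}
for name in sorted(results,
                   key=lambda n: results[n].mean() - results[n].std(),
                   reverse=True):
    npv = results[name]
    print(f"{name}: mean NPV = {npv.mean():,.0f}, std = {npv.std():,.0f}")
```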
|