1 |
PVIT: A task-based approach for design and evaluation of interactive visualizations for preferential choice / Bautista, Jeanette Lyn, 05 1900
In decision theory the process of selecting the best option is called preferential
choice. Many personal, business, and professional preferential choice decisions
are made every day. In these situations, a decision maker must select the optimal option among multiple alternatives. In order to do this, she must be able
to analyze a model of her preferences with respect to the objectives that are important to her. Prescriptive decision theory suggests several ways to effectively
develop a decision model. However, these methods often prove too tedious
and complicated to apply to complex decisions that involve many objectives
and alternatives.
In order to help people make better decisions, an easier, more intuitive way
to develop interactive models for analysis of decision contexts is needed. The
application of interactive visualization techniques to this problem is a promising solution. A visualization tool to support preferential choice must take into
account important aspects from both fields of Information Visualization and
Decision Theory. There exist some proposals that claim to aid preferential
choice, but some key tasks and steps from at least one of these areas are often
overlooked. Another missing element in these proposals is an adequate user
evaluation. In fact, the concept of a good evaluation in the field of information
visualization is a topic of debate, since the goals of such systems stretch beyond
what can be concluded from traditional usability testing. In our research we
investigate ways to overcome some of the challenges faced in the design and
evaluation of visualization systems for preferential choice.
In previous work, Carenini and Lloyd proposed ValueCharts, a set of visualizations and interactive techniques to support the inspection of linear models
of preferences. We now identify the need to consider the decision process in its
entirety, and to redesign ValueCharts in order to support all phases of preferential choice. We present our task-based approach to the redesign of ValueCharts
grounded in recent findings from both Decision Analysis and Information Visualization. We propose a set of domain-independent tasks for the design and
evaluation of interactive visualizations for preferential choice. We then use the
resulting framework as a basis for an analytical evaluation of our tool and of alternative approaches. Finally, we apply the task model in conjunction with a new blend of evaluation methods to assess the utility of ValueCharts.
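To make the idea of a linear model of preferences concrete, the following is a minimal sketch of a weighted additive value function of the kind such tools inspect; the objectives, weights, and scores are illustrative assumptions, not the actual ValueCharts data model.

```python
# Minimal sketch of a linear additive preference model (illustrative only; not
# the actual ValueCharts data model). Each alternative receives a total value
# equal to the weighted sum of its single-objective scores, with scores in [0, 1].

weights = {"price": 0.5, "location": 0.3, "amenities": 0.2}  # weights sum to 1

# Hypothetical per-objective scores of three alternatives on a 0-1 scale.
alternatives = {
    "Hotel A": {"price": 0.9, "location": 0.4, "amenities": 0.6},
    "Hotel B": {"price": 0.5, "location": 0.9, "amenities": 0.7},
    "Hotel C": {"price": 0.7, "location": 0.6, "amenities": 0.3},
}

def total_value(scores, weights):
    """V(a) = sum_i w_i * v_i(a): the linear additive value of an alternative."""
    return sum(weights[obj] * scores[obj] for obj in weights)

ranked = sorted(alternatives,
                key=lambda a: total_value(alternatives[a], weights),
                reverse=True)
for a in ranked:
    print(f"{a}: {total_value(alternatives[a], weights):.2f}")
```

Inspecting how this ranking shifts as the weights are adjusted is the kind of sensitivity analysis an interactive visualization for preferential choice is meant to support.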
|
2 |
Modularity analysis of use case implementations / Rodrigues dos Santos d'Amorim, Fernanda, 31 January 2010
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Component-based architecture is currently the most widely used approach for developing complex software; its main goal is the assignment of application requirements to components. One of the most widespread techniques for specifying requirements is the use of Use Cases. In general, component-based software architectures result in implementations in which the code related to a use case is scattered and tangled across several components of the system, characterizing a crosscutting concern. This happens because traditional techniques, such as Object Orientation (OO), do not offer mechanisms capable of modularizing this kind of concern. Recently, new modularization techniques such as aspects, mixins, and virtual classes have been proposed to address this problem. These techniques can be used to group the code related to a single use case into a new modularization unit.
This work analyzes, qualitatively and quantitatively, the impact of this kind of use case modularization. We explore two techniques based on Aspect Orientation (AO): (i) Use Cases as Aspects, in which we use AspectJ constructs to isolate all the code related to the implementation of a use case in an aspect; and (ii) Use Cases as Pluggable Collaborations, in which we use CaesarJ constructs to modularize use case implementations through a hierarchical composition of collaborations. We carried out two case studies comparing the AO implementations of use cases with their OO implementation. In the evaluation process we extracted traditional and contemporary metrics, including cohesion, coupling, and separation of concerns, and analyzed modularity in terms of software quality attributes such as pluggability, traceability, and support for parallel development. Our results indicate that modularity is a relative concept and that its analysis depends on factors beyond the target system, the metrics, and the technique applied.
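As a rough, self-contained illustration of the kind of scattering measurement that a separation-of-concerns analysis involves, the sketch below counts how many components contain code contributing to each use case; the component mappings and the simplified "diffusion" count are assumptions for illustration, not the metric suite applied in this work.

```python
# Illustrative sketch: count across how many components each use case's code is
# scattered (a simplified "concern diffusion"-style count). Mappings are made up.

oo_mapping = {  # use case -> components touched in a hypothetical OO implementation
    "PlaceOrder":  {"OrderController", "OrderService", "InventoryService", "Billing"},
    "CancelOrder": {"OrderController", "OrderService", "Billing"},
}

ao_mapping = {  # the same use cases isolated into one aspect/collaboration each
    "PlaceOrder":  {"PlaceOrderAspect"},
    "CancelOrder": {"CancelOrderAspect"},
}

def diffusion(mapping):
    """Number of modules each use case is spread across (lower = better separated)."""
    return {use_case: len(components) for use_case, components in mapping.items()}

print("OO:", diffusion(oo_mapping))  # {'PlaceOrder': 4, 'CancelOrder': 3}
print("AO:", diffusion(ao_mapping))  # {'PlaceOrder': 1, 'CancelOrder': 1}
```

Lower counts indicate that a use case's implementation is better localized, which is what the aspect-based variants aim for.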
|
3 |
A Retrospective View of the Phillips Curve and Its Empirical Validity since the 1950s / Do, Hoang-Phuong, 07 May 2021
Since the 1960s, the Phillips curve has survived various significant changes (Kuhnian paradigm shifts) in macroeconomic theory and generated endless controversies. This dissertation revisits several important, representative papers throughout the curve's four historical, formative periods: Phillips' foundational paper in 1958, the wage determination literature in the 1960s, the expectations-augmented Phillips curve in the 1970s, and the latest New Keynesian iteration. The purpose is to provide a retrospective evaluation of the curve's empirical evidence. In each period, the preeminent role of theoretical considerations over statistical learning from the data is first explored. To further appraise the trustworthiness of the empirical evidence, a few key empirical models are then selected and evaluated for their statistical adequacy, which refers to the validity of the probabilistic assumptions comprising the statistical models. The evaluation results, using the historical (vintage) data in the first three periods and the modern data in the final one, show that nearly all of the models in the appraisal are misspecified: at least one probabilistic assumption is not valid. The statistically adequate models produced by respecification with the same data suggest new understandings of the main variables' behaviors. The dissertation's findings from the representative papers cast doubt on the traditional narrative of the Phillips curve, which those same papers played a crucial role in establishing. / Doctor of Philosophy / The empirical regularity of the Phillips curve, which captures the inverse relationship between the inflation and unemployment rates, has been widely debated in academic economic research and among policymakers over the last 60 years. To shed light on the debate, this dissertation examines a selected list of influential, representative studies from the Phillips curve's empirical history through its four formative periods. The examination of these papers is conducted as a blend of a discussion of the methodology of econometrics (the primary quantitative method in economics), the role of theory versus statistical learning from the observed data, and evaluations of the validity of the probabilistic assumptions behind the empirical models. The main contention is that any departure from the probabilistic assumptions produces unreliable statistical inference, rendering the empirical analysis untrustworthy. The evaluation results show that nearly all of the models in the appraisal are untrustworthy: at least one assumption is not valid. An attempt is then made to produce improved empirical models that yield new understandings. Overall, the dissertation's findings cast doubt on the traditional narrative of the Phillips curve, which the representative papers played a crucial role in establishing.
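As a minimal, numpy-only illustration of what probing "statistical adequacy" can look like, the sketch below fits a naive static Phillips-curve regression on simulated data and checks two of its probabilistic assumptions (no first-order residual autocorrelation, approximate normality); the data, the specification, and the chosen diagnostics are assumptions for illustration and are not the dissertation's models or tests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual data: unemployment rate (%) and inflation rate (%).
unemployment = rng.uniform(3.0, 10.0, size=60)
inflation = 6.0 - 0.5 * unemployment + rng.normal(0.0, 1.0, size=60)

# OLS fit of a naive static Phillips curve: inflation_t = b0 + b1*unemployment_t + e_t.
X = np.column_stack([np.ones_like(unemployment), unemployment])
beta, *_ = np.linalg.lstsq(X, inflation, rcond=None)
resid = inflation - X @ beta

# Durbin-Watson statistic: values near 2 suggest no first-order autocorrelation.
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Sample skewness and excess kurtosis: both near 0 under approximate normality.
z = (resid - resid.mean()) / resid.std()
skew = np.mean(z ** 3)
excess_kurt = np.mean(z ** 4) - 3.0

print(f"slope={beta[1]:.2f}  DW={dw:.2f}  skew={skew:.2f}  excess kurtosis={excess_kurt:.2f}")
# If DW is far from 2 or the residuals are clearly non-normal, the usual t-tests
# and confidence intervals for this specification cannot be trusted.
```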
|
4 |
Static Code Analysis: A Systematic Literature Review and an Industrial Survey / Ilyas, Bilal; Elkhalifa, Islam, January 2016
Context: Static code analysis is a software verification technique that refers to the process of examining code without executing it in order to capture defects early, avoiding costly fixes later. The lack of realistic empirical evaluations in software engineering has been identified as a major issue limiting the ability of research to impact industry and, in turn, preventing feedback from industry that can improve, guide, and orient research. Studies have emphasized rigor and relevance as important criteria for assessing the quality and realism of research. Rigor defines how adequately a study has been carried out and reported, while relevance defines the potential impact of the study on industry. Despite the importance of static code analysis techniques and their existence for more than three decades, empirical evaluations in this field are few in number and do not take rigor and relevance into consideration. Objectives: The aim of this study is to contribute toward bridging the gap between static code analysis research and industry by improving the ability of research to impact industry and vice versa. This study has two main objectives. First, developing guidelines for researchers: exploring the existing research work in static code analysis to identify its current status, shortcomings, rigor, and industrial relevance, as well as the reported benefits/limitations of different static code analysis techniques, and finally giving recommendations to researchers to help make future research more industrially oriented. Second, developing guidelines for practitioners: investigating the adoption of different static code analysis techniques in industry, identifying the benefits/limitations of these techniques as perceived by industrial professionals, then cross-analyzing the findings of the SLR and the survey to draw final conclusions, and finally giving recommendations to professionals to help them decide which techniques to adopt. Methods: A sequential exploratory strategy, characterized by the collection and analysis of qualitative data (systematic literature review) followed by the collection and analysis of quantitative data (survey), has been used to conduct this research. In order to achieve the first objective, a thorough systematic literature review was conducted using Kitchenham's guidelines. To achieve the second objective, a questionnaire-based online survey was conducted, targeting professionals from the software industry in order to collect their responses regarding the usage of different static code analysis techniques, as well as their benefits and limitations. The quantitative data obtained were subjected to statistical analysis to further interpret the data and draw results from it. Results: In static code analysis research, inspection and static analysis tools received significantly more attention than the other techniques. The benefits and limitations of static code analysis techniques were extracted, and seven recurrent variables were used to report them. The existing research work in the static code analysis field significantly lacks rigor and relevance, and the reasons behind this have been identified. Some recommendations are developed outlining how to improve static code analysis research and make it more industrially oriented. From the industrial point of view, static analysis tools are widely used, followed by informal reviews, while inspections and walkthroughs are rarely used.
The benefits and limitations of different static code analysis techniques, as perceived by industrial professionals, have been identified, along with the influential factors. Conclusions: The SLR concluded that techniques with a formal, well-defined process and process elements have received more attention in research; however, this does not necessarily mean that such a technique is better than the others. Experiments have been widely used as a research method in static code analysis research, but the outcome variables in the majority of the experiments are inconsistent. The use of experiments in an academic context contributed nothing to improving relevance, while the inadequate reporting of validity threats and their mitigation strategies contributed significantly to the poor rigor of the research. The benefits and limitations of different static code analysis techniques identified by the SLR could not complement the survey findings, because the rigor and relevance of most of the studies reporting them were weak. The survey concluded that the adoption of static code analysis techniques in industry is influenced more by the software life-cycle models in use in organizations, while software product type and company size do not have much influence. The amount of attention a static code analysis technique has received in research does not necessarily influence its adoption in industry, which indicates a wide gap between research and industry. However, company size, product type, and software life-cycle model do influence professionals' perception of the benefits and limitations of different static code analysis techniques.
|
5 |
Updating Bridge Deck Condition Transition Probabilities as New Inspection Data are Collected: Methodology and Empirical Evaluation / Li, Zequn, January 2017
No description available.
|
6 |
Empirical Evaluation of AdaBoost Method in Detecting Transparent and Occluded Objects / Tamang, Sujan, 29 May 2018
No description available.
|
7 |
Improved Methods for Interrupted Time Series Analysis Useful When Outcomes are Aggregated: Accounting for heterogeneity across patients and healthcare settings / Ewusie, Joycelyne E, January 2019
This is a sandwich thesis / In an interrupted time series (ITS) design, data are collected at multiple time points before and after the implementation of an intervention or program to investigate the effect of the intervention on an outcome of interest. ITS design is often implemented in healthcare settings and is considered the strongest quasi-experimental design in terms of internal and external validity as well as its ability to establish causal relationships. There are several statistical methods that can be used to analyze data from ITS studies. Nevertheless, limitations exist in practical applications, where researchers inappropriately apply the methods, and frequently ignore the assumptions and factors that may influence the optimality of the statistical analysis. Moreover, there is little to no guidance available regarding the application of the various methods, and a standardized framework for analysis of ITS studies does not exist. As such, there is a need to identify and compare existing ITS methods in terms of their strengths and limitations. Their methodological challenges also need to be investigated to inform and direct future research. In light of this, this PhD thesis addresses two main objectives: 1) to conduct a scoping review of the methods that have been employed in the analysis of ITS studies, and 2) to develop improved methods that address a major limitation of the statistical methods frequently used in ITS data analysis. These objectives are addressed in three projects.
For the first project, a scoping review of the methods that have been used in analyzing ITS data was conducted, with the focus on ITS applications in health research. The review was based on the Arksey and O’Malley framework and the Joanna Briggs Handbook for scoping reviews. A total of 1389 studies were included in our scoping review. The articles were grouped into methods papers and applications papers based on the focus of the article. For the methods papers, we narratively described the identified methods and discussed their strengths and limitations. The application papers were summarized using frequencies and percentages. We identified some limitations of current methods and provided some recommendations useful in health research.
In the second project, we developed and presented an improved method for ITS analysis when the data at each time point are aggregated across several participants, which is the most common case in ITS studies in healthcare settings. We considered the segmented linear regression approach, which our scoping review identified as the most frequently used method in ITS studies. When data are aggregated, heterogeneity is introduced due to variability in the patient population within sites (e.g. healthcare facilities) and this is ignored in the segmented linear regression method. Moreover, statistical uncertainty (imprecision) is introduced in the data because of the sample size (number of participants from whom data are aggregated). Ignoring this variability and uncertainty will likely lead to invalid estimates and loss of statistical power, which in turn leads to erroneous conclusions. Our proposed method incorporates patient variability and sample size as weights in a weighted segmented regression model. We performed extensive simulations and assessed the performance of our method using established performance criteria, such as bias, mean squared error, level and statistical power. We also compared our method with the segmented linear regression approach. The results indicated that the weighted segmented regression was uniformly more precise, less biased and more powerful than the segmented linear regression method.
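A minimal sketch of the weighted segmented regression idea described above follows, assuming (for illustration) that each aggregated time point is weighted by its sample size divided by its within-point outcome variance; the thesis's exact weight definition and model may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical aggregated ITS data: 24 monthly points, intervention after month 12.
T, T0 = 24, 12
t = np.arange(1, T + 1)
post = (t > T0).astype(float)            # 1 after the intervention, else 0
n_t = rng.integers(30, 120, size=T)      # participants aggregated at each point
s2_t = rng.uniform(0.5, 2.0, size=T)     # within-point outcome variance
y = (10 + 0.1 * t + 2.0 * post + 0.3 * (t - T0) * post
     + rng.normal(0.0, np.sqrt(s2_t / n_t)))

# Segmented design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones(T), t, post, (t - T0) * post])

# Weighted least squares with illustrative weights w_t = n_t / s2_t:
# beta_hat = (X' W X)^{-1} X' W y
W = np.diag(n_t / s2_t)
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("intercept, trend, level change, slope change:", np.round(beta_hat, 2))
```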
In the third project, we extended the weighted method to multisite ITS studies, where data are aggregated at two levels: across several participants within sites as well as across multiple sites. The extended method incorporates the two levels of heterogeneity using weights, where the weights are defined using patient variability, sample size, and number of sites, as well as site-to-site variability. This extended weighted regression model, which follows the weighted least squares approach, is employed to estimate parameters and perform significance testing. We conducted extensive empirical evaluations using various scenarios generated from a multisite ITS study and compared the performance of our method with that of the segmented linear regression model as well as a pooled analysis method previously developed for multisite studies. We observed that for most scenarios considered, our method produced estimates with narrower 95% confidence intervals and smaller p-values, indicating that our method is more precise and is associated with more statistical power. In some scenarios where we considered low levels of heterogeneity, our method and the previously proposed method showed comparable results.
In conclusion, this PhD thesis facilitates future ITS research by laying the groundwork for developing standard guidelines for the design and analysis of ITS studies. The proposed improved method for ITS analysis, which is the weighted segmented regression, contributes to the advancement of ITS research and will enable researchers to optimize their analysis, leading to more precise and powerful results. / Thesis / Doctor of Philosophy (PhD)
|
8 |
Hybrid classical-quantum algorithms for optimization and machine learning / Zardini, Enrico, 30 April 2024
Quantum computing is a form of computation that exploits quantum mechanical phenomena for information processing, with promising applications (among others) in optimization and machine learning. Indeed, quantum machine learning is currently one of the most popular directions of research in quantum computing, offering solutions with an at-least-theoretical advantage compared to the classical counterparts. Nevertheless, the quantum devices available in the current Noisy Intermediate-Scale Quantum (NISQ) era are limited in the number of qubits and significantly affected by noise. An interesting alternative to the current prototypes of general-purpose quantum devices is represented by quantum annealers, specific-purpose quantum machines implementing the heuristic search for solving optimization problems known as quantum annealing. However, despite the higher number of qubits, the current quantum annealers are characterised by very sparse topologies. These practical issues have led to the development of hybrid classical-quantum schemes, aiming at leveraging the strengths of both paradigms while circumventing some of the limitations of the available devices. In this thesis, several hybrid classical-quantum algorithms for optimization and machine learning are introduced and/or empirically assessed, as the empirical evaluation is a fundamental part of algorithmic research. The quantum computing models taken into account are both quantum annealing and circuit-based universal quantum computing. The results obtained have shown the effectiveness of most of the proposed approaches.
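As a small, purely classical illustration of the type of optimization problem a quantum annealer targets, the sketch below encodes a toy problem as a QUBO and minimizes it with simulated annealing as a stand-in for quantum hardware; the problem instance and parameters are illustrative and are not the hybrid algorithms proposed in this thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy QUBO: minimize x^T Q x over binary vectors x, here with a random symmetric Q.
n = 8
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2

def energy(x):
    return float(x @ Q @ x)

# Classical simulated annealing as a stand-in for a quantum annealer.
x = rng.integers(0, 2, size=n)
best_x, best_e = x.copy(), energy(x)
temperature = 2.0
for _ in range(5000):
    candidate = x.copy()
    candidate[rng.integers(n)] ^= 1                      # flip one bit
    delta = energy(candidate) - energy(x)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        x = candidate
        if energy(x) < best_e:
            best_x, best_e = x.copy(), energy(x)
    temperature *= 0.999                                 # geometric cooling

print("best bitstring:", best_x, " energy:", round(best_e, 3))
```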
|