11

New metrics on networks arising from modulus and applications of Fulkerson duality

Fernando, Nethali January 1900 (has links)
Doctor of Philosophy / Department of Mathematics / Pietro Poggi-Corradini / This thesis contains six chapters. In the first chapter, the continuous and discrete cases of p-modulus are introduced. We present properties of p-modulus and its connection to classical quantities. We also use Arne Beurling's criterion for extremality to build insight and intuition regarding the modulus. After building an intuitive understanding of the p-modulus, we switch perspectives to that of convex analysis. Using the theory of convex analysis, the existence and uniqueness of extremal densities is shown. We end this chapter by introducing the probabilistic interpretation of modulus. In the second chapter, we introduce Fulkerson duality. After defining the Fulkerson dual, we investigate blocking duality for the different families of objects that the NODE research group has been studying and for which it has been established. An important result connecting the Fulkerson dual and modulus is given at the end of this chapter; this theorem is used to prove one of the main results, that delta_p (introduced in Chapter 4) is a metric on graphs. The third chapter discusses metrics and ultrametrics on networks. Among these metrics, effective resistance receives special attention, because the proof that delta_p is a metric also serves as a new proof that effective resistance is a metric on graphs. We define effective resistance and give two different proofs that it is a metric, one via flows and one via the Laplacian. Two new families of metrics on graphs that arise through modulus are introduced in the fourth chapter. We also show how the two families are related: the d_p metric can be viewed as a snowflaked version of the delta_p metric. We end this chapter with numerical examples that illustrate this connection and also serve as a plentiful set of modulus calculations. Clutters and blockers are another topic closely related to families of objects. While governed by different rules and conditions, the study of clutters and blockers can give more insight into both modulus and clutters. We explore these relations in Chapter 5, provide some examples of clutters and blockers, and finally reveal the relationship between the blocker and the Fulkerson dual. Finally, in Chapter 6, we end the thesis by presenting some open questions that we would like to explore in the future.
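The Laplacian route to effective resistance mentioned in this abstract corresponds to a short, standard computation: the effective resistance between two nodes can be read off the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal sketch in Python (illustrative only, not the thesis's code; the example graph is a 4-cycle of unit resistors):

```python
import numpy as np
import networkx as nx

def effective_resistance(G, s, t):
    """Effective resistance between s and t via the Moore-Penrose
    pseudoinverse L+ of the graph Laplacian:
    R(s, t) = L+[s, s] + L+[t, t] - 2 * L+[s, t]."""
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)  # pseudoinverse: L is singular (rank n - 1)
    i, j = nodes.index(s), nodes.index(t)
    return Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]

# A 4-cycle of unit resistors: between adjacent nodes, one 1-ohm edge
# sits in parallel with a 3-ohm path, so R = (1 * 3) / (1 + 3) = 0.75.
G = nx.cycle_graph(4)
print(effective_resistance(G, 0, 1))  # 0.75
```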
12

Software Architectural Metrics for the Scania Internet of Things Platform : From a Microservice Perspective

Ulander, David January 2017 (has links)
There are limited tools to evaluate a microservice architecture and no common definition of how such an architecture should be designed. Moreover, developing systems with microservices introduces additional complexity to the software architecture. That, together with the fact that systems are becoming more complex, has led to a desire for architecture evaluation methods. In this thesis a set of quality attributes measured by structural metrics is used to evaluate Scania's IoT Offboard platform. By implementing a metrics evaluation program the quality of the software architecture can be improved. Metrics can also help developers and architects become more efficient, since they better understand how performance is measured, i.e. which quality attributes are the most important and how these are measured. For Scania's IoT Offboard platform the studied quality attributes are, in decreasing order of importance: flexibility, reusability and understandability. All the microservices in the platform are loosely coupled, which results in a loosely coupled architecture. This indicates a flexible, reusable and understandable system, in terms of coupling. Furthermore, the architecture is decentralized, i.e. the system is inflexible and difficult to change. The other metrics lacked a reference scale, hence they will act as a point of reference for future measurements as the architecture evolves. To improve the flexibility, reusability and understandability of the architecture, the large microservices should be divided into several smaller microservices. Aggregators should also be utilized more to make the system more flexible.
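As an illustration of the kind of structural coupling metric such an evaluation relies on, the sketch below computes afferent coupling, efferent coupling, and instability for services in a dependency graph. This is a generic illustration, not the thesis's tooling; the service names and dependency map are invented:

```python
# Hypothetical service dependency map: service -> services it calls.
deps = {
    "orders":    {"billing", "inventory"},
    "billing":   {"inventory"},
    "inventory": set(),
    "gateway":   {"orders", "billing"},
}

def coupling_metrics(deps):
    """Efferent coupling Ce (outgoing dependencies), afferent coupling
    Ca (incoming dependencies), and instability I = Ce / (Ca + Ce)."""
    ca = {s: 0 for s in deps}
    for s, targets in deps.items():
        for t in targets:
            ca[t] += 1
    result = {}
    for s in deps:
        ce = len(deps[s])
        total = ca[s] + ce
        result[s] = {"Ca": ca[s], "Ce": ce,
                     "I": ce / total if total else 0.0}
    return result

for svc, m in coupling_metrics(deps).items():
    print(svc, m)
```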
13

Marketing metrics use in South Africa

Mathare, Waweru 06 May 2010 (has links)
The marketing function has been under immense pressure to document its contribution to the performance of the firm. This pressure comes from shareholders seeking a return on their funds, from CEOs seeking savings, and from marketers' peers as they seek to become more relevant in the organisation. Efforts to track marketing have been hindered by, among other issues, a lack of numeracy among marketers, the primacy of financial measurements, and a laundry list of metrics from research and practice that makes it hard to choose a few pertinent ones. The use of marketing metrics has been shown to contribute to better business performance, and during recessions, when budgets are tight, it becomes even more urgent that the marketing function have and understand marketing metrics. This study aimed to evaluate the extent of marketing metrics use in South Africa, determine the levels and frequency of review, examine whether the use of metrics changes under severe economic conditions, and evaluate whether a change in the use of metrics contributes to better firm performance. The study found that the use, review and collection of metrics is on par with other countries, but that there is no change in the level and frequency of review during a recession. Evidence was found of better firm performance linked to a change in the use of metrics. / Dissertation (MBA)--University of Pretoria, 2010. / Gordon Institute of Business Science (GIBS) / unrestricted
14

An Investigation of Software Metrics Affect on Cobol Program Reliability

Day, Henry Jesse II 20 June 1996 (has links)
The purpose of this research was to predict a COBOL program's reliability from software characteristics found in the program's source code. The first step was to select factors, based on the human information processing model, that are associated with changes in computer program reliability. These factors (software metrics) were then quantitatively studied to determine which ones affect COBOL program reliability, and a statistical model was developed that predicts COBOL program reliability. Reliability was selected because the reliability of computer programs can be used by systems professionals and auditors to make decisions. Using the Human Information Processing Model to study the act of creating a computer program, several hypotheses were derived about program characteristics and reliability. These hypotheses were categorized as size, structure, and temporal hypotheses. These characteristics were then used to test several prediction models for the reliability of COBOL programs. Program characteristics were measured by a program called METRICS, written by the author in the Pascal programming language; it accepts COBOL programs as input and produces seventeen measures of complexity as output. Actual programs and related data were then gathered from a large insurance company over the course of one year. The data were used to test the hypotheses and to find a model for predicting the reliability of COBOL programs. The operational definition of reliability was the probability of a program executing without abending. The size of a program, its cyclomatic complexity, and the number of times a program has been executed were used to predict reliability. A regression model was developed that predicted the reliability of a COBOL program from the program's characteristics. The model had a prediction error of 9.3%, an R² of 15%, and an adjusted R² of 13%. The most important thing learned from the research is that increasing the size of a program's modules, not the total size of a program, is associated with decreased reliability. / Ph. D.
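As a sketch of the kind of regression described — predicting reliability from size, cyclomatic complexity, and execution count — the following fits ordinary least squares with NumPy. The feature values and reliabilities are placeholders for illustration, not the study's data:

```python
import numpy as np

# Illustrative feature matrix: [lines of code, cyclomatic complexity,
# times executed] per program. Placeholder values, not the thesis data.
X = np.array([
    [1200.0, 35.0, 400.0],
    [ 300.0, 10.0, 900.0],
    [2500.0, 60.0, 150.0],
    [ 800.0, 22.0, 600.0],
])
# Observed reliability: fraction of executions that did not abend.
y = np.array([0.91, 0.99, 0.85, 0.95])

# Ordinary least squares with an intercept column, the basic form of
# regression model the abstract describes.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("coefficients:", coef)
print("fitted reliabilities:", A @ coef)
```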
15

A Goal-Driven Methodology for Developing Health Care Quality Metrics

Villar Corrales, Carlos 29 March 2011 (has links)
The definition of metrics capable of reporting on quality issues is a difficult task in the health care sector. This thesis proposes a goal-driven methodology for the development, collection, and analysis of health care quality metrics that expose in a quantifiable way the progress of measurement goals stated by interested stakeholders. In other words, this methodology produces reports whose metrics turn health care data into understandable information. The resulting Health Care Goal Question Metric (HC-GQM) methodology is based on the Goal Question Metric (GQM) approach, a methodology originally created for the software development industry and adapted here to the context and specificities of the health care sector. HC-GQM benefits from a double-loop validation process in which the methodology is first implemented, then analysed, and finally improved. The validation process takes place in the context of adverse event management and incident reporting initiatives at a Canadian teaching hospital, where HC-GQM provides a set of meaningful metrics and reports on the occurrence of adverse events and incidents to the stakeholders involved. The results of a survey suggest that the users of HC-GQM have found it beneficial and would use it again.
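GQM organizes measurement as goals refined into questions, each answered by metrics. A minimal sketch of that hierarchy in Python (the goal, question, and metric below are invented examples in the spirit of incident reporting, not actual HC-GQM content):

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    unit: str

@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str    # e.g. "reduce"
    issue: str      # e.g. "adverse events"
    viewpoint: str  # e.g. "hospital quality office"
    questions: list[Question] = field(default_factory=list)

# Invented example: one goal refined into one question and one metric.
goal = Goal(
    purpose="reduce", issue="adverse events",
    viewpoint="hospital quality office",
    questions=[Question(
        text="How often are adverse events reported per month?",
        metrics=[Metric("reported events per month", "count")],
    )],
)
print(goal.purpose, goal.issue, "->", goal.questions[0].metrics[0].name)
```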
19

Reordering metrics for statistical machine translation

Birch, Alexandra January 2011 (has links)
Natural languages display a great variety of different word orders, and one of the major challenges facing statistical machine translation is in modelling these differences. This thesis is motivated by a survey of 110 different language pairs drawn from the Europarl project, which shows that word order differences account for more variation in translation performance than any other factor. This wide-ranging analysis provides compelling evidence for the importance of research into reordering. There has already been a great deal of research into improving the quality of the word order in machine translation output. However, there has been very little analysis of how best to evaluate this research. Current machine translation metrics are largely focused on evaluating the words used in translations, and their ability to measure the quality of word order has not been demonstrated. In this thesis we introduce novel metrics for quantitatively evaluating reordering. Our approach isolates the word order in translations by using word alignments. We reduce alignment information to permutations and apply standard distance metrics to compare the word order in the reference to that of the translation. We show that our metrics correlate more strongly with human judgements of word order quality than current machine translation metrics. We also show that a combined lexical and reordering metric, the LRscore, is useful for training translation model parameters. Humans prefer the output of models trained using the LRscore as the objective function, over those trained with the de facto standard translation metric, the BLEU score. The LRscore thus provides researchers with a reliable metric for evaluating the impact of their research on the quality of word order.
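The approach described — reduce word alignments to permutations, then apply a standard distance metric — can be illustrated with the normalized Kendall tau distance, a common permutation distance in reordering evaluation. This sketch is illustrative rather than the exact LRscore definition:

```python
from itertools import combinations

def kendall_tau_distance(perm):
    """Normalized Kendall tau distance between a permutation and the
    identity: the fraction of word pairs placed in the wrong relative order."""
    n = len(perm)
    if n < 2:
        return 0.0
    discordant = sum(1 for i, j in combinations(range(n), 2)
                     if perm[i] > perm[j])
    return discordant / (n * (n - 1) / 2)

# Each entry is the source position of a target word, derived from a
# word alignment. Monotone order scores 0; full reversal scores 1.
print(kendall_tau_distance([0, 1, 2, 3]))  # 0.0  (monotone)
print(kendall_tau_distance([1, 0, 3, 2]))  # 0.333... (two local swaps)
print(kendall_tau_distance([3, 2, 1, 0]))  # 1.0  (reversed)
```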
20

Landscape ecological planning for protected areas using spatial and temporal metrics

Mirkarimi, Hamed, hamed.mirkarimi@student.rmit.edu.au January 2007 (has links)
The natural characteristics of protected areas have changed for a variety of reasons through time. Changes in protected area landscapes can occur because of natural and/or cultural processes. Natural processes such as geomorphological disturbance and climatic condition can permanently and/or temporarily change the characteristics of the environment. In addition, changes in human needs, knowledge and activities are the cultural driving forces behind changing characteristics of landscape through time. These changes can be studied both spatially and temporally. Spatially, protected area landscape structures such as shape, size and location with respect to their neighbourhood context can be studied to describe landscape configuration. Temporally, landscape functions such as different geographical locations and land characteristics can be studied to determine the rate of temporal variability in landscape. Any changes in temporal characteristics may lead to changes in spatial characteristics of protected areas and vice versa. This thesis has developed a framework to enhance the landscape ecological planning approach with attention to changes in landscapes of protected areas. Considering landscape ecological concepts, this framework draws upon spatial and temporal characteristics of protected areas. Initially, a basic model of the landscape ecological approach to protected area planning and data requirements for landscape ecological planning was developed according to the concept of landscape ecological planning. In order to examine the model in the real world, the data requirements for landscape ecological planning were implemented using a case study method. The basic list of data required for landscape ecological planning was further developed through the case study approach by highlighting the importance of road metrics in the process of planning. In addition, the case study approach proved that spatial and temporal metrics can be used in the interpretation of spatial configuration and temporal variability of protected areas through a quantitative method. The framework was developed for three case studies in Iran and three case studies in Australia. A number of metrics were applied in order to quantify spatial and temporal aspects of the protected areas. A list of spatial and temporal criteria was developed to assist interpretation of area compaction, spatial fragmentation and temporal variability of protected areas. Using the criteria list, a new framework for spatial and temporal evaluation of protected areas has been developed. This can be used to determine spatial and temporal management issues of protected areas at the landscape scale. Then planning scenarios for spatial and temporal issues of protected areas at the landscape scale can be suggested. The developed framework has the potential to be applied to all protected areas even where detailed ecological data and information are not available. In addition, when all data required are available, the developed framework using spatial and temporal metrics has the potential to suggest a flexible zoning plan for protected areas.
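Spatial metrics of the kind mentioned here (area compaction, fragmentation) are typically computed on rasterized landscapes. A small illustrative sketch, assuming a binary grid where 1 marks protected habitat, that counts 4-connected patches and reports simple fragmentation indicators — not the thesis's actual metric set:

```python
def patch_stats(grid):
    """Count 4-connected patches of 1-cells and return (patch count,
    total habitat area, largest-patch share) as fragmentation indicators."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                # Flood-fill one patch using an explicit stack.
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(size)
    area = sum(sizes)
    return len(sizes), area, (max(sizes) / area if area else 0.0)

# Toy landscape: two patches, the larger holding 4 of 5 habitat cells.
grid = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(patch_stats(grid))  # (2, 5, 0.8)
```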
