  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A metrics study in Virtual Reality

Ray, Andrew A. 23 August 2004 (has links)
Virtual Reality is a young field and needs more research to mature. To help speed that maturation, this research examined whether knowledge from the domain of software engineering could be applied to the development of Virtual Reality software. Software engineering is a field within computer science that studies how to improve both product and process. One of its sub-fields is metrics, which seeks to measure software products and processes, allowing prediction of attributes such as quality. Several software toolkits used in virtual reality were developed without formal software engineering methodologies. This research applies knowledge from the metrics discipline to those toolkits. When metrics are used to measure the virtual reality toolkits, the metrics appear to behave as they do in previously studied domains, producing similar significant correlations. / Master of Science
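
A minimal sketch (an editor's illustration, not the thesis's data or toolkits) of the kind of analysis described: computing a rank correlation between two classic code metrics measured across the classes of a VR toolkit, to check whether they correlate as they do in previously studied domains. The metric values below are invented.

```python
# Illustrative sketch: correlate two classic code metrics across the
# classes of a hypothetical VR toolkit.
from scipy.stats import spearmanr

# hypothetical per-class measurements: lines of code and cyclomatic complexity
loc        = [120, 340, 85, 410, 60, 220, 175]
complexity = [8,   21,  5,  30,  4,  14,  11]

rho, p_value = spearmanr(loc, complexity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A significant positive rho would mirror the correlations reported for
# these metrics in non-VR software.
```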
32

AVERAGE TYPICAL MISSION AVAILABILITY: A FREQUENCY MANAGEMENT METRIC

Jones, Charles H. 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / One approach to improving spectrum usage efficiency is to manage the scheduling of frequencies more effectively. The use of metrics to analyze frequency scheduling could aid frequency managers in a variety of ways. However, the basic question of what is a good metric for representing and analyzing spectral usage remains unanswered. Some metrics capture spectral occupancy. This paper introduces metrics that change the focus from occupancy to availability. Just because spectrum is not in use does not mean it is available for use. A significant factor in creating unused but unusable spectrum is fragmentation. A mission profile for spectrum usage can be considered a rectangle in a standard time versus frequency grid. Even intelligent placement of these rectangles (i.e., the scheduling of a mission's spectrum usage) cannot always utilize all portions of the spectrum. The average typical mission availability (ATMA) metric provides a way of numerically answering the question: Could we have scheduled another typical mission? This is a much more practical question than: Did we occupy the entire spectrum? If another mission could not have been scheduled, then the entire spectrum was effectively used, even if the entire spectrum was not occupied.
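
The rectangle-packing view of scheduling lends itself to a simple availability check. The sketch below (an editor's illustration of the idea behind ATMA, not the paper's actual formulation) asks the availability question for an invented occupancy grid: could a typical mission, modeled as a duration-by-bandwidth rectangle, still be placed in the unoccupied cells?

```python
# Hedged sketch: the occupancy grid is time slots x frequency channels,
# True = occupied. A "typical mission" needs `dur` consecutive time slots
# over `bw` contiguous channels.
def can_schedule(grid, dur, bw):
    n_t, n_f = len(grid), len(grid[0])
    for t in range(n_t - dur + 1):
        for f in range(n_f - bw + 1):
            if all(not grid[t + i][f + j]
                   for i in range(dur) for j in range(bw)):
                return True          # a free dur x bw rectangle exists
    return False                     # spectrum is fragmented or full

# Toy example: 6 time slots x 4 channels, partially occupied.
grid = [[False, True,  False, False],
        [False, True,  False, False],
        [True,  False, False, True ],
        [True,  False, False, True ],
        [False, False, True,  False],
        [False, False, True,  False]]
print(can_schedule(grid, dur=2, bw=2))   # could another typical mission fit?
```

Averaging this yes/no answer over representative mission profiles and scheduling periods yields an availability figure rather than an occupancy figure, which is the shift in focus the paper argues for.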
33

Four-Dimensional Non-Reductive Homogeneous Manifolds with Neutral Metrics

Renner, Andrew 01 May 2004 (has links)
A method due to É. Cartan was used to algebraically classify the possible four-dimensional manifolds that allow a (2, 2)-signature metric with a transitive group action which acts by isometries. These manifolds are classified according to the Lie algebra of the group action. There are six possibilities: four non-parameterized Lie algebras, one discretely parameterized family, and one family parameterized by R.
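
For readers unfamiliar with the terminology, a neutral (i.e., (2, 2)-signature) metric on a four-dimensional manifold can be written pointwise in the standard form below; the classification concerns which homogeneous spaces admit such a metric invariantly under the transitive isometry action.

```latex
g = dx_1^2 + dx_2^2 - dx_3^2 - dx_4^2, \qquad \operatorname{sig}(g) = (2,2).
```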
34

Requirements-Oriented Methodology for Evaluating Ontologies

Yu, Jonathan, Jonathan.Yu@csiro.au January 2009 (has links)
Ontologies play key roles in many applications today. Therefore, whether an ontology is newly specified or reused, it is important to determine its suitability for the application at hand. This need is addressed by carrying out ontology evaluation, which determines qualities of an ontology using methodologies, criteria or measures. However, addressing the ontology requirements of a given application requires determining the appropriate set of criteria and measures. In this thesis, we propose a Requirements-Oriented Methodology for Evaluating Ontologies (ROMEO). ROMEO outlines a methodology for determining appropriate methods for ontology evaluation that incorporates a suite of existing ontology evaluation criteria and measures. ROMEO helps ontology engineers to determine relevant ontology evaluation measures for a given set of ontology requirements by linking these requirements to existing measures through a set of questions. There are three main parts to ROMEO. First, ontology requirements are elicited from a given application and form the basis for an appropriate evaluation of ontologies. Second, appropriate questions are mapped to each ontology requirement. Third, relevant ontology evaluation measures are mapped to each of those questions. From the ontology requirements of an application, ROMEO thus determines appropriate evaluation methods by mapping applicable questions to the requirements and mapping those questions to appropriate measures. In this thesis, we apply the ROMEO methodology to obtain appropriate ontology evaluation methods for ontology-driven applications through case studies of Lonely Planet and Wikipedia. Since the mappings determined by ROMEO depend on the analysis of the ontology engineer, these mappings need to be validated. Therefore, in addition to proposing the ROMEO methodology, this thesis proposes a method for the empirical validation of ROMEO mappings. We report on two empirical validation experiments, carried out in controlled environments, that examine the performance of a set of ontologies over varied tasks; the ontologies differ in the specific quality or measure being examined. Validation experiments are conducted for two question-to-measure mappings drawn from the Lonely Planet and Wikipedia case studies. Furthermore, as these mappings are application-independent, they may be reusable in subsequent applications of the ROMEO methodology. Using a ROMEO mapping from the Lonely Planet case study, we validate a mapping of a coverage question to the F-measure; this validation experiment was inconclusive and requires further analysis. Using a ROMEO mapping from the Wikipedia case study, we carry out a separate validation experiment examining a mapping between an intersectedness question and the tangledness measure; the results showed the mapping to be valid. For future work, we propose additional validation experiments for the mappings that have been identified between questions and measures.
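
One of the measures mentioned, the F-measure for a coverage question, can be illustrated with a short sketch: given the concepts an application's requirements call for and the concepts an ontology actually provides, precision, recall and their harmonic mean quantify coverage. The concept names below are invented for illustration and are not drawn from the thesis's case studies.

```python
# Hedged illustration of a coverage-style F-measure: compare the concepts
# required by the application with the concepts present in the ontology.
required = {"Country", "City", "Hotel", "Attraction", "Transport"}
provided = {"Country", "City", "Hotel", "Museum", "Cuisine"}

true_positives = required & provided
precision = len(true_positives) / len(provided)
recall    = len(true_positives) / len(required)
f_measure = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} F={f_measure:.2f}")
# In ROMEO terms, a coverage question for an ontology requirement would
# map to a measure of this kind.
```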
35

Adaptive Techniques for Enhancing the Robustness and Performance of Speciated PSOs in Multimodal Environments

Bird, Stefan Charles, stbird@seatiger.org January 2008 (has links)
This thesis proposes several new techniques to improve the performance of speciated particle swarms in multimodal environments. We investigate how these algorithms can become more robust and adaptive, easier to use and able to solve a wider variety of optimisation problems. We then develop a technique that uses regression to vastly improve an algorithm's convergence speed without requiring extra evaluations. Speciation techniques play an important role in particle swarms. They allow an algorithm to locate multiple optima, providing the user with a choice of solutions. Speciation also provides diversity preservation, which can be critical for dynamic optimisation. By increasing diversity and tracking multiple peaks simultaneously, speciated algorithms are better able to handle the changes inherent in dynamic environments. Speciation algorithms often require a user to specify a parameter that controls how species form. This is a major drawback since that knowledge may not be available a priori, and if the parameter is incorrectly set, the algorithm's performance is likely to be highly degraded. We propose using a time-based measure to control speciation, allowing the algorithm to define species far more adaptively, using the population's characteristics and behaviour to control membership. Two new techniques presented in this thesis, ANPSO and ESPSO, use time-based convergence measures to define species. These methods are shown to be robust while still providing highly competitive performance; both algorithms effectively optimised all of our test functions without requiring any tuning. Speciated algorithms are ideally suited to optimising dynamic environments; however, the complexity of these environments makes it far more difficult to design algorithms for them. To increase an algorithm's performance it is necessary to determine in what ways it should be improved. While all performance metrics allow optimisation techniques to be compared, they cannot show how to improve an algorithm. Until now this has been done largely by trial and error, which is extremely inefficient, in the same way that it is inefficient to try to improve a program's speed without profiling it first. This thesis proposes a new metric that exclusively measures convergence speed. We show that an algorithm can be profiled by correlating its performance as measured by multiple metrics. By combining these two techniques, we can obtain far better insight into how best to improve an algorithm. Using this information, we then propose a local convergence enhancement that greatly increases performance by actively estimating the location of an optimum. The enhancement uses regression to fit a surface to the peak, guiding the search by estimating the peak's true location. By incorporating this technique, the algorithm is able to use the information contained within the fitness landscape far more effectively. We show that by combining the regression with an existing speciated algorithm, we are able to vastly improve the algorithm's performance. This technique will greatly enhance the utility of PSO on problems where fitness evaluations are expensive, or that require fast reaction to change.
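
The regression-based convergence enhancement can be illustrated in one dimension: fit a quadratic to the (position, fitness) pairs a species has already evaluated and jump to the fitted vertex as an estimate of the peak's true location, at no extra evaluation cost. This is an editor's generic sketch of the idea, not the thesis's ANPSO/ESPSO implementation; the sample points are invented.

```python
# Hedged sketch: estimate a peak's location by regressing a quadratic
# through points a species has already evaluated, so no extra fitness
# evaluations are required.
import numpy as np

# hypothetical samples near a maximum at x = 2.0 of f(x) = -(x - 2)^2 + 5
xs = np.array([1.2, 1.7, 2.4, 2.9, 3.3])
fs = -(xs - 2.0) ** 2 + 5.0

a, b, c = np.polyfit(xs, fs, deg=2)   # fit f ~ a*x^2 + b*x + c
peak_estimate = -b / (2 * a)          # vertex of the fitted parabola
print(f"estimated optimum at x = {peak_estimate:.3f}")   # ~2.0
```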
36

Metrics for sampling-based motion planning

Morales Aguirre, Marco Antonio 15 May 2009 (has links)
A motion planner finds a sequence of potential motions for a robot to transit from an initial to a goal state. To deal with the intractability of this problem, a class of methods known as sampling-based planners build approximate representations of potential motions through random sampling. This selective random exploration of the space has produced many remarkable results, including solving many previously unsolved problems. Sampling-based planners usually represent the motions as a graph (e.g., the Probabilistic Roadmap Methods or PRMs) or as a tree (e.g., the Rapidly-exploring Random Tree or RRT). Although many sampling-based planners have been proposed, we do not know how to select among them because their different sampling biases make their performance depend on the features of the planning space. Moreover, since a single problem can contain regions with vastly different features, there may not exist a simple exploration strategy that performs well in every region. Unfortunately, we lack quantitative tools to analyze problem features and planner performance that would enable us to match planners to problems. We introduce novel metrics for the analysis of problem features and planner performance at multiple levels: node level, global level, and region level. At the node level, we evaluate how new samples improve coverage and connectivity of the evolving model. At the global level, we evaluate how new samples improve the structure of the model. At the region level, we identify groups or regions that share similar features. These are general metrics that can be applied to both graph-based and tree-based planners. We show several applications of these tools: comparing planners, deciding whether to stop planning or switch strategies, and adjusting sampling in different regions of the problem.
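
A node-level metric of the kind described can be sketched simply: when the planner draws a new sample, check whether it lands in unexplored space (improving coverage) or would join previously separate components of the roadmap (improving connectivity). The distance threshold and the toy roadmap below are an editor's illustrative choices, not the thesis's actual metrics.

```python
# Hedged sketch of a node-level metric: classify each new sample by whether
# it improves coverage, improves connectivity, or adds little to the model.
import math

def classify_sample(sample, nodes, components, radius=1.0):
    """nodes: list of points; components: parallel list of component ids."""
    near = [components[i] for i, q in enumerate(nodes)
            if math.dist(sample, q) <= radius]
    if not near:
        return "coverage"            # lands in unexplored space
    if len(set(near)) > 1:
        return "connectivity"        # would merge separate components
    return "redundant"               # adds little to the evolving model

nodes      = [(0.0, 0.0), (0.5, 0.2), (3.0, 3.0)]
components = [0, 0, 1]
print(classify_sample((5.0, 5.0), nodes, components))                # coverage
print(classify_sample((1.6, 1.6), nodes, components, radius=2.1))    # connectivity
```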
37

Defining a Software Analysis Framework

Dogan, Oguzhan January 2008 (has links)
Today, objectively assessing software quality and making predictions about software is hardly possible. Software metrics are useful tools for such assessment and prediction, but the interpretation of the measured values is currently based on personal experience. In order to assess software quality objectively, quantitative data has to be obtained. VizzAnalyzer is a program for analyzing open-source Java projects. It can be used to collect quantitative data for defining thresholds that support the interpretation of the measured values, and it helps assess software quality by calculating over 20 different software metrics. I define a process for obtaining, storing and maintaining software projects, and I have used this process to analyze 60-80 software projects, delivering a large database of quantitative data.
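
Deriving thresholds from quantitative data, as the abstract proposes, often amounts to looking at the empirical distribution of a metric over many analyzed classes or projects and flagging the upper tail. The metric name, values and percentile choice below are an editor's assumptions for illustration, not the thesis's actual procedure.

```python
# Hedged sketch: derive an interpretation threshold for a metric from its
# empirical distribution across many analyzed classes/projects.
import numpy as np

# hypothetical weighted-methods-per-class (WMC) values collected by a tool
wmc_values = np.array([3, 5, 7, 8, 9, 11, 12, 14, 18, 22, 25, 31, 40, 55, 90])

threshold = np.percentile(wmc_values, 90)     # flag the top 10% as "high"
flagged = wmc_values[wmc_values > threshold]
print(f"threshold = {threshold:.1f}, flagged values = {flagged.tolist()}")
```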
38

Quantitative Evaluation of Software Quality Metrics in Open-Source Projects

Barkmann, Henrike January 2009 (has links)
The validation of software quality metrics lacks statistical significance, partly because the data collection requires considerable effort. To help solve this problem, we develop tools for metrics analysis of a large number of software projects (146 projects with ca. 70,000 classes and interfaces and over 11 million lines of code). Moreover, validation of software quality metrics should focus on relevant metrics, i.e., correlated metrics need not be validated independently. On this statistical basis, we identify correlations between several metrics from well-known object-oriented metrics suites. In addition, we present early results on typical metric values and possible thresholds.
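
The redundancy argument, that strongly correlated metrics need not be validated independently, can be made concrete with a correlation matrix over per-class measurements. The numbers below are invented for illustration; the thesis's data set spans 146 projects.

```python
# Hedged sketch: find pairs of metrics so strongly correlated that
# validating one makes separately validating the other redundant.
import pandas as pd

# invented per-class measurements for three object-oriented metrics
df = pd.DataFrame({
    "LOC": [120, 340, 85, 410, 60, 220],
    "WMC": [10,  28,  7,  33,  5,  18],
    "CBO": [4,   6,   9,  5,   3,  7],
})

corr = df.corr(method="spearman")
print(corr.round(2))
# Metric pairs with |rho| close to 1 (here LOC vs. WMC) would be treated
# as redundant for validation purposes.
```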
39

Innovation in Service Organizations : The development of a suitable innovation measurement system

Johansson, Amanda, Smith, Emelie January 2015 (has links)
Innovation in services has become a hot topic, and being innovative is key to staying competitive in most business settings; the service sector is no exception. Although important, service innovation is difficult to measure, and the service perspective has been noticeably absent from traditional approaches, where innovation measurement has tended to focus mainly on products and production-related systems. These measurement indicators fail to capture the diversity and intricacy of the innovation processes emerging in service firms, where innovation rarely requires R&D. Until now, a coherent instrument or tool for measuring innovation in a service company has not existed, with the result that research on service innovation lags behind that on product innovation. The need for an innovation measurement instrument is clear: it would not only assist companies in understanding their current innovation practices and capabilities, but would also help clarify what an organization needs to focus on to maximize its success. On this basis, this study sets out to extend knowledge about the factors affecting innovation within the service sector. As a result, a developed and tested questionnaire suitable for measuring innovation within a service firm is provided, and a managerial and theoretical contribution is made.
40

Usability and productivity for silicon debug software: a case study

Singh, Punit 24 February 2012 (has links)
Semiconductor manufacturing is complex. Companies strive to lead the market by delivering chips on time that are bug-free (a.k.a. defect-free) and have very low power consumption, while new research drives new features into chips. The case study reported here concerns the usability and productivity of silicon debug software tools, the set of software used to find bugs before chips are delivered to the customer. The study's objective is to improve the usability and productivity of these tools by introducing metrics, with the measurement results driving a concrete plan of action. The GQM (Goal, Questions, Metrics) methodology was used to define the measurements and gather the data. The project was developed in two phases, and we took measurements using this method over both phases of tool development; the findings from phase one improved the tools' usability in the second phase. The lesson learnt is that tool usability is a complex measurement: improving usability means that users rely less on the help button, have less downtime, and do not input incorrect data. Although this study focused on three important tools, the same usability metrics can be applied to the remaining five tools. For defining productivity metrics we also used the GQM methodology. A productivity measurement using historic data was done to establish a baseline, and the baseline measurements identified existing bottlenecks in the overall silicon debug process. We link productivity to the time it takes a debug tool user to complete the assigned task(s). The total time taken across all the tools does not by itself yield actionable items for improving productivity; we will need to measure the time spent in each tool of the debug process, which is identified as future work. To improve usability we recommend making the tools more robust in their error handling and giving them good help features. To improve productivity we recommend gathering data on where users spend most of their debug time, so that we can focus on improving that time-consuming part of debug. / text
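
The GQM structure the study relies on can be written down as plain data: a goal decomposes into questions, and each question into measurable metrics. The goals, questions and metrics below are an editor's invented examples in the spirit of the study, not its actual GQM breakdown.

```python
# Hedged sketch of a Goal-Question-Metric (GQM) breakdown for a debug tool.
# Goals, questions and metrics here are invented illustrations.
gqm = {
    "Improve usability of the debug tool": {
        "How often do users need the help button?": ["help-button clicks per session"],
        "How often is invalid data entered?":       ["input-validation errors per session"],
    },
    "Improve user productivity": {
        "How long does a typical debug task take?": ["task completion time (minutes)"],
        "Where is debug time spent?":               ["time per tool in the debug flow"],
    },
}

for goal, questions in gqm.items():
    print(goal)
    for question, metrics in questions.items():
        print(f"  {question} -> {', '.join(metrics)}")
```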
