11

Design of a performance evaluation tool for multimedia databases with special reference to Oracle

Stakemire, Tonia January 2004 (has links)
Increased production and use of multimedia data has driven the development of more advanced Database Management Systems (DBMSs), such as Object Relational Database Management Systems (ORDBMSs). These advanced databases are necessitated by the structural complexity and the functionality required by multimedia data. Unfortunately, no suitable benchmarks exist with which to test the performance of databases when handling multimedia data. This thesis describes the design of a benchmark to measure the performance of basic functionality found in multimedia databases. The benchmark, called MORD (Multimedia Object Relational Databases), targets Oracle, a well-known commercial ORDBMS that can handle multimedia data. Although MORD targets Oracle, it can easily be applied to other Multimedia Database Management Systems (MMDBMSs) as a result of a design that stresses portability and simplicity. MORD consists of a database schema, test data, and code to simulate representative queries on multimedia databases. A number of experiments are described that validate MORD, ensure its correct design, and confirm that its objectives are met. A by-product of these experiments is an initial understanding of the performance of multimedia databases. The experiments show that with multimedia data the buffer cache should be at least large enough to hold the largest dataset, that a larger block size improves performance, and that turning off logging and caching for bulk loading improves performance. MORD can be used to compare different ORDBMSs or to assist in the configuration of a specific database.
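The abstract does not reproduce MORD's code. As a rough illustration only, the timing harness at the heart of such a benchmark could look like the sketch below; the workload callable is a stand-in for a representative multimedia query against the database under test, and all names are illustrative:

```python
import time
import statistics

def benchmark(workload, runs=5):
    """Run a workload callable several times and report wall-clock timings.

    Returns (mean, minimum) in seconds. In a MORD-style benchmark the
    workload would execute a representative multimedia query (e.g. an
    image retrieval) against the database being measured.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), min(timings)

# Example with a stand-in workload (no database required):
mean_t, best_t = benchmark(lambda: sum(range(10000)), runs=3)
```

Reporting both the mean and the best run is a common benchmarking convention: the minimum approximates the cost with a warm cache, which matters given the abstract's findings about buffer-cache sizing.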
12

Performance measurement as a tool for software engineering

Van Aardt, Jan Markus 22 July 2005 (has links)
Some software development teams regard software performance measurement as a mere luxury. When it happens, it often tends to be infrequent, insufficient and subjective. Countless software projects have been sent into an uncontrollable spiral of poor management and unsatisfactory results. By revisiting old ideas and policies, many companies have turned themselves around. To ensure that software engineering does the same, technologies and procedures have to be re-evaluated. The fact that many companies have decided to cut technology expenditure forces software development teams to look for alternative options for deploying high-performance software systems. As more companies move into the electronic era and on to its next stage, electronic commerce, it becomes increasingly important to apply these concepts to Internet development projects and procedures. The Internet market shows two software providers, Microsoft and Apache, competing for worldwide dominance of Internet server deployment. Currently, the Apache web server is the most commonly used server on the Internet (60%), with Microsoft's Internet Information Server (25%) in a strong second place. The need for higher throughput and better services grows with each passing day, increasing the pressure on these two software vendors to provide the best architecture for their clients' needs. This study intends to provide the reader with an objective view of a basic performance comparison between these two products and tries to find a correlation between the performance tests and the products' popularity standings. The tests for this study were performed on identical hardware architectures with one difference: the operating system. By comparing the costly proprietary Microsoft solution with its cheaper open-source rival, Linux, certain opinions were tested.
Would a product developed by a software company that invests millions of dollars in its products perform better than this free-for-all solution, or would the selfless inputs of hundreds of developers all over the world finally pay off through the creation of the world's best Internet server? The results of these tests were evaluated through formal statistical methods, providing overall comparisons of several common uses of web servers. These results were implemented in a small field test to prove the findings in practice, with some interesting outcomes in terms of supportive technologies, new rapid application development (RAD) tools and data access models. This research in itself will not change the mind of any Internet programmer. What it hopes to achieve is to demonstrate to software engineers that current processes and methods of developing software are not always the right way of doing things. Furthermore, it highlights many important factors often ignored or overlooked while managing software projects. Change management, process re-engineering and risk management form crucial elements of software development projects. By not adhering to certain critical elements of software development, software projects stand the chance of not reaching their goals and could even fail completely. Performance measurement acts as a tool for software engineering, providing guidelines for technological decisions, project management and ultimately, project success. / Dissertation (MSc (Computer Science))--University of Pretoria, 2005. / Computer Science / unrestricted
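The "formal statistical methods" are not specified in the abstract. A common choice for comparing two independent samples of response times is Welch's t statistic, sketched here with invented measurements (not the study's data):

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.

    A positive value means sample_a has the larger mean. A full analysis
    would also compute degrees of freedom and a p-value, omitted here
    for brevity.
    """
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    return (ma - mb) / math.sqrt(va / len(sample_a) + vb / len(sample_b))

# Hypothetical response times (ms) from two servers under identical load:
apache = [112, 108, 115, 110, 109]
iis = [121, 118, 125, 119, 122]
t = welch_t(iis, apache)  # positive here: the IIS sample has the larger mean
```

Welch's variant is preferred over the pooled-variance t-test when the two servers' response-time variances cannot be assumed equal, which is the usual situation with different operating systems and server stacks.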
13

Utilizing Runtime Information for Accurate Root Cause Identification in Performance Diagnosis

Weng, Lingmei January 2023 (has links)
This dissertation highlights that existing performance diagnostic tools often become less effective due to their inherent inaccuracies in modern software. To overcome these inaccuracies and effectively identify the root causes of performance issues, it is necessary to incorporate supplementary runtime information into these tools. Within this context, the dissertation integrates specific runtime information into two typical performance diagnostic tools: profilers and causal tracing tools. The integration yields a substantial enhancement in the effectiveness of performance diagnosis. Among these tools, gprof stands out as a representative profiler for performance diagnosis. Nonetheless, its effectiveness diminishes as the time cost calculated based on CPU sampling fails to accurately and adequately pinpoint the root causes of performance issues in complex software. To tackle this challenge, the dissertation introduces an innovative methodology called value-assisted cost profiling (vProf). This approach incorporates variable values observed during runtime into the profiling process. By continuously sampling variable values from both normal and problematic executions, vProf refines function cost estimates, identifies anomalies in value distributions, and highlights potentially problematic code areas that could be the actual sources of performance issues. The effectiveness of vProf is validated through the diagnosis of 18 real-world performance issues in four widely-used applications. Remarkably, vProf outperforms other state-of-the-art tools, successfully diagnosing all issues, including three that had remained unresolved for over four years. Causal tracing tools reveal the root causes of performance issues in complex software by generating tracing graphs. However, these graphs often suffer from inherent inaccuracies, characterized by superfluous (over-connected) and missed (under-connected) edges.
These inaccuracies arise from the diversity of programming paradigms. To mitigate the inaccuracies, the dissertation proposes an approach to derive strong and weak edges in tracing graphs based on the vertices’ semantics collected during runtime. By leveraging these edge types, a beam-search-based diagnostic algorithm is employed to identify the most probable causal paths. Causal paths from normal and buggy executions are differentiated to provide key insights into the root causes of performance issues. To validate this approach, a causal tracing tool named Argus is developed and tested across multiple versions of macOS. It is evaluated on 12 well-known spinning pinwheel issues in popular macOS applications. Notably, Argus successfully diagnoses the root causes of all identified issues, including 10 issues that had remained unresolved for several years. The results from both tools exemplify a substantial enhancement of performance diagnostic tools achieved by harnessing runtime information. The integration can effectively mitigate inherent inaccuracies, lend support to inaccuracy-tolerant diagnostic algorithms, and provide key insights to pinpoint the root causes.
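As a rough illustration of the value-distribution idea behind vProf (the dissertation's actual anomaly detection is not reproduced here), one crude way to flag a variable whose sampled values shift between normal and problematic executions is:

```python
import statistics

def distribution_shift(normal_values, buggy_values):
    """Crude indicator of a shift between two sampled value distributions.

    Compares the medians of the two runs relative to the normal run's
    spread; a large score suggests the variable behaves anomalously in
    the problematic execution.
    """
    med_n = statistics.median(normal_values)
    med_b = statistics.median(buggy_values)
    spread = statistics.stdev(normal_values) or 1.0  # guard against zero spread
    return abs(med_b - med_n) / spread

# A loop-bound variable sampled in normal vs. problematic executions:
normal = [10, 12, 11, 10, 13, 11]
buggy = [10, 12, 480, 510, 495, 11]  # suspiciously large values
score = distribution_shift(normal, buggy)
```

Ranking variables by such a score, rather than by CPU samples alone, is the kind of signal that lets a value-assisted profiler point at code regions a sampling profiler would miss.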
14

Information clues : content analysis of document representations retrieved by the Web search engines Altavista, Infoseek Ultra, Lycos and Open text index

Epp, Mary Anne 05 1900 (has links)
The purpose of this thesis is to identify and quantify the information clues found in the document representations in the World Wide Web environment. This study uses three topics to find document representations: custom publishing, distance education, and tactile graphics. Four Web search engines are used: AltaVista, InfoSeek Ultra, Lycos, and Open Text Index. The findings of the random sample show that the search engines produce little duplication in their display of the results. Just over half of the cases reveal information clues about the document's authorship, origin, format or subject. The summary field shows the highest number of information clues. The title and Uniform Resource Locator fields do not contain many information clues. Few of the fields contain clues about the authorship of the documents. Topical relevance is questionable in many of the cases. The study recommends further research on the comparison of search engines, on the study of searches on the Web for commercial, academic and personal topics, and on information seeking behaviors relating to Web searching. Recommendations are made for Web training and Web page design to assist users in finding relevant information more quickly.
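The thesis's content-analysis coding scheme is not reproduced in this abstract. A toy sketch of tallying which fields of a retrieved document representation carry information clues, with invented field names and a deliberately crude "informativeness" test, might look like:

```python
def clue_fields(record):
    """Return the fields of a retrieved record that carry an information clue.

    The field names and the three-word threshold are illustrative
    stand-ins for a real coding scheme, not the thesis's criteria.
    """
    found = []
    for field in ("title", "url", "summary"):
        value = record.get(field, "")
        if value and len(value.split()) >= 3:  # crude informativeness test
            found.append(field)
    return found

record = {
    "title": "Tactile graphics",
    "url": "http://example.org/tg",
    "summary": "An overview of tactile graphics for distance education.",
}
fields = clue_fields(record)  # only the summary passes the crude test
```

Even this toy version mirrors the abstract's finding: summaries tend to carry the most clues, while title and URL fields carry few.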
16

Software architecture evaluation for framework-based systems.

Zhu, Liming, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques which target four different activities within a scenario-based architecture evaluation method. 1) Scenario development: A new technique was designed aiming to extract general scenarios and tactics from framework-related architectural patterns. The technique is intended to complement the current scenario development process. The feasibility of the technique was validated through a case study. Significant improvements of scenario quality were observed in a controlled experiment conducted by another colleague. 2) Architecture representation: A new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study. This approach has significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute specific analysis (performance only): A model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. This technique leads to the benchmark producing more representative measures of the eventual application. It reduces the complexity behind the load testing suite and framework-specific performance data collecting utilities. 
4) Trade-off and sensitivity analysis: A new technique was designed to improve the Analytic Hierarchy Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. The approach can identify (1) the trade-offs implied by an architecture alternative, along with their magnitude; (2) the most critical decisions in the overall decision process; and (3) the sensitivity of the final decision and its capability for handling changes in quality attribute priorities.
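AHP itself is well documented: priority weights can be approximated from a reciprocal pairwise-comparison matrix with the geometric-mean method. The sketch below uses invented criteria and judgments, not the thesis's data:

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority weights via the geometric-mean method.

    `pairwise[i][j]` holds the judged importance of criterion i over
    criterion j in a reciprocal matrix (pairwise[j][i] == 1/pairwise[i][j]).
    Returns weights that sum to 1.
    """
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-criterion comparison (performance, portability, cost):
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_priorities(matrix)  # sums to 1; the first criterion dominates
```

The sensitivity analysis the abstract describes would then perturb the judgments in `matrix` and observe whether the ranking of framework alternatives changes.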
17

Deutschsprachige Fragebögen zur Usability-Evaluation im Vergleich

Figl, Kathrin January 2010 (has links) (PDF)
An exact evaluation of usability is a valuable aid in constructing usable application systems. In practice, usability questionnaires are frequently employed for this purpose. In the German-speaking world, the two questionnaires Isonorm 9241/10 and Isometrics, both of which evaluate software according to EN ISO 9241-110, are widely used. The present study compared these two questionnaires with respect to test-theoretic quality criteria. In an experimental design, both questionnaires were used to rate the usability of two standard software packages. Regarding content validity, the results showed high agreement between the usability measurements of the two questionnaires. Further test-theoretic analyses likewise yielded similar quality assessments for both questionnaires, so from this perspective both can equally be recommended for research and practice.
18

Analysis of PSP-like processes for software engineering

Conrad, Paul Jefferson 01 January 2006 (has links)
The purpose of this thesis is to provide the California State University, San Bernardino, Department of Computer Science with an analysis and recommended solution to improving the software development process.
19

Microcomputer-assisted site design in landscape architecture: evaluation of selected commercial software

Hahn, Howard Davis. January 1985 (has links)
Call number: LD2668 .T4 1985 H33 / Master of Landscape Architecture
20

The development of an instrument for evaluating computer assisted language programs

麥建年, Maclean, William Brian. January 1986 (has links)
published_or_final_version / Education / Master / Master of Education
