91

An analytical study of metrics and refactoring

Iyer, Suchitra S. 03 September 2009 (has links)
Object-oriented systems that undergo repeated modification commonly suffer a loss of quality and design decay. This problem is often remedied by applying refactorings. Refactoring is one of the most important and commonly used techniques for improving code quality by eliminating redundancy and reducing complexity; frequently refactored code is believed to be easier to understand, maintain and test. Object-oriented metrics provide an easy means to extract useful and measurable information about the structure of a software system. Metrics have been used to identify refactoring opportunities, to detect refactorings that have previously been applied, and to gauge quality improvements after the application of refactorings. This thesis provides an in-depth analytical study of the relationship between metrics and refactorings. For this purpose we analyzed 136 versions of 4 different open source projects. We used RefactoringCrawler, an automatic refactoring detection tool, to identify refactorings, and then analyzed various metrics to study whether metrics can be used to (1) reliably identify refactoring opportunities, (2) detect refactorings that were previously applied, and (3) estimate the impact of refactoring on software quality. In conclusion, our study showed that metrics cannot be used reliably either to identify refactoring opportunities or to detect refactorings. It is very difficult to use metrics to estimate the impact of refactoring; however, studying the evolution of metrics at a system level indicates that refactoring does improve software quality and reduce complexity.
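As an illustration of the metric-based detection idea discussed in this abstract, here is a minimal sketch that flags classes whose method count (a crude stand-in for the WMC metric) exceeds a threshold. The metric choice and the threshold are illustrative assumptions, not the metric suite or the RefactoringCrawler approach used in the thesis:

```python
# A minimal sketch of metric-based refactoring-opportunity detection.
# The single WMC-like metric and the threshold are illustrative
# assumptions, not the thesis's method.
import ast

def methods_per_class(source: str) -> dict[str, int]:
    """Count methods per class -- a crude stand-in for the WMC metric."""
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            counts[node.name] = sum(
                isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
                for n in node.body
            )
    return counts

def flag_refactoring_candidates(source: str, threshold: int = 20) -> list[str]:
    """Flag classes whose method count exceeds a (hypothetical) threshold."""
    return [name for name, wmc in methods_per_class(source).items()
            if wmc > threshold]
```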
92

Defining success : a distinction between inputs and outputs of successful public housing projects

Bachman, Emily Catherine 06 October 2014 (has links)
Public housing across the United States differs greatly in physical form, construction quality, and reception by the community, among myriad other variables. This report examines what successful public housing looks like, and what characteristics make certain public housing projects more successful than others. There is a great deal of thought and literature devoted to predicting this success; however, it is rarely accompanied by a corresponding picture of the “outputs” of successful public housing. Assessment measures presented in the existing literature and the U.S. Department of Housing and Urban Development’s publications do not provide a thorough metric by which to measure public housing success on a project-by-project basis. This report examines the existing metrics—both explicit and inferred—and assesses their suitability for this purpose. Finally, it compiles indicators of success from various sources and lobbies for a comprehensive success metric at the individual public housing project level.
93

Above and Below Ground Assessment of Pinus radiata

McQuillan, Shane January 2013 (has links)
A comparison of above ground forest metrics with below ground soil CO₂ respiration was carried out in an attempt to reveal whether any correlations exist. Above ground measurements of 2720 clonally propagated trees were taken, assessing the silvicultural treatments of stocking, herbicide and fertiliser. These were compared with 480 below ground soil CO₂ respiration measurements. Using measurements of mean height, mean dbh and basal area, the data were analysed and returned significant results for mean dbh and the interactions of herbicide and clones, and of stocking and herbicide. Mean height returned a significant result for the interaction of stocking and herbicide. Below ground measurements showed an interaction between ripping and stocking; however, these results were not corroborated by the above ground results. Overall the results were encouraging and should aid future experiments that seek to understand what effect above ground treatments have on below ground CO₂ activity.
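For reference, the basal area used above is a standard stand metric derived from dbh (diameter at breast height). A minimal sketch of the conventional computation follows; the plot size and the measurements are hypothetical:

```python
import math

def tree_basal_area_m2(dbh_cm: float) -> float:
    """Cross-sectional area at breast height for one tree.

    Standard formula: pi * (dbh/2)^2, with dbh converted from cm to m.
    """
    return math.pi * (dbh_cm / 200.0) ** 2

def stand_basal_area_m2_per_ha(dbh_cm: list[float], plot_area_ha: float) -> float:
    """Plot basal area scaled to a per-hectare value."""
    return sum(tree_basal_area_m2(d) for d in dbh_cm) / plot_area_ha

# Hypothetical example: three stems measured on a 0.01 ha plot.
print(stand_basal_area_m2_per_ha([25.0, 30.5, 18.2], plot_area_ha=0.01))
```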
94

Dynamic alpha-invariants of del Pezzo surfaces with boundary

Martinez Garcia, Jesus January 2013 (has links)
The global log canonical threshold, the algebraic counterpart to Tian's alpha-invariant, plays an important role when studying the geometry of Fano varieties. In particular, Tian showed that Fano manifolds with big alpha-invariant can be equipped with a Kähler-Einstein metric. In recent years Donaldson drafted a programme to determine precisely when a smooth Fano variety X admits a Kähler-Einstein metric. It was conjectured that the existence of such a metric is equivalent to X being K-stable, an algebraic-geometric property. A crucial step in Donaldson's programme consists in finding a Kähler-Einstein metric with edge singularities of small angle along a smooth anticanonical boundary. Jeffres, Mazzeo and Rubinstein showed that a dynamic version of the alpha-invariant could be used to find such metrics. The global log canonical threshold measures how anticanonical pairs fail to be log canonical. In this thesis we compute the global log canonical threshold of del Pezzo surfaces in various settings. First we extend Cheltsov's computation of the global log canonical threshold of complex del Pezzo surfaces to non-singular del Pezzo surfaces over a ground field which is algebraically closed and of arbitrary characteristic. Then we study which anticanonical pairs fail to be log canonical. In particular, we give a very explicit classification of very singular anticanonical pairs for del Pezzo surfaces of degree at most 3. We conjecture under which circumstances such a classification is plausible for an arbitrary Fano variety and derive several consequences. As an application, we compute the dynamic alpha-invariant on smooth del Pezzo surfaces of small degree, where the boundary is any smooth elliptic curve C; our main result is a computation of the dynamic alpha-invariant on all smooth del Pezzo surfaces with boundary any smooth elliptic curve C. The values of the alpha-invariant depend on the choice of C. We apply our computation to find Kähler-Einstein metrics with edge singularities of angle β along C.
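For context, a sketch of the standard definitions behind these invariants, in the usual notation of the literature (not necessarily the thesis's exact conventions):

```latex
% Log canonical threshold of an effective Q-divisor D on a Fano variety X:
\operatorname{lct}(X, D) = \sup\{\, \lambda \in \mathbb{Q}_{>0} \;:\; (X, \lambda D)\ \text{is log canonical} \,\}

% Global log canonical threshold (algebraic counterpart of Tian's alpha-invariant):
\operatorname{glct}(X) = \inf\{\, \operatorname{lct}(X, D) \;:\; D\ \text{effective},\ D \sim_{\mathbb{Q}} -K_X \,\}
```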
95

An investigation of success metrics for the design of e-commerce Web sites.

Cutshall, Robert C. 05 1900 (has links)
The majority of the Web site design literature concentrates on the technical and functional aspects of Web site design. There is a definite lack of literature, in the IS field, on the visual and aesthetic aspects of Web design. Preliminary research into the relationship between visual design and successful electronic commerce Web sites was conducted. The emphasis of this research was to answer the following three questions. What role do visual design elements play in the success of electronic commerce Web sites? What role do visual design principles play in the success of electronic commerce Web sites? What role do the typographic variables of visual design play in the success of electronic commerce Web sites? Forty-three undergraduate students enrolled in an introductory level MIS course used a Likert-style survey instrument to evaluate aesthetic aspects of 501 electronic commerce Web pages. The instrument employed a taxonomy of visual design that focused on three dimensions: design elements, design principles, and typography. The data collected were correlated against Internet usage success metrics provided by Nielsen/NetRatings. Results indicate that 22 of the 135 tested relationships were statistically significant. Positive relationships existed between four different aesthetic dimensions and a single success measure; the other 18 significant relationships were negative. The visual design elements of space, color as hue, and value were negatively correlated with three of the success measures. The visual design principles of contrast, emphasis radiated through contrast, and contrast shape were negatively correlated with three of the success measures. Finally, the typographic variables of placement and type size were both negatively correlated with two of the success measures. This research provides support for the importance of visual design theory in Web site design. This preliminary research should be viewed as a realization of the need for Web sites to be designed with both visual design theory and usability in mind.
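A minimal sketch of the kind of analysis described here: correlating per-site mean aesthetic ratings against a usage-based success measure. The column values are hypothetical placeholders, not the study's data (which came from Nielsen/NetRatings):

```python
# Correlate mean Likert ratings for one design dimension with a
# hypothetical success measure (e.g., unique visitors per site).
from scipy.stats import pearsonr

ratings = [3.2, 4.1, 2.8, 3.9, 4.4]          # hypothetical mean ratings
success = [120_000, 310_000, 95_000, 150_000, 280_000]  # hypothetical metric

r, p = pearsonr(ratings, success)
print(f"r = {r:.2f}, p = {p:.3f}")
```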
96

The Impact of Objective Quality Ratings on Patient Selection of Community Pharmacies: A Discrete Choice Experiment and Latent Class Analysis

Patterson, Julie A 01 January 2017 (has links)
Background: Pharmacy-related performance measures have gained significant attention in the transition to value-based healthcare. Pharmacy-level quality measures, including those developed by the Pharmacy Quality Alliance, are not yet publicly accessible. However, the publication of report cards for individual pharmacies has been discussed as a way to help direct patients towards high-quality pharmacies. This study aimed to measure the relative strength of patient preferences for community pharmacy attributes, including pharmacy quality. Additionally, this study aimed to identify and describe community pharmacy market segments based on patient preferences for pharmacy attributes. Methods: This study elicited patient preferences for community pharmacy attributes using a discrete choice experiment (DCE) among a sample of 773 adults aged 18 years and older. Six attributes were selected based on published literature, expert opinion, and pilot testing feedback. The attributes included hours of operation, staff friendliness/courtesy, pharmacist communication, pharmacist willingness to establish a personal relationship, overall quality, and a drug-drug interaction specific quality metric. Participants responded to a block of ten random choice tasks assigned by Sawtooth v9.2 and two fixed tasks, including a dominant and a hold-out scenario. The data were analyzed using conditional logit and latent class regression models, and Hierarchical Bayes estimates of individual-level utilities were used to compare preferences across demographic subgroups. Results: Among the 773 respondents who began the survey, 741 (95.9%) completed the DCE and demographic questionnaire. Overall, study participants expressed the strongest preferences for quality-related pharmacy attributes. The attribute importance values (AIVs) were highest for the specific, drug-drug interaction (DDI) quality measure, presented as, “The pharmacy ensured there were no patients who were dispensed two medications that can cause harm when taken together,” (40.3%) and the overall pharmacy quality measure (31.3%). The utility values for 5-star DDI and overall quality ratings were higher among women (83.0 and 103.8, respectively) than men (76.2 and 94.5, respectively), and patients with inadequate health literacy ascribed higher utility to pharmacist efforts to get to know their patients (26.0) than their higher literacy counterparts (16.3). The best model from the latent class analysis contained three classes, termed the Quality Class (67.6% of participants), the Relationship Class (28.3%), and the Convenience Class (4.2%). Conclusions: The participants in this discrete choice experiment exhibited strong preferences for pharmacies with higher quality ratings. This finding may reflect patient expectations of community pharmacists, namely that pharmacists ensure that patients are not harmed by the medications filled at their pharmacies. Latent class analysis revealed underlying heterogeneity in patient preferences for community pharmacy attributes.
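A minimal sketch of how attribute importance values (AIVs) are conventionally computed in conjoint/DCE analysis: each attribute's share of the total part-worth utility range. The part-worth utilities below are hypothetical, not the study's estimates:

```python
# Hypothetical part-worth utilities per attribute level (e.g., 1/3/5 stars).
part_worths = {
    "DDI quality rating":     [-50.0, 10.0, 40.0],
    "Overall quality rating": [-35.0,  5.0, 30.0],
    "Hours of operation":     [-10.0,  0.0, 10.0],
}

# AIV = attribute's utility range as a percentage of the sum of all ranges.
ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
total = sum(ranges.values())

for attr, rng in ranges.items():
    print(f"{attr}: {100 * rng / total:.1f}%")
```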
97

Performance metrics for network intrusion systems

Tucker, Christopher John January 2013 (has links)
Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates are used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed from analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint, with current challenges also shown to stimulate further research. Integrity in the presence of mixed trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. Sensitivity is introduced to define the basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance of the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
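The thesis's exact definitions of sensitivity and selectivity are not reproduced in this abstract. As a purely illustrative sketch of the general idea, one generic way to quantify how well a detector discriminates between attack types is the mutual information of its confusion matrix, which is bounded above by log2 of the number of classes; this is an assumption for illustration, not the thesis's metric:

```python
# Illustrative only: mutual information of a confusion matrix as a
# generic discrimination measure (upper bound: log2(num classes), bits).
# NOT the thesis's definition of selectivity.
import numpy as np

def mutual_information_bits(confusion: np.ndarray) -> float:
    """I(true attack type; reported type), in bits."""
    joint = confusion / confusion.sum()
    px = joint.sum(axis=1, keepdims=True)   # true-class marginal
    py = joint.sum(axis=0, keepdims=True)   # reported-class marginal
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Hypothetical 3-class confusion matrix (rows: true, cols: reported).
conf = np.array([[90, 5, 5], [10, 80, 10], [20, 20, 60]])
print(mutual_information_bits(conf))  # <= log2(3) ~ 1.58 bits
```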
98

Srovnání a výběr krajinných indexů pro hodnocení míry suburbanizace / Comparison and selection of landscape indices for assessing the rate of urban sprawl

Majerová, Martina January 2016 (has links)
The process of suburbanization is currently a much-discussed topic. This transfer of population and human activities from core cities to their hinterlands can have harmful effects not only on local inhabitants but also on the surrounding landscape and its functions. Landscape ecology responds to this development by quantifying and evaluating its impact on landscape functions. This diploma thesis summarizes published results on the effects of suburbanization on the natural environment. The main objective of the thesis is the selection of appropriate indicators (landscape metrics) to evaluate the rate and intensity of this process. These metrics are applied to the study area and the results are discussed. Key words: suburbanization, urban sprawl, landscape metrics
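An illustrative sketch of one simple family of landscape metrics compared in such studies: patch count and patch density computed from a binary land-cover raster. The raster and cell size here are toy assumptions, not the thesis's data:

```python
# Patch count and density from a binary built-up raster (1 = built-up).
import numpy as np
from scipy import ndimage

raster = np.array([
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
])

labeled, num_patches = ndimage.label(raster)  # 4-connectivity by default
cell_area_ha = 0.25                           # hypothetical 50 m cells
area_ha = raster.size * cell_area_ha
print(f"patches: {num_patches}, density: {num_patches / area_ha:.2f} per ha")
```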
99

What model should be used to evaluate the efficiency and effectiveness of a field contracting office

O'Sullivan, Daniel F. 06 1900 (has links)
Approved for public release; distribution is unlimited / In the Federal Acquisition Regulations (FAR) Statement of Guiding Principles for the Federal Acquisition System, the vision of the Federal Acquisition System is to deliver best value products or services to the customer. Contracting Officers must achieve this while balancing the many competing interests of the stakeholders in the System. The paradox of efficiency versus effectiveness can be found in the second sentence, in the phrase "balancing the many competing interests in the System". This statement indicates the diverse interests of the many stakeholders involved in the System, which in many instances prevent the Contracting Office from being both efficient and effective. The Government Performance Results Act of 1993 also requires each agency to establish projected outcomes or results against which it will be evaluated. This thesis examines the literature and existing measurement systems of field contracting offices to determine whether we are properly evaluating efficiency and effectiveness. The thesis also utilizes the Organizational Configuration Model developed by Nancy Roberts to determine where field offices fit. The thesis identifies common themes found in metrics and draws conclusions based on that information. Finally, the researcher proposes a model for Field Contracting Offices to use for evaluating their efficiency and effectiveness. It is the researcher's hope that this thesis will be of benefit to all field contracting offices that struggle with determining their efficiency and effectiveness. Also, it is hoped that Systems Commands will find some useful information in this thesis. / Civilian, Department of the Navy
100

Effort Modeling and Programmer Participation in Open Source Software Projects

Koch, Stefan January 2005 (has links) (PDF)
This paper analyses and develops models for programmer participation and effort estimation in open source software projects. This has not yet been a focus of research, although any results would be of high importance for assessing the efficiency of this development model and for various decision-makers. In this paper, a case study is used to generate hypotheses regarding the manpower function and effort modeling; a large data set retrieved from a project repository is then used to test these hypotheses. The main results are that Norden-Rayleigh-based approaches need to be complemented to account for the addition of new features during the lifecycle in order to be usable in this context, and that effort models based on programmer participation show significantly less effort than those based on output metrics like lines-of-code. (author's abstract) / Series: Working Papers on Information Systems, Information Business and Operations
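For reference, a minimal sketch of the Norden-Rayleigh staffing model referenced above, in its standard form. K (total effort) and td (time of peak staffing) are hypothetical values; the paper argues this form must be complemented to account for features added during the lifecycle:

```python
import math

def cumulative_effort(t: float, K: float, td: float) -> float:
    """Cumulative effort E(t) = K * (1 - exp(-t^2 / (2 td^2)))."""
    return K * (1.0 - math.exp(-t * t / (2.0 * td * td)))

def staffing(t: float, K: float, td: float) -> float:
    """Staffing rate m(t) = dE/dt = (K t / td^2) * exp(-t^2 / (2 td^2)).

    Peaks at t = td, then decays -- the classic Rayleigh staffing curve.
    """
    return (K * t / (td * td)) * math.exp(-t * t / (2.0 * td * td))

# Hypothetical project: 500 person-months total, peak staffing at month 10.
for month in (5, 10, 20, 40):
    print(month, round(staffing(month, K=500, td=10), 1))
```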
