  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

The Effect Of Software Design Patterns On Object-oriented Software Quality And Maintainability

Turk, Tuna 01 September 2009 (has links) (PDF)
This study investigates the connection between design patterns, object-oriented (OO) quality metrics, and software maintainability. The literature on OO metrics, design patterns, and software maintainability is reviewed; the relation between OO metrics and software maintainability is investigated; and the change in an application's maintainability due to the use of design patterns is then observed in terms of the obtained maintainability indicator metrics.
302

An Automated Quality Measurement Approach For Business Process Models

Gurbuz, Ozge 01 September 2011 (has links) (PDF)
Business process modeling has become a common need for organizations, so process quality also plays an important role for them. Most quality studies are based on cost and time, which can be analyzed during or after the execution of the business processes. There are also quality measures that can be analyzed before execution and thus give early feedback about the processes. Three frameworks are defined in the literature for a more comprehensive measurement. The first is adapted from software programs and aims to make process designs less error-prone, more understandable, and more maintainable. The second is adapted from object-oriented software designs and provides an object-oriented view of the business process design. The third is adapted from the ISO/IEC Software Product Quality standard and measures the quality of the process itself rather than of its design. By conducting a case study, the measures defined in the frameworks are explored in terms of applicability, automation potential, and required time and effort on a set of business process models. This study observes that manual measurement takes time, requires effort, and is error-prone. Therefore, an approach is implemented that automates the measures with automation potential, in order to decrease the required time and effort and to increase the accuracy of the measurement. A second case study is then conducted on another set of business process models to validate the approach.
303

Relating facility performance indicators to organizational sustainability performance in public higher education facilities

Adams, Gregory Keith 07 April 2010 (has links)
This research seeks to identify how an organization's facility management (FM) practices relate to the state of sustainability in the organization. A review of the literature leads to the presentation of a model defining these relationships. The concepts of direct and indirect FM sustainability roles in organizational sustainability are presented. Accepted facility metrics found in the APPA Facilities Performance Indicator Survey are used as indicators of FM in University System of Georgia (USG) institutions and are tested for correlation with sustainability best-practices scores generated in an assessment performed for this research. FM performance indicators representing the direct role of FM are not found to be correlated with organizational sustainability best practices in USG higher education organizations.
304

Early Estimation of Software Projects Using the Language Extended Lexicon and Use Case Points

Vido, Alan 11 June 2015 (has links)
A large number of techniques and tools currently exist for producing estimates in software processes, but many of them require a great deal of information about the project under analysis, which makes an early estimate of the effort required to develop the project difficult. Analysts who work with the Language Extended Lexicon, since this model is available in the early stages of the software process, can infer certain characteristics of the project, such as the use cases, classes, and database entities that will form part of the project's design. On the other hand, there are widely used and standardized effort estimation techniques that draw on these characteristics, such as Use Case Points, but they are not applicable at an early stage of requirements elicitation for lack of information. This work aims to provide users who employ the Language Extended Lexicon in their requirements elicitation process with a tool that, from the information gathered in the early stages of that process, produces an estimate of the effort needed to carry out the project, based on a widely used and standardized method.
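The Use Case Points method this thesis builds on follows a standard calculation: weighted actor and use-case counts adjusted by technical and environmental factors, then multiplied by a productivity rate. A minimal sketch, using Karner's original weights and the commonly cited 20 person-hours per UCP; the example project inputs are hypothetical:

```python
# Sketch of the standard Use Case Points (UCP) calculation.
# Weights follow Karner's original proposal; inputs are hypothetical.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    """actors/use_cases map complexity class -> count; tcf/ecf are the
    technical and environmental complexity factors."""
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())      # unadjusted actor weight
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())  # unadjusted use-case weight
    return (uaw + uucw) * tcf * ecf

ucp = use_case_points(
    actors={"simple": 2, "average": 1, "complex": 1},
    use_cases={"simple": 3, "average": 4, "complex": 1},
    tcf=0.95,
    ecf=1.02,
)
effort_hours = ucp * 20  # Karner's suggested 20 person-hours per UCP
```

The point of the thesis is precisely that these counts can be inferred from the Language Extended Lexicon before detailed use-case models exist, making an early estimate of this kind feasible.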
305

The Role of leadership in high performance software development teams

Ward, John Mason 08 February 2012 (has links)
The purpose of this research was to investigate the role of leadership in creating high-performance software development teams. Of specific interest were the challenges faced by the project manager without a software engineering background. These challenges included management of a non-visible process, planning projects with significant uncertainty, and working with teams that don't trust their leadership. Conclusions were drawn from the author's experience as a software development manager facing these problems and from a broad literature review of experts in the software and knowledge-worker management fields. The primary conclusion was that, until the next big breakthrough, gains in software development productivity resulting from technology are limited. The only way for a group to distinguish itself as performing at the highest levels is teamwork enabled by good leadership.
306

Aspects of Modeling Fraud Prevention of Online Financial Services

Dan, Gorton January 2015 (has links)
Banking and online financial services are part of our critical infrastructure. As such, they comprise an Achilles heel in society and need to be protected accordingly. The last ten years have seen a steady shift from traditional show-off hacking towards cybercrime with great economic consequences for society. The different threats against online services are getting worse, and risk management with respect to denial-of-service attacks, phishing, and banking Trojans is now part of the agenda of most financial institutions. This trend is overseen by responsible authorities, who step up their minimum requirements for risk management of financial services and, among other things, require regular risk assessment of current and emerging threats. For the financial institution, this situation creates a need to understand all parts of the incident response process of its online services, including the technology, sub-processes, and the resources working with online fraud prevention. The effectiveness of each countermeasure has traditionally been measured for one technology at a time, leaving the fraud prevention manager with separate values for the effectiveness of authentication, intrusion detection, and fraud prevention. In this thesis, we address two problems with this situation. Firstly, there is a need for a tool that can model current countermeasures in light of emerging threats. Secondly, the development of fraud detection is hampered by the lack of accessible data. In the main part of this thesis, we highlight the importance of looking at the "big risk picture" of the incident response process, rather than focusing on one technology at a time. In the first article, we present a tool which makes it possible to measure the effectiveness of the incident response process; we call this an incident response tree (IRT). In the second article, we present additional scenarios relevant for risk management of online financial services using IRTs. Furthermore, we introduce a complementary model inspired by existing models used for measuring credit risk. This enables us to compare different online services using two measures, which we call Expected Fraud and Conditional Fraud Value at Risk. Finally, in the third article, we create a simulation tool which enables us to use scenario-specific results together with models such as return on security investment to support decisions about future security investments. In the second part of the thesis, we develop a method for producing realistic-looking data for testing fraud detection. In the fourth article, we introduce multi-agent-based simulations together with social network analysis to create data which can be used to fine-tune fraud prevention, and in the fifth article, we continue this effort by adding a platform for testing fraud detection.
307

Impact of Data Sources on Citation Counts and Rankings of LIS Faculty: Web of Science vs. Scopus and Google Scholar

Meho, Lokman I., Yang, Kiduk 01 1900 (has links)
The Institute for Scientific Information's (ISI) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. ISI databases (or Web of Science [WoS]), however, may no longer be sufficient because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science faculty members as a case study, this paper examines the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined. Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.
308

Searching the long tail: Hidden structure in social tagging

Tonkin, Emma January 2006 (has links)
In this paper we explore a method of decomposing compound tags found in social tagging systems and outline several results, including improvement of search indexes, extraction of semantic information, and benefits to usability. Analysis of tagging habits demonstrates that social tagging systems such as del.icio.us and Flickr include both formal metadata, such as geotags, and informally created metadata, such as annotations and descriptions. The majority of tags represent informal metadata; that is, they are not structured according to a formal model, nor do they correspond to a formal ontology. Statistical exploration of the main tag corpus demonstrates that searches use only a subset of the available tags; for example, many tags are composed as ad hoc compounds of terms. In order to improve the accuracy of searching across the data contained within these tags, a method must be employed to decompose compounds in such a way that there is a high degree of confidence in the result. An approach to the decomposition of English-language compounds, designed for use within a small initial sample tagset, is described. Possible decompositions are identified from a generous wordlist, subject to selective lexicon snipping. In order to identify the most likely decomposition, a Bayesian classifier is applied across term elements. To compensate for the limited sample set, a word classifier is employed and the results classified using a similar method, resulting in a successful classification rate of 88% and a false negative rate of only 1%.
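The core step this abstract describes, enumerating candidate splits of a compound tag against a wordlist before a classifier ranks them, can be sketched in a few lines. This is an illustration only: the wordlist is a toy stand-in, and the paper's Bayesian scoring of candidates is omitted.

```python
# Sketch of compound-tag decomposition against a wordlist, in the spirit
# of the approach described above. The real system ranks the candidate
# splits with a Bayesian classifier; the wordlist here is illustrative.

WORDLIST = {"design", "patterns", "pattern", "soft", "software", "ware",
            "engineering", "web", "archive"}

def decompose(tag, wordlist=WORDLIST):
    """Return every way to split `tag` into words from `wordlist`."""
    if not tag:
        return [[]]  # empty remainder: one valid (empty) split
    splits = []
    for i in range(1, len(tag) + 1):
        head = tag[:i]
        if head in wordlist:
            for rest in decompose(tag[i:], wordlist):
                splits.append([head] + rest)
    return splits

print(decompose("designpatterns"))   # [['design', 'patterns']]
print(decompose("softwarearchive"))  # two candidates; a classifier picks one
```

Ambiguous tags like "softwarearchive" yield multiple candidates (["soft", "ware", "archive"] and ["software", "archive"]), which is exactly where the paper's classifier earns its keep.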
309

Finding Finding Aids on the World Wide Web

Tibbo, Helen R., Meho, Lokman I. January 2001 (has links)
Reports results of a study to explore how well six popular Web search engines performed in retrieving specific electronic finding aids mounted on the World Wide Web. A random sample of online finding aids was selected and then searched using AltaVista, Excite, Fast Search, Google, Hotbot and Northern Light, employing both word and phrase searching. As of February 2000, approximately 8 percent of repositories listed at the 'Repositories of Primary Resources' Web site had mounted at least four full finding aids on the Web. The most striking finding of this study was the importance of using phrase searches whenever possible, rather than word searches. Also of significance was the fact that if a finding aid were to be found using any search engine, it was generally found in the first ten or twenty items at most. The study identifies the best performers among the six chosen search engines. Combinations of search engines often produced much better results than did the search engines individually, evidence that there may be little overlap among the top hits provided by individual engines.
310

Software Quality Evaluation and Assurance

Κατωπόδης, Σπύρος 01 December 2009 (has links)
Software quality is today a very important and interesting chapter of computer science. With the passage of time and the development of technology, the need to ensure quality from the earliest stage, and subsequently the need for correct evaluation and successful assurance of software quality, keep growing and are fundamental objectives of enterprises, organizations, and developers. The term software quality can take on many dimensions and interpretations, depending on the objectives, goals, and needs of each user. This dissertation focuses on the analysis of software quality evaluation and assurance, presenting methods and models with which effective quality evaluation and assurance are feasible. The first chapter analyzes the terms "evaluation" and "assurance" of software quality and presents the requirements for controlling and ensuring the quality of a project; it also presents the importance of reliability and software evaluation, analyzes the verification and validation process during software design, and describes the testing process. The second chapter presents the most popular and most widely applied of the existing software evaluation models; it analyzes the term "metric," presents current trends in software engineering and the characteristics of object-oriented software engineering metrics, and describes the quality assurance process. The third chapter presents the international standard ISO/IEC 9126, which holds a dominant position among quality standards and is among the most popular; its characteristics and basic functions are described. The fourth chapter gives a concise description and a detailed evaluation of ISO/IEC 9126 and its constituent parts, followed by an overview of an experiment and an analysis of the results obtained from applying the standard; it also reports the model's disadvantages and the problems arising from its use, along with my personal assessment of the standard. The fifth chapter presents the Analytic Hierarchy Process and multi-criteria analysis, and describes the extension of ISO/IEC 9126 into a generalized quality model through the application of multi-criteria analysis, aiming at improved evaluation and better assurance of software quality; the Analytic Hierarchy Process model, its operation, and its interpretation are then analyzed. The sixth and final chapter presents a model I propose that compensates for the disadvantages of the above models by combining their advantages. It first describes the important role of the decision maker, which is decisive for parameterizing the model; it then analyzes the advantages of the previous models that the proposed model combines, and describes the characteristics of the proposed software quality evaluation model. Finally, an example applies the model to software code, and the results are discussed in comparison with those obtained by applying ISO/IEC 9126.
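The Analytic Hierarchy Process this dissertation applies derives priority weights for quality criteria from a pairwise-comparison matrix. A minimal sketch using the common row-geometric-mean approximation; the three criteria and the Saaty-scale judgments below are hypothetical, not taken from the dissertation:

```python
# Minimal AHP priority-weight sketch (row geometric-mean approximation).
# The pairwise-comparison matrix is a hypothetical example, e.g. comparing
# functionality vs. reliability vs. maintainability on Saaty's 1-9 scale.

import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise-comparison matrix."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]  # row geometric means
    total = sum(gms)
    return [g / total for g in gms]  # normalize so weights sum to 1

pairwise = [
    [1.0,   3.0,   5.0],   # criterion 1 moderately/strongly preferred
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)  # sums to 1.0; first criterion dominates
```

Weights obtained this way can then multiply per-characteristic scores from an ISO/IEC 9126-style evaluation to produce a single aggregate quality figure, which is the kind of combination the extended model aims at.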
