131

Exploring the Academic Invisible Web

Lewandowski, Dirk, Mayr, Philipp 05 1900 (has links)
Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimation of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimation is given. Research limitations/implications: The precision of our estimation is limited due to small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
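For illustration only, the following sketch contrasts a mean-based with a median-based total of the kind such size estimates rest on; every number in it is a hypothetical placeholder, not a figure from the paper, and the mean/median contrast is simply one way a skewed (informetric) size distribution can distort a raw total.

```python
# Rough sketch of a "number of databases x records per database" estimate.
# All numbers below are hypothetical placeholders for illustration only;
# they are not values from the paper or from Bergman's study.

num_academic_databases = 20_000       # assumed count of relevant databases
mean_records_per_database = 150_000   # assumed mean of a highly skewed distribution
median_records_per_database = 5_000   # assumed median, far below the mean

# With a skewed size distribution, a mean-based total is dominated by a few
# very large databases, while a median-based total is far more conservative.
estimate_mean_based = num_academic_databases * mean_records_per_database
estimate_median_based = num_academic_databases * median_records_per_database

print(f"Mean-based estimate:   {estimate_mean_based:,} records")
print(f"Median-based estimate: {estimate_median_based:,} records")
```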
132

Web impact factors for Iranian Universities

Noruzi, Alireza 04 1900 (has links)
This study investigates the Web Impact Factors (WIFs) for Iranian universities and introduces a new system of measurement. Counts of links to the web sites of Iranian universities were calculated from the output of the AltaVista search engine. The WIFs for Iranian universities were calculated by dividing link page counts by the number of pages found in AltaVista for each university at a given point in time. These WIFs were then compared to study the impact, visibility, and influence of Iranian university web sites. Overall, Iranian university web sites have a low inlink WIF. While specific features of sites may affect an institution's Web Impact Factor, there is a significant correlation between the proportion of English-language pages at an institution's site and the institution's backlink counts. This indicates that, for linguistic reasons, Iranian (Persian-language) web sites may not attract the attention they deserve from the World Wide Web. This raises the possibility that information may be ignored due to linguistic and geographic barriers, and this should be taken into account in the development of the global Web.
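A minimal sketch of the WIF arithmetic described above is shown below; the function name and the domain/count values are invented placeholders, not data from the study.

```python
# Minimal sketch of the Web Impact Factor calculation described above:
# WIF = number of pages linking to a site / number of pages in the site.
# The domains and counts are hypothetical placeholders, not figures from the study.

def web_impact_factor(inlink_pages: int, site_pages: int) -> float:
    """Return inlink pages divided by site pages (a simple inlink WIF)."""
    if site_pages <= 0:
        raise ValueError("site_pages must be positive")
    return inlink_pages / site_pages

universities = {
    "example-university.example": {"inlink_pages": 1_200, "site_pages": 8_500},
    "another-university.example": {"inlink_pages": 300, "site_pages": 12_000},
}

for domain, counts in universities.items():
    wif = web_impact_factor(counts["inlink_pages"], counts["site_pages"])
    print(f"{domain}: WIF = {wif:.3f}")
```

Dividing by the site's own page count normalises inlink attention for site size, which is why the ratio rather than the raw link count is compared across universities.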
133

Métricas aplicables a la evaluación de sitios e-government y su impacto social

Screpnik, Claudia 07 March 2014 (has links)
The main objective of this work is to establish a conceptual framework for the evaluation of e-government sites and to propose a model and tools for measuring the social impact of citizens' use of a Web site. To this end, a procedure is specified that contains the steps to follow in defining and measuring the main characteristics and attributes of the evaluation model. This monograph presents external evaluation as the tool, understood as the citizen's perception and its social impact on the population, limiting the measurement to the specific setting of the province of Chaco. For that reason, only the characteristics and sub-characteristics concerning what is visible to the visitor are taken as the central axis of the work, leaving aside the internal properties of the product. Its application is fundamental in the sphere of professional practice (the Tribunal de Cuentas), given the need for an objective, repeatable working procedure to be used in evaluating the Web sites of the jurisdictions under its oversight. A comparison is proposed of a set of metrics and decision criteria that yield useful information for managing and assessing e-government spaces; strictly speaking, the proposal is to evaluate and test routine actions on the Web site to measure whether it is efficient, timely, and useful for visitors, based on scientific data arising from serious, objective research.
134

Improving predictive models of software quality using search-based metric selection and decision trees

Vivanco, Rodrigo Antonio 10 September 2010 (has links)
Predictive models are used to identify potentially problematic components that decrease product quality. Design and source code metrics are used as input features for predictive models; however, there exist a large number of structural measures that capture different aspects of coupling, cohesion, inheritance, complexity and size. An important question to answer is: which metrics should be used with a model for a particular predictive objective? Identifying a metric subset that improves the performance of the classifier may also provide insights into the structural properties that lead to problematic modules. In this work, a genetic algorithm (GA) is used as a search-based metric selection strategy. A comparative study has been carried out between GA, the Chidamber and Kemerer (CK) metrics suite, and principal component analysis (PCA) as metric selection strategies with different datasets. Program comprehension is important for programmers, and the first dataset evaluated uses source code inspections as a subjective measure of cognitive complexity. Predicting the likely location of system failures is important in order to improve a system's reliability. The second dataset uses an objective measure of faults found in system modules in order to predict fault-prone components. The aim of this research has been to advance the current state of the art in predictive models of software quality by exploring the efficacy of a search-based approach to selecting appropriate metric subsets. Results show that GA performs well as a metric selection strategy when used with a linear discriminant analysis classifier. When predicting cognitively complex classes, GA achieved an F-value of 0.845, compared to an F-value of 0.740 using PCA and 0.750 for the CK metrics. By examining the GA-chosen metrics with a white-box predictive model (a decision tree classifier), additional insights into the structural properties of a system that degrade product quality were observed. Source code metrics have been designed for human understanding and program comprehension, and predictive models for cognitive complexity perform well with source code metrics alone. Models for fault-prone modules do not perform as well when using only source metrics and need additional non-source-code information, such as module modification history or testing history.
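The sketch below illustrates the general shape of GA-based metric selection wrapped around a linear discriminant analysis classifier, as named in the abstract; the synthetic data, population size, operators and rates are illustrative assumptions rather than the configuration used in the thesis.

```python
# Sketch of search-based metric (feature) selection with a simple genetic
# algorithm and an LDA classifier. Synthetic data stands in for the real
# metric datasets; all GA settings are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated F1 of LDA restricted to the selected metrics."""
    if not mask.any():
        return 0.0
    scores = cross_val_score(LinearDiscriminantAnalysis(),
                             X[:, mask], y, cv=5, scoring="f1")
    return scores.mean()

pop_size, n_generations, mutation_rate = 20, 15, 0.05
population = rng.random((pop_size, X.shape[1])) < 0.5  # random binary chromosomes

for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    # Tournament selection: keep the better of two randomly drawn individuals.
    parents = []
    for _ in range(pop_size):
        i, j = rng.integers(pop_size, size=2)
        parents.append(population[i] if scores[i] >= scores[j] else population[j])
    parents = np.array(parents)
    # Single-point crossover between consecutive parent pairs.
    children = parents.copy()
    for k in range(0, pop_size - 1, 2):
        point = rng.integers(1, X.shape[1])
        children[k, point:], children[k + 1, point:] = (
            parents[k + 1, point:].copy(), parents[k, point:].copy())
    # Bit-flip mutation.
    flips = rng.random(children.shape) < mutation_rate
    population = np.where(flips, ~children, children)

best = population[np.argmax([fitness(ind) for ind in population])]
print("Selected metric indices:", np.flatnonzero(best))
print("Cross-validated F1:", round(fitness(best), 3))
```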
135

Dependency Injection and Mock on Software and Testing

Veng, Mengkeang January 2014 (has links)
Software testing has been integrated into the software development life cycle because of its importance in assuring software quality, software safety, and customer satisfaction. However, problems in software testing become prominent for developers as systems grow in size and complexity. Dependency injection is an appealing solution because of its practicality for improving software design, improving testability, and enabling the mock testing technique. The study aims to discover the extent to which dependency injection facilitates software design and software testing. In addition, the effect of the mock practice on testing is also assessed. Metrics for the investigation are defined and measured on various aspects of two systems. The two systems are selected and developed from the same user requirements, development technologies, and methodologies. By comparing the two systems against the investigated metrics, we aim to reveal whether dependency injection improves the code design. Four test suites from both systems are then evaluated with respect to testability. The results show that dependency injection does not appear to improve the code design when compared on the selected metrics. Even though it does not score better, its effect is evident in other software aspects. The testability of the two systems is similar and suffers from the same problems. Meanwhile, mocking assists software testing and improves testability. The effect of the mock technique is evident especially when it is applied together with other test techniques. Explanations and discussion of these findings are given in the paper.
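As a minimal sketch of the two techniques under comparison, the following example shows constructor-based dependency injection and a mock object standing in for the injected dependency in a unit test; the class and method names are invented for illustration and are not taken from the studied systems.

```python
# Minimal sketch of constructor-based dependency injection plus a mock in a
# unit test. The classes and names are invented for illustration only.
from unittest import mock


class PaymentGateway:
    """Real dependency; in production this might call an external service."""
    def charge(self, amount: float) -> bool:
        raise NotImplementedError("network call omitted in this sketch")


class OrderService:
    def __init__(self, gateway: PaymentGateway) -> None:
        # The dependency is injected rather than constructed internally,
        # so tests can substitute a mock without touching this class.
        self._gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self._gateway.charge(amount) else "declined"


def test_place_order_uses_injected_gateway() -> None:
    fake_gateway = mock.Mock(spec=PaymentGateway)
    fake_gateway.charge.return_value = True

    service = OrderService(fake_gateway)

    assert service.place_order(19.99) == "confirmed"
    fake_gateway.charge.assert_called_once_with(19.99)


if __name__ == "__main__":
    test_place_order_uses_injected_gateway()
    print("ok")
```

Because the collaborator arrives through the constructor, the test never exercises the real gateway, which is the testability benefit the study measures.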
137

Incorporating User Reviews as Implicit Feedback for Improving Recommender Systems

Heshmat Dehkordi, Yasamin 26 August 2014 (has links)
Recommendation systems have become extremely common in recent years due to the ubiquity of information across various applications. Online entertainment (e.g., Netflix), e-commerce (e.g., Amazon, eBay) and publishing services such as Google News are all examples of services that use recommender systems. Recommendation systems are evolving rapidly, but existing methods have fallen short in coping with several emerging trends such as likes or votes on reviews. In this work we propose a new method based on collaborative filtering that considers other users' feedback on each review. To validate our approach we use a Yelp dataset with more than 335,000 product and service category ratings and 70,817 real users. We present our results using a comparative analysis with other well-known recommendation systems for particular categories of users and items.
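A hedged sketch of the underlying idea, treating votes on a review as weights in a collaborative-filtering-style score, is shown below; the weighting scheme, smoothing constant and toy data are assumptions for illustration, not the method or data used in the thesis.

```python
# Illustrative sketch: use votes on a review as implicit feedback by weighting
# each rating by its vote count. The weighting scheme and data are assumed
# for illustration only.
from collections import defaultdict

# (user, item, star_rating, useful_votes_on_that_review)
reviews = [
    ("u1", "cafe_a", 5, 12),
    ("u2", "cafe_a", 2, 0),
    ("u3", "cafe_a", 4, 3),
    ("u1", "cafe_b", 3, 1),
]

def vote_weighted_item_scores(reviews, smoothing: float = 1.0):
    """Average each item's ratings, weighting reviews by their vote counts."""
    weight_sum = defaultdict(float)
    weighted_rating_sum = defaultdict(float)
    for _user, item, rating, votes in reviews:
        weight = smoothing + votes  # a heavily upvoted review counts more
        weight_sum[item] += weight
        weighted_rating_sum[item] += weight * rating
    return {item: weighted_rating_sum[item] / weight_sum[item]
            for item in weight_sum}

print(vote_weighted_item_scores(reviews))
# cafe_a leans toward the heavily upvoted 5-star review rather than a plain mean.
```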
138

The use of web metrics for online strategic decision-making

Weischedel, Birgit, n/a January 2005 (has links)
"I know but one freedom, and that is the freedom of the mind" Antoine de Saint-Exupery. Web metrics offer significant potential for online businesses to incorporate high-quality, real-time information into their strategic marketing decision-making (SDM) process. This SDM process is affected by the firm�s strategic direction, which is critical for web businesses. A review of the widely researched strategy and SDM literature identified that managers use extensive information to support and improve strategic decisions and make informed decisions. Offline SDM processes might be appropriate for the online environment but the limited literature on web metrics has not researched information needs for online SDM. Even though web metrics can be a valuable tool for web businesses to inform strategic marketing decisions, and their collection might be less expensive and easier than offline measures, virtually no published research has combined web metrics and SDM concepts into one research project. To address this gap in the literature, the thesis investigated the differences and commonalities of online and offline SDM process approaches, the use of web metrics categories for online SDM stages, and the issues encountered during that process through four research questions. A preliminary conceptual model based on the literature review was refined through preliminary research, which addressed the research questions and investigated the current state of web metrics. After investigating various methodologies, a multi-stage qualitative methodology was selected. The use of qualitative methods represents a contribution to knowledge regarding methodological approaches to online research. Four stages within the online SDM process were shown to benefit from the use of web metrics: the setting of priorities, the setting of objectives, the pretest stage and the review stage. The results identified the similarity of online and offline SDM processes; demonstrated that Traffic, Transactions, Customer Feedback and Consumer Behaviour categories provide basic metrics used by most companies; identified the Environment, Technology, Business Results and Campaigns categories as supplementary categories that are applied according to the marketing objectives; and investigated the results based on different types of companies (website classification, channel focus, size and cluster association). Three clusters were identified that relate to the strategic importance of the website and web metrics. Modifying the initial conceptual model, six issues were distinguished that affect the use of web metrics: the adoption and use of web metrics by managers; the integration of multiple sources of metrics; the establishment of industry benchmarks; data quality; the differences to offline measures; as well as resource constraints that interfere with the appropriate web metrics analysis. Links to offline marketing strategy literature and established business concepts were explored and explanations provided where the results confirmed or modified these concepts. Using qualitative methods, the research assisted in building theory of web metrics and online SDM processes. The results show that offline theories apply to the online environment and conventional concepts provide guidance for online processes. Dynamic aspects of strategy relate to the online environment, and qualitative research methods appear suitable for online research. Publications during this research project: Weischedel, B., Matear, S. and Deans, K. R. 
(2003) The Use of E-metrics in Strategic Marketing Decisions - A Preliminary Investigation. Business Excellence �03 - 1st International Conference on Performance Measures, Benchmarking and Best Practices in the New Economy, Guimaraes, Portugal; June 10-13, 2003. Weischedel, B., Deans, K. R. and Matear, S. (2004) Emetrics - An Empirical Study of Marketing Performance Measures for Web Businesses. Performance Measurement Association Conference 2004, Edinburgh, UK; July 28-30, 2004. Weischedel, B., Matear, S. and Deans, K. R. (2005) "A Qualitative Approach to Investigating Online Strategic Decision-Making" Qualitative Market Research, Vol. 8 No 1, pp. 61-76. Weischedel, B., Matear, S. and Deans, K. R. (2005) "The Use of Emetrics in Strategic Marketing Decisions - A Preliminary Investigation" International Journal of Internet Marketing and Advertising, Vol. 2 Nos 1/2, p. 109-125.
139

Metrics of environmental sustainability, social equity and economic efficiency in cities

Doust, Kenneth Harold, Civil & Environmental Engineering, Faculty of Engineering, UNSW January 2008 (has links)
This thesis explores the concept of sustainability in the context of the community expectation for sustainability in cities. Effective sustainability performance requires all three pillars of environmental sustainability (stewardship), social equity and economic efficiency to achieve complementary outcomes rather than simply individual outcomes. For cities, one challenge of sustainability is centred on urban form, transport characteristics and the interactions between these and the communities they support. Better understanding of these dynamics is an important step in a meaningful interpretation of the sustainability performance of cities. Reviews of methodological gaps in the sustainability performance of cities are framed into a problem statement. The gaps include the lack of a holistic assessment framework, of methodologies for better understanding urban dynamics and the drivers that produce sustainability performance, and of objective measures of performance across all three pillars of sustainability. The common transport planning and land-use planning methods are identified as suitable building blocks for improvements in sustainability assessment, and accessibility is established as an important part of sustainability. In a new approach to sustainability analysis, a sustainability framework is formulated. A concept of "environmental sustainability - accessibility space" is introduced as a novel visualisation of sustainability performance. Propositions are formed that a city's sustainability performance can be analytically quantified and simply visualised in terms of the three pillars of sustainability. Sydney, a global city with a history of planning, is the case study used to empirically test the propositions, with the sustainability framework providing the conceptual reference points. With a picture of the urban dynamics developed for the Sydney case study, the proposed sustainability metrics are developed and the propositions tested. Sustainability metrics consisting of three typologies are shown to indicate the sustainability performance characteristics for the three pillars of sustainability in terms of data set shape, frequency and spread in the "environmental sustainability - accessibility space". The visualisations, although built from many thousands of pieces of data, provide a simple representation giving a holistic view of the sustainability characteristics and trends. Collectively, the sustainability framework, sustainability metrics, companion urban dynamics metrics, and urban system measures are demonstrated as a meaningful methodology for assessing city sustainability performance.
140

Studies on the salient properties of digital imagery that impact on human target acquisition and the implications for image measures.

Ewing, Gary John January 1999 (has links)
Electronically displayed images are becoming increasingly important as an interface between people and information systems. Lengthy periods of intense observation are no longer unusual. There is a growing awareness that specific demands should be made on displayed images in order to achieve an optimum match with the perceptual properties of the human visual system. These demands may vary greatly, depending on the task for which the displayed image is to be used and the ambient conditions. Optimal image specifications are clearly not the same for a home TV, a radar signal monitor or an infrared targeting image display. There is, therefore, a growing need for means of objective measurement of image quality, where "image quality" is used in a very broad sense defined in the thesis, and includes any impact of image properties on human performance in relation to specified visual tasks. The aim of this thesis is to consolidate and comment on the image measure literature, and to find through experiment the salient properties of electronically displayed real-world complex imagery that impact on human performance. These experiments were carried out for well-specified visual tasks of real relevance, and the appropriate application of image measures to this imagery, to predict human performance, was considered. An introduction to certain aspects of image quality measures is given, and clutter metrics are integrated into this concept. A very brief and basic introduction to the human visual system (HVS) is given, with some basic models. The literature on image measures is analysed, with a resulting classification of image measures according to which features they attempt to quantify. A series of experiments was performed to evaluate the effects of image properties on human performance, using appropriate measures of performance. The concept of image similarity was explored by objectively measuring the subjective perception of imagery of the same scene, as obtained through different sensors and after different luminance transformations. Controlled degradations were introduced by using image compression. Both still and video compression were used to investigate the spatial and temporal aspects of HVS processing. The effects of various compression schemes on human target acquisition performance were quantified. A study was carried out to determine the "local" extent to which the clutter around a target affects its detectability. It was found in this case that the accepted wisdom of setting the local domain (the support of the metric) to twice the expected target size was incorrect. The local extent of clutter was found to be much greater, which has implications for the application of clutter metrics. An image quality metric called the gradient energy measure (GEM), for quantifying the effect of filtering on nuclear medicine-derived images, was developed and evaluated. This proved to be a reliable measure of image smoothing and noise level, which in preliminary studies agreed with human perception. The final study discussed in this thesis determined the performance of human image analysts, in terms of their receiver operating characteristic, when using Synthetic Aperture Radar (SAR) derived images in the surveillance context. In particular, the effects of target contrast and background clutter on human analyst target detection performance were quantified.
In the final chapter, suggestions to extend the work of this thesis are made, and in this context a system to predict human visual performance, based on input imagery, is proposed. This system intelligently uses image metrics based on the particular visual task and human expectations and human visual system performance parameters. / Thesis (Ph.D.)--Medical School; School of Computer Science, 1999.
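As a rough illustration of a gradient-energy-style measure of the kind named above, the sketch below computes the mean squared gradient magnitude of an image before and after Gaussian smoothing; this formulation and the synthetic image are assumptions for illustration and may differ from the exact GEM defined in the thesis.

```python
# Illustrative gradient-energy measure: mean squared gradient magnitude,
# which drops as an image is smoothed. Assumed formulation for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_energy(image: np.ndarray) -> float:
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

rng = np.random.default_rng(0)
noisy = rng.normal(loc=100.0, scale=25.0, size=(128, 128))   # synthetic noisy image
smoothed = gaussian_filter(noisy, sigma=2.0)                  # low-pass filtered copy

print(f"gradient energy (noisy):    {gradient_energy(noisy):.1f}")
print(f"gradient energy (smoothed): {gradient_energy(smoothed):.1f}")
# Heavier filtering lowers the measure, tracking the loss of high-frequency detail.
```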
