About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Heat Maps: A Method for Evaluating Game Levels

Moregård, Daniel January 2012 (has links)
This report investigated whether game metrics, in the form of heat maps, can be used to find a choke point in a level made for the game Team Fortress 2. Game metrics and quantitative methods offer an objective and almost automated alternative to qualitative methods when it comes to balancing. A level was constructed with a choke point and was playtested to generate a heat map. To examine whether a choke point can be found with the help of a heat map, a survey was conducted in which respondents were asked to locate the choke point using the heat map generated from the playtesting of the level. All respondents succeeded in finding the middle of the choke point with the help of the heat map. In the future, the work could be extended by investigating whether the use of bots could fully automate the balancing process. It would also be possible to examine how different classes move through a level.
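The core of the method above is binning logged player positions into grid cells and counting hits, so that the densest cell stands out as a candidate choke point. A minimal sketch follows; the coordinates, cell size, and position log are invented for illustration, not taken from the thesis or from Team Fortress 2's actual logging:

```python
from collections import Counter

def build_heatmap(positions, cell_size=64):
    """Bucket (x, y) position samples into grid cells and count hits."""
    return Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in positions
    )

def hottest_cell(counts):
    """The most-visited cell, a candidate choke point."""
    return max(counts, key=counts.get)

# Hypothetical position log: players funnel through the cell around x=128-191.
log = [(10, 10), (130, 120), (140, 125), (135, 130), (500, 480)]
heat = build_heatmap(log, cell_size=64)
assert hottest_cell(heat) == (2, 1)
```

In practice the counts would be rendered as a color gradient over the level layout, which is the image the survey respondents inspected.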
122

Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime 01 July 2011 (has links)
Nowadays, digital images are used in many areas of everyday life, but they tend to be big. This increasing amount of information leads to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green, and blue channels take 8 bits each. Such a color pixel can therefore specify one of 2^24 ≈ 16.78 million colors.
Therefore, an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided the losses of image information are not perceived by the eye. It is possible to assume that a portion of this information is redundant. Lossless image compression is defined as mathematically decoding the same image that was encoded. Lossy image compression needs to identify two features inside the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image is to the original is defined prior to the compression process, and it depends on the implementation to be performed. In lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that although the numerical quality of the compressed image is low, it shows a high visual image quality, e.g. it does not show a lot of visible artifacts. This is because the mathematical criteria used to remove information do not take into account whether the removed information is perceived by the Human Visual System. Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, although their numerical quality may be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are unperceivable by the Human Visual System. First, we define an image quality assessment that is highly correlated with psychophysical experiments performed by human observers.
The proposed CwPSNR metric weights the well-known PSNR by using a perceptual low-level model of the Human Visual System, the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET), which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features of modern image compressors, that is, it is an embedded coder, which allows a progressive transmission. Third, we propose a perceptual quantizer (ρSQ), which is a modification of the uniform scalar quantizer. The ρSQ is applied to a pixel set in a certain Wavelet sub-band, that is, a global quantization. In contrast, the proposed modification allows performing a local pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion that depends on the surrounding spatial information of the pixel. Combining the ρSQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ΦSET. Finally, a coding method for Region of Interest areas is presented, ρGBbBShift, which perceptually weights pixels inside these areas and maintains only the most important perceivable features in the rest of the image. Results presented in this report show that CwPSNR is the best-ranked image quality method when applied to the most common image compression distortions such as JPEG and JPEG2000. CwPSNR shows the best correlation with the judgement of human observers, based on the results of psychophysical experiments obtained for relevant image quality databases such as TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both in compression ratio and in perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression.
Hence, when the proposed perceptual quantization is introduced into the Hi-SET coder, our compressor improves its numerical and perceptual efficiency. When the ρGBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method get the best results when the overall image quality is estimated. Both the proposed perceptual quantization and the ρGBbBShift method are generalized algorithms that can be applied to other Wavelet-based image compression algorithms such as JPEG2000, SPIHT or SPECK.
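The CwPSNR metric above weights the standard PSNR with the CIWaM chromatic-induction model, which is beyond a short sketch; the baseline PSNR computation that gets weighted, however, is simple to illustrate. The pixel values below are made up, and flat grayscale sequences stand in for full images:

```python
import math

def psnr(original, distorted, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized pixel sequences."""
    if len(original) != len(distorted):
        raise ValueError("images must have the same number of pixels")
    mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no noise
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 59]
deg = [54, 55, 60, 59]  # a lightly distorted copy
value = psnr(ref, deg)
assert 40 < value < 50  # small pixel errors give a high PSNR
```

A perceptually weighted variant like CwPSNR would scale the per-pixel errors by visibility before averaging, so that errors the eye cannot see contribute less to the score.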
123

Bank Credit Risk Measurement: An Application and Empirical Study of the Markov Model

Yang, Tsung-Hsien 27 July 2004 (has links)
none
124

Software Engineering Process Improvement

Sezer, Bulent 01 April 2007 (has links) (PDF)
This thesis presents a software engineering process improvement study. The literature on software process improvement is reviewed. Then the current design verification process at one of the Software Engineering Departments (SED) of the X Company, Ankara, Türkiye is analyzed. Static software development process metrics have been calculated for the SED based on a recently proposed approach. Some improvement suggestions have been made based on the metric values calculated according to the proposals of that study. In addition, the author's improvement suggestions have been discussed with the senior staff at the department, and the final version of the improvements has been gathered. Then, a discussion has been made comparing these two approaches. Finally, a new software design verification process model has been proposed. Some of the suggestions have already been applied and preliminary results have been obtained.
125

Dental analysis of Classic period population variability in the Maya area

Scherer, Andrew Kenneth 17 February 2005 (has links)
In this dissertation I examine population history and structure in the Maya area during the Classic period (A.D. 250-900). Within the Maya area, archaeologists have identified regional variation in material culture between archaeological zones. These cultural differences may correspond to biological differences between Classic Maya populations. I test the hypothesis that Classic Maya population structure followed an isolation by distance model. I collected dental nonmetric and metric traits on 977 skeletons, from 18 Classic period sites, representing seven different archaeological zones. I corrected the data for intraobserver error. For the dental nonmetric data, I developed a Maya-specific trait dichotomization scheme and controlled for sex bias. I tested the dental metric data for normality and age effects. I imputed missing dental metric data for some traits, and the remaining set of traits was Q-mode transformed to control for allometric factors. I analyzed the dental nonmetric and metric datasets with both univariate and multivariate tests. I found, with a log likelihood ratio, that 50% of the nonmetric traits exhibited statistically significant differences between Maya sites. I performed a Mean Measure of Divergence analysis of the dental nonmetric dataset and found that the majority of the resulting pairwise distance values were significant. Using cluster analysis and multidimensional scaling, I found that the dental nonmetric data do not support an isolation by distance organization of Classic Maya population structure. In the ANOVA and MANOVA tests, I did not find major statistically significant differences in dental metrics between Maya sites. Using principal components analysis, a Mahalanobis Distance test, and R matrix analysis, I found a generally similar patterning of the dental metric data. The dental metric data do not support an isolation by distance model for Classic Maya population structure.
However, the geographically outlying sites from Kaminaljuyu and the Pacific Coast repeatedly plotted as biological outliers. R matrix analysis indicates that gene flow, not genetic drift, dominated Classic Maya population structure. Based on the results of the dental nonmetric and metric analyses, I reject the hypothesis that isolation by distance is a valid model for Classic Maya population structure. From the multivariate analyses of the dental nonmetric and metric data, a few notable observations are made. The major sites of Tikal and Calakmul both demonstrate substantial intrasite biological heterogeneity, with some affinity to other sites but with little to one another. Piedras Negras demonstrates some evidence for genetic isolation from the other lowland Maya sites. In the Pasión Zone, Seibal and Altar de Sacrificios demonstrate some affinity to one another, though Dos Pilas is an outlier. The R matrix analysis found evidence of Classic period immigration into Seibal from outside the network of sites tested. The Belize Zone exhibited substantial heterogeneity among its sites, with the site of Colha showing some affinity to the Central Zone. Copan, despite being a geographic outlier, demonstrates genetic affinity with the rest of the Maya area. Kaminaljuyu and the Pacific Coast were both found to be outliers. These results indicate that dental nonmetric and metric data are a useful tool for investigating ancient biological variability in the Maya area and contribute to our expanding understanding of population history in that region.
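The Mean Measure of Divergence used in the nonmetric analysis above can be sketched in one common formulation (Sjøvold's variant with the Anscombe angular transformation and a small-sample correction term); other variants exist, and the trait counts below are hypothetical, not the dissertation's data:

```python
import math

def anscombe_theta(k, n):
    """Anscombe angular transform of a trait frequency k/n (radians)."""
    return math.asin(1.0 - 2.0 * (k + 3.0 / 8.0) / (n + 3.0 / 4.0))

def mmd(sample_a, sample_b):
    """Mean Measure of Divergence over paired (k, n) trait counts.

    k = individuals expressing the dichotomized trait, n = individuals scored.
    """
    terms = []
    for (ka, na), (kb, nb) in zip(sample_a, sample_b):
        theta_diff = anscombe_theta(ka, na) - anscombe_theta(kb, nb)
        correction = 1.0 / (na + 0.5) + 1.0 / (nb + 0.5)  # sample-size penalty
        terms.append(theta_diff ** 2 - correction)
    return sum(terms) / len(terms)

# Two hypothetical sites scored for two traits.
site_a = [(12, 40), (5, 38)]
site_b = [(30, 45), (20, 41)]
assert mmd(site_a, site_b) > 0  # divergent frequencies give a positive distance
```

Pairwise MMD values like this one are what feed the cluster analysis and multidimensional scaling mentioned in the abstract.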
126

VizzAnalyzer goes Eclipse!

Ruiz de Azua Nieto, David January 2007 (has links)
The VizzAnalyzer Framework is a stand-alone tool for analyzing and visualizing the structures of large software systems. Today, it has its own limited Swing-based GUI lacking a professional look & feel. Furthermore, the effort needed to extend the VizzAnalyzer with new features like automatic update, progress monitoring, a help system, and integration of the Eclipse Java and C/C++ AST APIs is high.

In order to overcome these limitations and ease the future maintenance effort, we refactored the VizzAnalyzer to be a plug-in to the Eclipse platform. We removed the burden of GUI development from the authors of the VizzAnalyzer by replacing the Swing GUI with an SWT-based GUI, which utilizes the rich feature set provided by the Eclipse platform. Furthermore, we did not only preserve existing features of the VizzAnalyzer, such as loading and binding graphs and a system for dynamically loading plug-in functionality for analysis, retrieval, and visualization; we also implemented an update and help manager, allowed for easy use of third-party plug-ins available for Eclipse, and provided product branding.

We propose that the newly created VizzAnalyzer 2.0 solves the aforementioned limitations and provides a good foundation for the future evolution of the VizzAnalyzer tool.

This master thesis documents how VizzAnalyzer 2.0 has been developed and implemented for the Eclipse platform, and how developers shall use the new version.
127

The metrics of spacecraft design reusability and cost analysis as applied to CubeSats

Brumbaugh, Katharine Mary 07 June 2012 (has links)
The University of Texas at Austin (UT-Austin) Satellite Design Lab (SDL) is currently designing two 3U CubeSat spacecraft – Bevo-2 and ARMADILLO – which serve as the foundation for the design reusability and cost analysis of this thesis. The thesis explores the reasons why a small satellite would want to incorporate a reusable design and the processes needed in order for this reusable design to be implemented for future projects. Design and process reusability reduces the total cost of the spacecraft, as future projects need only alter the components or documents necessary in order to create a new mission. The thesis also details a grassroots approach to determining the total cost of a 3U CubeSat satellite development project and highlights the costs which may be considered non-recurring and recurring in order to show the financial benefit of reusability. The thesis then compares these results to typical models used for cost analysis in industry applications. The cost analysis determines that there is a crucial gap in the cost estimating of nanosatellites which may be seen by comparing two widely-used cost models, the Small Satellite Cost Model (SSCM <100 kg) and the NASA/Air Force Cost Model (NAFCOM), as they apply to a 3U CubeSat project. While each of these models provides a basic understanding of the elements which go into cost estimating, the Cost Estimating Relationships (CERs) do not have enough historical data of picosatellites and nanosatellites (<50 kg) to accurately reflect mission costs. Thus, the thesis documents a discrepancy between widely used industry spacecraft cost models and the needs of the picosatellite and nanosatellite community, specifically universities, to accurately predict their mission costs. It is recommended to develop a nanosatellite/CubeSat cost model with which university and industry developers alike can determine their mission costs during the designing, building and operational stages. 
Because cost models require the use of many missions to form a database, it is important to start this process now, at the beginning of the nanosatellite/CubeSat boom.
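Cost models such as SSCM and NAFCOM are built from Cost Estimating Relationships, typically power laws fitted to historical mission data, which is why the abstract's point about missing sub-50 kg data matters: the fit is only as good as the database behind it. A minimal sketch of fitting and applying one mass-based CER, with entirely fabricated mission data, might look like:

```python
import math

def fit_power_law_cer(masses_kg, costs):
    """Fit cost = a * mass**b by least squares in log-log space."""
    xs = [math.log(m) for m in masses_kg]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = math.exp(my - b * mx)
    return a, b

# Fabricated mission data, purely to show the mechanics of a CER fit.
masses = [4.0, 50.0, 120.0]      # kg
costs = [0.2e6, 1.5e6, 3.0e6]    # dollars
a, b = fit_power_law_cer(masses, costs)
estimate = a * 4.0 ** b          # predicted cost of a ~4 kg 3U CubeSat
assert 0.1e6 < estimate < 0.4e6
```

With only picosatellite and nanosatellite missions in the database, the fitted exponent and coefficient would reflect CubeSat-class costs instead of extrapolating from much larger spacecraft, which is the gap the thesis identifies.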
128

A project plan for improving the performance measurement process : a usability case study

Vasquez, Roberto Mario 21 February 2011 (has links)
Many good software practices are often discarded because of the syndrome "there is not enough time, do it later", or "it is in our head and there is no time to write it down." As a consequence, projects are late, time frames to complete software modules are unrealistic and miscalculated, and traceability to required documents and their respective stakeholders does not exist. It is not until the release of the application that it is determined that the functionalities do not meet the expectations of the end users and stakeholders. The effect of this can be detrimental to the individuals of the development team and the organization. Associating measurement and metrics with internal software processes and tasks, followed by analysis and continual evaluation, are key elements to close many of the repeated gaps in the life cycle of software engineering, regardless of the software methodology. This report presents a usability case study of a customized application during its development. The application contains internal indicator modules for performance measurement processes captured at the level of a Request System application within a horizontal organizational group. The main goals for the usability surveys and case study were (1) to identify, define, and evaluate the current gaps in the system and (2) to find new approaches and strategies with the intent to move the project in the right direction. Gaps identified throughout the development process are included as indicators for process improvement. The result of the usability case study creates new goals and gives clear direction to the project. Goal-driven measurements and the creation of a new centralized collaborative web system for communication with other teams are parts of the solution. The processes and techniques may provide benefits to companies interested in applying similar tactics to improve their own software project processes.
129

Capturing Evolving Visit Behavior in Clickstream Data

Moe, Wendy W., Fader, Peter S. 01 1900 (has links)
Many online retailers monitor visitor traffic as a measure of their stores' success. However, summary measures such as the total number of visits per month provide little insight about individual-level shopping behavior. Additionally, behavior may evolve over time, especially in a changing environment like the Internet. Understanding the nature of this evolution provides valuable knowledge that can influence how a retail store is managed and marketed. This paper develops an individual-level model for store visiting behavior based on Internet clickstream data. We capture cross-sectional variation in store-visit behavior as well as changes over time as visitors gain experience with the store. That is, as someone makes more visits to a site, her latent rate of visit may increase, decrease, or remain unchanged as in the case of static, mature markets. So as the composition of the customer population changes (e.g., as customers mature or as large numbers of new and inexperienced Internet shoppers enter the market), the overall degree of visitor heterogeneity that each store faces may shift. We also examine the relationship between visiting frequency and purchasing propensity. Previous studies suggest that customers who shop frequently may be more likely to make a purchase on any given shopping occasion. As a result, frequent shoppers often comprise the preferred target segment. We find evidence that people who visit a store more frequently are more likely to buy. However, we also show that changes (i.e., evolution) in an individual's visit frequency over time provide further information regarding which customer segments are more likely to buy. Rather than simply targeting all frequent shoppers, our results suggest that a more refined segmentation approach that incorporates how much an individual's behavior is changing could more efficiently identify a profitable target segment.
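The modeling idea above (cross-sectional heterogeneity in a latent visit rate, plus evolution of that rate with experience) can be caricatured as a gamma mixture of Poisson visit counts whose rate drifts each period. This is only a simulation sketch, not the paper's actual model or estimates, and all parameters are arbitrary:

```python
import math
import random

def simulate_visits(n_visitors, shape, scale, periods, evolve=1.0, seed=7):
    """Total visits per visitor over several periods.

    Each visitor draws a latent Gamma-distributed visit rate; the rate is
    multiplied by `evolve` after every period to mimic deepening
    (evolve > 1), decaying (evolve < 1), or static (evolve == 1) behavior.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_visitors):
        lam = rng.gammavariate(shape, scale)  # cross-sectional heterogeneity
        visits = 0
        for _ in range(periods):
            # Poisson draw via Knuth's multiplication method.
            threshold, k, p = math.exp(-lam), 0, 1.0
            while True:
                p *= rng.random()
                if p <= threshold:
                    break
                k += 1
            visits += k
            lam *= evolve  # the latent rate evolves with experience
        totals.append(visits)
    return totals

growing = simulate_visits(500, shape=2.0, scale=0.5, periods=6, evolve=1.3)
static = simulate_visits(500, shape=2.0, scale=0.5, periods=6, evolve=1.0)
assert sum(growing) > sum(static)  # evolving rates shift aggregate traffic
```

Even this toy version shows why summary traffic counts mislead: two populations with the same initial rates produce very different totals once individual rates evolve.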
130

Discriminating Meta-Search: A Framework for Evaluation

Chignell, Mark, Gwizdka, Jacek, Bodner, Richard January 1999 (has links)
DOI: 10.1016/S0306-4573(98)00065-X / There was a proliferation of electronic information sources and search engines in the 1990s. Many of these information sources became available through the ubiquitous interface of the Web browser. Diverse information sources became accessible to information professionals and casual end users alike. Much of the information was also hyperlinked, so that information could be explored by browsing as well as searching. While vast amounts of information were now just a few keystrokes and mouse clicks away, as the choices multiplied, so did the complexity of choosing where and how to look for the electronic information. Much of the complexity in information exploration at the turn of the twenty-first century arose because there was no common cataloguing and control system across the various electronic information sources. In addition, the many search engines available differed widely in terms of their domain coverage, query methods, and efficiency. Meta-search engines were developed to improve search performance by querying multiple search engines at once. In principle, meta-search engines could greatly simplify the search for electronic information by selecting a subset of first-level search engines and digital libraries to submit a query to, based on the characteristics of the user, the query/topic, and the search strategy. This selection would be guided by diagnostic knowledge about which of the first-level search engines works best under what circumstances. Programmatic research is required to develop this diagnostic knowledge about first-level search engine performance. This paper introduces an evaluative framework for this type of research and illustrates its use in two experiments. The experimental results obtained are used to characterize some properties of leading search engines (as of 1998). Significant interactions were observed between search engine and two other factors (time of day, and Web domain).
These findings supplement those of earlier studies, providing preliminary information about the complex relationship between search engine functionality and performance in different contexts. While the specific results obtained represent a time-dependent snapshot of search engine performance in 1998, the evaluative framework proposed should be generally applicable in the future.
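The engine-selection idea described above (route a query to the subset of first-level engines that the diagnostic knowledge says perform best for queries like it) can be sketched as a lookup over a performance table. The engine names, topic categories, and scores below are all hypothetical stand-ins for the kind of diagnostic knowledge the framework is meant to produce:

```python
# Hypothetical diagnostic table: mean relevance by (engine, topic category).
PERFORMANCE = {
    ("engine_a", "medical"): 0.72, ("engine_a", "news"): 0.41,
    ("engine_b", "medical"): 0.38, ("engine_b", "news"): 0.77,
    ("engine_c", "medical"): 0.65, ("engine_c", "news"): 0.60,
}

def select_engines(category, top_k=2):
    """Rank first-level engines by historical performance for this category."""
    scores = {
        engine: score
        for (engine, cat), score in PERFORMANCE.items()
        if cat == category
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

assert select_engines("medical") == ["engine_a", "engine_c"]
```

A real discriminating meta-search engine would condition on more than topic (the paper also found time-of-day and Web-domain effects), but the selection step reduces to this kind of ranked lookup.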
