41

Predicting basketball performance based on draft pick : A classification analysis

Harmén, Fredrik January 2022 (has links)
In this thesis, we predict how a basketball player entering the NBA will perform based on where the player was selected in the NBA draft. We do this by testing different machine learning models on data from the previous 35 NBA drafts and comparing them to see which classifies players most accurately. The machine learning methods used are Linear Discriminant Analysis, K-Nearest Neighbors, Support Vector Machines, and Random Forests. The results show that Random Forests achieved the highest classification accuracy, at 42%.
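As an illustration of the comparison described in this abstract, the following is a minimal sketch (not the thesis code) of how the four named classifiers could be compared with cross-validation in scikit-learn; the feature matrix, labels, and hyperparameters are placeholders.

```python
# Hypothetical comparison of the four classifiers named in the abstract.
# X would hold per-player features (e.g. draft pick number and college stats),
# y a performance-tier label; random data is used here purely as a stand-in.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))        # placeholder features
y = rng.integers(0, 3, size=600)     # placeholder performance classes

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```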
42

IMPROVING THE UTILIZATION AND PERFORMANCE OF SPECIALIZED GPU CORES

Aaron M Barnes (20767127) 26 February 2025 (has links)
Specialized hardware accelerators are becoming increasingly common to provide application performance gain despite the slowing trend of transistor scaling. Accelerators can adapt to the compute and data dependency patterns of an application to fully exploit the parallelism of the application and reduce data movement. However, specialized hardware is often limited by the application it was tailored to, which can lead to idle or inactive silicon in computations that do not match the exact patterns it was designed for. In this work I study two cases of GPU specialization and techniques that can be used to improve performance in a broader domain of applications.

First, I examine the effects of GPU core partitioning, a trend in contemporary GPUs to sub-divide core components to reduce area and energy overheads. Core partitioning is essentially a specialization of the hardware towards balanced applications, wherein the intra-core connectivity provides minimal benefit but takes up valuable on-chip area. I identify four orthogonal performance effects of GPU core sub-division, two of which have significant impact in practice: a bottleneck in the read operand stage caused by the reduced number of collector units and register banks allocated to each sub-core, and an instruction issue imbalance across sub-core schedulers caused by a simple round robin assignment of threads to sub-cores. To alleviate these issues I propose a Register Bank Aware (RBA) warp scheduler, which uses feedback from current register bank contention to inform thread scheduling decisions, and a hashed sub-core work scheduler to prevent pathological issue imbalances caused by round robin scheduling. I rigorously evaluate these designs in simulation and show they are able to capture 81% of the performance lost due to core subdivision. Further, I evaluate my techniques using synthesis tools and find that RBA is able to achieve performance equivalent to doubling the number of operand Collector Units (CUs) per sub-core with only a 1% increase in area and power.

Second, I study the inclusion of specialized ray tracing accelerator cores on GPUs. Specialized ray-tracing acceleration units have become a common feature in GPU hardware, enabling real-time ray-tracing of complex scenes for the first time. The ray-tracing unit accelerates the traversal of a hierarchical tree data structure called a bounding volume hierarchy to determine whether rays have intersected triangle primitives. Hierarchical search algorithms are a fundamental software pattern common in many important domains, such as recommendation systems and point cloud registration, but are difficult for GPUs to accelerate because they are characterized by extensive branching and recursion. The ray-tracing unit overcomes these limitations with specialized hardware to traverse hierarchical data structures efficiently, but is mired by a highly specialized graphics API, which is not readily adaptable to general-purpose computation. In this work I present the Hierarchical Search Unit (HSU), a flexible datapath to accelerate a more general class of hierarchical search algorithms, of which ray-tracing is one. I synthesize a baseline ray-intersection datapath and maximize functional unit reuse while extending the ray-tracing unit to support additional computations and a more general set of instructions. I demonstrate that the unit can improve the performance of three hierarchical search data structures in approximate nearest neighbors search algorithms and a B-tree key-value store index. For a minimal extension to the existing unit, HSU improves the state-of-the-art GPU approximate nearest neighbor implementation by an average of 24.8% using the GPU's general computing interface.
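To illustrate the issue-imbalance problem and the hashed sub-core work assignment idea mentioned in this abstract, here is a small, purely illustrative sketch rather than the actual hardware design; the sub-core count, warp-id pattern, and hash function are assumptions.

```python
# Illustration only: a simple round-robin warp-to-sub-core assignment can pile
# work onto one sub-core when warp ids follow a regular stride, while a hashed
# assignment (as the abstract proposes) spreads the same warps across sub-cores.
from collections import Counter

NUM_SUB_CORES = 4

def round_robin(warp_ids):
    return [w % NUM_SUB_CORES for w in warp_ids]

def hashed(warp_ids):
    # simple multiplicative hash; the real hardware hash is not specified here
    return [(w * 2654435761 >> 16) % NUM_SUB_CORES for w in warp_ids]

# A pathological case: only warps whose ids are multiples of 4 are active.
warps = list(range(0, 128, 4))
print("round robin:", Counter(round_robin(warps)))  # everything on sub-core 0
print("hashed     :", Counter(hashed(warps)))       # spread across sub-cores
```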
43

台灣地震散群之研究 / A Study of Earthquake Declustering in Taiwan

吳東陽 Unknown Date (has links)
The Chi-Chi (921) earthquake caused the greatest casualties in Taiwan in decades. According to the Central Weather Bureau, most of the earthquakes that occurred within six months to a year after the Chi-Chi earthquake were aftershocks triggered by it. How, then, should one judge whether an earthquake is a main shock or an aftershock of another event? This thesis distinguishes main shocks from aftershocks from the viewpoint of statistical data analysis rather than seismological theory, and it compares four declustering methods: Global Distance, Negative Correlation, Nearest Neighbors, and Window. To choose the temporal and spatial parameters that each method requires, we use Taiwan earthquakes of magnitude greater than 5.0 from January 1, 1991 to December 31, 2003 and define a decreasing earthquake percent criterion to select the most appropriate model parameters. With the selected parameters, computer-simulated earthquakes are used to compare the methods according to misclassified main shocks (false positives), misclassified aftershocks (false negatives), and overall error rate; the study finds that each of the four methods has its own strengths and weaknesses. Keywords: main shock, aftershock, spatial statistics, nearest neighbors, computer simulation / The Chi-Chi earthquake resulted in one of the greatest casualties of the past 100 years in Taiwan. According to the Central Weather Bureau in Taiwan, most of the earthquakes that occurred 6 to 12 months after the Chi-Chi earthquake were its aftershocks. But in general, how do we classify whether a certain earthquake is a main earthquake or an aftershock? In this study, our interest is in statistical methods for detecting whether an earthquake is a main earthquake. Four declustering methods are considered: Global Distance, Negative Correlation, Nearest Neighbors, and Window. Taiwan earthquake data, with magnitude larger than 5 occurring between 1991 and 2003, were used to determine the parameters used in these four methods. Finally, a computer simulation is used to evaluate the performance of the four methods, based on criteria such as false positive rate, false negative rate, and overall error rate. Key Words: Decluster, Aftershock, Spatial Statistics, Nearest Neighbors, Simulation
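As a rough illustration of one of the four declustering approaches compared above (the Window method), the sketch below tags an event as an aftershock if it falls inside a space-time window of a larger, earlier event; the window sizes and catalog are placeholders, not the parameters fitted in the thesis.

```python
# Minimal space-time window declustering sketch with assumed window sizes.
from dataclasses import dataclass

@dataclass
class Quake:
    t_days: float      # occurrence time in days
    x_km: float        # epicentre easting
    y_km: float        # epicentre northing
    magnitude: float

def window_decluster(quakes, space_km=50.0, time_days=180.0):
    """Label each event 'main' or 'aftershock' using a simple window rule."""
    labels = ["main"] * len(quakes)
    for i, q in enumerate(quakes):
        for j, p in enumerate(quakes):
            if j == i or p.magnitude <= q.magnitude:
                continue
            dist = ((q.x_km - p.x_km) ** 2 + (q.y_km - p.y_km) ** 2) ** 0.5
            # q is an aftershock if a larger event p precedes it within the window
            if 0 <= q.t_days - p.t_days <= time_days and dist <= space_km:
                labels[i] = "aftershock"
                break
    return labels

catalog = [Quake(0, 0, 0, 7.6), Quake(30, 10, 5, 5.2), Quake(400, 200, 150, 5.5)]
print(window_decluster(catalog))   # -> ['main', 'aftershock', 'main']
```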
44

Geopolitický význam Turecka: perspektivy zahraniční politiky / Geopolitical role of Turkey: foreign policy perspectives

Avdic, Adisa January 2011 (has links)
In the last decade, there has been a significant shift in Turkish foreign policy. The AKP government promotes a new approach to foreign policy (strategic depth) that aims to use Turkey's potential and expand its sphere of influence. This paper examines the AKP's ideology, Ahmet Davutoglu's concept of strategic depth, and Turkey's relations with its neighboring countries (Iraq, Syria, Iran, Armenia), its western allies (the USA and Israel), and the Caspian region.
45

Aplicação de classificadores para determinação de conformidade de biodiesel / Attesting compliance of biodiesel quality using classification methods

LOPES, Marcus Vinicius de Sousa 26 July 2017 (has links)
The growing demand for energy and the limitations of oil reserves have led to the search for renewable and sustainable energy sources to replace, at least partially, fossil fuels. In recent decades, biodiesel has become the main alternative to petroleum diesel. Its quality is evaluated against parameters and specifications that vary by country or region, for example Europe (EN 14214), the US (ASTM D6751), and Brazil (RANP 45/2014). Some of these parameters, such as viscosity, density, oxidative stability, and iodine value, are intrinsically related to the composition of fatty acid methyl esters (FAMEs) of the biodiesel, which makes it possible to relate the behavior of these properties to carbon chain length and the presence of unsaturation in the molecules. In the present work, four direct classification methods (support vector machine, K-nearest neighbors, decision tree classifier, and artificial neural networks) were optimized and compared for classifying biodiesel samples according to their compliance with viscosity, density, oxidative stability, and iodine value limits, taking the composition of fatty acid methyl esters as input. The classifications were carried out under the specifications of standards EN 14214, ASTM D6751, and RANP 45/2014. A comparison between these direct classification methods and empirical equations (indirect classification) favored the direct classifiers on this problem, especially when the samples' property values lie very close to the specification limits. / The growing demand for renewable energy sources as alternatives to fossil fuels makes biodiesel one of the main candidates for replacing petroleum derivatives. Quality control of biodiesel during production and distribution is extremely important to guarantee a fuel of reliable quality and satisfactory performance for the end user. Biodiesel is characterized by measuring certain properties according to international standards. Using machine learning methods to characterize biodiesel saves time and money. This work shows that, for determining biodiesel compliance, the SVM, KNN, and decision tree classifiers give better results than the prediction methods of previous works. For viscosity, density, iodine value, and oxidative stability (RANP 45/2014, EN 14214:2014, and ASTM D6751-15), the KNN and decision tree classifiers proved to be the best options. These results show that classifiers can be applied in practice to save time as well as financial and human resources.
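The following sketch illustrates, under assumed data and limits, the contrast drawn above between indirect classification (predict a property, then check it against the specification) and direct classification (predict the compliance label itself); the synthetic composition data, the viscosity model, and the 3.0 to 6.0 mm²/s limits are illustrative assumptions, not the thesis setup.

```python
# Direct vs. indirect compliance classification on synthetic FAME data.
# Evaluation here is in-sample for brevity; the thesis uses proper validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(8), size=300)                   # placeholder FAME fractions
visc = 3.5 + 2.5 * X[:, 0] + rng.normal(0, 0.2, 300)      # synthetic viscosity (mm^2/s)
compliant = ((visc >= 3.0) & (visc <= 6.0)).astype(int)   # assumed spec limits

# Indirect: regress the property, then threshold against the spec limits.
visc_hat = RandomForestRegressor(random_state=0).fit(X, visc).predict(X)
indirect = ((visc_hat >= 3.0) & (visc_hat <= 6.0)).astype(int)

# Direct: classify the compliance label straight from composition.
direct = DecisionTreeClassifier(random_state=0).fit(X, compliant).predict(X)

print("indirect agreement:", (indirect == compliant).mean())
print("direct agreement  :", (direct == compliant).mean())
```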
46

Deteção de extra-sístoles ventriculares / Detection of ventricular extrasystoles

Silva, Aurélio Filipe de Sousa e January 2012 (has links)
Integrated master's thesis. Bioengineering, Biomedical Engineering specialization. Faculdade de Engenharia, Universidade do Porto, 2012.
47

Learning prototype-based classification rules in a boosting framework: application to real-world and medical image categorization

Piro, Paolo 18 January 2010 (has links) (PDF)
French-language abstract not available.
48

Application, Comparison, and Improvement of Known Received Signal Strength Indication (RSSI) Based Indoor Localization and Tracking Methods Using Active RFID Devices

Ozkaya, Bora 01 February 2011 (has links) (PDF)
Localization and tracking of objects or people in real time in indoor environments has gained great importance. In the literature and on the market, many different location estimation and tracking solutions using received signal strength indication (RSSI) have been proposed, but there is a lack of information comparing these techniques and revealing their strengths and weaknesses relative to each other. There is a need to answer the question "which localization/tracking method is most suitable to my system needs?". So, one purpose of this thesis is to seek the answer to this question. Hence, we investigated the behavior of commonly proposed localization methods, mainly nearest-neighbors-based methods, grid-based Bayesian filtering, and particle filtering methods, by both simulation and experimental work on the same test bed. The other purpose of this thesis is to propose an improved method that is simple to install, cost effective, and moderately accurate for real-life applications. Our proposed method uses an improved type of sampling importance resampling (SIR) filter incorporating automatic calibration of the propagation model parameters of the log-distance path loss model and the RSSI measurement noise by using reference tags. The proposed method also uses an RSSI smoothing algorithm exploiting the RSSI readings from the reference tags. We used an active RFID system composed of 3 readers, 1 target tag, and 4 reference tags in a home environment of two rooms with a total area of 36 m². The proposed method yielded 1.25 m RMS estimation error for tracking a mobile target.
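A minimal sketch of the log-distance path loss model that this abstract calibrates from reference tags is given below; the parameter values are assumptions for illustration, and the SIR particle filter itself is not reproduced.

```python
# Log-distance path loss model: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0).
# Inverting it gives a range estimate per reader; calibrating n from a reference
# tag at a known distance mirrors the automatic-calibration idea in the abstract.
import math

def rssi_to_distance(rssi_dbm, rssi_at_d0=-45.0, d0_m=1.0, path_loss_exp=2.5):
    """Estimate distance (m) from a measured RSSI via the log-distance model."""
    return d0_m * 10 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * path_loss_exp))

def calibrate_exponent(rssi_dbm, true_distance_m, rssi_at_d0=-45.0, d0_m=1.0):
    """Solve the model for the exponent n using one reference tag at a known distance."""
    return (rssi_at_d0 - rssi_dbm) / (10.0 * math.log10(true_distance_m / d0_m))

n = calibrate_exponent(rssi_dbm=-62.0, true_distance_m=4.0)   # assumed reference reading
print("calibrated path-loss exponent:", round(n, 2))
print("estimated range at -70 dBm:", round(rssi_to_distance(-70.0, path_loss_exp=n), 2), "m")
```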
49

Simple, Faster Kinetic Data Structures

Rahmati, Zahed 28 August 2014 (has links)
Proximity problems and point set embeddability problems are fundamental and well-studied in computational geometry and graph drawing. Examples of such problems that are of particular interest to us in this dissertation include: finding the closest pair among a set P of points, finding the k-nearest neighbors to each point p in P, answering reverse k-nearest neighbor queries, computing the Yao graph, the Semi-Yao graph and the Euclidean minimum spanning tree of P, and mapping the vertices of a planar graph to a set P of points without inducing edge crossings. In this dissertation, we consider so-called kinetic versions of these problems, that is, the points are allowed to move continuously along known trajectories, which are subject to change. We design a set of data structures and a mechanism to efficiently update the data structures. These updates occur at critical, discrete times. Also, a query may arrive at any time. We want to answer queries quickly without solving problems from scratch, so we maintain solutions continuously. We present new techniques that give kinetic solutions with better performance for some of these problems, and we provide the first kinetic results for others. In particular, we provide:
• A simple kinetic data structure (KDS) to maintain all the nearest neighbors and the closest pair. Our deterministic kinetic approach for maintenance of all the nearest neighbors improves the previous randomized kinetic algorithm.
• An exact KDS for maintenance of the Euclidean minimum spanning tree, which improves the previous KDS.
• The first KDSs for maintenance of the Yao graph and the Semi-Yao graph.
• The first KDS to consider maintaining plane graphs on moving points.
• The first KDS for maintenance of all the k-nearest neighbors, for any k ≥ 1.
• The first KDS to answer reverse k-nearest neighbor queries, for any k ≥ 1 in any fixed dimension, on a set of moving points.
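To illustrate the certificate-and-event mechanism common to kinetic data structures (solutions maintained continuously, with updates only at discrete critical times), here is a sketch of the textbook kinetic sorted-list example; it is a generic illustration, not a structure from this dissertation.

```python
# Kinetic sorted list: points move on a line as x(t) = x0 + v*t. Each adjacent
# pair carries a certificate ("left point stays left") with a precomputed
# failure time; only when a certificate fails does the structure update.
import heapq

def fail_time(p, q, now):
    """Earliest t > now at which moving point p = (x0, v) overtakes q, else None."""
    (xp, vp), (xq, vq) = p, q
    if vp <= vq:
        return None                              # gap between p and q never closes
    t = (xq - xp) / (vp - vq)
    return t if t > now else None

def schedule(events, pts, order, i, now):
    """Push the certificate for the adjacent pair (order[i], order[i+1])."""
    if 0 <= i < len(order) - 1:
        a, b = order[i], order[i + 1]
        t = fail_time(pts[a], pts[b], now)
        if t is not None:
            heapq.heappush(events, (t, a, b))

def kinetic_sorted_order(pts, t_end):
    """Maintain the left-to-right order of the moving points up to time t_end."""
    order = sorted(range(len(pts)), key=lambda i: pts[i][0])
    events, now = [], 0.0
    for i in range(len(order) - 1):
        schedule(events, pts, order, i, now)
    while events and events[0][0] <= t_end:
        now, a, b = heapq.heappop(events)
        i = order.index(a)
        if i + 1 >= len(order) or order[i + 1] != b:
            continue                             # stale certificate, ignore
        order[i], order[i + 1] = b, a            # the pair swaps at time `now`
        schedule(events, pts, order, i - 1, now) # only neighbouring certificates change
        schedule(events, pts, order, i + 1, now)
    return order

pts = [(0.0, 2.0), (5.0, 0.0), (10.0, -1.0)]     # (initial position, velocity)
print(kinetic_sorted_order(pts, t_end=10.0))     # -> [2, 1, 0]
```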
50

Construction of the Intensity-Duration-Frequency (IDF) Curves under Climate Change

December 2014 (has links)
Intensity-Duration-Frequency (IDF) curves are among the standard design tools for various engineering applications, such as storm water management systems. The current practice is to use IDF curves based on historical extreme precipitation quantiles. A warming climate, however, might change the extreme precipitation quantiles represented by the IDF curves, emphasizing the need for updating the IDF curves used for the design of urban storm water management systems in different parts of the world, including Canada. This study attempts to construct the future IDF curves for Saskatoon, Canada, under possible climate change scenarios. For this purpose, LARS-WG, a stochastic weather generator, is used to spatially downscale the daily precipitation projected by Global Climate Models (GCMs) from coarse grid resolution to the local point scale. The stochastically downscaled daily precipitation realizations were further disaggregated into ensemble hourly and sub-hourly (as fine as 5-minute) precipitation series, using a disaggregation scheme developed using the K-nearest neighbor (K-NN) technique. This two-stage modeling framework (downscaling to daily, then disaggregating to finer resolutions) is applied to construct the future IDF curves in the city of Saskatoon. The sensitivity of the K-NN disaggregation model to the number of nearest neighbors (i.e. window size) is evaluated during the baseline period (1961-1990). The optimal window size is assigned based on the performance in reproducing the historical IDF curves by the K-NN disaggregation models. Two optimal window sizes are selected for the K-NN hourly and sub-hourly disaggregation models that would be appropriate for the hydrological system of Saskatoon. By using the simulated hourly and sub-hourly precipitation series and the Generalized Extreme Value (GEV) distribution, future changes in the IDF curves and associated uncertainties are quantified using a large ensemble of projections obtained for the Canadian and British GCMs (CanESM2 and HadGEM2-ES) based on three Representative Concentration Pathways; RCP2.6, RCP4.5, and RCP8.5 available from CMIP5 – the most recent product of the Intergovernmental Panel on Climate Change (IPCC). The constructed IDF curves are then compared with the ones constructed using another method based on a genetic programming technique. The results show that the sign and the magnitude of future variations in extreme precipitation quantiles are sensitive to the selection of GCMs and/or RCPs, and the variations seem to become intensified towards the end of the 21st century. Generally, the relative change in precipitation intensities with respect to the historical intensities for CMIP5 climate models (e.g., CanESM2: RCP4.5) is less than those for CMIP3 climate models (e.g., CGCM3.1: B1), which may be due to the inclusion of climate policies (i.e., adaptation and mitigation) in CMIP5 climate models. The two-stage downscaling-disaggregation method enables quantification of uncertainty due to natural internal variability of precipitation, various GCMs and RCPs, and downscaling methods. In general, uncertainty in the projections of future extreme precipitation quantiles increases for short durations and for long return periods. The two-stage method adopted in this study and the GP method reconstruct the historical IDF curves quite successfully during the baseline period (1961-1990); this suggests that these methods can be applied to efficiently construct IDF curves at the local scale under future climate scenarios. 
The most notable precipitation intensification in Saskatoon is projected to occur for shorter storm durations (up to one hour) and longer return periods (more than 25 years).
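As a small illustration of the GEV step used above to obtain IDF quantiles, the sketch below fits a GEV distribution to synthetic annual-maximum intensities for a single duration and reads off return-level intensities; the data and return periods are placeholders, not the study's downscaled ensembles.

```python
# Fit a GEV to annual maxima for one storm duration and compute return levels.
# Note scipy's shape convention: genextreme's c equals minus the usual GEV xi.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
annual_max_mm_per_hr = genextreme.rvs(c=-0.1, loc=20, scale=5, size=30, random_state=rng)

shape, loc, scale = genextreme.fit(annual_max_mm_per_hr)
for T in (2, 10, 25, 100):                       # return periods in years
    intensity = genextreme.ppf(1 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"T = {T:>3} yr: {intensity:.1f} mm/hr")
```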
