About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Simulation systems for statistical tests

Aziz, A. M. A. H. January 1987 (has links)
No description available.
2

3D Modeling of Indoor Environments

Dahlin, Johan January 2013 (has links)
With the aid of modern sensors it is possible to create models of buildings. These sensors typically generate 3D point clouds and, in order to increase interpretability and usability, these point clouds are often translated into 3D models.

In this thesis a way of translating a 3D point cloud into a 3D model is presented. The basic functionality is implemented using Matlab. The geometric model consists of floors, walls and ceilings. In addition, doors and windows are automatically identified and integrated into the model. The resulting model also has an explicit representation of the topology between entities of the model. The topology is represented as a graph, and to do this GraphML is used. The graph is opened in a graph editing program called yEd.

The result is a 3D model that can be plotted in Matlab and a graph describing the connectivity between entities. The GraphML file is automatically generated in Matlab. An interface between Matlab and yEd allows the user to choose which rooms should be plotted.
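The thesis generates its GraphML from Matlab, but the topology-as-graph idea is easy to illustrate. Below is a minimal sketch in Python, assuming the networkx package; the room and door names are invented for the example.

```python
# Illustrative sketch only -- the thesis itself writes GraphML from Matlab.
# Assumes the networkx package; room/door names below are invented.
import networkx as nx

topology = nx.Graph()

# Nodes represent model entities (rooms); attributes would come from the
# reconstructed geometry (floors, walls, ceilings).
topology.add_node("room_1", kind="room")
topology.add_node("room_2", kind="room")
topology.add_node("corridor", kind="room")

# Edges represent connectivity, e.g. a shared door or opening.
topology.add_edge("room_1", "corridor", via="door_1")
topology.add_edge("room_2", "corridor", via="door_2")

# GraphML output that a graph editor such as yEd can open directly.
nx.write_graphml(topology, "indoor_topology.graphml")
```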
3

Correlated Sample Synopsis on Big Data

Wilson, David S. 12 December 2018 (has links)
No description available.
4

Image Recognition Techniques for Optical Head Mounted Displays

Kondreddy, Mahendra 21 February 2017 (has links) (PDF)
The evolution of technology has led research into new wearable devices such as smart glasses, which open up new visualization techniques. Augmented Reality is an advanced technology that can significantly ease the execution of complex operations. It combines the virtual and the real, giving the user new tools to support the transfer of knowledge in a variety of environments and processes. This thesis explores the development of an Android-based image recognition application. Feature point detectors and descriptors are used, as they cope well with correspondence problems. The best image recognition technique for the smart glasses is selected based on the time taken to retrieve results and the power consumed in the process. Because smart glasses have limited resources, the selected approach should demand little computation so that device operation is not interrupted. An effective and efficient method for detecting and recognizing safety signs in images is selected. The ubiquitous SIFT and SURF feature detectors are computationally complex, take more time and require high-end hardware for processing. Binary descriptors are therefore considered, as they are lightweight and well suited to low-power devices. A comparative analysis of binary descriptors such as BRIEF, ORB, AKAZE and FREAK is carried out on the smart glasses, based on their performance and the device's requirements. ORB proves the most efficient of the binary descriptors and the most effective on the smart glasses in terms of execution time and power consumption.
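As a hedged illustration of the kind of matching the abstract describes (not the thesis's actual benchmark code), the sketch below matches an ORB-described sign template against a scene image using opencv-python; the file names and match threshold are invented.

```python
# Illustrative ORB matching sketch; assumes opencv-python.
# File names and the distance threshold are invented for the example.
import cv2

query = cv2.imread("safety_sign_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# ORB: a binary descriptor, cheap enough for low-power wearable hardware.
orb = cv2.ORB_create(nfeatures=500)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Hamming distance is the natural metric for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_s), key=lambda m: m.distance)

# A simple decision rule: enough close matches => the sign is present.
good = [m for m in matches if m.distance < 50]
print(f"{len(good)} good matches")
```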
5

Road Surface Modeling using Stereo Vision / Modellering av Vägyta med hjälp av Stereokamera

Lorentzon, Mattis, Andersson, Tobias January 2012 (has links)
Modern day cars are often equipped with a variety of sensors that collect information about the car and its surroundings. The stereo camera is an example of a sensor that, in addition to regular images, also provides distances to points in its environment. This information can, for example, be used for detecting approaching obstacles and warning the driver if a collision is imminent, or even braking the vehicle automatically. Objects that constitute a potential danger are usually located on the road in front of the vehicle, which makes the road surface a suitable reference level from which to measure object heights. This Master's thesis describes how an estimate of the road surface can be found in order to make these height measurements. The thesis describes how the large amount of data generated by the stereo camera can be scaled down to a more effective representation in the form of an elevation map. The report discusses a method for relating data from different instances in time using information from the vehicle's motion sensors, and shows how this method can be used for temporal filtering of the elevation map. For estimating the road surface, two different methods are compared: one that uses a RANSAC approach to iterate towards a good surface-model fit, and one that uses conditional random fields to model the probability that different parts of the elevation map belong to the road. A way to detect curb lines, and how to use them to improve the road surface estimate, is also shown. Both methods for road classification show good results, with a few differences that are discussed towards the end of the report. An example of how the road surface estimate can be used to detect obstacles is also included.
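A minimal sketch of the RANSAC-style surface fit described above, in plain numpy; the inlier tolerance, iteration count and synthetic data are invented for illustration, and the thesis's actual implementation may differ.

```python
# Minimal RANSAC plane-fit sketch (pure numpy); parameters are invented.
import numpy as np

def fit_plane(pts):
    # Least-squares plane z = a*x + b*y + c through the sampled points.
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

def ransac_plane(points, n_iters=200, inlier_tol=0.05):
    best_coeffs, best_inliers = None, 0
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        a, b, c = fit_plane(sample)
        resid = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
        inliers = int((resid < inlier_tol).sum())
        if inliers > best_inliers:
            best_coeffs, best_inliers = (a, b, c), inliers
    return best_coeffs  # a fuller implementation would refit on the inliers

# Example with synthetic data: a flat road near z = 0 plus raised outliers.
pts = np.r_[np.c_[np.random.rand(200, 2) * 10, np.random.randn(200) * 0.02],
            np.c_[np.random.rand(40, 2) * 10, np.random.rand(40) * 2 + 0.5]]
print(ransac_plane(pts))
```

In the setting of the thesis, the cells of the elevation map would serve as the points, and object heights would then be measured relative to the fitted plane.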
6

Exporting knitted apparel : a study of the determinants of exporting performance in the UK knitted apparel sector

Murphy, Owen Patrick January 2008 (has links)
As the globalisation process accelerates, there is a growing need for individual countries to understand the bases for effective performance in international trade. Because exporting makes up such a large share of world trade, it is especially important to understand what determines effectiveness in exporting. Despite much empirical research, especially over recent decades, the state of knowledge on this topic remains fragmented, unclear and unsatisfactory. The motivation for the present study was therefore twofold: dissatisfaction with the present state of knowledge in this vital area, and the importance to the UK economy of improving its export performance in a world of increasing competition. Its aim was to contribute to the resolution of both. In addition to finding what appeared to be quite serious methodological problems in a group of earlier studies, our review of the literature indicated that the best prospects for identifying the determinants of effective exporting were to be found not at national or sectoral level but at that of the individual firm. Accordingly, an empirical survey research project was developed. To minimise unquantifiable inter-sectoral variability, it was focused on a single sector of industry. For a range of reasons, including the limited amount of information available about its current export activity and prospects, the UK knitted apparel industry was chosen. Special care having been taken to assemble the fullest possible sampling frame and to develop a suitable instrument (which included an export performance model), a mail survey in the form of a stratified random sample of exporting UK manufacturers of knitted apparel was carried through from late 2000. Persistent follow-up by mail and telephone generated a response rate of 70 per cent, comprising close to half of the sampling frame, which was representative of all company size bands, levels of exporting and products. The overall quality of the responses was good, and tests did not find any indication of non-response bias. Data analysis, designed to test thoroughly our 10 export-determinants hypotheses, relied primarily on Pearsonian correlation at the bivariate level and then sequentially on Multiple Regression Analysis, Canonical Correlation Analysis and Partial Least Squares. A perhaps slightly novel aspect of the research was that it was not solely cross-sectional in format: a longitudinal element was provided by drawing on the researcher's earlier surveys, and a panel element by following up the main 2000 field survey in 2007. Where possible, these data were drawn upon in the analysis and interpretation. There did not appear to be any conflict between the three multivariate techniques employed; indeed, their findings were not dissimilar. The outcome of the data analysis was to uphold, to varying degrees, most of our hypotheses about the determinants of effective or successful exporting. Three did not find support: firm size, product adaptation, and price determination method. Most strongly supported as determinants were promotional intensity, serving many markets and visits to trade fairs/exhibitions; others that were statistically significant included management commitment, special staff skills and the use of Commission Agents. While the conclusions must remain somewhat tentative, they are encouraging.
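As a loose, hedged illustration of the analysis ladder described (bivariate Pearson correlation followed by multivariate modelling), the toy sketch below uses scikit-learn; the variables are invented stand-ins for the survey measures, not the study's data.

```python
# Toy illustration only; assumes scikit-learn. Variables are invented
# stand-ins for survey measures such as promotional intensity.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                       # candidate determinants
y = X @ np.array([0.8, 0.5, 0.3, 0.0, 0.0]) + rng.normal(size=100)  # "export performance"

# Bivariate screening: Pearson correlation of each determinant with outcome.
for j in range(X.shape[1]):
    r = np.corrcoef(X[:, j], y)[0, 1]
    print(f"determinant {j}: r = {r:+.2f}")

# Partial Least Squares as the final multivariate step.
pls = PLSRegression(n_components=2).fit(X, y)
print("PLS R^2:", pls.score(X, y))
```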
7

自身迴歸模型最佳子集之選擇 / Selection of the best subset in autoregressive models

吳佩芳, WU, PEI-FANG Unknown Date (has links)
The thesis consists of five chapters. Chapter 1, the introduction, sets out the scope and methods of the research; Chapter 2 analyses Mallows' method for selecting the best subset in a linear regression; Chapter 3 examines autoregressive models for time series data; Chapter 4 is a study using current data from Taiwan; and Chapter 5 presents the conclusions.

For a random sample, the problem of selecting the best subset of explanatory variables has two main aspects: first, developing a criterion for choosing between two competing subsets, and second, reducing the computational effort. This thesis concentrates on the latter, using Mallows' Cp statistic as the basic criterion for comparing two regressions and developing a procedure that can identify a good regression with a minimum of computation.

The thesis also studies autoregressive models for time series data. The subset selection method of Hocking and Leslie for regression models is adapted to the autoregressive setting, and an algorithm is developed which, given a maximum lag order k fixed in advance, finds the k-lag autoregressive model with the smallest residual variance. The algorithm keeps computation to a minimum, largely avoiding the need to examine every candidate subset; Akaike's decision function is then used to select the final model from the 2^k − 1 possible subsets.
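As a hedged toy illustration of the Cp criterion (not the thesis's reduced-computation algorithm, which avoids scoring every subset), the sketch below scores every lag subset of a simulated AR(2) series with Mallows' Cp using plain numpy.

```python
# Toy illustration of Mallows' Cp over lag subsets of an AR model.
# Exhaustive scoring for clarity; the thesis's contribution is precisely
# to avoid checking every subset.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 4
x = np.zeros(n)
for t in range(2, n):  # simulate an AR(2) series
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

Y = x[k:]                                            # responses
lagged = np.column_stack([x[k - j:n - j] for j in range(1, k + 1)])

def sse(cols):
    A = lagged[:, cols]
    beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
    r = Y - A @ beta
    return float(r @ r)

s2 = sse(list(range(k))) / (len(Y) - k)              # full-model error variance
for p in range(1, k + 1):
    for cols in itertools.combinations(range(k), p):
        cp = sse(list(cols)) / s2 - (len(Y) - 2 * p)
        print(cols, round(cp, 1))                    # Cp close to p is good
```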
8

BetaSAC et OABSAC, deux nouveaux échantillonnages conditionnels pour RANSAC / BetaSAC and OABSAC, two new conditional sampling schemes for RANSAC

Méler, Antoine 31 January 2013 (has links) (PDF)
The RANSAC algorithm is the most common approach to robust estimation of model parameters in computer vision. Its success in this field, where sensors provide very rich but hard-to-exploit information, is mainly due to its ability to handle data that may contain more errors than useful information. Since its introduction thirty years ago, numerous modifications have been proposed to improve its speed, accuracy or robustness. In this work, we propose to speed up the solution of a problem by RANSAC by using more information than the usual approaches. This information, computed from the data themselves or drawn from complementary sources of any kind, allows us to help RANSAC generate more relevant hypotheses. To this end, we distinguish four degrees of quality of a hypothesis: "non-contamination", "cohesion", "coherence" and finally "relevance". We then show how far a hypothesis that is merely uncontaminated by erroneous data can be from relevant in the general case. We therefore set out to design an original algorithm which, unlike state-of-the-art methods, focuses on generating "relevant" samples rather than merely "uncontaminated" ones. Our approach begins with a probabilistic model that unifies the existing methods for reordering RANSAC's sampling; these methods guide the random drawing of data while guarding against making RANSAC fail. We then propose our own ordering algorithm, BetaSAC, based on conditional partial sorting. We show that the conditional nature of the sorting makes it possible to satisfy coherence constraints on the samples formed, leading to the generation of relevant samples in the first iterations of RANSAC and hence to a fast solution of the problem. The use of partial rather than exhaustive sorting ensures speed and randomisation, which are indispensable for this type of method. We then propose an optimal version of our method, called OABSAC (for Optimal and Adaptive BetaSAC), which involves an offline learning phase. The purpose of this learning is to measure the characteristic properties of the specific problem to be solved, so as to establish automatically the optimal parameterisation of our algorithm. This parameterisation is the one that leads to a sufficiently accurate estimate of the parameters of the sought model in the shortest time (in seconds). The two proposed methods are very general solutions that make it possible to integrate into RANSAC any kind of complementary information useful for solving the problem. We demonstrate the advantage of these methods on the problem of estimating homographies and epipolar geometries between two photographs of the same scene. The speed-up can reach a factor of one hundred compared with the classical RANSAC algorithm.
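As a hedged sketch of the general idea of guided sampling (not the BetaSAC or OABSAC algorithms themselves), the snippet below biases RANSAC's draws with a per-datum quality score, such as a feature-match score, so that promising hypotheses tend to appear in early iterations; the scores here are random stand-ins.

```python
# Generic quality-guided sampling sketch; NOT the BetaSAC/OABSAC algorithms.
import numpy as np

def guided_ransac_indices(quality, sample_size, n_iters, rng=None):
    """Yield index samples, favouring high-quality data points."""
    rng = rng or np.random.default_rng()
    p = quality / quality.sum()   # quality scores -> sampling distribution
    for _ in range(n_iters):
        yield rng.choice(len(quality), size=sample_size, replace=False, p=p)

# Example: 100 putative correspondences with match scores in [0, 1];
# each sample of 4 would feed a homography hypothesis as in classical RANSAC.
scores = np.random.default_rng(2).random(100)
for sample in guided_ransac_indices(scores, sample_size=4, n_iters=3):
    print(sample)
```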
9

Ανάπτυξη τεχνικών αντιστοίχισης εικόνων με χρήση σημείων κλειδιών / Development of image registration techniques using keypoints

Γράψα, Ιωάννα 17 September 2012 (has links)
Stitching multiple images together to create high-resolution panoramas is one of the most popular consumer applications of image registration and blending. In this work, feature-based registration algorithms are used. The first step is to extract distinctive features from every image that are invariant to image scale and rotation, using the SIFT (Scale Invariant Feature Transform) algorithm. After that, we look for the first pair of images to stitch. To check whether two images can be stitched, we match their keypoints (the output of SIFT). Once an initial set of feature correspondences has been computed, we need to find the subset that will produce a high-accuracy alignment; the solution to this problem is RANdom SAmple Consensus (RANSAC), which we use to estimate the motion model between the two images (a homography in our case). If there are enough corresponding points, the images are stitched. Simply joining the images would leave clearly visible seams, so to eliminate this problem Laplacian pyramid blending is used. The procedure is repeated, each time taking the panorama produced in the previous step as the new initial image, until the final panorama is complete.
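The pipeline in this abstract maps closely onto standard OpenCV calls. A minimal sketch, assuming opencv-python and invented file names (Laplacian blending omitted for brevity):

```python
# SIFT matching + RANSAC homography sketch; assumes opencv-python.
# File names are invented; blending of the aligned images is omitted.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test to keep distinctive matches only.
knn = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

if len(good) >= 4:  # a homography needs at least 4 correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("RANSAC inliers:", int(mask.sum()))
```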
10

Technology and political speech : commercialisation, authoritarianism and the supposed death of the Internet's democratic potential

Bolsover, Gillian January 2017 (has links)
The Internet was initially seen as a metaphor for democracy itself. However, commercialisation, incorporation into existing hierarchies and patterns of daily life, and state control and surveillance appear to have undermined these utopian dreams. The vast majority of online activity now takes place in a handful of commercially owned spaces, whose business model rests on the collection and monetisation of user data. However, the upsurge of political action in the Middle East and North Africa in 2010 and 2011, which many argued was facilitated by social media, raised the question of whether these commercial platforms that characterise the contemporary Internet might provide better venues for political speech than previous types of online spaces, particularly in authoritarian states. This thesis addresses the question of how the commercialisation of online spaces affects their ability to provide a venue for political speech in different political systems through a mixed-methods comparison of the U.S. and China. The findings of this thesis support the hypotheses drawn from existing literature: commercialisation is negative for political speech, but it is less negative, even potentially positive, in authoritarian systems. However, this research uncovers a surprising explanation for this finding. The greater positivity of commercialisation for political speech in authoritarian systems seems to occur not despite the government but because of it. The Chinese state's active stance in monitoring, encouraging and crafting ideas about political speech has resisted its negative repositioning as a commercial product. In contrast, in the U.S., online political speech has been left to the market, which sells back the dream of an online public sphere to users as part of its commercial model. There is still hope that the Internet can provide a venue for political speech, but power, particularly over the construction of what it means to be a political speaker in modern society, needs to be taken back from the market.
