  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Estimation of the probability density function of parameters derived from acoustic emission source signals

Γρενζελιάς, Αναστάσιος 25 June 2009 (has links)
In this diploma thesis the subject was the estimation of the probability density function of parameters derived from signals of acoustic emission sources. In the theoretical part, the chapters of greatest interest were Non-Destructive Testing and Acoustic Emission, together with their applications. The data processed fall into two categories: those provided ready-made and those obtained through laboratory measurements. The expectation-maximization algorithm, studied here theoretically, was used to process the laboratory data and formed the basis for extracting the parameters of each signal. Having calculated the parameters, the signals were classified into categories according to the theory of pattern recognition. An appendix with the detailed results, as well as the bibliography used, is presented at the end of the thesis.
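As a rough illustration of the expectation-maximization step this abstract describes, the sketch below fits a two-component one-dimensional Gaussian mixture to synthetic feature values standing in for acoustic emission parameters; the data, component count and quantile initialization are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Fit a k-component 1-D Gaussian mixture to samples x with EM."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread initial means over the data
    var = np.full(k, x.var())                        # broad initial variances
    pi = np.full(k, 1.0 / k)                         # uniform mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances from responsibilities
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var

# Two synthetic parameter populations standing in for features
# extracted from acoustic emission signals (values are illustrative)
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(1.0, 0.2, 300), rng.normal(3.0, 0.4, 200)])
pi, mu, var = em_gmm_1d(x, k=2)
```

The fitted component means and weights can then feed the pattern-recognition stage the abstract mentions.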
272

Weakness of will: classical and dynamic conceptions

Labonté, Jean-François 09 1900 (has links)
This thesis explains, analyses and examines the classical and modern positions on the nature and causes of weakness of will. Since Plato and Aristotle's identification of the problem, many principles and propositions concerning practical rationality in general and motivation in particular have been examined in detail. These principles and propositions are discussed insofar as they remain relevant to modern theories. An emphasis is placed on what is now known as the standard conception of strict akrasia and its supposedly paradoxical nature. We argue that a skeptical position toward strict akrasia cannot rest on one version or another of revealed-preference theory, and we show that a description of the decision process is necessary to attribute an overall preference or a better judgment. We discuss the philosophical debate between internalist and externalist conceptions of the connection between better judgment and decision, and argue, on the basis of experimental results in cognitive psychology and neuroscience, that the externalist conception, although imperfect, is more robust. These experimental results are not, however, incompatible with the hypothesis that agents are maximizers in the satisfaction of their preferences, a hypothesis that continues to justify a form of skepticism toward strict akrasia. We present strong arguments against this hypothesis and show why maximization is not necessarily required for rational choice; consequently, the standard conception of strict akrasia must be revised. We then discuss Richard Holton's influential theory of non-strictly akratic weakness of will. Although compatible with a non-maximizing conception, Holton's theory reduces too many episodes of weakness of will to cases of irresolution. Finally, we introduce the theory of intertemporal choice, a more powerful theory that describes and explains, within a single conceptual scheme, both strict and non-strict akrasia. This scheme concerns the properties of the temporal distribution of the consequences of akratic decisions and the prospective attitudes that motivate agents to make them. The structure of these distributions, coupled with the devaluation of the future, also explains simply and elegantly why weakness of will is irrational. We discuss the hypothesis that a pure time preference underlies this devaluation and mention several critical points and rival hypotheses more in keeping with a cognitivist approach to the problem.
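The intertemporal-choice account summarized above can be illustrated numerically. Under hyperbolic discounting, an agent may prefer the larger, later reward while both options are distant, yet switch to the smaller, sooner one as it becomes imminent; this preference reversal is the signature often associated with weakness of will. The amounts, delays and discount rate below are purely illustrative.

```python
def hyperbolic_value(amount, delay, k=1.0):
    """Present value under hyperbolic discounting: V = A / (1 + k * t)."""
    return amount / (1 + k * delay)

def small_soon(t):
    return hyperbolic_value(50, t)        # smaller reward available at time t

def large_late(t):
    return hyperbolic_value(100, t + 5)   # larger reward five periods later

# Viewed from afar (t = 10), the larger, later reward wins ...
far = (small_soon(10), large_late(10))
# ... but up close (t = 0) the smaller, immediate reward wins: a preference reversal.
near = (small_soon(0), large_late(0))
```

Exponential discounting cannot produce such a reversal, which is why the hyperbolic shape of the discount curve does real explanatory work in this account.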
273

Multimedia Delivery over Heterogeneous Wireless Networks

Xing, Min 29 April 2015 (has links)
There is an increasing demand for multimedia services in heterogeneous wireless networks. Given the highly dynamic wireless channels and the relatively large size of multimedia data, how to support efficient and reliable multimedia delivery is a pressing issue. In this dissertation, we investigate multimedia delivery algorithms in heterogeneous wireless networks from three different aspects. First, we study single-flow rate adaptation for video streaming over multiple wireless interfaces. In order to maintain high video streaming quality while reducing the wireless service cost, the optimal video streaming process with multiple links is formulated as a Markov Decision Process (MDP). The reward function is designed to capture the quality of service (QoS) requirements of video traffic, such as startup latency, playback fluency, average playback quality, playback smoothness and wireless service cost. To solve the MDP in real time, we propose an adaptive, best-action search algorithm that obtains a sub-optimal solution. To evaluate the performance of the proposed adaptation algorithm, we implemented a testbed using an Android mobile phone and the Scalable Video Coding (SVC) codec and conducted experiments with real video flows. Then, with multiple multimedia flows competing for limited wireless resources, we propose a utility-based scheduling algorithm for multimedia transmission in Drive-thru Internet. A utility model is devised to map throughput to the user's satisfaction level in terms of multimedia data quality, such as the Peak Signal-to-Noise Ratio (PSNR) of video. The objective of the scheduling problem is to maximize the total utility. The optimization problem is formulated as a finite-state decision problem under the assumption that future arrival information is known, and it is solved by a search algorithm that serves as the benchmark. To obtain a real-time solution, a practical heuristic algorithm based on the concept of utility potential is devised. We further implemented the solution and conducted extensive simulations using NS-3. Finally, the multimedia dissemination problem in large-scale VANETs is investigated. We first utilize a hybrid-network framework to address the mobility and scalability issues in large-scale VANET content distribution. We then formulate a utility-based maximization problem to find the best delivery strategy and select an optimal path for multimedia data dissemination, where the utility function takes the delivery delay, quality of service (QoS) and storage cost into consideration. We obtain the closed form of the utility function and then the optimal solution of the problem via convex optimization theory. Finally, we conducted extensive trace-driven simulations to evaluate the performance of the proposed algorithm with real traces collected by taxis in Shanghai. In summary, the research outcomes of this dissertation contribute to three different aspects of multimedia delivery in heterogeneous wireless networks. First, we propose a real-time rate adaptation algorithm for video streaming with multiple wireless interfaces that maintains high quality while reducing the wireless service cost. Second, we present an optimal scheduling algorithm that maximizes the total satisfaction for multimedia transmission in Drive-thru Internet. Third, we derive a theoretical analysis of utility functions including delivery delay, QoS and storage cost, and obtain an optimal solution for multimedia data dissemination in large-scale VANETs that achieves the highest utility.
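A miniature sketch of the kind of MDP formulation this abstract describes for rate adaptation: states are toy buffer-occupancy levels, actions are bitrate choices, and the reward trades playback quality against a rebuffering penalty. The state space, transition rule and rewards are invented for illustration, and the solver is plain value iteration rather than the dissertation's best-action search.

```python
# Toy state space: buffer occupancy level 0..4; actions: 0 = low bitrate, 1 = high.
STATES = range(5)
ACTIONS = (0, 1)
GAMMA = 0.9

def step(s, a):
    """Deterministic toy transition: next buffer level and immediate reward."""
    drain = 1 + a                        # the high bitrate drains the buffer faster
    s2 = max(0, min(4, s - drain + 2))   # constant download refill of 2 per slot
    quality = 1 + a                      # more quality reward at the higher bitrate
    stall = -10 if s == 0 else 0         # heavy rebuffering penalty on an empty buffer
    return s2, quality + stall

def value_iteration(iters=300):
    V = [0.0] * 5
    for _ in range(iters):
        V = [max(r + GAMMA * V[s2] for s2, r in (step(s, a) for a in ACTIONS))
             for s in STATES]
    policy = [max(ACTIONS, key=lambda a, s=s: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
              for s in STATES]
    return V, policy

V, policy = value_iteration()
```

Even in this toy, the optimal policy is the intuitively right one: pick the low bitrate only when the buffer is empty, to refill it, and the high bitrate otherwise.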
274

Motion Capture of Deformable Surfaces in Multi-View Studios

Cagniart, Cedric 16 July 2012 (has links) (PDF)
In this thesis we address the problem of digitizing the motion of three-dimensional shapes that move and deform in time. These shapes are observed from several points of view with cameras that record the scene's evolution as videos. Using available reconstruction methods, these videos can be converted into a sequence of three-dimensional snapshots that capture the appearance and shape of the objects in the scene. The focus of this thesis is to complement appearance and shape with information on the motion and deformation of objects. In other words, we want to measure the trajectory of every point on the observed surfaces. This is a challenging problem because the captured videos are only sequences of images, and the reconstructed shapes are built independently from each other. While the human brain excels at recreating the illusion of motion from these snapshots, using them to automatically measure motion is still largely an open problem. The majority of prior work on the subject has focused on tracking the performance of one human actor, and has used strong prior knowledge of the articulated nature of human motion to handle the ambiguity and noise inherent in visual data. In contrast, the developments presented here are generic methods for digitizing scenes involving several humans and deformable objects of arbitrary nature. To perform surface tracking as generically as possible, we formulate the problem as the geometric registration of surfaces and deform a reference mesh to fit a sequence of independently reconstructed meshes. We introduce a set of algorithms and numerical tools that integrate into a pipeline whose output is an animated mesh. Our first contribution is a generic mesh deformation model and numerical optimization framework that divides the tracked surface into a collection of patches, organizes these patches in a deformation graph, and emulates elastic behavior with respect to the reference pose. As a second contribution, we present a probabilistic formulation of deformable surface registration that embeds the inference in an Expectation-Maximization framework explicitly accounting for the noise in the acquisition. As a third contribution, we look at how prior knowledge can be used when tracking articulated objects, and compare different deformation models with skeleton-based tracking. The studies reported in this thesis are supported by extensive experiments on various 4D datasets. They show that, despite weaker assumptions on the nature of the tracked objects, the presented ideas allow the processing of complex scenes involving several arbitrary objects, while robustly handling missing data and relatively large reconstruction artifacts.
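A heavily simplified sketch of the EM idea behind the second contribution: soft correspondences in the E-step, transform re-estimation in the M-step. Here the deformation is reduced to a single global translation with annealed correspondences; the thesis's patch-based deformation graph is far richer, so treat this only as an illustration of the alternation.

```python
import numpy as np

def em_register_translation(src, tgt, iters=30, sigma2=1.0, anneal=0.8):
    """Toy EM registration: recover the translation aligning src to tgt.
    E-step: soft correspondences under an isotropic Gaussian noise model.
    M-step: translation explaining the responsibility-weighted matches.
    Annealing sigma2 gradually hardens the correspondences."""
    t = np.zeros(2)
    for _ in range(iters):
        moved = src + t
        d2 = ((moved[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
        # stabilized soft assignment (row-wise maximum weight is 1)
        w = np.exp(-0.5 * (d2 - d2.min(axis=1, keepdims=True)) / sigma2)
        w /= w.sum(axis=1, keepdims=True)
        virtual = w @ tgt                    # expected match of each source point
        t = (virtual - src).mean(axis=0)
        sigma2 *= anneal
    return t

# A 5x5 planar grid standing in for independently reconstructed surface points
src = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), axis=-1).reshape(-1, 2)
true_t = np.array([0.3, -0.2])
tgt = src + true_t
t_est = em_register_translation(src, tgt)
```

Replacing the single translation with per-patch rigid transforms linked by a deformation graph is the step from this toy to a deformable tracker.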
275

Mixture model analysis with rank-based samples

Hatefi, Armin January 2013 (has links)
Simple random sampling (SRS) is the most commonly used sampling design in data collection. In many applications (e.g., in fisheries and medical research), quantification of the variable of interest is either time-consuming or expensive, but ranking a number of sampling units, without actually measuring them, can be done relatively easily and at low cost. In these situations, one may use rank-based sampling (RBS) designs to obtain more representative samples from the underlying population and improve the efficiency of the statistical inference. In this thesis, we study the theory and application of finite mixture models (FMMs) under RBS designs. In Chapter 2, we study the problems of Maximum Likelihood (ML) estimation and classification in a general class of FMMs under different ranked set sampling (RSS) designs. In Chapter 3, by deriving the Fisher information (FI) content of different RSS data structures, including complete and incomplete RSS data, we show that the FI contained in each variation of the RSS data about different features of FMMs is larger than the FI contained in their SRS counterparts. There are situations where it is difficult to rank all the sampling units in a set with high confidence. Forcing rankers to assign unique ranks to the units (as in RSS) can lead to substantial ranking error and consequently to poor statistical inference. We hence focus on the partially rank-ordered set (PROS) sampling design, which aims to reduce the ranking error and the burden on rankers by allowing them to declare ties (partially ordered subsets) among the sampling units. Studying the information and uncertainty structures of PROS data in a general class of distributions, in Chapter 4 we show the superiority of the PROS design in data analysis over the RSS and SRS schemes. In Chapter 5, we also investigate the ML estimation and classification problems of FMMs under the PROS design. Finally, we apply our results to estimate the age structure of a short-lived fish species from length-frequency data, using the SRS, RSS and PROS designs.
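A minimal sketch contrasting SRS with a balanced ranked set sample, assuming perfect (error-free) ranking within each small set; the population and design parameters are illustrative. With equal numbers of measured units, the RSS sample mean typically has the smaller variance, which is the efficiency gain the abstract refers to.

```python
import random
import statistics

def srs_sample(pop, n, rng):
    """Simple random sample of n measured units."""
    return rng.sample(pop, n)

def rss_sample(pop, k, cycles, rng):
    """Balanced ranked set sample: per cycle, draw k sets of k units, rank
    each set (perfect ranking assumed here), and measure only the r-th
    ranked unit of the r-th set -- k measured units per cycle."""
    out = []
    for _ in range(cycles):
        for r in range(k):
            ranked = sorted(rng.sample(pop, k))
            out.append(ranked[r])
    return out

pop_rng = random.Random(42)
pop = [pop_rng.gauss(10, 2) for _ in range(10000)]
k, cycles, reps = 3, 100, 200    # both designs measure k * cycles = 300 units

srs_means = [statistics.mean(srs_sample(pop, k * cycles, random.Random(i)))
             for i in range(reps)]
rss_means = [statistics.mean(rss_sample(pop, k, cycles, random.Random(i)))
             for i in range(reps)]
var_srs = statistics.variance(srs_means)
var_rss = statistics.variance(rss_means)
```

Note that RSS buys its efficiency by ranking (not measuring) k times more units; the comparison above holds the number of *measured* units fixed, which is the relevant budget when measurement is the expensive step.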
276

Technological and economic aspects of quality of service in alliances of service providers

AMIGO, Maria Isabel 12 July 2013 (has links) (PDF)
Providing end-to-end quality-assured services implies many challenges that go beyond technical ones, involving economic and even cultural or political issues as well. In this thesis we first focus on a technical problem and then attempt a more holistic view of the whole problem, considering at the same time Network Service Providers (NSPs), stakeholders, and buyers' behaviour and satisfaction. One of the most important problems when deploying interdomain path selection with Quality of Service (QoS) requirements is being able to base the computations on metrics that hold for a long period of time. Our proposal for solving that problem is to compute bounds on the metrics, taking into account the uncertainty in the traffic demands. We then move to an NSP-alliance scenario, where we propose a complete framework for selling interdomain quality-assured services and subsequently distributing the revenues. At the end of the thesis we adopt a more holistic approach and consider the interactions with the monitoring plane and buyers' behaviour. We propose a simple pricing scheme and study it in detail, in order to use QoS monitoring information as feedback to the business plane, with the ultimate objective of improving the seller's revenue.
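On the revenue-distribution side, one standard mechanism for splitting alliance revenue (assumed here for illustration; the thesis's actual scheme may differ) is the Shapley value, which pays each NSP its average marginal contribution over all orders in which the coalition can form. The toy alliance and revenue function below are invented.

```python
from itertools import permutations

def shapley(players, value):
    """Shapley value: each player's average marginal contribution
    over all orders in which the grand coalition can form."""
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            shares[p] += value(coalition) - before
    return {p: s / len(orders) for p, s in shares.items()}

# Illustrative alliance: the service sells only if some access NSP (A or B)
# and the transit NSP T both participate; revenue is 90 per sold service.
def revenue(coalition):
    has_access = bool({"A", "B"} & coalition)
    return 90.0 if has_access and "T" in coalition else 0.0

shares = shapley(["A", "B", "T"], revenue)
```

Because A and B are substitutes while T is indispensable, T earns the largest share; this captures the bargaining asymmetry inside an alliance that any distribution scheme has to confront.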
277

Design of robust blind detector with application to watermarking

Anamalu, Ernest Sopuru 14 February 2014 (has links)
One of the difficult issues in detection theory is designing a robust detector that takes into account the actual distribution of the original data. The most commonly used statistical model for blind detection is the Gaussian distribution. Specifically, linear correlation is an optimal detection method in the presence of Gaussian-distributed features, but it has been found to be a sub-optimal detection metric when the density deviates markedly from the Gaussian. Hence, we formulate a detection algorithm that enhances the detection probability by exploiting the true characteristics of the original data. To understand the underlying distribution of the data, we employed estimation techniques such as a parametric model, the approximated density-ratio logistic regression model, and semiparametric estimation. The semiparametric model has the advantage of yielding the density ratios as well as the individual densities. Both methods are applicable to signals such as watermarks embedded in the spatial domain, and they outperform conventional linear correlation when the data are non-Gaussian distributed.
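A sketch of the baseline this abstract calls sub-optimal for non-Gaussian data: a blind linear-correlation detector that correlates the received signal with the known watermark pattern and thresholds the result. The signal model, embedding strength and threshold below are illustrative assumptions.

```python
import numpy as np

def linear_correlation_detect(received, watermark, threshold):
    """Blind linear-correlation detector: declare the watermark present
    when the normalized correlation with the known pattern exceeds a threshold."""
    c = float(received @ watermark) / len(received)
    return c > threshold, c

rng = np.random.default_rng(0)
n = 4096
host = rng.normal(0, 1, n)            # host signal (no watermark embedded)
wm = rng.choice([-1.0, 1.0], n)       # spread-spectrum watermark pattern
alpha = 0.1                           # embedding strength
marked = host + alpha * wm            # additive spatial-domain embedding

present, c1 = linear_correlation_detect(marked, wm, threshold=0.05)
absent, c0 = linear_correlation_detect(host, wm, threshold=0.05)
```

For Gaussian host data this correlator is (near-)optimal; the thesis's point is that when the host density is heavy-tailed or otherwise non-Gaussian, a detector built on the estimated density (or density ratio) beats this baseline.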
278

Received signal strength calibration for wireless local area network localization

Felix, Diego 11 August 2010 (has links)
Terminal localization for indoor Wireless Local Area Networks (WLAN) is critical for the deployment of location-aware computing inside buildings. The purpose of this research is not to develop a novel WLAN terminal location estimation technique or algorithm, but rather to tackle challenges in survey data collection and in the calibration of Received Signal Strength (RSS) data across multiple mobile terminals. Three major challenges are addressed in this thesis: first, decreasing the influence of outliers introduced into the distance measurements by Non-Line-of-Sight (NLoS) propagation when an ultrasonic sensor network is used for data collection; second, obtaining high localization accuracy in the presence of fluctuations of the RSS measurements caused by multipath fading; and third, determining an automated calibration method to reduce large variations in RSS levels when different mobile devices need to be located. In this thesis, a robust window function is developed to mitigate the influence of outliers in survey terminal localization. Furthermore, spatial filtering of the RSS signals to reduce the effect of the distance-varying portion of the noise is proposed. Two different survey point geometries are tested with the noise reduction technique: survey points arranged in sets of tight clusters and survey points uniformly distributed over the network area. Finally, an affine transformation is introduced as an RSS calibration method between mobile devices to decrease the effect of RSS level variation, and an automated calibration procedure based on the Expectation-Maximization (EM) algorithm is developed. The results show that the mean distance error in survey terminal localization is well within an acceptable range for data collection. In addition, when the spatial-averaging noise reduction filter is used, the location accuracy improves by 16%, and by 18% when the filter is applied to a clustered survey set as opposed to a straight-line survey set. Lastly, the location accuracy is within 2 m when an affine function is used for RSS calibration, and the automated calibration algorithm converged to the optimal transformation parameters after being iterated over 11 locations.
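The affine calibration step can be sketched as fitting rss_dev ≈ a · rss_ref + b and inverting the map. For brevity this assumes paired measurements and ordinary least squares rather than the EM-based automated procedure described in the abstract, and all values are synthetic.

```python
import numpy as np

def fit_affine(rss_ref, rss_dev):
    """Least-squares fit of rss_dev ≈ a * rss_ref + b."""
    A = np.column_stack([rss_ref, np.ones_like(rss_ref)])
    (a, b), *_ = np.linalg.lstsq(A, rss_dev, rcond=None)
    return a, b

rng = np.random.default_rng(3)
rss_ref = rng.uniform(-90, -40, 200)        # reference device RSS readings (dBm)
true_a, true_b = 1.1, 4.0                   # unknown inter-device gain and offset
rss_dev = true_a * rss_ref + true_b + rng.normal(0, 1, 200)

a, b = fit_affine(rss_ref, rss_dev)
calibrated = (rss_dev - b) / a              # map device readings back to the reference scale
```

The EM-based procedure in the thesis removes the need for such explicit pairs, estimating the transformation parameters directly from unlabeled fingerprint data.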
279

Graphical Models for Robust Speech Recognition in Adverse Environments

Rennie, Steven J. 01 August 2008 (has links)
Robust speech recognition in acoustic environments that contain multiple speech sources and/or complex non-stationary noise is a difficult problem, but one of great practical interest. The formalism of probabilistic graphical models constitutes a relatively new and very powerful tool for better understanding and extending existing models, learning, and inference algorithms, and a bedrock for the creative, quasi-systematic development of new ones. In this thesis a collection of new graphical models and inference algorithms for robust speech recognition are presented. The problem of speech separation using multiple microphones is treated first. A family of variational algorithms for tractably combining multiple acoustic models of speech with observed sensor likelihoods is presented. The algorithms recover high-quality estimates of the speech sources even when there are more sources than microphones, and have improved upon the state of the art in terms of SNR gain by over 10 dB. Next, the problem of background compensation in non-stationary acoustic environments is treated. A new dynamic noise adaptation (DNA) algorithm for robust noise compensation is presented, and shown to outperform several existing state-of-the-art front-end denoising systems on the new DNA + Aurora II and Aurora II-M extensions of the Aurora II task. Finally, the problem of recognizing speech in the presence of competing speech using a single microphone is treated. The Iroquois system for multi-talker speech separation and recognition is presented. The system won the 2006 Pascal International Speech Separation Challenge and, remarkably, achieved super-human recognition performance on a majority of the test cases in the task. The result marks a significant first in automatic speech recognition, and a milestone in computing.
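A toy instance of combining source models with an observation likelihood, in the spirit (but not the detail) of the variational algorithms described above: two Gaussian sources observed only through their sum have closed-form posterior means that split the residual in proportion to the prior variances. The numbers are illustrative.

```python
def posterior_sources(y, mu1, v1, mu2, v2):
    """MMSE estimates of two Gaussian sources observed only through their
    sum y = x1 + x2: the innovation relative to the prior means is split
    in proportion to the prior variances."""
    r = y - (mu1 + mu2)              # innovation w.r.t. the prior means
    x1 = mu1 + v1 / (v1 + v2) * r
    x2 = mu2 + v2 / (v1 + v2) * r
    return x1, x2

# The less certain source (larger prior variance) absorbs more of the residual.
x1, x2 = posterior_sources(y=5.0, mu1=1.0, v1=1.0, mu2=2.0, v2=3.0)
```

Scaling this idea to full mixture models of speech, per frequency band and per frame, is where exact inference becomes intractable and the thesis's variational approximations come in.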
