401

Rizikové modely annuitních škod z neživotního pojištění / Risk models of annuity damages in non-life insurance

Šmarda, Tomáš January 2017 (has links)
This thesis focuses on the practical application of two methods used in non-life insurance: Nested Monte Carlo and Least Squares Monte Carlo. The best estimate and the 99.5% quantile were calculated with both methods and the results were compared. The two methods produce similar estimates and can therefore both be used to compute the capital requirement. Least Squares Monte Carlo appears more favourable because it significantly reduces computation time.
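The following is a minimal, self-contained sketch of the Least Squares Monte Carlo idea discussed above, contrasted with a nested run; it is not the model used in the thesis. The one-factor lognormal liability, the quadratic regression basis and all parameter values are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_outer = 5_000                               # outer (real-world) scenarios
x = rng.normal(size=n_outer)                  # risk driver after one year

def inner_value(x, n_inner):
    """Hypothetical liability value per scenario, averaged over inner paths."""
    shocks = rng.normal(size=(len(x), n_inner))
    cashflows = np.exp(0.2 * x[:, None] + 0.1 * shocks)
    return cashflows.mean(axis=1)

# Nested Monte Carlo: many inner paths per outer scenario (expensive).
v_nested = inner_value(x, n_inner=500)

# Least Squares Monte Carlo: few inner paths, then a polynomial regression.
v_noisy = inner_value(x, n_inner=2)
basis = np.vander(x, N=3)                     # columns [x^2, x, 1]
coef, *_ = np.linalg.lstsq(basis, v_noisy, rcond=None)
v_lsmc = basis @ coef

for name, v in (("nested MC", v_nested), ("LSMC", v_lsmc)):
    print(name, "best estimate:", round(v.mean(), 4),
          "99.5% quantile:", round(np.quantile(v, 0.995), 4))
```

The regression step is what makes LSMC cheap: the costly inner valuation is replaced by a least-squares fit over all outer scenarios at once.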
402

Investigation of wireless local area network facilitated angle of arrival indoor location

Wong, Carl Monway 11 1900 (has links)
As wireless devices become more common, the ability to position a wireless device has become a topic of importance. Accurate positioning through technologies such as the Global Positioning System is possible for outdoor environments. Indoor environments pose a different challenge, and research continues to position users indoors. Due to the prevalence of wireless local area networks (WLANs) in many indoor spaces, it is prudent to determine their capabilities for the purposes of positioning. Signal-strength and time-based positioning systems have been studied for WLANs. Direction or angle of arrival (AOA) based positioning will be possible with multiple antenna arrays, such as those included with upcoming devices based on the IEEE 802.11n standard. The potential performance of such a system is evaluated. The positioning performance of such a system depends on the accuracy of the AOA estimation as well as the positioning algorithm. Two different maximum-likelihood (ML) derived algorithms are used to determine the AOA of the mobile user: a specialized simple ML algorithm, and the space-alternating generalized expectation-maximization (SAGE) channel parameter estimation algorithm. The algorithms are used to determine the error in estimating AOAs through the use of real wireless signals captured in an indoor office environment. The statistics of the AOA error are used in a positioning simulation to predict the positioning performance. A least squares (LS) technique as well as the popular extended Kalman filter (EKF) are used to combine the AOAs to determine position. The position simulation shows that AOA-based positioning using WLANs indoors has the potential to position a wireless user with an accuracy of about 2 m. This is comparable to other positioning systems previously developed for WLANs. / Applied Science, Faculty of / Engineering, School of (Okanagan) / Graduate
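As a rough illustration of the least-squares step of such a system, the sketch below computes a position fix from noisy angle-of-arrival measurements at known access points; it is not the thesis implementation. The access-point layout, the true user position and the 2-degree bearing noise are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # AP coordinates (m)
user = np.array([3.0, 6.0])                                            # true user position

# Bearings from each AP to the user, corrupted by AOA estimation error.
theta = np.arctan2(user[1] - aps[:, 1], user[0] - aps[:, 0])
theta += np.deg2rad(2.0) * rng.standard_normal(len(aps))

# Each bearing defines a line through its AP; stack the line equations and solve Ax = b.
A = np.column_stack([np.sin(theta), -np.cos(theta)])
b = np.sin(theta) * aps[:, 0] - np.cos(theta) * aps[:, 1]
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)

print("estimated position:", estimate, "error (m):", np.linalg.norm(estimate - user))
```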
403

Méthode numérique d'estimation du mouvement des masses molles / Numerical method for soft tissue motion assessment

Thouzé, Arsène 18 December 2013 (has links)
Le mouvement des masses molles est à la fois une source d'erreur en analyse cinématique et une source d'information en analyse de la dynamique articulaire. Leur effet sur la cinématique peut être numériquement minimisé et leur dynamique estimée seulement par la simulation car aucune méthode numérique ne permet de distinguer la cinématique des masses molles de celle de l'os. Le travail présenté dans ce mémoire propose de développer une méthode numérique pour distinguer ces deux cinématiques. Une méthode d'optimisation locale a d'abord été utilisée pour évaluer le mouvement des masses molles et comparée à l'os pour valider celle-ci. Les résultats ont montré une inadaptation de la méthode locale à évaluer quantitativement et analyser le mouvement des masses molles. L'incapacité de cette méthode vient du fait qu'elle ne prend pas en compte l'ensemble des composantes du mouvement des masses molles. Un modèle numérique du membre inférieur a été développé dans la seconde étude pour considérer l'ensemble de ces composantes. Ce modèle assure le calcul de la cinématique articulaire du membre inférieur et estime un plus grand mouvement des masses molles à partir du déplacement total des marqueurs. Ce déplacement de marqueur est plus le fait d'une composante à l'unisson que d'une composante propre du mouvement des masses molles. Cette composante à l'unisson induit un mouvement commun des marqueurs par rapport à l'os. Ce mouvement commun permet ainsi de déduire la cinématique des masses molles autour des axes anatomiques des os modélisés. Cette méthode numérique permet ainsi de distinguer la cinématique de l'os de celle des masses molles et offre une perspective d'étudier leur dynamique. / The movement of the wobbling masses is both a source of error in kinematic analysis and a source of information in joint kinetic analysis. Their effect on joint kinematics can be numerically minimized, but their kinetics can only be estimated through simulation, because no numerical method can distinguish the wobbling-mass kinematics from the bone kinematics. The work presented in this thesis develops a numerical method to distinguish these two kinematics. First, a local optimisation method was used to assess the movement of the wobbling masses and was compared to the bone in order to validate it. The results showed that the local method is unsuited to quantitatively assessing and analysing the movement of the wobbling masses. This inability stems from the fact that the method does not take into account all components of the wobbling-mass movement. A numerical model of the lower limb was developed in the second study in order to consider all of these components. This model computes the joint kinematics of the lower limb and estimates a larger wobbling-mass movement from the total marker displacement. This marker displacement is due more to an in-unison component than to a component specific to the wobbling-mass movement. The in-unison component induces a common displacement of the markers relative to the bone, from which the wobbling-mass kinematics about the anatomical axes of the modelled bones can be inferred. This numerical method thus distinguishes the bone kinematics from the wobbling-mass kinematics and opens the way to investigating their kinetics.
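A minimal sketch of the local-optimisation idea mentioned above, assuming it amounts to a least-squares rigid (bone) fit to a marker cluster whose residual is read as soft-tissue movement; the marker coordinates and wobble offsets are invented for the example, and this is not the thesis code.

```python
import numpy as np

def rigid_fit(ref, cur):
    """Least-squares rotation R and translation t mapping ref onto cur (Kabsch/SVD)."""
    ref_c, cur_c = ref - ref.mean(axis=0), cur - cur.mean(axis=0)
    u, _, vt = np.linalg.svd(ref_c.T @ cur_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cur.mean(axis=0) - R @ ref.mean(axis=0)
    return R, t

# Four skin markers in a reference pose (metres), then a rigid displacement of the
# underlying bone plus a small non-rigid "wobble" on three of the markers.
ref = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]])
cur = ref + np.array([0.05, 0.10, 0.0])
cur = cur + np.array([[0.002, 0.0, 0.0], [0.0, -0.001, 0.0], [0.0, 0.0, 0.003], [0.0, 0.0, 0.0]])

R, t = rigid_fit(ref, cur)
residual = cur - (ref @ R.T + t)   # what the rigid bone model cannot explain
print("soft-tissue residual per marker (m):\n", np.round(residual, 4))
```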
404

Výběr modelu na základě penalizované věrohodnosti / Variable selection based on penalized likelihood

Chlubnová, Tereza January 2016 (has links)
Selection of variables and estimation of regression coefficients in datasets where the number of variables exceeds the number of observations constitutes an often discussed topic in modern statistics. Today the maximum penalized likelihood method, with an appropriately selected function of the parameter as the penalty, is used for solving this problem. The penalty should evaluate the benefit of each variable and possibly mitigate or nullify the respective regression coefficient. The SCAD and LASSO penalty functions are popular for their ability to choose appropriate regressors and at the same time estimate the parameters of a model. This thesis presents an overview of up-to-date results on the characteristics of estimates obtained with these two methods, for both a small number of regressors and multidimensional datasets in a normal linear model. Because the amount of penalty, and therefore also the choice of model, is heavily influenced by the tuning parameter, the thesis further discusses its selection. The behavior of the LASSO and SCAD penalty functions for different values of the tuning parameter, and the possibilities for selecting it, is tested with various numbers of regressors on simulated datasets.
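As a small illustration of how the LASSO penalty nullifies some coefficients, the sketch below fits the penalized least-squares problem by coordinate descent with soft-thresholding; the simulated design, the true coefficient vector and the tuning-parameter value are arbitrary choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
y = X @ beta_true + rng.standard_normal(n)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]          # partial residual
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
    return beta

print("LASSO coefficients:", np.round(lasso_cd(X, y, lam=20.0), 2))
```

With a large enough tuning parameter, the coefficients of the irrelevant regressors are set exactly to zero, which is the variable-selection behaviour discussed in the abstract.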
405

[en] AN ALGORITHM FOR CURVE RECONSTRUCTION FROM SPARSE POINTS / [pt] UM ALGORITMO PARA RECONSTRUÇÃO DE CURVAS A PARTIR DE PONTOS ESPARSOS

CRISTIANE AZEVEDO FERREIRA 23 January 2004 (has links)
[pt] A reconstrução de curvas e superfícies a partir de pontos esparsos é um problema que tem recebido bastante atenção ultimamente. A não-estruturação dos pontos (ou seja, desconhecimento das relações de vizinhança e proximidade) e a presença de ruído são dois fatores que tornam este problema complexo. Para resolver este problema, várias técnicas podem ser utilizadas, como triangulação de Delaunay, reconstrução de iso-superfícies através de Marching Cubes e algoritmos baseados em avanço de fronteira. O algoritmo proposto consiste de quatro etapas principais: a primeira etapa é a clusterização dos pontos de amostragem de acordo com sua localização espacial. A clusterização fornece uma estrutura espacial para os pontos, e consiste em dividir o espaço em células retangulares de mesma dimensão, classificando as células em cheias (caso possuam pontos de amostragem em seu interior) ou vazias (caso não possuam pontos de amostragem em seu interior). A estrutura de dados gerada nesta etapa permite também obter o conjunto dos pontos de amostragem de cada uma das células. A segunda etapa é o processamento dos pontos através de projeções MLS. A etapa de pré-processamento visa reduzir ruído dos pontos de amostragem, bem como adequar a densidade de pontos ao nível de detalhe esperado, adicionando ou removendo pontos do conjunto inicial. A terceira etapa parte do conjunto das células que possuem pontos de amostragem em seu interior (células cheias) e faz a esqueletonização deste conjunto de células, obtendo, assim, uma aproximação digital para a curva a ser reconstruída. Este esqueleto é encontrado através do afinamento topológico das células que possuem pontos. A implementação do algoritmo de afinamento é feita de modo que o número de pontos em cada célula seja levado em consideração, removendo primeiro sempre as células com menor número de pontos. Na quarta etapa, a reconstrução da curva é finalmente realizada. Para tal, parte-se do esqueleto obtido na terceira etapa e constrói-se uma curva linear por partes, onde cada vértice é obtido a partir da projeção MLS do ponto médio de cada célula do esqueleto. / [en] Curve and surface reconstruction from sparse data has been recognized as an important problem in computer graphics. Unstructured data points (i.e., a set of points with no knowledge of connectivity or proximity) together with the presence of noise make this problem quite difficult. Several techniques have been proposed to solve it: some are based on Delaunay triangulation, others on implicit surface reconstruction or on advancing-front techniques. Our algorithm consists of four main steps. In the first step, a clustering procedure groups the sample points according to their spatial location. This procedure builds a spatial structure for the points by uniformly subdividing the plane into rectangular cells and classifying each cell into one of two categories: empty (when the cell contains no points) or not empty (otherwise). At this stage, a data structure is built so that the set of sample points belonging to a given rectangular cell can be queried. The second step processes the points with the Moving Least Squares (MLS) method. Its objective is not only to reduce the noise in the data, but also to adapt the point density to the desired level of detail by adding or removing points from the initial set. The third step builds the skeleton of the set of cells that contain sample points in their interior. This skeleton is in fact a digital approximation of the curve to be reconstructed. It is obtained with a topological thinning algorithm, implemented so that the number of points in each cell is taken into account: the cells with the fewest points are always removed first. In the last step, the curve is finally reconstructed. To do so, the skeleton obtained in the third step is used to construct a piecewise-linear approximation of the curve, where each vertex is obtained from the MLS projection of the midpoint of the corresponding skeleton cell.
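A toy sketch of the MLS projection step described above, assuming a simple variant that projects a point onto a line fitted to its Gaussian-weighted neighbourhood; the sample curve, noise level and kernel width h are invented, and this is not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 200)
points = np.column_stack([np.cos(t), np.sin(t)]) + 0.03 * rng.standard_normal((200, 2))

def mls_project(q, points, h=0.3):
    """Project q onto a line fitted to its neighbours with Gaussian weights."""
    d2 = ((points - q) ** 2).sum(axis=1)
    w = np.exp(-d2 / h ** 2)
    centroid = (w[:, None] * points).sum(axis=0) / w.sum()
    centered = points - centroid
    cov = (w[:, None] * centered).T @ centered     # weighted covariance
    _, vecs = np.linalg.eigh(cov)
    direction = vecs[:, -1]                         # dominant local direction
    return centroid + ((q - centroid) @ direction) * direction

q = np.array([1.05, 0.1])                           # noisy sample near the unit circle
print("projected point:", mls_project(q, points))
```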
406

L’adoption des innovations technologiques par les clients et son impact sur la relation client - Cas de la banque mobile - / The adoption of technological innovations by customers and its impact on customer relations - Case of mobile banking -

Cheikho, Avin 04 November 2015 (has links)
Au cours de ces dernières années, les technologies mobiles ont créé des conditions de marché très concurrentielles. Face à cette nouvelle conjoncture, les banques ont lancé la banque mobile, une innovation technologique en milieu bancaire comme une nouvelle opportunité à saisir. Cette étude pose une question liée au cœur des principaux problèmes rencontrés dans le domaine bancaire : qui investit le plus dans les TIC, et qui vise à développer des relations à long terme avec ses clients. Afin de produire une valeur ajoutée sur les investissements technologiques, il devient important pour les banques d'assurer l'adoption de ces services par leurs clients dans un premier temps et d'assurer la survie de ces services (la continuité de l'utilisation) par le développement des relations durables et rentables avec les clients dans un deuxième temps. Ceci signifie que la compréhension des comportements des clients nécessite deux phases : la phase « adoption » et la phase « post-adoption ». La thèse vise, d'une part, à explorer les facteurs influençant l'adoption de la banque mobile par les clients et, d'autre part, à formuler un cadre explicatif de l'effet de ces facteurs pour établir et améliorer des relations entre les banques et leurs clients. L'analyse des données recueillies par questionnaire administré en face à face auprès de 282 répondants, identifie trois segments de clients : non utilisateurs, utilisateurs et adopteurs. L'analyse explicative réalisée par la méthode PLS relève le rôle important joué par quatre facteurs : l'utilité perçue, le risque perçu, la sécurité perçue et l'effort attendu dans les deux phases. / In recent years, mobile technologies have created very competitive market conditions. Facing this new environment, banks have launched mobile banking, a technological innovation in the banking sector, as a new opportunity to seize. This study raises a question at the heart of the main problems encountered in banking, a sector that invests the most in ICT and aims to develop long-term relationships with its clients. To produce added value from technological investments, it becomes important for banks first to ensure the adoption of these services by their clients, and then to ensure the survival of these services (continuity of use) through the development of sustainable and profitable customer relationships. This means that understanding customer behaviour requires two phases: the "adoption" phase and the "post-adoption" phase. The thesis aims, first, to explore the factors influencing the adoption of mobile banking by customers and, second, to formulate an explanatory framework of the effect of these factors in order to establish and improve relations between banks and their customers. The analysis of data collected by questionnaires administered face to face with 282 respondents identifies three customer segments: non-users, users and adopters. The explanatory analysis carried out with the PLS method highlights the important role played by four factors in both phases: perceived usefulness, perceived risk, perceived security and expected effort.
407

Automatic Pain Assessment from Infants’ Crying Sounds

Pai, Chih-Yun 01 November 2016 (has links)
Crying is the means infants use to express their emotional state, and it gives parents and nurses a cue to the infant's physiological state. Many researchers have analyzed infants' crying sounds to diagnose specific diseases or identify the reasons for crying. This thesis presents an automatic crying-level assessment system that classifies infants' crying sounds, recorded under realistic conditions in the Neonatal Intensive Care Unit (NICU), as whimpering or vigorous crying. To analyze the crying signal, Welch's method and Linear Predictive Coding (LPC) are used to extract spectral features; the average and standard deviation of the frequency signal and the maximum power spectral density are additional spectral features used in classification. Three state-of-the-art classifiers, namely K-nearest Neighbors, Random Forests, and Least Squares Support Vector Machine, are tested in this work. The highest accuracy in classifying whimpering versus vigorous crying is 90%, obtained on the clean dataset (sampled from 10 seconds before to 5 seconds after scoring) with the K-nearest Neighbors classifier.
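A compact sketch of the kind of pipeline described above, pairing Welch's method for spectral features with a K-nearest-neighbours classifier; the synthetic "whimper" and "vigorous" clips, the 8 kHz sampling rate and the feature set are assumptions, and this is not the thesis code or data.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
fs = 8000  # assumed sampling rate (Hz)

def make_clip(vigorous):
    """Synthetic 1-second cry clip: louder, higher-pitched tone for vigorous crying."""
    t = np.arange(fs) / fs
    f0 = 500 if vigorous else 350
    amp = 1.0 if vigorous else 0.3
    return amp * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(fs)

def features(x):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    return [pxx.mean(), pxx.std(), pxx.max(), f[np.argmax(pxx)]]

X = [features(make_clip(v)) for v in ([True] * 20 + [False] * 20)]
y = [1] * 20 + [0] * 20                     # 1 = vigorous, 0 = whimper

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("predicted label for a new vigorous clip:", clf.predict([features(make_clip(True))]))
```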
408

Improved measure of orbital stability of rhythmic motions

Khazenifard, Amirhosein 30 November 2017 (has links)
Rhythmic motion is ubiquitous in nature and technology. Many motions of organisms, such as the heartbeat and walking, require stable periodic execution. The stability of rhythmic execution of human movement can be altered by neurological or orthopedic impairment. In robotics, successful development of legged robots heavily depends on the stability of the controlled limit cycle. An accurate measure of the stability of rhythmic execution is therefore critical to assessing tasks such as walking in human locomotion. Floquet multipliers have been widely used to assess the stability of a periodic motion. The conventional approach to extracting Floquet multipliers from measured data relies on the least squares method. We devise a new way to measure the Floquet multipliers with reduced bias and thus estimate orbital stability more accurately. We show that the conventional measure of orbital stability is biased in the presence of noise, which is inevitable in every experiment and observation. Compared with the previous method, the new method substantially reduces this bias, providing an acceptable estimate of orbital stability with fewer cycles, even under different noise distributions or higher or lower noise levels. The new method can provide an unbiased estimate of orbital stability within a reasonably small number of cycles. This is important for experiments with human subjects or clinical evaluation of patients that require effective assessment of locomotor stability when planning rehabilitation programs. / Graduate / 2018-11-22
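A small sketch of the conventional least-squares estimation of Floquet multipliers from cycle-to-cycle states on a Poincaré section, which the thesis takes as its baseline; the 2-D linear return map, its true multipliers (0.6 and 0.3) and the noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
A_true = np.array([[0.5, 0.2], [0.1, 0.4]])   # true multipliers: eigenvalues 0.6 and 0.3
x_star = np.array([1.0, 2.0])                  # fixed point of the return map

# Simulate noisy cycle-to-cycle states around the limit cycle.
states = [x_star + np.array([0.3, -0.2])]
for _ in range(100):
    states.append(x_star + A_true @ (states[-1] - x_star) + 0.01 * rng.standard_normal(2))
states = np.array(states)

# Least-squares fit of the linearised return map x_{k+1} - x* = A (x_k - x*),
# using the mean state as the estimate of the fixed point x*.
dev = states - states.mean(axis=0)
M, *_ = np.linalg.lstsq(dev[:-1], dev[1:], rcond=None)   # M is the transpose of A
print("estimated Floquet multipliers:", np.linalg.eigvals(M.T))
```

The noise-induced bias of exactly this kind of fit is what the improved estimator in the thesis is designed to reduce.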
409

Magnetic Rendering: Magnetic Field Control for Haptic Interaction

Zhang, Qi January 2015 (has links)
As a solution for mid-air haptic actuation with strong and continuous tactile force, Magnetic Rendering is presented: an intuitive haptic display method that applies an electromagnet array to produce a magnetic field in mid-air, where the force field can be felt as a magnetic repulsive force exerted on the hand through attached magnet discs. The magnetic field is generated by a specifically designed electromagnet array driven by direct current. By attaching small magnet discs to the hand, the tactile sensation can be perceived by the user. This method provides a strong tactile force at multiple points covering the user's hand and avoids cumbersome wired attachments, so it is suitable for a co-located visual and haptic display. In this work, the detailed design of the electromagnet array for haptic rendering is introduced; it is modelled and tested using Finite Element Method simulations. The model is characterized mathematically, and three methods for controlling the magnetic field are applied: direct control, system identification and adaptive control. The performance of the simulated model is evaluated in terms of magnetic field distribution, force strength, operation distance and force stiffness. The control algorithms are implemented and tested on a 3-by-3 and a 15-by-15 model. Simulations on the 15-by-15 model generate a haptic human face, resulting in a smooth force field and accurate force exertion on the control points.
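As a rough illustration of a direct-control style step for such an array, the sketch below solves a linear least-squares problem for coil currents that realise a target vertical field at a set of control points; the 3-by-3 layout, the point-dipole field model and all numerical values are assumptions, not the thesis design.

```python
import numpy as np

mu0 = 4e-7 * np.pi
coil_moment_per_amp = 0.05            # assumed dipole moment per unit current (A*m^2 per A)

# 3x3 coil grid in the z = 0 plane, with control points 5 cm above each coil.
xs = np.linspace(-0.06, 0.06, 3)
coils = np.array([[x, y, 0.0] for x in xs for y in xs])
points = coils + np.array([0.0, 0.0, 0.05])

def bz_per_amp(coil, point):
    """Vertical dipole-field component Bz at 'point' per ampere in 'coil'."""
    r = point - coil
    d = np.linalg.norm(r)
    m = np.array([0.0, 0.0, coil_moment_per_amp])
    b = mu0 / (4 * np.pi) * (3 * r * (m @ r) / d**5 - m / d**3)
    return b[2]

M = np.array([[bz_per_amp(c, p) for c in coils] for p in points])  # influence matrix
target = np.full(len(points), 2e-4)                                 # desired Bz (tesla)
currents, *_ = np.linalg.lstsq(M, target, rcond=None)
print("coil currents (A):", np.round(currents, 2))
```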
410

Výnosnost zemědělské půdy v závislosti na vybraných faktorech - ekonometrický model / The Productivity of Farmland depending on Chosen Elements

Partynglová, Soňa January 2010 (has links)
This thesis is focused on an analysis of the factors that influence wheat yields. It is divided into three parts: the first introduces the problem of wheat cultivation, the second concerns the methodology of building econometric models, and the third solves the problem as a whole. Because the data file is large, I reduce it by factor analysis. I then estimate a relevant econometric model by applying different econometric methods. This model shows the influence of technological, soil and climatic factors on wheat yields. Finally, I compare the observed values with the predicted ones according to soil granularity, climate and crop year.
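A brief sketch of the overall approach described above, using principal components as a stand-in for the factor analysis and ordinary least squares for the yield model; the simulated soil and climate variables and wheat yields are invented for the example and are not the thesis data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120
raw = rng.standard_normal((n, 8))                   # 8 correlated soil/climate measurements
raw[:, 4:] = 0.7 * raw[:, :4] + 0.3 * rng.standard_normal((n, 4))
wheat_yield = 5.0 + 0.8 * raw[:, 0] - 0.5 * raw[:, 1] + 0.4 * rng.standard_normal(n)

# Reduce the 8 variables to 3 components (a proxy for the factor scores).
raw_c = raw - raw.mean(axis=0)
_, _, vt = np.linalg.svd(raw_c, full_matrices=False)
scores = raw_c @ vt[:3].T

# Ordinary least squares of yield on the component scores plus an intercept.
X = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(X, wheat_yield, rcond=None)
print("intercept and component effects on yield:", np.round(beta, 3))
```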
