101

Utilisation d'ontologies comme support à la recherche et à la navigation dans une collection de documents / ONTOLOGY BASED INFORMATION RETRIEVAL

Sy, Mohameth François 11 December 2012 (has links)
Les ontologies offrent une modélisation des connaissances d'un domaine basée sur une hiérarchie des concepts clefs de ce domaine. Leur utilisation dans le cadre des Systèmes de Recherche d'Information (SRI), tant pour indexer les documents que pour exprimer une requête, permet notamment d'éviter les ambiguïtés du langage naturel qui pénalisent les SRI classiques. Les travaux de cette thèse portent essentiellement sur l'utilisation d'ontologies lors du processus d'appariement durant lequel les SRI ordonnent les documents d'une collection en fonction de leur pertinence par rapport à une requête utilisateur. Nous proposons de calculer cette pertinence à l'aide d'une stratégie d'agrégation de scores élémentaires entre chaque document et chaque concept de la requête. Cette agrégation, simple et intuitive, intègre un modèle de préférences dépendant de l'utilisateur et une mesure de similarité sémantique associée à l'ontologie. L'intérêt majeur de cette approche est qu'elle permet d'expliquer à l'utilisateur pourquoi notre SRI, OBIRS, estime que les documents qu'il a sélectionnés sont pertinents. Nous proposons de renforcer cette justification grâce à une visualisation originale où les résultats sont représentés par des pictogrammes, résumant leurs pertinences élémentaires, puis disposés sur une carte sémantique en fonction de leur pertinence globale. La Recherche d'Information étant un processus itératif, il est nécessaire de permettre à l'utilisateur d'interagir avec le SRI, de comprendre et d'évaluer les résultats et de le guider dans sa reformulation de requête. Nous proposons une stratégie de reformulation de requêtes conceptuelles basée sur la transposition d'une méthode éprouvée dans le cadre de SRI vectoriels. La reformulation devient alors un problème d'optimisation utilisant les retours faits par l'utilisateur sur les premiers résultats proposés comme base d'apprentissage. Nous avons développé une heuristique permettant de s'approcher d'une requête optimale en ne testant qu'un sous-espace des requêtes conceptuelles possibles. Nous montrons que l'identification efficace des concepts de ce sous-espace découle de deux propriétés qu'une grande partie des mesures de similarité sémantique vérifient, et qui suffisent à garantir la connexité du voisinage sémantique d'un concept. Les modèles que nous proposons sont validés tant sur la base de performances obtenues sur des jeux de tests standards, que sur la base de cas d'études impliquant des experts biologistes. / Domain ontologies provide a knowledge model in which the main concepts of a domain are organized through hierarchical relationships. In conceptual Information Retrieval Systems (IRS), where they are used to index documents as well as to formulate queries, their use helps overcome some of the ambiguities that penalize classical IRSs based on natural language. One of the contributions of this study is the use of ontologies within IRSs, in particular to assess the relevance of documents with respect to a given query. For this matching process, a simple and intuitive aggregation approach is proposed that incorporates a user-dependent preference model on one hand and semantic similarity measures attached to a domain ontology on the other. This matching strategy makes it possible to justify the relevance of the results to the user. To complete this explanation, semantic maps are built to help the user grasp the results at a glance. Documents are displayed as icons that detail their elementary scores. They are organized so that their graphical distance on the map reflects their relevance to the query, represented as a probe. As Information Retrieval is an iterative process, it is necessary to involve users in the control loop of result relevance in order to better specify their information needs. Inspired by proven strategies in vector models, we formalize ontology-based relevance feedback in the context of conceptual IRSs. This strategy consists in searching for a conceptual query that optimizes a trade-off between closeness to relevant documents and remoteness from irrelevant ones, modeled through an objective function. From a set of concepts of interest, a heuristic is proposed that efficiently builds a near-optimal query. This heuristic relies on two simple properties of semantic similarities that are proved to ensure semantic neighborhood connectivity. Hence, only an excerpt of the ontology DAG structure is explored during query reformulation. These approaches have been implemented in OBIRS, our ontology-based IRS, and validated in two ways: automatic assessment based on standard test collections, and case studies involving experts from the biomedical domain.
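A minimal sketch of this kind of concept-based matching is given below, assuming a toy is-a hierarchy, a Wu-Palmer-style similarity and a simple preference-weighted sum; these choices are illustrative assumptions and are not the exact similarity measure or aggregation operator used in OBIRS.

```python
# Toy "is-a" hierarchy: child -> parent (illustrative, not a real domain ontology).
PARENT = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "bird": "animal"}

def ancestors(c):
    chain = [c]
    while c in PARENT:
        c = PARENT[c]
        chain.append(c)
    return chain

def depth(c):
    return len(ancestors(c))

def similarity(c1, c2):
    """Wu-Palmer-like similarity on the toy hierarchy (an assumption, not the thesis's measure)."""
    common = set(ancestors(c1)) & set(ancestors(c2))
    lca_depth = max(depth(c) for c in common) if common else 0
    return 2.0 * lca_depth / (depth(c1) + depth(c2))

def relevance(query_concepts, doc_concepts, weights):
    """Elementary score per query concept = best match in the document annotation;
    global score = preference-weighted sum of the sorted elementary scores."""
    elem = sorted((max(similarity(q, d) for d in doc_concepts) for q in query_concepts),
                  reverse=True)
    return sum(w * s for w, s in zip(weights, elem))

# Example: a document annotated with {cat, animal} scored against the query {dog, bird}.
print(relevance(["dog", "bird"], ["cat", "animal"], weights=[0.7, 0.3]))
```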
102

Charakteristika interindividuálního vztahu (přítel vs. konkurent) jelena evropského a její vliv na agonistické chování a endokrinní zpětnou vazbu / Characteristics of inter-individual relationship (friend vs. rival) in red deer and its effect on agonistic behavior and endocrinological feedback

Peterka, Tomáš January 2014 (has links)
Red deer males aggregate into bachelor groups during the period of antler growth. Social position - rank - is unstable in these groups. Previous experiments revealed that rank, modulated by agonistic behaviour, influences antler growth and antler cycle timing. Antlers are secondary sexual characteristics of the deer family and among the fastest-growing tissues in vertebrates. Their development is modulated by the androgenic hormone testosterone. In our experiment, we observed the agonistic behaviour of 19 males. They were equipped with GPS collars, and observations lasted two hours in the evening and in the morning, once or twice a week, from the end of May to the end of August. Deer were handled regularly to take blood samples and download the telemetry data from the collars. Based on a statistical analysis, we found that 13 stags in our bachelor group kept similar inter-individual distances, which did not exceed 22 metres. These stags - the closest associates - differed in the sum of agonistic interactions. Those with 8 or fewer interactions were called Friends, while the subgroup with considerably more interactions was classified as Rivals. We found that the number of interactions depended on the average distance among males in both groups (Friends and Rivals). Rivals with increasing distance...
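The dyad classification described above (closest associates within 22 metres, Friends at 8 or fewer agonistic interactions) can be expressed as a small rule; the record layout and sample values below are assumptions made purely for illustration.

```python
# Hypothetical dyad records: mean inter-individual distance and agonistic interaction count.
dyads = [
    {"pair": ("A", "B"), "mean_distance_m": 14.0, "agonistic_interactions": 3},
    {"pair": ("A", "C"), "mean_distance_m": 20.5, "agonistic_interactions": 12},
    {"pair": ("B", "D"), "mean_distance_m": 35.0, "agonistic_interactions": 1},
]

def classify(dyad):
    # Only pairs keeping within ~22 m count as closest associates.
    if dyad["mean_distance_m"] > 22.0:
        return "not closest associates"
    # Friends: 8 or fewer agonistic interactions; Rivals: more.
    return "Friends" if dyad["agonistic_interactions"] <= 8 else "Rivals"

for d in dyads:
    print(d["pair"], classify(d))
```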
103

Análise das emissões veiculares em trajetos urbanos curtos com localização por GPS / Vehicle emission analysis in urban short distances with GPS localization

Manzoli, Anderson 27 March 2009 (has links)
Estuda-se o problema da emissão de gases por veículos automotores movidos a gasolina em trajetos curtos percorridos em cidades pequenas e médias. Nessa situação, o tipo de percurso que ocorre com mais frequência é curto, o que significa circulação de veículos com os motores ainda frios. Sabe-se que esta circunstância constitui a condição menos favorável no que se refere à emissão de gases poluentes. Tecnologias recentes, como GPS e analisadores de gases portáteis, foram usados para se obterem dados fundamentais para o trabalho, como velocidade, tempo, coordenada espacial, aceleração, mensuração da emissão dos poluentes pelo escapamento do veículo e temperatura do motor. Os testes foram feitos com o motor frio e quente para que fosse possível descrever o comportamento da emissão dos gases nas duas condições. Determinou-se experimentalmente a emissão de CO e HC em diversas condições e construiu-se um banco de dados sobre como esses parâmetros interferem na geração desses gases nos percursos estabelecidos, fornecendo uma previsão mais realista. Os resultados pretendem conscientizar os administradores públicos acerca da necessidade de se mensurar a real emissão de poluentes em qualquer cidade, pois o número reduzido de automóveis não significa diretamente a inexistência de problemas com a poluição. Especialmente no caso das cidades pequenas e médias, esse resultado pode subsidiar uma política preventiva, para que não se alcancem os níveis catastróficos que hoje são encontrados nas grandes cidades. / This work studies the problem of gas emissions from petrol-powered vehicles over short distances travelled in small and medium-sized towns. In this situation most trips are short, which means vehicles commonly circulate with the engine still cold, known to be the least favorable condition with respect to pollutant emissions. Recent technologies such as GPS and portable gas analyzers were used to obtain fundamental data: speed, time, spatial coordinates, acceleration, pollutant emissions measured at the vehicle exhaust, and engine temperature. The tests were performed with cold and hot engines so that the gas emission behavior could be described under both conditions. CO and HC emissions were determined experimentally under several conditions, and a database was built describing how these parameters affect gas emission on the established routes, providing a more realistic picture. These data can help public administrators recognize the need to measure actual pollutant emissions in any town, because a small number of automobiles does not directly imply the absence of pollution problems. Especially for small and medium-sized towns, this result may support preventive policies, so that the catastrophic levels found in big cities today are not reached.
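The cold- versus hot-engine comparison described above amounts to aggregating pollutant mass per kilometre by engine condition; the sketch below shows that aggregation on an assumed record layout, with placeholder values rather than measurements from the study.

```python
# Hypothetical trip records synchronized from GPS and a portable gas analyzer.
records = [
    {"trip": 1, "engine": "cold", "co_g": 4.2, "hc_g": 0.61, "distance_km": 2.3},
    {"trip": 2, "engine": "hot",  "co_g": 1.1, "hc_g": 0.12, "distance_km": 2.3},
]

def grams_per_km(recs, condition):
    """Total CO and HC per kilometre for trips run under one engine condition."""
    sel = [r for r in recs if r["engine"] == condition]
    dist = sum(r["distance_km"] for r in sel)
    return (sum(r["co_g"] for r in sel) / dist, sum(r["hc_g"] for r in sel) / dist)

print("cold (CO, HC) g/km:", grams_per_km(records, "cold"))
print("hot  (CO, HC) g/km:", grams_per_km(records, "hot"))
```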
104

Quantificando as inomogeneidades da matéria com Supernovas e Gamma-Ray Bursts / Quantifying the Matter Inhomogeneities with Supernovae and Gamma-Ray Bursts

Busti, Vinicius Consolini 12 March 2009 (has links)
Nesta dissertação estudamos como os efeitos das inomogeneidades da matéria (escura e bariônica) modificam as distâncias e afetam a determinação dos parâmetros cosmológicos. As inomogeneidades são fenomenologicamente descritas pelo parâmetro de aglomeramento alpha e quantificadas pela equação de distância proposta por Zeldovich-Kantowski-Dyer-Roeder (ZKDR). Além disso, utilizando amostras de Supernovas e Gamma-Ray Bursts, aplicamos um teste chi quadrado para vincular os parâmetros de dois modelos cosmológicos distintos, a saber: o modelo LambdaCDM plano e o modelo com criação de matéria escura fria. Para o modelo LambdaCDM plano, vinculamos os parâmetros alpha e OmegaM considerando um prior gaussiano para a constante de Hubble. Realizamos também uma análise detalhada envolvendo duas calibrações distintas associadas aos dados de Gamma-Ray Bursts: uma calibração para o modelo LambdaCDM plano e outra para o modelo cardassiano. Verificamos que os resultados são fracamente dependentes da calibração adotada. Uma análise conjunta envolvendo Supernovas e Gamma-Ray Bursts permitiu quebrar a degenerescência entre o parâmetro de aglomeramento alpha e o parâmetro de densidade da matéria OmegaM. Considerando a calibração dos Gamma-Ray Bursts para o modelo LambdaCDM plano, o melhor ajuste obtido foi alpha = 1.0 e OmegaM = 0.30, com os parâmetros restritos aos intervalos 0.78 < alpha < 1.0 e 0.26 < OmegaM < 0.36 (2sigma). Para o modelo com criação de matéria escura consideramos também um prior gaussiano para a constante de Hubble e as amostras de Supernovas e Gamma-Ray Bursts (calibrados para o modelo LambdaCDM plano). A degenerescência entre o parâmetro alpha e o parâmetro de criação gamma foi novamente quebrada através de uma análise conjunta das 2 amostras de dados. Para o melhor ajuste obtivemos alpha = 1.0 e gamma = 0.61, com os parâmetros restritos aos intervalos 0.85 < alpha < 1.0 e 0.56 < gamma < 0.66 (2sigma). / In this dissertation we study how the effects of matter (baryonic and dark) inhomogeneities modify the distances, thereby affecting the determination of cosmological parameters. The inhomogeneities are phenomenologically described by the clumpiness parameter alpha and quantified through the distance equation proposed by Zeldovich-Kantowski-Dyer-Roeder (ZKDR). Further, by using Supernovae and Gamma-Ray Bursts separately, a chi-squared analysis was performed to constrain the parameter space of two distinct cosmological models, namely the flat LambdaCDM model and the cold dark matter creation model. For the flat LambdaCDM model we constrained the parameters alpha and OmegaM by considering a Gaussian prior for the Hubble parameter. A detailed analysis was also performed involving two different calibrations associated with the Gamma-Ray Burst data: one calibration for the flat LambdaCDM model and one for the cardassian model. We verified that the results are weakly dependent on the adopted calibration. A joint analysis involving Supernovae and Gamma-Ray Bursts allowed us to break the degeneracy between the clumpiness parameter alpha and the matter density parameter OmegaM. Considering the calibration for the flat LambdaCDM model, the best fit obtained was alpha = 1.0 and OmegaM = 0.30, with the parameters restricted to the intervals 0.78 < alpha < 1.0 and 0.26 < OmegaM < 0.36 (2sigma). For the dark matter creation model we also adopted a Gaussian prior for the Hubble constant together with the Supernova and Gamma-Ray Burst samples (calibrated for the flat LambdaCDM model). The degeneracy between the clumpiness parameter alpha and the creation parameter gamma was again broken through a joint analysis of the two data samples. For the best fit we obtained alpha = 1.0 and gamma = 0.61, with the parameters restricted to the intervals 0.85 < alpha < 1.0 and 0.56 < gamma < 0.66 (2sigma).
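As a rough illustration, the sketch below integrates the standard flat-space Dyer-Roeder (ZKDR) equation for the angular-diameter distance with clumpiness parameter alpha and runs a toy chi-squared grid over (alpha, OmegaM). The H0 value and the "observed" distance moduli are placeholders, not the Supernova or Gamma-Ray Burst samples analysed in the dissertation.

```python
import numpy as np
from scipy.integrate import solve_ivp

C_KM_S, H0 = 299792.458, 72.0   # speed of light (km/s) and an assumed H0 prior (km/s/Mpc)

def zkdr_distance(z_max, alpha, omega_m):
    """ZKDR angular-diameter distance (Mpc) in flat LambdaCDM with clumpiness alpha."""
    E  = lambda z: np.sqrt(omega_m * (1 + z)**3 + 1 - omega_m)
    dE = lambda z: 1.5 * omega_m * (1 + z)**2 / E(z)
    def rhs(z, y):                      # y = [D_A, dD_A/dz]
        D, Dp = y
        return [Dp, -(2 / (1 + z) + dE(z) / E(z)) * Dp
                    - 1.5 * alpha * omega_m * (1 + z) * D / E(z)**2]
    sol = solve_ivp(rhs, (0.0, z_max), [0.0, C_KM_S / H0], rtol=1e-8)
    return sol.y[0, -1]

def distance_modulus(z, alpha, omega_m):
    d_l = (1 + z)**2 * zkdr_distance(z, alpha, omega_m)   # D_L = (1+z)^2 D_A
    return 5 * np.log10(d_l) + 25

# Toy data points (z, observed modulus, sigma) -- placeholders for illustration only.
data = [(0.3, 41.0, 0.2), (0.7, 43.2, 0.2), (1.5, 45.4, 0.3)]

def chi2(alpha, omega_m):
    return sum(((mu - distance_modulus(z, alpha, omega_m)) / s)**2 for z, mu, s in data)

grid = [(a, om, chi2(a, om)) for a in np.linspace(0.6, 1.0, 5)
                             for om in np.linspace(0.2, 0.4, 5)]
print("best (alpha, OmegaM, chi2):", min(grid, key=lambda t: t[2]))
```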
105

Distance Measurement Error Modeling for Time-of-Arrival Based Indoor Geolocation

Alavi, Bardia 03 May 2006 (has links)
In spite of major research initiatives by DARPA and other research organizations, precise indoor geolocation still remains a challenge for the research community. The core of this challenge is to understand the cause of large ranging errors in estimating the time of arrival (TOA) of the direct path between the transmitter and the receiver. Results of wideband measurements in a variety of indoor areas reveal that large ranging errors are caused by severe multipath conditions and frequent occurrence of undetected direct path (UDP) situations. Empirical models for the behavior of the ranging error, which we refer to as the distance measurement error (DME), and for its relation to the transmitter-receiver distance and the system bandwidth, are needed for the development of localization algorithms for precise indoor geolocation. The main objective of this dissertation is to design a direct empirical model for the behavior of the DME. To achieve this objective we provide a framework for DME modeling that relates the error to the transmitter-receiver distance and the system bandwidth. Using this framework, we first designed a set of preliminary models for the behavior of the DME based on the CWINS proprietary measurement-calibrated ray-tracing simulation tool. Then, we collected a database of 2934 UWB channel impulse response measurements at 3-8 GHz in four different buildings to incorporate a variety of building materials and architectures. This database was used to design more detailed and realistic models for the behavior of the DME. The DME is divided into two components, multipath DME (MDME) and UDP DME (UDME), and models for the behavior of each component are developed from the empirical data. These models reflect the sensitivity to bandwidth and show that increasing the bandwidth decreases the MDME. The UDME behavior is more complicated: it first decreases as the bandwidth increases, but beyond a certain bandwidth it starts to increase. In addition to these models, the average probability of a UDP occurrence was calculated through an analysis of the direct-path power versus the total power.
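A hedged sketch of a two-component ranging-error model in the spirit of the MDME/UDME decomposition is shown below; the functional forms, coefficients and UDP probability are illustrative assumptions, not the empirical models fitted in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dme(distance_m, bandwidth_mhz, p_udp=0.1, n=10_000):
    """Draw n ranging errors = multipath term + occasional UDP bias (illustrative model)."""
    # Multipath DME: zero-mean spread that grows with log-distance and shrinks with bandwidth.
    sigma_mdme = 0.3 * np.log10(1 + distance_m) * (500 / bandwidth_mhz) ** 0.5
    mdme = rng.normal(0.0, sigma_mdme, n)
    # UDP DME: positive bias added only when the direct path goes undetected.
    udp = rng.exponential(0.1 * distance_m, n) * (rng.random(n) < p_udp)
    return mdme + udp

errors = simulate_dme(distance_m=10, bandwidth_mhz=500)
print(f"mean error {errors.mean():.2f} m, std {errors.std():.2f} m")
```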
106

Representation and Learning for Sign Language Recognition

Nayak, Sunita 17 January 2008 (has links)
While recognizing some kinds of human motion patterns requires detailed feature representation and tracking, many of them can be recognized using global features. The global configuration or structure of an object in a frame can be expressed as a probability density function constructed using relational attributes between low-level features, e.g. edge pixels extracted from the regions of interest. The probability density changes with motion, tracing a trajectory in the latent space of distributions, which we call the configuration space. These trajectories can then be used for recognition using standard techniques such as dynamic time warping. Can these frame-wise probability functions, which usually have high dimensionality, be embedded into a low-dimensional space so that we can still estimate various meaningful probabilistic distances in the new space? Given these trajectory-based representations, can one learn models of signs in an unsupervised manner? We address these two fundamental questions in this dissertation. Existing embedding approaches do not extend easily to preserve meaningful probabilistic distances between the samples. We present an embedding framework that preserves probabilistic distances such as Chernoff, Bhattacharyya, Matusita, KL and symmetric KL based on dot products between points in this space, which results in computational savings. We experiment with the five different probabilistic distance measures and show the usefulness of the representation in three different contexts: sign recognition of 147 different signs (with a large number of possible classes), gesture recognition with 7 different gestures performed by 7 different persons (with person variations), and classification of 8 different kinds of human-human interaction sequences (with segmentation problems). Currently, researchers in continuous sign language recognition assume that the training signs are already available, and often those are manually selected from continuous sentences, which is tedious and consumes a lot of human time. We present an approach for automatically learning signs from multiple sentences by using a probabilistic framework to extract the parts of signs that are present in most of their occurrences and are robust to variations produced by adjacent signs. We show results by learning 10 signs and 10 spoken words from 136 sign language sentences and 136 spoken sequences respectively.
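The basic recognition machinery mentioned above can be sketched as follows: trajectories of frame-wise discrete distributions are compared with dynamic time warping, here using the Bhattacharyya distance as the frame-to-frame cost (one of the five distances listed); the tiny trajectories are placeholders.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

def dtw(seq_a, seq_b, dist):
    """Classic dynamic time warping over two sequences with a pluggable frame distance."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two short trajectories of frame-wise distributions (placeholder values).
traj_a = [np.array([0.7, 0.2, 0.1]), np.array([0.4, 0.5, 0.1])]
traj_b = [np.array([0.6, 0.3, 0.1]), np.array([0.3, 0.6, 0.1]), np.array([0.2, 0.7, 0.1])]
print(dtw(traj_a, traj_b, bhattacharyya))
```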
107

Détection de la convergence de processus de Markov

Lachaud, Béatrice 14 September 2005 (has links) (PDF)
This work studies the cutoff phenomenon for n-samples of Markov processes, with the aim of applying it to the detection of convergence of parallelized algorithms. In a first part, the sampled process is an Ornstein-Uhlenbeck process. We exhibit the cutoff phenomenon for the n-sample and then relate it to the convergence in distribution of the hitting time of a fixed level by the averaged process. In a second part, we treat the general case where the sampled process converges at exponential rate to its stationary distribution. We give precise estimates of the distances between the distribution of the n-sample and its stationary distribution. Finally, we explain how to address the hitting-time problems related to the cutoff phenomenon.
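A small simulation in the spirit of the first part is sketched below: n independent Ornstein-Uhlenbeck copies are run with an Euler-Maruyama scheme and the hitting time of a fixed level by the empirical mean is recorded; its concentration as n grows is the behavior linked to cutoff above. Parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_hitting_time(n, x0=5.0, rho=1.0, sigma=1.0, level=0.1, dt=1e-3, t_max=20.0):
    """First time the empirical mean of n OU copies dX = -rho*X dt + sigma*dW drops to `level`."""
    x = np.full(n, x0)
    t = 0.0
    while t < t_max:
        x += -rho * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        t += dt
        if x.mean() <= level:
            return t
    return np.inf

for n in (10, 100, 1000):
    print(n, round(mean_hitting_time(n), 3))
```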
108

Evaluating Emerging Markets : Swedish MNCs and their Evaluation Behavior

Lundström, Fredrik, Andersson, Christofer January 2007 (has links)
Country portfolio analysis, a commonly used tool among companies when evaluating potential target markets, focuses only on potential sales instead of including cost and risk in the equation. However, some researchers today have become aware of the importance of taking these costs and risks into account. One of these researchers is Pankaj Ghemawat, who has developed a framework called CAGE, intended as a complement to the country portfolio analysis model. In this thesis we study whether Swedish MNCs consider the factors suggested in the CAGE framework when evaluating emerging markets. Furthermore, we suggest some adjustments to the evaluation process. Data have been collected through a web-based questionnaire. The respondents were all headquarter managers in Swedish multinational corporations (MNCs). Our results show that the two most overlooked distances of the CAGE framework are the cultural and the geographic distances. Accordingly, the two most considered were the economic and administrative distances. This is in partial accordance with Ghemawat's theory, in which he states that the cultural distance is one of the two most overlooked distances. However, he presents administrative distance as the second most overlooked distance, which means that our thesis shows a somewhat different result from Ghemawat's findings. A company evaluating entry into an emerging market needs to consider the CPA model, but this is not enough. It also needs to take other factors into account: past and future growth of the market, predicted growth for the specific product or service in question, and the competitive situation in the emerging market. Considering these factors gives the company a complete picture of a market's profit potential. Thereafter, this potential needs to be adjusted for the distances in the CAGE framework.
110

Combinatoire and Bio-informatique : Comparaison de structures d'ARN et calcul de distances intergénomiques

Blin, Guillaume 17 November 2005 (has links) (PDF)
We present a set of results concerning two types of biological problems: (1) the comparison of RNA molecule structures and (2) the computation of intergenomic distances in the presence of duplicated genes. In this manuscript, we determine the algorithmic complexity of several problems related either to the comparison of RNA structures (edit distance, the APS problem, 2-interval pattern matching, RNA design) or to genome rearrangements (breakpoint and conserved-interval distances). The approach adopted for all of these problems was to find, whenever possible, fast exact algorithms answering the questions posed. For each problem where this did not seem possible, we tried to prove that it cannot be solved quickly, by showing that the problem in question is computationally hard. Where appropriate, we then pursued the study of the problem by providing, essentially, three types of results: (1) approximation, (2) parameterized complexity, and (3) heuristics. Throughout the manuscript we rely on notions from combinatorial optimization, mathematics, graph theory and algorithmics.
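One of the rearrangement measures mentioned above, the breakpoint distance, is sketched below for the duplicate-free case; the presence of duplicated genes, which is the hard setting studied in the thesis, is not handled by this simple count.

```python
def breakpoint_distance(genome_a, genome_b):
    """Count adjacencies of genome_a absent (in either orientation) from genome_b."""
    adjacencies_b = {frozenset(pair) for pair in zip(genome_b, genome_b[1:])}
    return sum(frozenset(pair) not in adjacencies_b
               for pair in zip(genome_a, genome_a[1:]))

# Two toy genomes over the same gene set, without duplicates.
print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # -> 2
```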
