  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

An optimisation approach to improve the throughput in wireless mesh networks through network coding / van der Merwe C.

Van der Merwe, Corna January 2011 (has links)
This study examined the effect of implementing Network Coding on the aggregated throughput of Wireless Mesh Networks (WMNs). WMNs are multi-hop wireless networks in which routing through any node is possible, so messages flow across points where they would have terminated in conventional wireless networks. User nodes in conventional wireless networks only transmit to and receive from an Access Point (AP) and discard any messages not intended for them. The result is an increased volume of network traffic through the links of a WMN. In addition, the dense collection of RF signals propagating through a shared wireless medium causes links to saturate at levels below their capacity, so methods that improve the utilisation of the shared wireless medium in WMNs need to be examined. Network Coding is a coding and decoding technique at the network layer of the OSI stack that aims to push back the limits of saturated links: bandwidth is shared simultaneously among separate message flows by combining these flows at common intermediate nodes. Network Coding decreases the number of transmissions needed to convey information through the network, improving the aggregated throughput. The research approach followed in this dissertation includes the development of a model that investigates the aggregated throughput performance of WMNs. The model's scenario followed a typical indoor WMN deployment, so the physical representation of the network elements included an indoor log-distance path-loss channel model to account for effects such as power absorption through walls and shadowing. Network functionality in the model was represented by a network flow programming problem.
The problem determines the optimal amount of flow on the links of the WMN, subject to constraints on link capacities and mass balance at each node. The model's functional requirements stated that multiple concurrent sessions were to be represented, which makes the network flow problem a multi-commodity flow problem. The requirements also stated that each session of flow should remain on a single path, which makes it an integer programming problem. The model's network flow problem is therefore mathematically equivalent to a multi-commodity integer programming problem, a class that is NP-hard. A heuristic method, Simulated Annealing, was implemented to solve the objective function of the model's network flow programming problem. The findings provide evidence that implementing Network Coding in WMNs nearly doubles the calculated aggregated throughput. This increase can be improved further by manipulating the dispersion of network traffic, which is achieved by using link-state rather than distance-vector methods to establish paths for the sessions of flow in the WMN. / Thesis (M.Ing. (Computer and Electronical Engineering))--North-West University, Potchefstroom Campus, 2012.
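The transmission saving that Network Coding exploits can be illustrated with the classic two-flow relay example. The sketch below is a hedged illustration of the general technique, not code from the dissertation; node roles and packet contents are hypothetical.

```python
# Two endpoints A and B exchange packets through a relay R. Without coding,
# R forwards each packet separately (4 transmissions in total); with coding,
# R broadcasts the XOR of both packets (3 transmissions) and each endpoint
# decodes by XOR-ing the coded packet with its own. Packet values are
# arbitrary illustrations.

def xor_bytes(p, q):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(p, q))

# Transmissions 1 and 2: A and B each send their packet to the relay.
p_a = b"hello"   # packet A wants delivered to B
p_b = b"world"   # packet B wants delivered to A

# Transmission 3: the relay broadcasts one coded packet to both endpoints.
coded = xor_bytes(p_a, p_b)

# Each endpoint recovers the other's packet using its own packet as the key.
decoded_at_b = xor_bytes(coded, p_b)   # B knows p_b, recovers p_a
decoded_at_a = xor_bytes(coded, p_a)   # A knows p_a, recovers p_b

assert decoded_at_b == p_a and decoded_at_a == p_b
print("transmissions with coding: 3, without: 4")
```

The same flow-combining idea, applied at common intermediate nodes across many sessions, is what raises the aggregated throughput bound on saturated links.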
53

Spatial heterogeneity in ecology

Mealor, Michael A. January 2005 (has links)
This project investigated the implications of spatial heterogeneity for the ecological processes of competition and infection. Empirical analysis of spatial heterogeneity was carried out using the lepidopteran species Plodia interpunctella. Using food media of different viscosity, it was possible to alter the movement rate of larvae. Soft foods allow a high larval movement rate, so individuals can disperse through the environment and avoid physical encounters with conspecifics. Harder foods lower the movement rate, restricting the ability of individuals to disperse away from birth sites and avoid conspecific encounters. Increasing food viscosity and lowering movement rate therefore makes uniformly distributed larval populations more aggregated and patchy. Different spatial structures changed the nature of intraspecific competition: in patchy populations individuals experienced lower growth rates and greater mortality because of the reduced food and space available within densely packed aggregations. At the population scale, the increased competition for food that individuals experience in aggregations emerges as longer generational cycles and reduced population densities. Aggregation also altered the outcome of interspecific competition between Plodia and Ephestia cautella. In food media that allowed high movement rates, Plodia had a greater survival rate than Ephestia because its higher movement rate allowed it to avoid intraspecific competition more effectively; its faster growth rate, and thus larger size, also allowed it to dominate interspecific encounters by either predating on Ephestia or interfering with its feeding. In food that restricts movement, the resulting aggregations cause Plodia to experience more intraspecific encounters relative to interspecific ones, reducing its competitive advantage and levelling the survival of the two species.
Spatial structure also affected the dynamics of a Plodia-granulosis virus interaction and the evolution of virus infectivity. Larval aggregation limited transmission to within host patches, keeping the overall prevalence of the virus low. However, potentially high rates of cannibalism and multiple infections within overcrowded host aggregations caused virus-induced mortality to be high, as indicated by the low host population density when the virus is present. Aggregated host populations also drive the evolution of lower virus infectivity: less infective virus strains maintain more susceptible hosts within the aggregation and so achieve a greater transmission rate. The pattern of variation in the resistance of Plodia interpunctella to its granulosis virus was found using two forms of graphical analysis. Variation was bimodal, with most individuals exhibiting either low or high levels of resistance, a pattern related to a resistance mechanism that is decreasingly costly to host fitness.
54

Global warehouse management : a methodology to determine an integrated performance measurement / Gestion globale des entrepôts logistiques : une méthodologie pour mesurer la performance de façon agrégée / Gerenciamento global de armazéns : uma metodologia para mensurar o desempenho de forma agregada

Hedler, Francielly 15 October 2015 (has links)
The growing complexity of warehouse operations has led companies to adopt large numbers of performance indicators, making their management increasingly difficult. It can be hard for managers to evaluate the overall performance of logistic systems, including the warehouse, because assessing interdependent indicators with distinct objectives is complex (e.g. a cost indicator should decrease while a quality indicator should be maximised), and this can bias the manager's evaluation of global warehouse performance. In this context, this thesis develops a methodology to achieve an integrated warehouse performance measurement. It encompasses four main steps: (i) the development of an analytical model of the performance indicators usually used for warehouse management; (ii) the definition of the relationships between indicators, both analytically and statistically; (iii) the aggregation of these indicators into an integrated model; (iv) the proposition of a scale to assess the evolution of warehouse performance over time according to the integrated model's results.
The methodology is applied to a theoretical warehouse to demonstrate its applicability. The indicators used to evaluate the warehouse come from the literature, and a database is generated so that the mathematical tools can be applied. The Jacobian matrix is used to define indicator relationships analytically, and principal component analysis to aggregate the indicators statistically. The final aggregated model comprises 33 indicators assigned to six different components, and the global performance indicator equation is obtained as the weighted average of these six components. A scale is developed for the global performance indicator using an optimisation approach to obtain its upper and lower bounds. The integrated model is tested on two different warehouse performance situations, and interesting insights about final warehouse performance are discussed. We therefore conclude that the proposed methodology reaches its objective, providing a decision-support tool that lets managers be more effective in global warehouse performance management without neglecting important information from the indicators.
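The statistical aggregation step described above can be sketched with plain numpy: standardize the indicators, extract principal components, and combine component scores into one global indicator weighted by explained variance. This is a toy illustration under stated assumptions, not the thesis's model; the data are random stand-ins for its 33-indicator database, and only six indicators and three components are used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical warehouse data: 50 monthly observations of 6 indicators
# (cost, quality, time, ...). Stand-in for the thesis's real database.
X = rng.normal(size=(50, 6))

# 1. Standardize so indicators with different units are comparable.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Principal component analysis via the covariance eigendecomposition.
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Keep the leading components and score each observation on them.
k = 3
scores = Z @ eigvecs[:, :k]                  # (50, k) component scores

# 4. Global performance indicator: weighted average of component scores,
#    weights proportional to explained variance, mirroring the thesis's
#    component-weighted aggregation.
weights = eigvals[:k] / eigvals[:k].sum()
global_indicator = scores @ weights          # one aggregate value per period

print(global_indicator.shape)  # (50,)
```

Tracking this single series over time is what the proposed scale, with optimisation-derived upper and lower bounds, then makes interpretable.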
55

Exclusão social: gestão estratégica de pessoas em duas subsidiárias de uma empresa multinacional / Social exclusion: strategic people management in two subsidiaries of a multinational company

Schubert, André de Paula January 2006 (has links)
This case study empirically investigates the existence of indicators suggesting that organisations with a focus on strategic human resource management are concerned with social exclusion. The objects of study are two subsidiaries, one Portuguese and one Brazilian, of a multinational enterprise in the emergency and first-aid services area. This exploratory research used a cross-sectional design with a longitudinal perspective, since it considered data for the years 2004 and 2005 together with in-depth interviews with current managers to assess and validate perceptions of the subject studied. The indicators were identified mainly from the individual and social rights and duties and the fundamental guarantees set out in the Brazilian and Portuguese constitutions, as well as in the draft European Constitution. The results point to substantial differences in management between the two subsidiaries, the Brazilian one being closer to the research propositions, and suggest that human resource departments, still acting in an instrumentalist way, constitute a great barrier to better social-inclusion practices and would be unprepared for employee-focused management. Although the study was carried out in an enterprise in a very specific field of activity, we believe its results can stimulate further investigations with the same objectives, contributing to a better understanding of the causes of social exclusion and of the organisations' role in this process.
56

Avaliação microeconômica do comportamento de investidores frente às alterações de condições de mercado: os determinantes da não racionalidade dos investidores no mercado de fundos brasileiros / A microeconomic evaluation of investor behaviour under changing market conditions: the determinants of investor non-rationality in the Brazilian fund market

Fernandez Gonzalez, Ramon Francisco 25 May 2015 (has links)
In this paper we seek to identify the determinants of demand for mutual funds in Brazil through the logit model, which is widely used in the theory of industrial organisation, making links with the main concepts of behavioural finance wherever possible. In this way we clarify the main variables that drive variations in market share in the mutual fund industry. We conclude that the main indicators investors observe at the moment of decision-making are the CDI rate, inflation, the real interest rate, and the variation of the dollar and of the stock market; moreover, the accumulated return of the last three months is the decisive factor in whether investors subscribe to or redeem an investment fund. Risk and expected-return variables, which we expected to have a strong impact, proved not significant for variations in share.
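The logit demand framework behind this analysis can be sketched in a few lines: the log of each fund's share relative to an outside option is regressed on the candidate drivers. The example below is a hedged, simulated illustration of that estimation mechanic; the regressors, coefficients, and data are invented, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Logit demand model in the industrial-organization tradition:
#   ln(s_j) - ln(s_0) = X @ beta + e
# where s_j is a fund's market share and s_0 the outside option.
# The three regressors below stand in for drivers such as the CDI rate,
# inflation, or trailing 3-month return; values are simulated.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
beta_true = np.array([0.5, 1.2, -0.8, 0.3])

# Simulated log share ratios with a small noise term.
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Least-squares estimate of the demand parameters.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Implied share for one observation: softmax against the outside good,
# so the fitted model maps utilities back to a share in (0, 1).
u = X[0] @ beta_hat
share = np.exp(u) / (1.0 + np.exp(u))
print(np.round(beta_hat, 2))
```

With real data, significance tests on `beta_hat` are what single out the decisive variables, as the study does for the trailing 3-month return.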
57

Datalagerstruktur inom psykiatrin : En analys av vårdens data på ett universitetssjukhus psykiatriavdelningar / Data warehouse structure in psychiatry : An analysis of healthcare data in a university hospital's psychiatric departments

Holmqvist, Oskar January 2020 (has links)
This report asks how well psychiatric care data suits a data warehouse structure. It opens with a background section containing a literature study, followed by method and implementation sections describing interviews with department heads, the operations manager, and an operations developer at a Swedish university hospital's psychiatry department. Observations and analysis of the hospital's data, together with the respondents' answers, produced a result showing that this hospital does not have a good basis for a data warehouse structure, although re-prioritisation could improve this. The result was based on research by Inmon (2005) and four of the categories he considers most relevant to a data warehouse structure, together with data collected from the university hospital. It also emerged that all Swedish hospitals seem to have problems with their systems and that the entire Swedish healthcare system is in an upgrade phase; such an investment should prioritise creating working data warehouse structures so that the results of care can be analysed. Right now decisions are being made blindly, and this will not change unless change is made.
58

Search and Aggregation in Big Graphs / Recherche et agrégation dans les graphes massifs

Habi, Abdelmalek 26 November 2019 (has links)
Recent years have witnessed renewed interest in the use of graphs as a reliable means of representing and modeling data across many fields of computer science. For massive datasets in particular, graphs have emerged as a promising alternative to relational databases, and querying data graphs is a crucial task for exploring the knowledge these datasets contain. This dissertation investigates two main problems. The first part addresses the problem of detecting patterns in large graphs: the top-k graph pattern matching problem, which seeks the k best matches of a pattern graph in a data graph. We introduce a new graph pattern matching model, Relaxed Graph Simulation (RGS), which identifies significant matches while tolerating a bounded structural gap, thereby avoiding the empty-answer problem. We formalize and study the top-k matching problem under two classes of ranking functions, relevance (similarity between the pattern and a match) and diversity (dissimilarity among matches). We also consider the diversified top-k matching problem and propose a diversification function that balances relevance and diversity. Moreover, we provide efficient algorithms, based on optimization strategies, to compute the top-k and diversified top-k matches under the proposed model. The approach is efficient in terms of search time and flexible in terms of applicability; the complexity analysis of the algorithms and extensive experiments on real-life datasets demonstrate both the effectiveness and the efficiency of the proposed approaches. The second part tackles graph querying under the aggregated search paradigm, for the particular case of trees, i.e. query processing in XML documents. Given a query tree, the goal is to find matching patterns in one or more XML documents and combine them into a single aggregate. We first motivate this paradigm and explain its potential benefits over traditional querying approaches. We then propose a new method for aggregated tree search, based on an approximate tree matching algorithm over several tree fragments, that builds, to the extent possible, a coherent and more complete answer by combining several partial results. Experiments on several real-life datasets show that the proposed solutions are effective in terms of relevance and result quality.
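The relevance/diversity trade-off described above can be illustrated with a small greedy selection sketch. This is a generic max-marginal-relevance-style heuristic, not the thesis's RGS-based algorithm; the candidate items, the similarity function, and the 0.5 trade-off weight are all assumptions made for illustration.

```python
# Hypothetical greedy sketch of diversified top-k selection: at each step, pick
# the candidate maximizing a trade-off between its own relevance score and its
# maximum similarity to the answers already selected.
def diversified_top_k(relevance, similarity, k, lam=0.5):
    selected, remaining = [], list(relevance)
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda c: lam * relevance[c]
            - (1 - lam) * max((similarity(c, s) for s in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy candidates: "a1"/"a2" are near-duplicates (same group), "b1" differs.
rel = {"a1": 0.9, "a2": 0.85, "b1": 0.7}
sim = lambda x, y: 1.0 if x[0] == y[0] else 0.0
print(diversified_top_k(rel, sim, k=2))  # picks "a1", then the dissimilar "b1"
```

With pure relevance ranking the result would be the redundant pair ("a1", "a2"); the diversity penalty is what pulls "b1" into the top-2.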

Prédiction des taux de décomposition des litières végétales par les traits fonctionnels agrégés / Using the biomass-ratio hypothesis to predict mixed-species litter decomposition

Tardif, Antoine January 2014 (has links)
Understanding ecosystem functioning is a key goal in ecology, especially in the context of global change. To better predict ecosystem processes, I tested the accuracy and the limits of Grime's biomass-ratio hypothesis (BMRH) and of a novel idiosyncratic annulment hypothesis (IAH) introduced in this thesis. I applied the biomass-ratio principle to functional traits, using community-weighted means (CWM) to estimate the aggregate response of species in mixture. Taking the decomposition of mixed-species litters as a biological model, I asked the following questions: (1) does the BMRH accurately predict the decomposition rates of mixed-species litters? (2) does the variability of these rates decrease with increasing species richness (SR) beyond what is expected from purely mathematical causes (IAH)? (3) does the variability of rates between mixtures decrease as the site's abiotic conditions become less favourable for decomposition? (4) since more functionally contrasted mixtures are expected to develop more interactions, does the deviation from prediction increase with the functional dispersion of the mixtures ("FDis", Laliberté & Legendre 2010)? The thesis includes two litterbag decomposition experiments: (1) in microcosms at Sherbrooke (QC, Canada), with litters from six tree species decomposing alone and in mixtures, and (2) at three climatically contrasted sites in the region of Clermont-Ferrand (France), with litters from four herbaceous species decomposing alone and in mixtures. Although both positive and negative deviations from the predicted rates occurred at all levels of SR, the BMRH described the average response of mixed-species litters well. Although the IAH was rejected, the results showed a convergence of the observed rates toward the CWM-based predictions with (1) increasing SR in the mixtures, (2) increasing spatial scale of the study, and (3) a climate less favourable to decomposition. Finally, although litter interactions correlated with functional dispersion, this relationship was not generalizable, and the hypothesis of a positive correlation between FDis and the deviations from the BMRH was rejected.
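The community-weighted-mean prediction tested above is simple enough to state in a few lines. The sketch below, with hypothetical species names and monoculture rates, shows how a mixture's decomposition rate is predicted under the biomass-ratio hypothesis as the biomass-weighted mean of the component monoculture rates.

```python
# Biomass-ratio (CWM) prediction: the expected decomposition rate of a litter
# mixture is the sum over species of (biomass proportion * monoculture rate).
def cwm_prediction(proportions, monoculture_rates):
    assert abs(sum(proportions.values()) - 1.0) < 1e-9  # proportions sum to 1
    return sum(p * monoculture_rates[sp] for sp, p in proportions.items())

# Hypothetical equal-mass two-species mixture, rates in yr^-1:
k_pred = cwm_prediction({"maple": 0.5, "oak": 0.5},
                        {"maple": 0.30, "oak": 0.10})
print(k_pred)  # 0.2
```

A measured mixture rate above or below this prediction is the "deviation" discussed in the abstract, and is attributed to interactions between the litters.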

Improving Knowledge of Truck Fuel Consumption Using Data Analysis

Johnsen, Sofia, Felldin, Sarah January 2016 (has links)
Research has established the large potential of big data and the value it has brought to various industries. Because big data, when handled and analyzed in the right way, can reveal information that supports decision making in an organization, this thesis is conducted as a case study at an automotive manufacturer with access to large amounts of customer usage data from its vehicles. The motivation for analyzing this kind of data rests on the cornerstones of Total Quality Management, with the end objective of increasing customer satisfaction with the products or services concerned. The case study includes a data analysis exploring whether, and how, patterns in what affects fuel consumption can be revealed from aggregated customer usage data of trucks linked to truck applications. Based on the case study, conclusions are drawn about how a company can use this type of analysis and how to handle the data in order to turn it into business value. The data analysis identifies properties describing truck usage using Factor Analysis and Principal Component Analysis; one property in particular is concluded to be important, as it appears in the results of both techniques. Based on these properties, the trucks are clustered using k-means and hierarchical clustering, which reveals groups of trucks in which the importance of the properties varies. Owing to the homogeneity and complexity of the chosen data, the clusters of trucks cannot be linked to truck applications; this would require more easily interpretable data. Finally, the importance of the properties for fuel consumption within the clusters is explored through model estimation, comparing Principal Component Regression (PCR) with the two regularization techniques Lasso and Elastic Net. PCR results in poor models that are difficult to evaluate, whereas the two regularization techniques outperform PCR, both yielding a higher and very similar explained variance.

The three techniques show no obvious similarities in their models, so no conclusions can be drawn about what is important for fuel consumption. During the data analysis, many problems with the data are discovered and linked to managerial and technical issues of big data. For example, some parameters of interest for the analysis cannot be used, which likely contributes to the inability to obtain unanimous results from the model estimations. It is also concluded that the data was not originally intended for this type of large-population analysis, but rather for testing and engineering purposes. Nevertheless, this type of data still contains valuable information and can be used if managed in the right way. The case study shows that using the data for more advanced analysis requires a big-data plan at a strategic level in the organization. Such a plan summarizes the suggested solution to the organization's managerial big-data issues: it describes how to handle the data, how to design the analytic models that reveal the information, and the tools and organizational capabilities needed to support the people using the information.
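The dimensionality-reduction and clustering steps of such an analysis can be sketched as follows. The data here is synthetic with a planted two-group structure (the customer usage logs are proprietary), and a library such as scikit-learn would normally be used; this NumPy-only version is a minimal, self-contained illustration, not the thesis's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 trucks, 5 hypothetical usage parameters
X[:50, :2] += 8.0               # plant two usage groups differing in two parameters

Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each parameter
U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA via SVD
explained = s**2 / (s**2).sum()                   # variance explained per PC
scores = Z @ Vt[:2].T                             # trucks in the 2-PC space

# Plain k-means (k=2) on the PC scores, seeded with one truck per planted group
centers = scores[[0, 50]].copy()
for _ in range(20):
    labels = ((scores[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([scores[labels == j].mean(axis=0) for j in range(2)])

print(explained[0].round(2), np.bincount(labels))
```

Because the two planted parameters are strongly correlated through group membership, the first principal component captures the group contrast, and k-means on the PC scores recovers the two groups of trucks.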
