841

Anatomy of the SIFT method / L'Anatomie de la méthode SIFT

Rey Otero, Ives 26 September 2015 (has links)
This dissertation is an in-depth analysis of the SIFT method, the most popular image comparison method. By proposing a practical sampling of the Gaussian scale-space, it was also the first method to put scale-space theory into practice and to exploit its invariance to changes of scale. SIFT associates with each image a list of scale-, rotation- and translation-invariant features, which can be compared across images to match them. Given its countless applications and its intimidating number of variants, studying an algorithm published more than a decade ago may seem surprising. It appears, however, that little has been done to really understand this central algorithm and to establish rigorously how much room it leaves for improvement in high-precision image matching. Our analysis is organized in four parts. The first addresses the exact computation of the Gaussian scale-space, which is at the heart of SIFT as well as of most of its competitors. The second is a meticulous dissection of the long chain of transformations that constitutes the SIFT method: every design parameter, from the extraction of invariant keypoints to the computation of feature vectors, is documented and its influence analyzed. This dissection is accompanied by an online publication of the algorithm, with C source code and a demonstration platform that lets the reader explore the influence of each parameter. In the third part, using this documented implementation, we define a rigorous experimental framework to verify whether SIFT reliably and stably detects the extrema of the continuous scale-space from the discrete grid, and which sampling parameters influence the stability of the extracted keypoints. Practical conclusions follow on the correct sampling of the Gaussian scale-space and on strategies for filtering out unstable detections. The same framework is then used to analyze the influence of image perturbations such as aliasing, blur and noise. This analysis demonstrates that, despite the many methods claiming to outperform SIFT, there is in fact limited room for improvement for SIFT and for any variant that extracts keypoints from a scale-space; it also shows that oversampling the scale-space improves extremum extraction, and that restricting detection to the larger scales improves robustness to image perturbations. The last part concerns the performance evaluation of keypoint detectors. The most widely used performance metric is repeatability. We show that this popular criterion is biased: it favors methods producing redundant (overlapping) detections. To remove this bias we propose an amended variant that takes the spatial distribution of detections into account, and we use it to revisit a classic benchmark. Once redundancy is accounted for, SIFT outperforms most of its more recent competitors, corroborating the unabating interest in SIFT and the need for a thorough scrutiny of this method.
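To make the machinery described in this abstract concrete, here is a minimal Python sketch of Gaussian scale-space construction and difference-of-Gaussians (DoG) extremum detection, the first stage of SIFT. It is an illustration only, not the thesis's documented C implementation; the parameter names (sigma0, n_scales) and the scale progression are assumptions.

```python
# Sketch: build a small Gaussian scale-space and list 3x3x3 DoG extrema.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(image, sigma0=1.6, n_scales=5):
    """Return (scale, row, col) indices of local DoG extrema."""
    sigmas = [sigma0 * 2 ** (s / (n_scales - 2)) for s in range(n_scales)]
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    dog = stack[1:] - stack[:-1]          # difference of adjacent Gaussian levels
    found = []
    for s in range(1, dog.shape[0] - 1):  # compare each sample to its 26 neighbors
        for i in range(1, dog.shape[1] - 1):
            for j in range(1, dog.shape[2] - 1):
                cube = dog[s - 1:s + 2, i - 1:i + 2, j - 1:j + 2]
                if dog[s, i, j] in (cube.max(), cube.min()):
                    found.append((s, i, j))
    return found

img = np.random.default_rng(0).random((48, 48))   # stand-in for a real image
print(len(dog_extrema(img)), "candidate keypoints")
```

The real method adds octave subsampling, sub-pixel refinement of each extremum, and the filtering of unstable low-contrast and edge responses, which is exactly the stage whose reliability the third part of the thesis quantifies.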
842

Essays on accounting and incentives in Chinese equity markets

Zhu, Yin January 2015 (has links)
In this thesis, I examine accounting issues in the Chinese context, with a particular focus on the role of government. The thesis consists of three empirical essays, examining how the state coordinates executive compensation among state-owned enterprises (essay 1), how the government regulates the dividend payouts of listed firms (essay 2), and how the delisting regulation influences the accounting choices of listed firms (essay 3).

The first essay examines relative performance evaluation (RPE) in China. Previous studies of RPE in executive compensation in Western developed markets have produced mixed findings, partly because the dispersion of share ownership in Western capital markets does not closely correspond to the single-principal/multi-agent theoretical setting assumed by Holmstrom (1982). In this study, I exploit the existence of a large number of state-owned enterprises (SOEs) in China to examine RPE in a setting closer to that theoretical assumption. I find that SOEs are more likely than non-SOEs to use RPE for executive compensation. This is consistent with better cross-firm coordination in executive contracting among SOEs under a common "state" principal than among non-SOEs with dispersed principals similar to Western firms. Furthermore, I find a more pronounced RPE effect among SOEs that are larger or have poorer past performance, implying that the state principal has greater incentives to monitor strategically important firms or those in distress.

The second essay examines the market reaction to, and earnings management choices around, regulatory changes requiring a higher minimum dividend payout in China, to shed new light on the determinants of dividend payout policy. I find that the market reaction is more positive for firms that paid less than the new required minimum than for those that paid more, consistent with agency-cost explanations of dividend payout. In addition, I find that low dividend payers exhibit a greater tendency to manage their earnings downwards to comply with the earnings-based threshold, and that investors can "see through" such earnings management behaviors. My findings support the view of DeAngelo, DeAngelo and Skinner (2009) that agency costs of free cash flow retention are an important part of the dividend payout story.

The third essay explores the earnings-based delisting rule in China, which provides particularly strong motivation to manage earnings above the loss/profit threshold. I identify two groups of firms that successfully avoid Special Treatment ("ST") status: firms with a one-year loss before returning to profit, and firms with consecutive small profits. I provide a comprehensive examination of earnings management, covering accruals management, real earnings management and non-operating income, to investigate whether Chinese firms manage earnings either to avoid reporting a loss or to avoid reporting two consecutive losses. Although the results for the two groups are mixed and sensitive to the research design, this study provides insights into earnings management induced by government regulation.
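As a concrete (and entirely simulated) illustration of the RPE test described in the first essay, the sketch below runs the canonical specification: pay is regressed on own-firm performance and peer-group performance, and RPE predicts a negative peer coefficient. The data-generating coefficients are invented for demonstration, not estimates from the thesis.

```python
# Simulate and test relative performance evaluation (RPE) in pay setting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
peer_perf = rng.normal(size=n)                     # peer-group average performance
own_perf = 0.6 * peer_perf + rng.normal(size=n)    # own performance shares a common shock
pay = 1.0 * own_perf - 0.5 * peer_perf + rng.normal(size=n)  # RPE built in

X = sm.add_constant(np.column_stack([own_perf, peer_perf]))
fit = sm.OLS(pay, X).fit()
print(fit.params)   # roughly [0, 1.0, -0.5]: own performance rewarded, peer shock filtered out
```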
843

Fuzzy State Reservoir Operation Models For Irrigation

Kumari, Sangeeta 18 July 2016 (has links) (PDF)
Efficient management of limited water resources in an irrigation reservoir system is necessary to increase crop productivity. To achieve this, the reservoir release policy should be integrated with an optimal allocation of water to crops, and variations in hydrologic variables such as reservoir inflow, soil moisture, reservoir storage, rainfall and evapotranspiration must be considered in the reservoir operating policy model. Uncertainties due to imprecision, subjectivity, vagueness and lack of adequate data can be handled using fuzzy set theory. A fuzzy stochastic dynamic programming (FSDP) model, with reservoir storage and the soil moisture of the crops as fuzzy state variables and inflow as a stochastic variable, is developed to obtain a steady-state reservoir operating policy. The model integrates the reservoir operating policy with crop water allocation decisions by maintaining storage continuity and the soil moisture balance. Release decisions are made in 10-day periods and water is allocated to the crops on a daily basis. Compared with the classical stochastic dynamic programming (SDP) model and a conceptual operating policy model, the FSDP model generally releases less water from the reservoir while maintaining lower soil moisture stress. However, the steady-state policy obtained with the FSDP model may not perform well in short-term reservoir simulation.

A fuzzy-state short-term reservoir operation policy model, with storage and the soil moisture of the crops as fuzzy variables, is therefore developed to obtain a real-time release policy using forecasted inflow and forecasted rainfall. Its distinguishing features are that it accounts for (a) the spatial variation of soil moisture and rainfall, using gridded rainfall forecasts, and (b) the ponding-depth requirement of paddy. Compared with a conceptual operating policy model, the fuzzy-state real-time operation model is found to be the more suitable for short-term real-time irrigation operation: it maintains high reservoir storage during most 10-day periods of the year and results in slightly lower annual releases. The effect of inflow forecast uncertainty is examined using different sets of forecasted inflows, and system performance is found to be quite sensitive to these uncertainties. The use of satellite-based gridded soil moisture in the real-time operation model brings the simulation closer to realistic conditions.

Further, three performance measures, viz. fuzzy reliability, fuzzy resiliency and fuzzy vulnerability, are developed to evaluate the performance of the irrigation reservoir system under a specified operating policy, as sketched below. A fuzzy set with an appropriate membership function describes the working and failed states, accounting for the system being partly working and partly failed; the degree of failure of the irrigation reservoir system is defined from the evapotranspiration deficit in a period. Including fuzziness in the performance measures enables a realistic representation of uncertainty in the state of the system. A case study of the Bhadra reservoir system in Karnataka, India, demonstrates the model applications.
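As a hedged illustration of the fuzzy performance measures just described, the sketch below computes a fuzzy reliability from per-period evapotranspiration deficits. The piecewise-linear membership function and its thresholds are assumptions for demonstration, not the thesis's calibrated definitions.

```python
# Fuzzy reliability: average membership of the system in the "working" state.
def working_membership(et_deficit, low=0.1, high=0.5):
    """1.0 = fully working, 0.0 = fully failed, linear in between."""
    if et_deficit <= low:
        return 1.0
    if et_deficit >= high:
        return 0.0
    return (high - et_deficit) / (high - low)

def fuzzy_reliability(deficits):
    grades = [working_membership(d) for d in deficits]
    return sum(grades) / len(grades)

# Relative ET deficits for six 10-day periods of a simulated season.
print(fuzzy_reliability([0.05, 0.20, 0.40, 0.60, 0.10, 0.00]))
```

A crisp reliability would count each period as simply working or failed; the membership grade lets a period with moderate moisture stress count as, say, 0.75 working, which is what "partly working and partly failed" means here.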
844

Sur des modèles pour l’évaluation de performance des caches dans un réseau cœur et de la consommation d’énergie dans un réseau d’accès sans-fil / On models for performance analysis of a core cache network and power save of a wireless access network

Choungmo Fofack, Nicaise Éric 21 February 2014 (has links)
The Internet is a real ecosystem: it grows, evolves and adapts to users' needs in terms of communication, connectivity and ubiquity. In the last decade the communication paradigm has shifted from traditional host-to-host interactions to a host-to-content model, while various wireless and networking technologies (3/4G smartphones and networks, online media streaming, social networks, clouds, Big Data, information-centric networks) have emerged to enhance content distribution. This development has brought scalability and energy-efficiency issues to light, which can be formulated as follows: how can we design or optimize such large-scale distributed systems so as to achieve and maintain high-speed access to content while (i) reducing congestion and energy consumption in the network and (ii) adapting to the temporal locality of user demand in a quasi-permanent-connectivity setting? In this thesis we focus on two solutions proposed to answer this question: in-network caching, for scalability, and power-save protocols, for energy efficiency. Specifically, we propose analytic models for the design of core cache networks and for modeling energy consumption in wireless access networks. Our studies show that the performance of real core cache networks can be predicted with absolute relative errors on the order of 1% to 5%, and that substantial energy savings can be achieved by mobile devices and base stations: as much as 70% to 90% of the energy cost in cells, under realistic traffic load and the considered parameter settings.
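The abstract does not spell out the cache models themselves, so the following is a plausible illustration only: analytic studies of LRU cache networks in this literature often rest on the Che approximation, which predicts per-content hit probabilities from request rates and cache size. Whether the thesis uses this particular approximation is an assumption here.

```python
# Che approximation for a single LRU cache: find the characteristic time T
# at which expected occupancy equals the cache size, then
# hit(i) = 1 - exp(-rate_i * T).
import math
from scipy.optimize import brentq

def che_hit_probabilities(rates, cache_size):
    def excess_occupancy(T):
        return sum(1 - math.exp(-lam * T) for lam in rates) - cache_size
    T = brentq(excess_occupancy, 1e-9, 1e9)      # root of the occupancy equation
    return [1 - math.exp(-lam * T) for lam in rates]

# Zipf(0.8) popularity over 1000 contents, cache holding 100 of them.
rates = [1 / i ** 0.8 for i in range(1, 1001)]
hits = che_hit_probabilities(rates, 100)
print(sum(h * r for h, r in zip(hits, rates)) / sum(rates))  # overall hit ratio
```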
845

Talent Management v organizaci / Talent management in company

Pecháčková, Marcela January 2012 (has links)
This master's thesis focuses on the current topic of talent management. The thesis is divided into two parts, one theoretical and one practical. The first covers the basic terms of talent management: evaluation, development and motivation, career management, and the retention of talented employees within a company; secondary data were used to obtain the necessary information. The second part analyses the current state of human resources management in the company. The analysis is based on empirical research consisting of interviews with the relevant people in company management and a review of internal methodological materials. As a next step, employee satisfaction was analysed through questionnaires. Based on these data, I suggest corrective measures to improve particular human resources management activities so that talent management can be introduced, which is the main goal of the thesis. Building on these findings and recommendations, I also propose a complete design for introducing talent management in the company. The final part sums up the results and the recommendations for the company.
846

Avaliação externa baseada no pluralismo epistemológico : um estudo sobre o tema "Ser Humano" e "Saúde" no estado de Sergipe / External evaluation based on epistemological pluralism: a study on the themes "Human Being" and "Health" in the state of Sergipe

Souza, Sanny Santos de 31 March 2016 (has links)
Science education is marked by conceptual plurality, a fundamental aspect of learning. Assessment, however, is rarely associated with these theoretical perspectives: most exams verify students' knowledge against a single parameter, a single answer considered correct. Tests that take into account the diversity of knowledge, its existence and its influence on learning are lacking. This study therefore aims to construct and validate questions for an instrument to evaluate school performance in elementary education in Sergipe around the themes "Human Being" and "Health", inspired by the epistemological diversity of knowledge. Among the theories developed from this perspective, alternative conceptions guide the present work. The instrument addresses natural science education in elementary school and was accordingly applied to a sample of ninth-year students of the state public school system of Sergipe. A reference matrix was built to guide the instrument, covering the topics of Anatomy, Physiology, Diseases, Quality of Life and Prophylactic Measures; the goals of its descriptors were drawn from the National Curriculum Parameters (PCN). The performance test elaborated from the matrix has ten questions, each presenting four alternatives, and is inspired by the Thurstone scale. The alternatives range between school knowledge and common sense: in some questions all alternatives are correct, others contain conceptual mistakes, and the alternatives lie at different distances from scientific knowledge. Students choose the alternative that seems most correct to them, which makes it possible to estimate which knowledge they draw on to solve problems: whether it is closer to scientific knowledge, or whether common sense dominates their everyday reasoning. To ensure its credibility, the test was validated qualitatively by judges and through commented application, and analyzed quantitatively with descriptive statistics in the Statistical Package for the Social Sciences (SPSS) 18.0.
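To illustrate the Thurstone-inspired scoring idea described above: each alternative can carry a scale value for its distance from scientific knowledge, and a student's score is the mean value of the chosen alternatives. The sketch below is hypothetical; the actual scale values of the validated instrument are not reproduced here.

```python
# Hypothetical Thurstone-style scoring: 1.0 = scientific knowledge,
# 0.0 = pure common sense; the values in between are invented for illustration.
scale_values = {"a": 1.0, "b": 0.7, "c": 0.4, "d": 0.1}

def epistemic_score(answers):
    """answers: one chosen alternative per question, e.g. ['a', 'c', ...]."""
    return sum(scale_values[a] for a in answers) / len(answers)

print(epistemic_score(["a", "c", "b", "d", "a", "a", "b", "c", "a", "b"]))
```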
847

Contribuições para a análise da efetividade do Programa Município VerdeAzul no âmbito da gestão ambiental paulista / Contributions for an analysis of the effectiveness of the "Município VerdeAzul" Program in the context of São Paulo Environmental Management

Lílian Fernandes Machado 11 June 2014 (has links)
The autonomy of municipalities resulting from decentralization allows local governments to legislate, administer and implement environmental initiatives in their territories. However, several issues still compromise this process, such as the small budgets allocated to the environmental area and the lack of trained technical staff. In the State of São Paulo, the "Município VerdeAzul" Program (PMVA) has been carried out with the objective of implementing a minimal environmental agenda in all municipalities. This agenda assists municipalities with environmental issues by sharing responsibility for environmental quality between the state and the municipalities, promoting the decentralization of environmental policy, and strengthening local environmental management. The PMVA monitors and certifies the performance of municipalities through the Environmental Assessment Index (IAA). However, this monitoring ends up focusing on an overall score and on the position reached in the Ranking Ambiental Paulista (the São Paulo environmental ranking); no mechanisms were found for analyzing the environmental performance of each action or the effectiveness of the PMVA itself. Thus, this research aimed to develop a set of indicators for the Environmental Performance Evaluation (EPE) of the PMVA and to analyze the applicability of this set for the analysis of the program's effectiveness. For this, bibliographic and documentary research were mainly used. The construction of the Environmental Performance Indicators (EPIs) was based on ISO 14031, and data were collected from the Action Plans of the municipalities of UGRHI 13. The effectiveness analysis was performed from the results of the EPIs: the performance achieved on each indicator was mapped onto an effectiveness scale ranging from 0 to 100. The application of the EPE produced results that show progress in structuring local environmental management systems, though restricted to a small number of municipalities. The effectiveness analysis showed partially satisfactory results for most EPIs in 2011 and not very satisfactory results in 2012. Regarding the applicability of the EPE, it proved able to demonstrate the environmental performance of each action analyzed and to contribute to the analysis of the effectiveness of the PMVA.
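As a purely illustrative reading of the 0-to-100 effectiveness scale mentioned above (the dissertation's actual cut-offs and band labels are not reproduced here), indicator results can be mapped onto qualitative bands as follows.

```python
# Hypothetical banding of Environmental Performance Indicator (EPI) scores.
def effectiveness_band(score):
    if score >= 75:
        return "satisfactory"
    if score >= 50:
        return "partially satisfactory"
    if score >= 25:
        return "not very satisfactory"
    return "unsatisfactory"

epi_scores = {"sewage treatment": 62, "tree cover": 48, "env. education": 81}
for action, score in epi_scores.items():
    print(f"{action}: {score} -> {effectiveness_band(score)}")
```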
848

Um processo de desenvolvimento de software focado em sistemas distribuídos autonômicos / A software development process focused on autonomic distributed systems

Pedro Felipe do Prado 20 June 2017 (has links)
Distributed Systems (DSs) have shown increasing management complexity, in addition to needing to guarantee Quality of Service (QoS) to their users. Autonomic Computing (AC) emerges as a way of transforming DSs into Autonomic Distributed Systems (ADSs) capable of self-management. However, no software development process focused on the creation of ADSs was found. The vast majority of related works simply present a DS, together with the aspect of AC to be implemented, the technique used, and the results obtained. This is only part of the development of an ADS, and it does not cover the stages from requirements definition to software maintenance. More importantly, it does not show how such requirements can be formalized and later satisfied through the self-management provided by AC. This thesis proposes a software development process aimed at ADSs. To this end, different areas of knowledge were integrated: the Unified Software Development Process (UP), DSs, AC, Operations Research (OR) and Computer Systems Performance Evaluation (CSPE). The proof of concept was carried out through three case studies, all focusing on NP-hard problems: (i) off-line optimization (multiple-choice knapsack problem), (ii) online optimization (multiple-choice knapsack problem), and (iii) creation of the planner module of an autonomic manager for request scheduling (generalized assignment problem). The results of the first case study show that OR and CSPE can be used to define a base architecture for the ADS in question, as well as to reduce the size of the search space when the ADS is running. The second proves that it is possible to guarantee the QoS of the ADS during its execution, using the formalization provided by OR and its corresponding solution. The third proves that OR can be used to formalize the self-management problem, and CSPE to evaluate different algorithms or architecture models for the ADS.
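The multiple-choice knapsack problem (MCKP) named in the first two case studies can be made concrete with a short dynamic-programming sketch: pick exactly one item per class, maximizing total value under a weight budget. The data and the plain DP below are illustrative; the thesis's OR formulations and solvers are not reproduced here.

```python
# Multiple-choice knapsack: exactly one (weight, value) pick per class.
def mckp(classes, capacity):
    """classes: list of lists of (weight, value); returns the best total value."""
    NEG = float("-inf")
    best = [NEG] * (capacity + 1)
    best[0] = 0.0
    for group in classes:                    # exactly one item from each class
        nxt = [NEG] * (capacity + 1)
        for w in range(capacity + 1):
            if best[w] == NEG:
                continue
            for wi, vi in group:
                if w + wi <= capacity:
                    nxt[w + wi] = max(nxt[w + wi], best[w] + vi)
        best = nxt
    return max(best)

classes = [[(2, 3.0), (3, 4.5)],             # e.g. candidate configurations,
           [(1, 1.0), (4, 6.0)],             # one to be chosen per resource
           [(2, 2.5), (3, 3.0)]]
print(mckp(classes, 7))                      # -> 8.5 under a budget of 7
```

In an autonomic setting, each class could represent the candidate configurations of one resource, with the budget playing the role of a capacity or QoS constraint; an online variant would re-solve or approximate this as the workload changes.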
849

Avaliação do impacto de ecoinovações: o caso da tecnologia de biodigestores aplicada na agroindústria processadora de mandioca do estado do Paraná / Eco-innovations impact assessment: the case of biodigester technology applied in processing of cassava industries of Paraná state

Jesus, Marco Antonio Sampaio de 26 June 2015 (has links)
The negative externalities caused by production activities that satisfy consumption needs impact the natural environment directly, for example through the emission of greenhouse gases and the contamination of soil and water, demanding new approaches to decision-making processes in organizations. Among these approaches are the incorporation of innovations that reduce environmental impacts, also called eco-innovations, and the adoption of models that assess the overall performance of these innovations in a comprehensive, integrated way and from different perspectives. Taking as its research object the eco-innovation "biodigester technology", and as its research field the cassava-processing plants (fecularias) of the State of Paraná, this multiple-case study included an extensive literature review to propose a specific set of indicators to assess the overall impact caused by this eco-innovation, covering the classic dimensions of sustainable development (environmental, social and economic) as well as the other dimensions defined for the model used (human resources training, institutional development, introduction of the innovation, unexpected/unwanted occurrences, and characteristics of environmental management). This set of indicators was incorporated into the INOVA-tec System model (Jesus-Hitzschky, 2007), developed by a researcher at the Brazilian Agricultural Research Corporation (Embrapa), configuring a new model called INOVA-tec System Modified. The primary data were processed in the new model, and the analysis of the results showed that the technology studied here has a favorable scenario for its dissemination, although the performance of the indicators is still low. Suggestions for improvement were then presented to optimize the overall impact of the technology, and it is finally recommended that the model proposed in this study be applied to other agro-industrial activities in order to validate and improve its theoretical contribution.
850

A simulation workflow to evaluate the performance of dynamic load balancing with over decomposition for iterative parallel applications

Tesser, Rafael Keller January 2018 (has links)
In this thesis we present a novel simulation workflow to evaluate the performance of dynamic load balancing with over-decomposition applied to iterative parallel applications. Its goals are to perform this evaluation with minimal application modification and at low cost in terms of time and resource requirements. Many parallel applications suffer from dynamic (temporal) load imbalance that cannot be treated at the application level. It may be caused by intrinsic characteristics of the application or by external software and hardware factors. As demonstrated in this thesis, such dynamic imbalance can be found even in applications whose codes do not hint at any dynamism. Therefore, we need to rely on runtime dynamic load balancing mechanisms, such as dynamic load balancing based on over-decomposition. The problem is that evaluating and tuning the performance of such a technique can be costly. It usually entails modifying the application and running a large number of executions to obtain statistically sound performance measurements with different load balancing parameter combinations. Moreover, useful and accurate measurements often require large resource allocations on a production cluster.

Our simulation workflow, dubbed Simulated Adaptive MPI (SAMPI), employs a combined sequential-emulation and trace-replay simulation approach to reduce the cost of such an evaluation. Both sequential emulation and trace replay require a single computer node, and the trace-replay simulation lasts only a small fraction of the real parallel execution time of the application. Besides the basic SAMPI simulation, we developed spatial aggregation and application-level rescaling techniques to speed up the emulation process. To demonstrate the real-life performance benefits of dynamic load balancing with over-decomposition, we evaluated the performance gains obtained by employing this technique on an iterative parallel geophysics application called Ondes3D. Dynamic load balancing support was provided by Adaptive MPI (AMPI). This resulted in up to 36.58% performance improvement on 288 cores of a cluster. This real-life evaluation also illustrates the difficulties found in this process, thus justifying the use of simulation.

To implement the SAMPI workflow, we relied on SimGrid's Simulated MPI (SMPI) interface in both emulation and trace-replay modes. To validate our simulator, we compared simulated (SAMPI) and real-life (AMPI) executions of Ondes3D. The simulations presented a load balance evolution very similar to the real executions and were also successful in choosing the best load balancing heuristic for each scenario. Beyond this validation, we demonstrate the use of SAMPI for load balancing parameter exploration and for computational capacity planning. As for the performance of the simulation itself, we roughly estimate that our full workflow can simulate the execution of Ondes3D with 24 different load balancing parameter combinations in 5 hours for our heavier earthquake scenario and in 3 hours for the lighter one.
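The over-decomposition idea evaluated by SAMPI can be sketched briefly: the application is split into many more chunks than cores, and a runtime balancer periodically remaps chunks using measured loads. The greedy heuristic below (heaviest chunk to least-loaded core) is only in the spirit of AMPI's load balancers, not their actual implementation.

```python
# Greedy remapping of over-decomposed chunks onto cores.
import heapq

def greedy_rebalance(chunk_loads, n_cores):
    """Return {chunk index: core id}, assigning heaviest chunks first."""
    cores = [(0.0, c) for c in range(n_cores)]        # (accumulated load, core)
    heapq.heapify(cores)
    mapping = {}
    for chunk in sorted(range(len(chunk_loads)), key=lambda i: -chunk_loads[i]):
        load, core = heapq.heappop(cores)             # least-loaded core so far
        mapping[chunk] = core
        heapq.heappush(cores, (load + chunk_loads[chunk], core))
    return mapping

# 16 chunks measured over previous iterations, remapped onto 4 cores.
loads = [5, 1, 3, 7, 2, 2, 9, 4, 1, 1, 6, 3, 2, 8, 4, 2]
print(greedy_rebalance(loads, 4))
```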
