  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Optimization of a leather sorting process at the wet blue stage: a case in a tannery

Arriba, Gustavo de (January 2005)
Quality improvement and cost reduction for industrial products and processes can be achieved using experimental optimization methods. This dissertation presents a structured method for optimizing the sorting process of wet blue leather, integrating Measurement Systems Analysis and Design of Experiments applied to discrete variables. Leather undergoes several chemical and mechanical processes before reaching the finished stage; the main stages are wet blue, crust (semi-finished), and finished. The first consists of preparing the hide for tanning, making the leather resistant to rotting. The second, which transforms the wet blue into crust, gives the leather its softness, strength, color, and thickness; finally, the finishing process produces the final characteristics of texture, shine, and surface feel, along with the technical values required by customers, such as resistance to rubbing and bending, color fastness to light, and adhesion. The main objective of this work is to optimize the sorting process at the wet blue stage in order to reduce the rate of rejects caused by sorting errors in the finished product, without increasing the waste from rejecting good raw material at the source. The method is illustrated with a case from a tannery in the Vale do Sinos region of Rio Grande do Sul. Historically, rejects in the finished product were the industry's main quality problem and one of the largest contributors to the cost of non-quality. The study of the measurement system made it possible to design a training program for the leather sorters and to establish a set of indicators for tracking progress throughout the study. The application of Design of Experiments served to define the best sorting practice, which led to a reduction in sorting errors. Data collected during implementation supported before-and-after conclusions: with the measures suggested by this study, the rate of rejects due to sorting errors in the finished and semi-finished product fell by 60%, which in financial terms represented 420,000 reais (about US$ 180,000) per year.
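The Measurement Systems Analysis step above evaluates how consistently appraisers classify attribute (discrete) data. One common statistic for this is Cohen's kappa, which corrects raw agreement for chance. The sketch below is a minimal illustration with invented grades; the thesis does not state which agreement statistic was used, so both the kappa choice and the data are assumptions.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two appraisers on the same items."""
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    # raw proportion of items on which the two appraisers agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # agreement expected by chance, from each appraiser's marginal frequencies
    ca, cb = Counter(ratings_a), Counter(ratings_b)
    expected = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# hypothetical wet blue grades assigned by two sorters to the same ten hides
sorter1 = ["A", "A", "B", "C", "B", "A", "C", "B", "A", "B"]
sorter2 = ["A", "B", "B", "C", "B", "A", "C", "A", "A", "B"]
kappa = cohen_kappa(sorter1, sorter2)   # 1.0 would mean perfect agreement
```

Tracking such a statistic per sorter before and after training is one way to build the kind of indicator system the abstract describes.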
82

Evaluation of the efficiency of asset allocation in Brazilian insurance companies

Mette, Frederike Monika Budiner (January 2009)
The purpose of this work was to evaluate whether insurance companies in Brazil optimized their asset allocation over the period from 2001 to 2007. Based on the investment portfolios of these companies and on the classical portfolio selection theory formulated by Markowitz (1952), it is possible to evaluate the efficiency of the allocation of all assets. The work illustrates the application of an asset evaluation method very similar to the one proposed by Leal, Silva and Ribeiro (2001), which, by simulating efficient frontiers, seeks to account for the estimation errors present in the returns and covariances used as inputs to Markowitz's (1952) theory. The results showed that, according to the method used, most of these institutions allocated their assets efficiently during the period studied.
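The Markowitz framework referenced above can be sketched numerically. The snippet below estimates expected returns and covariances from return data and computes the closed-form global minimum-variance portfolio, the leftmost point of the efficient frontier. The return data are synthetic and the four-asset setup is an assumption for illustration, not the thesis's actual insurer portfolios.

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical monthly returns for four asset classes over 2001-2007 (84 months);
# a synthetic stand-in for the insurers' actual investment data
returns = rng.normal(loc=0.01, scale=0.03, size=(84, 4))

mu = returns.mean(axis=0)               # estimated expected returns
sigma = np.cov(returns, rowvar=False)   # estimated covariance matrix

# closed-form global minimum-variance portfolio: w = inv(Sigma) 1 / (1' inv(Sigma) 1)
ones = np.ones(len(mu))
inv_sigma = np.linalg.inv(sigma)
w_min = inv_sigma @ ones / (ones @ inv_sigma @ ones)

var_min = w_min @ sigma @ w_min               # variance at the frontier's leftmost point
var_equal = (ones / 4) @ sigma @ (ones / 4)   # equally weighted benchmark portfolio
```

Any fully invested portfolio, including the equal-weight benchmark, has variance at least `var_min`. Simulated frontiers in the spirit of Leal, Silva and Ribeiro (2001) repeat this kind of computation over resampled estimates of `mu` and `sigma` to reflect estimation error.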
84

A Memory Allocation Framework for Optimizing Power Consumption and Controlling Fragmentation

Panwar, Ashish (January 2015, PDF)
Large physical memory modules are necessary to meet the performance demands of today's applications, but they can be a major bottleneck in terms of power consumption during idle periods or when systems run workloads that do not stress all of the plugged memory resources. The contribution of physical memory to overall system power consumption becomes even more significant when CPU cores run in low-power modes during idle periods with hardware support like Dynamic Voltage and Frequency Scaling. Our experiments show that even 10% of memory allocations can reference all banks of physical memory on a long-running system, primarily due to the randomness in page allocation. We also show that memory hot-remove and memory migration of large blocks are often restricted on a long-running system by the allocation policies of the current Linux VM, which mixes movable and unmovable pages. It is therefore crucial to improve page migration for large contiguous blocks for a practical realization of the power-management support provided by the hardware. Operating systems can play a decisive role in effectively utilizing the power-management support of modern DIMMs, like PASR (Partial Array Self Refresh), in these situations, but have not been using it so far. We propose three different approaches to optimizing memory power consumption by introducing bank-boundary awareness into the standard buddy allocator of the Linux kernel, while also distinguishing user and kernel memory allocations to improve the movability of memory sections (and hence memory hotplug) through page-migration techniques. Through a set of minimal changes to the standard buddy system of the Linux VM, we have been able to reduce the number of active memory banks significantly (by up to 80%) and to improve the performance of the memory-hotplug framework (by up to 85%).
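The bank-boundary awareness described above can be illustrated with a toy allocator that steers new pages toward banks that are already active, so that untouched banks can remain in a low-power state. This is a simplified Python model with assumed bank geometry, not the kernel buddy-allocator changes themselves.

```python
# Toy model of bank-aware page allocation; the geometry values are assumptions.
PAGES_PER_BANK = 1024
NUM_BANKS = 8

class BankAwareAllocator:
    def __init__(self):
        # per-bank free sets, mimicking bank-boundary awareness in the buddy system
        self.free = {b: set(range(PAGES_PER_BANK)) for b in range(NUM_BANKS)}
        self.active = set()   # banks holding at least one allocated page

    def alloc_page(self):
        # Prefer banks that are already active, so idle banks can stay in a
        # low-power state such as Partial Array Self Refresh.
        candidates = sorted(self.active) + [
            b for b in range(NUM_BANKS) if b not in self.active
        ]
        for b in candidates:
            if self.free[b]:
                self.active.add(b)
                return (b, self.free[b].pop())
        raise MemoryError("out of physical pages")
```

With this policy, a workload touching a small number of pages keeps only one bank active, whereas a random placement policy would tend to scatter pages (and power draw) across all banks.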
85

Supplier selection of transportation services using reverse combinatorial auctions: adaptation and application of the Iterative Deepening A* (IDA*) algorithm

Catalina Higuita Salazar (15 December 2011)
Selecting transportation suppliers is a growing challenge for enterprises. The expanding network of clients to be served demands a cost-efficient allocation that traditional negotiation mechanisms do not provide. In this context, the combinatorial auction becomes an alternative negotiation mechanism, as it allows suppliers to express synergies among the routes they wish to serve. This lowers the suppliers' transportation costs, which is reflected in lower bid prices and, ultimately, in the total purchase cost of the service. On the other hand, the decision involves factors beyond total cost; measuring them is important to identify the suppliers that best fit the buyer's requirements, and choosing an adequate evaluation method is essential because it influences the final decision. This transportation procurement problem is known in the literature as the Winner Determination Problem (WDP), which, due to its complexity, can be solved only for limited instances. A review of the literature showed that studies in the transportation area focus on developing mathematical models that represent reality; some of these models address multiple criteria by assigning a weighting coefficient to each criterion. This made evident the need for an alternative algorithm that, besides capturing synergies among routes, is general enough to handle multiple criteria on instances compatible with real problems. Therefore, as a contribution to the literature, an exact optimization algorithm based on Sandholm (2002) was adapted to the transportation procurement problem. The algorithm applies a reverse combinatorial auction, supported by decision analysis theory to measure the buyer's relevant criteria. Initially, the algorithm minimizes the buyer's total cost by assigning combinations of routes to suppliers; it is then modified to handle multiple criteria. The results obtained were compared with the commercial solver CPLEX.
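The IDA* algorithm adapted in this work runs successive depth-first searches bounded by f = g + h, raising the bound to the smallest exceeded f-value after each pass, which keeps memory use linear in the search depth. The sketch below is a generic IDA* on a tiny hypothetical graph with a trivial zero heuristic; the thesis's WDP-specific branching and heuristic are not reproduced here.

```python
import math

def ida_star(start, goal, neighbors, h):
    """Iterative Deepening A*: depth-first searches bounded by f = g + h,
    raising the bound to the smallest exceeded f-value after each pass."""
    def search(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None                 # report the exceeded f-value
        if node == goal:
            return g, list(path)           # found: g is the path cost
        minimum = math.inf
        for nxt, cost in neighbors(node):
            if nxt in path:                # avoid cycles
                continue
            path.append(nxt)
            t, found = search(nxt, g + cost, bound, path)
            path.pop()
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        t, found = search(start, 0, bound, [start])
        if found is not None:
            return found, t                # (path, cost)
        if t == math.inf:
            return None, math.inf          # goal unreachable
        bound = t

# hypothetical tiny instance; h = 0 is trivially admissible
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
path, cost = ida_star("A", "D", lambda n: graph[n], lambda n: 0)
```

In a WDP setting, nodes would correspond to partial assignments of route bundles to suppliers, edge costs to bid prices, and the heuristic to a lower bound on the cost of covering the remaining routes.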
86

Cooperative persons in terms of taxes and insurance

STEHLÍKOVÁ, Blanka (January 2011)
This Master's thesis deals with the cooperating person from the perspective of income tax and social and health insurance. The theoretical findings are analyzed through practical examples, and the conclusion summarizes the advantages and disadvantages of using the institute of cooperation of persons under § 13 of the Income Tax Act. An MS Excel appendix contains interactive tables that facilitate the calculation of the cooperating person's share of income and expenses, as well as of income tax and social and health insurance. The calculation algorithms follow the laws in force for the 2010 and 2011 tax periods.
87

A study of CABAC hardware acceleration with configurability in multi-standard media processing

Flordal, Oskar (January 2005)
To achieve greater compression ratios, new video and image codecs like H.264 and JPEG 2000 take advantage of context-adaptive binary arithmetic coding (CABAC). As it contains computationally heavy algorithms, fast implementations are needed when it is applied to large amounts of data, such as when compressing high-resolution formats like HDTV. This thesis describes how entropy coding works in general, with a focus on arithmetic coding and CABAC. Furthermore, it discusses the demands of the different CABAC variants and proposes options for hardware and instruction-level optimisation. Testing and benchmarking of these implementations are done to ease evaluation. The main contribution of the thesis is the parallelisation and unification of the CABAC variants, which is discussed and partly implemented. The result of the ILA is improved program flow through specialised branching operations. The result of the DHA is a two-bit-parallel accelerator with hardware shared between the JPEG 2000 and H.264 encoders, with limited decoding support.
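Context-adaptive coding, as in CABAC, assigns each bit a probability estimated from previously seen bits in the same context; an arithmetic coder then spends roughly -log2(p) bits per symbol. The sketch below estimates that ideal cost using Laplace-smoothed per-context counts. This is a didactic simplification, not the H.264 probability-state machine or its renormalisation logic.

```python
import math

def adaptive_code_length(bits, contexts):
    """Ideal cost, in bits, of coding a binary stream when each context keeps
    Laplace-smoothed counts and an arithmetic coder spends -log2(p) per bit."""
    counts = {c: [1, 1] for c in set(contexts)}   # per-context [zeros, ones]
    total = 0.0
    for bit, ctx in zip(bits, contexts):
        zeros, ones = counts[ctx]
        p = (ones if bit else zeros) / (zeros + ones)
        total += -math.log2(p)      # probability used at coding time
        counts[ctx][bit] += 1       # then update the model, as the decoder will
    return total

# a heavily skewed stream compresses to far less than one bit per symbol
cost = adaptive_code_length([0] * 50, ["sig_flag"] * 50)
```

The sequential model update after every bit is exactly what makes CABAC hard to parallelise, which motivates the bit-parallel accelerator discussed in the thesis.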
89

Optimizing Communication Cost in Distributed Query Processing

Belghoul, Abdeslem (07 July 2017)
In this thesis, we take a complementary look at the problem of optimizing the time for communicating query results in distributed query processing, by investigating the relationship between the communication time and the middleware configuration. Indeed, the middleware determines, among other things, how data is divided into batches of F tuples and messages of M bytes before being communicated over the network. Concretely, we focus on the research question: given a query Q and a network environment, what is the best middleware configuration, in terms of F and M, that minimizes the time for transferring the query result over the network? To the best of our knowledge, the database research community does not have well-established strategies for middleware tuning. We first present an intensive experimental study that emphasizes the crucial impact of the middleware configuration on the time for communicating query results. We focus on two middleware parameters that we empirically identified as having an important influence on the communication time: (i) the fetch size F (the number of tuples in a batch that is communicated at once to an application consuming the data) and (ii) the message size M (the size in bytes of the middleware buffer, which corresponds to the amount of data that can be communicated at once from the middleware to the network layer; a batch of F tuples can be communicated via one or several messages of M bytes). Then, we describe a cost model for estimating the communication time, based on how data is transferred between computation nodes. Precisely, our cost model rests on two crucial observations: (i) batches and messages are communicated differently over the network: batches are communicated synchronously, whereas messages within a batch are communicated in a pipeline (asynchronously); and (ii) due to network latency, it is more expensive to communicate the first message in a batch than any subsequent message in the same batch. We propose an effective strategy for calibrating the network-dependent parameters of the communication-time estimation function, i.e., the costs of the first and of a non-first message in a batch. Finally, we develop an optimization algorithm to compute the values of the middleware parameters F and M that minimize the communication time. The proposed algorithm quickly finds (in a small fraction of a second) values of F and M that represent a good trade-off between low resource consumption and low communication time. The proposed approach has been evaluated using a dataset from an application in astronomy.
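The two observations above suggest a simple cost model: each batch pays a latency premium on its first message, while the batch's remaining bytes stream at the cheaper pipelined rate. The sketch below implements such a model and sweeps F and M for a low-cost configuration; the calibration constants and workload figures are invented for illustration and are not the thesis's calibrated values.

```python
import math

def communication_time(n_tuples, tuple_size, F, M, c_first, c_next):
    """Estimated transfer time under the two observations from the abstract:
    batches are sent synchronously, and within a batch the first M bytes pay
    a latency premium (c_first per byte) while the remaining bytes stream in
    pipeline at the cheaper rate (c_next per byte).
    A partial last batch is ignored for simplicity."""
    batches = math.ceil(n_tuples / F)
    batch_bytes = F * tuple_size
    per_batch = c_first * min(M, batch_bytes) + c_next * max(0, batch_bytes - M)
    return batches * per_batch

# hypothetical calibration and workload; sweep F and M for a low-cost configuration
best = min(
    ((F, M, communication_time(100_000, 120, F, M, c_first=2e-7, c_next=5e-8))
     for F in (100, 1_000, 10_000)
     for M in (4_096, 32_768, 262_144)),
    key=lambda t: t[2],
)
```

Because c_first exceeds c_next, the model favors larger fetch sizes (fewer first-message premiums), up to the point where resource consumption arguments, which the model above does not capture, push the optimizer toward smaller buffers.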
90

Optimizing tax liability of a business entity

Kazdová, Hana (January 2014)
This Master's thesis is concerned with optimizing the tax costs and tax expenditures of sole proprietors and legal-entity businesses. Its main objective is to introduce, as broadly as possible, the tax costs and tax expenditures that can be optimized, and to analyze the most important tax-optimization methods relevant to most business entities. To achieve this objective, the thesis is divided into a theoretical and a practical part. The theoretical part covers the definition of a tax, the tax system of the Czech Republic (including more detailed descriptions of income taxes and value added tax), the effects of taxation on an enterprise and the related reasons for tax optimization, and, last but not least, the thin line between tax avoidance and tax evasion. The practical part introduces, through verbal description and practical examples, the most important optimization methods for the taxes relevant to most business entities, i.e., income tax and value added tax. Finally, the thesis deals with international tax planning and with methods of cash-flow management connected with tax administration.
