91

Barreiras e fatores críticos de sucesso relacionados à aplicação da produção mais limpa no Brasil / Barriers and critical success factors related to the application of Cleaner Production in Brazil

Vieira, Letícia Canal January 2016 (has links)
In the present context of the pursuit of sustainable development, mentalities and practices need to change. Cleaner Production is one example of an initiative compatible with this demand: it aims to consider the negative effects of production processes beforehand, reducing waste and pollutant generation. Although the concept is aligned with current aspirations for a more sustainable industrial production, its dissemination has not occurred satisfactorily. This dissertation seeks to contribute to the comprehension of the barriers to Cleaner Production application and to identify the critical success factors that must be present for its successful adoption, enabling the proposal of a framework. To reach these objectives, a research instrument was created based on the literature and on interviews with professionals. With this instrument, a survey of professionals involved with Cleaner Production was performed, obtaining a total of 185 responses. The results of the principal component analysis made evident that the most critical factors concern the organization, being related to vision, culture, strategic planning, and subsidies for Cleaner Production implementation, which formed the first and second components of the analysis. Regarding barriers, an inadequate organizational vision and culture stood out (first component), followed by lack of external support (second component). Evidence was also found that the Cleaner Production concept may be misunderstood and that environmental education may be inadequate. Considering the measures that could make Cleaner Production application more effective, the main one is repositioning the external environment as a strong supporter of Cleaner Production application, leaving behind the prominent position it occupies among the barriers.
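As a rough illustration of the analysis described above, the sketch below runs a principal component analysis over a hypothetical matrix of Likert-scale survey responses. The item count, values, and variable names are illustrative assumptions, not the dissertation's actual instrument.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: 185 respondents x 12 Likert-scale items on barriers
# and success factors (both the items and the values are invented here).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(185, 12)).astype(float)

# Standardize the items, then extract components; in the dissertation the
# first two components corresponded to organizational vision/culture and
# to external support.
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(responses))
print(scores.shape)  # (185, 2) component scores per respondent
```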
92

Sistemáticas de agrupamento de países com base em indicadores de desempenho / Countries clustering systematics based on performance indexes

Mello, Paula Lunardi de January 2017 (has links)
The world economy underwent major transformations in the last century, including periods of sustained growth followed by periods of stagnation, governments alternating market-liberalization strategies with commercial-protectionism policies, and market instability, among others. As an aid to understanding economic and social problems in a systemic way, the analysis of performance indicators generates relevant information about patterns, behavior, and trends, as well as guiding policies and strategies to improve economic and social results. Indicators describing the main economic dimensions of a country can be used as guiding principles in the elaboration and monitoring of development and growth policies. In this sense, this dissertation uses World Bank data to apply and evaluate systematics for grouping countries with similar characteristics in terms of the indicators that describe them. To do so, it integrates clustering techniques (hierarchical and non-hierarchical), variable selection (through the "leave one variable out at a time" technique), and dimensional reduction (through Principal Component Analysis) to form consistent groupings of countries. The quality of the generated clusters is evaluated by the Silhouette, Calinski-Harabasz, and Davies-Bouldin indexes. The results were satisfactory regarding the representativeness of the highlighted indicators and the quality of the generated clustering.
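A minimal sketch of the dimensional-reduction, clustering, and evaluation stages described above, with random data standing in for the World Bank indicators; the numbers of countries, components, and clusters are arbitrary choices, not those of the dissertation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

# Hypothetical matrix: countries x performance indicators (illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 15))

# Dimensional reduction followed by a non-hierarchical clustering,
# mirroring the PCA + k-means stage of the proposed systematics.
Z = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(Z)

# Cluster quality indexes used in the dissertation.
print(silhouette_score(Z, labels))
print(calinski_harabasz_score(Z, labels))
print(davies_bouldin_score(Z, labels))
```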
93

Razão ω-6/ω-3 como indicador de qualidade da dieta brasileira e a relação com doenças crônicas não transmissíveis / The ω-6/ω-3 ratio as a quality indicator of the Brazilian diet and its relation with chronic non-communicable diseases

Mainardi, Giulia Marcelino 23 October 2014 (has links)
Introduction: The burden of chronic diseases is increasing rapidly worldwide. The dietary ω-6/ω-3 fatty acid ratio is a qualitative indicator of diet, and its increase has been shown to be associated with chronic diseases in adulthood. In many countries modern dietary patterns have a high ω-6/ω-3 ratio; in Brazil this figure is unknown. Objective: To identify the dietary patterns of the Brazilian population aged 15 to 35 years and to investigate the association between these patterns and biological risk factors for chronic diseases. Methods: We used data from the individual food consumption survey (POF 7) of the Pesquisa de Orçamentos Familiares (POF) 2008-2009. To estimate dietary patterns we used principal component analysis (PCA) with varimax rotation. To determine the number of components retained, we considered those with eigenvalues ≥ 1 and, to characterize them, variables with loadings ≥ |0.20|. The Kaiser-Meyer-Olkin (KMO) test was used to assess the adequacy of the data for PCA. Associations between dietary patterns (factor scores) and risk factors for chronic diseases, summarized as a dietary ω-6/ω-3 ratio above 10:1, were estimated by linear and logistic regressions. Values with p < 0.05 were considered statistically significant. Analyses were performed in STATA 12. Results: In the sample of 12,527 individuals we identified 3 dietary patterns (P). P3, characterized by mixed preparations, pizza/sandwiches, fruit smoothies/yogurts, sweets, juices, and soft drinks, reduced the dietary ω-6/ω-3 ratio; the "pro-inflammatory" P1, characterized by processed meats, bakery products, dairy, oils, and fats, increased it, and is more common among the lower-income population of both sexes. Low consumption of fruits and vegetables was observed in all dietary patterns. An assumed increase in adherence to patterns 2 and 3 would decrease the probability of adherence to P1 by 5 per cent in both sexes. The adequacy of the PCA, estimated by the KMO coefficient, was 0.57. Conclusion: Dietary patterns characterized by the consumption of oils and fats, processed meats, dairy, and bakery products contributed to the increase of the ω-6/ω-3 ratio of the Brazilian diet and, by extension, to the risk of developing chronic non-communicable diseases. Complex dietary patterns with a wide range of foods proved more effective in reducing the ω-6/ω-3 ratio of the diet of Brazilian adults; this effect is due to the synergy that foods exert on the body during digestion and absorption, since consumption occurs through a variety of foods rather than isolated items. Public food policies should take the ω-6/ω-3 ratio into account as one of the markers of diet quality in the country.
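The PCA-with-varimax and KMO steps named in the Methods could look roughly like the sketch below, here using the third-party factor_analyzer package as one possible toolchain; the data, factor count, and thresholds are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# Hypothetical matrix: individuals x food-group consumption variables.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10))

# Sampling adequacy via KMO (the study reports a coefficient of 0.57).
_, kmo_total = calculate_kmo(X)

# Principal-axis extraction with varimax rotation; in practice only the
# components with eigenvalue >= 1 would be retained, and loadings would
# be read against the |0.20| threshold.
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(X)
print(round(kmo_total, 2))
print(fa.loadings_.round(2))
```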
94

Determinação do intervalo pós-morte por espectroscopia de fluorescência / Determination of post-mortem interval using in situ tissue optical fluorescence

Estracanholli, Éverton Sérgio 17 July 2008 (has links)
In this study we propose a new method to determine the post-mortem interval, given the need for methods that provide results within a short analysis time while matching the accuracy of the methods currently in use. Furthermore, the acquisition system must be practical and affordable enough to be adopted by the institutions interested in such analysis, whether for criminal forensics, research institutions, medico-legal institutes, or the reimplantation of amputated organs. With these motivations, the purpose of this study was to verify whether fluorescence spectroscopy is sensitive enough to distinguish tissues at different stages of decomposition, since the technique has been widely used to characterize biological tissues in other applications. Given the difficulties found in the temporal characterization of the information obtained from the fluorescence spectra, pertinent forms of analysis are proposed. Among them we present principal component analysis, in which the diagonalization of covariance matrices allows patterns in the obtained spectra to be identified. The data obtained from a first group of animals (Wistar rats) were interpreted by creating mathematical models that describe the observed variations as a function of time. A validation process followed, in which the mathematical model was put to the test by determining the post-mortem interval of another group of animals whose values were known; these data, however, were not used to build the model. Finally, the results obtained in this study were compared with results found in the literature, from which it was possible to conclude that the study met expectations.
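A minimal sketch of the model-building and validation loop described above: PCA compresses hypothetical spectra and a regression on the scores stands in for the fitted mathematical model. The spectra, time values, and component count are all invented here, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical fluorescence spectra (samples x wavelengths) collected at
# known post-mortem times; values are illustrative only.
rng = np.random.default_rng(3)
spectra_train, t_train = rng.normal(size=(40, 300)), rng.uniform(0, 48, 40)
spectra_test, t_test = rng.normal(size=(10, 300)), rng.uniform(0, 48, 10)

# PCA compresses the spectra; a regression on the scores plays the role
# of the model fitted to the first group of animals.
model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(spectra_train, t_train)

# Validation on a second group whose post-mortem intervals are known.
print(model.score(spectra_test, t_test))
```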
95

Adaptive supervisory control scheme for voltage controlled demand response in power systems

Abraham, Etimbuk January 2018 (has links)
Radical changes to present-day power systems will lead to power systems with a significant penetration of renewable energy sources and smartness, expressed in an extensive utilization of novel sensors and cyber-secure Information and Communication Technology. Although these renewable energy sources contribute to the reduction of CO2 emissions, their high penetration affects power system dynamic performance as a result of reduced power system inertia and less flexibility in dispatching generation to balance future demand. These pose a threat to both the security and the stability of future power systems. It is therefore very important to develop new methods through which power system security and stability can be maintained. This research investigated methods through which the contributions of on-load tap changing (OLTC) transformers and transformer clusters could be assessed, with the intent of developing real-time adaptive voltage-controlled demand response schemes for power systems. Such a scheme enables more active system components to be involved in the provision of frequency control as an ancillary service and deploys a new frequency control service with low infrastructural investment, bearing in mind that OLTC transformers are already very prevalent in power systems. In this thesis, a novel online adaptive supervisory controller for ensuring optimal dispatch of voltage-controlled demand response resources is developed. This controller is designed using the assessment results of OLTC transformer impacts on steady-state frequency and was tested on a variety of scenarios. To achieve effective performance of the adaptive supervisory controller, statistical techniques are used extensively to assess OLTC transformer contributions to voltage-controlled demand response. The thesis also uses unsupervised machine learning techniques for power system partitioning and further statistical methods for assessing the contributions of OLTC transformer aggregates.
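The abstract does not specify which unsupervised partitioning algorithm is used; as one plausible reading, the sketch below partitions a network with spectral clustering over a hypothetical symmetric bus-coupling matrix. Both the data and the choice of algorithm are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical symmetric coupling matrix between network buses (e.g.,
# derived from sensitivities or admittances); illustrative values.
rng = np.random.default_rng(4)
A = rng.random((30, 30))
affinity = (A + A.T) / 2
np.fill_diagonal(affinity, 0.0)

# Unsupervised partitioning of the system into zones, each of which could
# then be served by an aggregate of OLTC transformers.
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)
```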
96

A Comparison of Data Transformations in Image Denoising

Michael, Simon January 2018 (has links)
The study of signal processing has wide applications, such as hi-fi audio, television, voice recognition, and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation, and removal of noise.  In this thesis we compare two methods for image denoising, each based on a data transformation: the Fourier transform and the singular value decomposition, respectively, applied to grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratio attainable by each method, and the computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
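A toy version of the two methods under comparison, with a random array standing in for a real grayscale image; the frequency cutoff and truncation rank are illustrative parameters, not the thesis's tuned values.

```python
import numpy as np

def psnr(reference, estimate):
    # Peak signal-to-noise ratio for images scaled to [0, 1].
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(1.0 / mse)

rng = np.random.default_rng(5)
img = rng.random((128, 128))                 # stand-in for a grayscale image
noisy = img + 0.1 * rng.normal(size=img.shape)

# Fourier-based denoising: keep only a square of low spatial frequencies.
F = np.fft.fftshift(np.fft.fft2(noisy))
mask = np.zeros_like(F)
c, r = 64, 20                                # center index and cutoff radius
mask[c - r:c + r, c - r:c + r] = 1
fft_denoised = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# SVD-based denoising: truncate to the k largest singular values.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 20
svd_denoised = (U[:, :k] * s[:k]) @ Vt[:k]

print(psnr(img, fft_denoised), psnr(img, svd_denoised))
```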
97

A Vision-Based Approach For Unsupervised Modeling Of Signs Embedded In Continuous Sentences

Nayak, Sunita 07 July 2005 (has links)
The common practice in sign language recognition is to first construct individual sign models, in terms of discrete state transitions, mostly represented using Hidden Markov Models, from manually isolated sign samples, and then to use them to recognize signs in continuous sentences. In this thesis we use a continuous state space model, where the states are based on purely image-based features, without the use of special gloves. We also present an unsupervised approach to both extract and learn models for continuous basic units of signs, which we term signemes, from continuous sentences. Given a set of sentences with a common sign, we can automatically learn the model for the part of the sign, or signeme, that is least affected by coarticulation effects. We tested our idea using the publicly available Boston SignStream dataset by building signeme models of 18 signs. We test the quality of the models by considering how well we can localize the sign in a new sentence. We also present the concept of smooth continuous-curve-based models formed using functional splines and curve registration, illustrating this idea using 16 signs.
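The continuous-curve modeling mentioned at the end could be sketched as below, smoothing a hypothetical one-dimensional image-based feature trajectory with a functional spline; the feature, frame count, and smoothing factor are invented, and the thesis's curve-registration step is not shown.

```python
import numpy as np
from scipy.interpolate import splev, splrep

# Hypothetical 1-D image-based feature tracked across the frames of one
# sentence; a smoothing spline yields the continuous-curve representation.
rng = np.random.default_rng(6)
frames = np.linspace(0.0, 1.0, 60)
feature = np.sin(2 * np.pi * frames) + 0.1 * rng.normal(size=60)

tck = splrep(frames, feature, s=0.5)   # smoothing factor is illustrative
curve = splev(np.linspace(0.0, 1.0, 200), tck)
print(curve.shape)                     # (200,) resampled smooth curve
```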
98

Modelling Distance Functions Induced by Face Recognition Algorithms

Chaudhari, Soumee 09 November 2004 (has links)
Face recognition has in the past few years become a very active area of research in the fields of computer vision, image processing, and cognitive psychology, spawning various algorithms of different complexities. Principal component analysis (PCA) is a popular basis for face recognition algorithms and has often been used to benchmark other face recognition algorithms in identification and verification scenarios. In this thesis, however, we analyze face recognition algorithms at a deeper level. The objective is to model the distances output by any face recognition algorithm as a function of the input images. We achieve this by creating an affine eigenspace from the PCA space such that it approximates the results of the face recognition algorithm under consideration as closely as possible. Holistic template-matching algorithms, such as the Linear Discriminant Analysis algorithm (LDA) and the Bayesian Intrapersonal/Extrapersonal Classifier (BIC), as well as local-feature-based algorithms, such as the Elastic Bunch Graph Matching algorithm (EBGM) and a commercial face recognition algorithm, are selected for our experiments. We experiment on two data sets: the FERET data set, which consists of images of subjects with variation in both time and expression, and the Notre Dame data set, which consists of images of subjects with variation in time. We train our affine approximation algorithm on 25 subjects and test with 300 subjects from the FERET data set and 415 subjects from the Notre Dame data set. We also analyze the effect of the distance metric used by the face recognition algorithm on the accuracy of the approximation. We study the quality of the approximation in the context of recognition for the identification and verification scenarios, characterized by cumulative match score curves (CMC) and receiver operating characteristic curves (ROC), respectively. Our studies indicate that both the holistic template-matching algorithms and the feature-based algorithms can be well approximated, and that the affine approximation training can be generalized across covariates. For the data with time variation, the rank order of approximation performance is BIC, LDA, EBGM, then commercial; for the data with expression variation, the order is LDA, BIC, commercial, then EBGM. Experiments approximating PCA with distance measures other than Euclidean also performed very well: PCA with Euclidean distance is best approximated, followed by PCA+MahL1, PCA+MahCosine, and PCA+Covariance.
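A minimal sketch of the core idea, fitting a linear map over PCA coordinates so that transformed Euclidean distances approximate a target distance matrix; since a translation cancels in pairwise distances, the affine map reduces to its linear part here. The data, dimensions, and optimizer are illustrative assumptions, not the thesis's procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical PCA coordinates of n faces plus a target distance matrix D,
# as if produced by some face recognition algorithm (illustrative data).
rng = np.random.default_rng(7)
n, k = 20, 4
X = rng.normal(size=(n, k))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
D += 0.05 * rng.random((n, n))
D = (D + D.T) / 2

def fit_error(w):
    # Squared error between distances in the mapped space and the target.
    W = w.reshape(k, k)
    Y = X @ W.T
    E = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    iu = np.triu_indices(n, 1)
    return np.sum((E[iu] - D[iu]) ** 2)

res = minimize(fit_error, np.eye(k).ravel(), method="L-BFGS-B")
print(res.fun)  # residual after fitting the linear map
```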
99

An Indepth Analysis of Face Recognition Algorithms using Affine Approximations

Reguna, Lakshmi 19 May 2003 (has links)
In order to foster the maturity of face recognition analysis as a science, a well-implemented baseline algorithm and good performance metrics are essential to benchmark progress. In the past, face recognition algorithms based on principal components analysis (PCA) have often been used as a baseline. The objective of this thesis is to develop a strategy to estimate the best affine transformation which, when applied to the eigenspace of the PCA face recognition algorithm, can approximate the results of any given face recognition algorithm. The affine approximation strategy outputs an optimal affine transform that approximates the similarity matrix of the distances between a given set of faces generated by any given face recognition algorithm. The strategy thus helps in comparing how close a face recognition algorithm is to the PCA-based one. This thesis shows how the affine approximation algorithm can be used as a valuable tool to evaluate face recognition algorithms at a deep level. Two test algorithms were chosen to demonstrate the usefulness of the strategy: the Linear Discriminant Analysis (LDA) based face recognition algorithm and the Bayesian intrapersonal/extrapersonal classifier based face recognition algorithm. Our studies indicate that both algorithms can be approximated well. These conclusions were reached based on the results produced by analyzing the raw similarity scores and by studying the identification and verification performance of the algorithms. Two training scenarios were considered: in the first, both the face recognition algorithm and the affine approximation algorithm were trained on the same data set; in the second, different data sets were used to train the two algorithms. Gross error measures such as the average RMS error and the Stress-1 error were used to compare the raw similarity scores directly. The histogram of the difference between the similarity matrices also clearly showed that the error spread is small for the affine approximation algorithm. The performance of the algorithms in the identification and verification scenarios was characterized using traditional CMC and ROC curves. McNemar's test showed that the difference between the CMC and ROC curves generated by the test face recognition algorithms and by the affine approximation strategy is not statistically significant. The results were statistically insignificant at rank 1 for the first training scenario, but for the second training scenario they became insignificant only at higher ranks. This difference in performance can be attributed to the different training sets used in the second scenario.
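The two gross error measures named above have standard definitions; a short sketch, assuming the upper-triangular distances of both similarity matrices have been flattened into vectors:

```python
import numpy as np

def stress_1(d_true, d_approx):
    # Kruskal's Stress-1 between two sets of pairwise distances.
    return np.sqrt(np.sum((d_true - d_approx) ** 2) / np.sum(d_true ** 2))

def rms_error(d_true, d_approx):
    # Root-mean-square error between the same two sets.
    return np.sqrt(np.mean((d_true - d_approx) ** 2))

# Illustrative flattened distance vectors for the two algorithms.
rng = np.random.default_rng(8)
d_alg = rng.random(100)
d_affine = d_alg + 0.02 * rng.normal(size=100)
print(stress_1(d_alg, d_affine), rms_error(d_alg, d_affine))
```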
100

Dimensionality Reduction Using Factor Analysis

Khosla, Nitin, n/a January 2006 (has links)
In many pattern recognition applications, a large number of features are extracted in order to ensure an accurate classification of unknown classes. One way to cope with high dimensionality is to first reduce the data to a manageable size, keeping as much of the original information as possible, and then feed the reduced-dimensional data into a pattern recognition system. In this situation, dimensionality reduction becomes the pre-processing stage of the pattern recognition system. In addition, probability density estimation with fewer variables is a simpler problem. Dimensionality reduction is useful in speech recognition, data compression, visualization, and exploratory data analysis. Techniques that can be used for dimensionality reduction include Factor Analysis (FA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). Factor analysis can be considered an extension of principal component analysis. The EM (expectation-maximization) algorithm is ideally suited to problems of this sort, in that it produces maximum-likelihood (ML) estimates of parameters when there is a many-to-one mapping from an underlying distribution to the distribution governing the observations: the expectation step computes the expected values of the latent variables conditioned upon the observations, and the maximization step then provides a new estimate of the parameters. This research compares Factor Analysis (based on the EM algorithm), Principal Component Analysis, and Linear Discriminant Analysis for dimensionality reduction, and investigates Local Factor Analysis (EM-based) and Local Principal Component Analysis using vector quantization.
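As a rough sketch of the comparison described above, scikit-learn's FactorAnalysis (fitted internally by an EM-style iteration) can be run side by side with PCA on the same data; the data and dimensionalities are illustrative, not the thesis's experiments.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

# Hypothetical high-dimensional feature vectors from a recognition task.
rng = np.random.default_rng(9)
X = rng.normal(size=(400, 50))

# Maximum-likelihood factor analysis next to plain PCA on the same data.
fa = FactorAnalysis(n_components=10).fit(X)
pca = PCA(n_components=10).fit(X)

print(fa.transform(X).shape)                # reduced-dimensional data
print(pca.explained_variance_ratio_.sum())  # variance retained by PCA
```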
