361 |
Controle coerente das bandas de emissão do ZnO através de algoritmo genético / Coherent control of the emission bands of ZnO using genetic algorithms
Martins, Renato Juliano (14 February 2012)
In this work, we investigate the coherent control of the emission bands, excited via multiphoton absorption, in a zinc oxide (ZnO) crystal by shaping ultrashort laser pulses (790 nm, 30 fs, 5 nJ, 80 MHz). ZnO has been proposed as a potential material for photonic devices due to its large exciton binding energy (60 meV). Initially, we implemented the experimental setup for the pulse shaper, as well as for the excitation and collection of the ZnO fluorescence. The coherent control was carried out with software based on a genetic algorithm (GA), also developed in the course of this work. Using the genetic algorithm, we observed a significant increase in the ZnO emission when appropriate spectral phase masks were applied to the laser pulse. Autocorrelation measurements were used to infer the pulse duration, which becomes longer after optimization of the emission bands via the GA. Additionally, we found that the phase masks that optimize the process are complex, oscillatory functions. Through Principal Component Analysis (PCA), we analyzed the data set provided by the GA and observed that this method can be used as a filter for the data, smoothing the curves and emphasizing the most important features of the phase masks obtained by the coherent control. Finally, we investigated the importance of the smoothed masks for the physical understanding of the process.
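As a rough illustration of the optimization loop described in this abstract, the sketch below runs a genetic algorithm over spectral phase masks, with a simulated two-photon signal from a shaped Gaussian pulse standing in for the measured ZnO emission. The population size, mutation rate and 64-pixel mask are illustrative assumptions, not values from the thesis.

```python
# Illustrative GA optimizing a spectral phase mask; the fitness is a
# simulated two-photon-like signal, not the real ZnO measurement.
import numpy as np

rng = np.random.default_rng(0)
N_PIX, POP, GENS = 64, 30, 100
spectrum = np.exp(-np.linspace(-3, 3, N_PIX) ** 2)  # Gaussian spectral amplitude

def fitness(phase):
    field_t = np.fft.ifft(spectrum * np.exp(1j * phase))  # shaped pulse in time
    return np.sum(np.abs(field_t) ** 4)                   # two-photon-like signal

pop = rng.uniform(-np.pi, np.pi, (POP, N_PIX))
for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][: POP // 2]]   # elitist selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_PIX)                      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mask = rng.random(N_PIX) < 0.05                   # sparse mutation
        child[mask] += rng.normal(0, 0.5, mask.sum())
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best simulated signal:", fitness(best))
```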
|
362 |
System identification and control of smart structures: PANFIS modeling method and dissipativity analysis of LQR controllers
Mohammadzadeh, Soroush (30 May 2013)
"Maintaining an efficient and reliable infrastructure requires continuous monitoring and control. In order to accomplish these tasks, algorithms are needed to process large sets of data and for modeling based on these processed data sets. For this reason, computationally efficient and accurate modeling algorithms along with data compression techniques and optimal yet practical control methods are in demand. These tools can help model structures and improve their performance. In this thesis, these two aspects are addressed separately. A principal component analysis based adaptive neuro-fuzzy inference system is proposed for fast and accurate modeling of time-dependent behavior of a structure integrated with a smart damper. Since a smart damper can only dissipate energy from structures, a challenge is to evaluate the dissipativity of optimal control methods for smart dampers to decide if the optimal controller can be realized using the smart damper. Therefore, a generalized deterministic definition for dissipativity is proposed and a commonly used controller, LQR is proved to be dissipative. Examples are provided to illustrate the effectiveness of the proposed modeling algorithm and evaluating the dissipativity of LQR control method. These examples illustrate the effectiveness of the proposed modeling algorithm and dissipativity of LQR controller."
|
363 |
Apathy and impulsivity in frontotemporal lobar degeneration syndromes
Lansdall, Claire Jade (January 2017)
There has been considerable progress in the clinical, pathological and genetic fractionation of frontotemporal lobar degeneration syndromes in recent years, driving the development of novel diagnostic criteria. However, phenotypic boundaries are not always distinct and syndromes converge with disease progression, limiting the insights available from traditional diagnostic classification. Alternative transdiagnostic approaches may provide novel insights into the neurobiological underpinnings of symptom commonalities across the frontotemporal lobar degeneration spectrum. In this thesis, I illustrate the use of transdiagnostic methods to investigate apathy and impulsivity. These two multifaceted constructs are observed across all frontotemporal lobar degeneration syndromes, including frontotemporal dementia, progressive supranuclear palsy and corticobasal syndrome. They cause substantial patient morbidity and carer distress, often coexist and are undertreated. Using data from the Pick’s disease and Progressive supranuclear palsy Prevalence and INcidence (PiPPIN) Study, I examine the frequency, characteristics and components of apathy and impulsivity across the frontotemporal lobar degeneration spectrum. A principal component analysis of the neuropsychological data identified eight distinct components of apathy and impulsivity, separating patient ratings, carer ratings and behavioural tasks. Apathy and impulsivity measures were positively correlated, frequently loading onto the same components and providing evidence of their overlap. The data confirmed that apathy and impulsivity are common across the spectrum of frontotemporal lobar degeneration syndromes. Voxel-based morphometry revealed distinct neural correlates for the components of apathy and impulsivity. Patient ratings correlated with white matter changes in the corticospinal tracts, which may reflect retained insight into their physical impairments. Carer ratings correlated with grey and white matter changes in frontostriatal, frontotemporal and brainstem systems, which have previously been implicated in motivation, arousal and goal-directed behaviour. Response inhibition deficits on behavioural tasks correlated with focal frontal cortical atrophy in areas implicated in goal-directed behaviour and cognitive control. Diffusion tensor imaging was highly sensitive to the white matter changes underlying apathy and impulsivity in frontotemporal lobar degeneration syndromes. Diffusion tensor imaging findings were largely consistent with voxel-based morphometry, with carer ratings reflecting widespread changes while objective measures showed changes in focal, task-specific brain regions. White matter abnormalities often extended beyond observed grey matter changes, providing supportive evidence that white matter dysfunction represents a core pathophysiology in frontotemporal lobar degeneration. Apathy was a significant predictor of death within two and a half years of assessment, consistent with studies linking apathy to poor outcomes. The prognostic importance of apathy warrants more accurate measurement tools to facilitate clinical trials. Although causality remains unclear, the influence of apathy on survival suggests effective symptomatic treatments may also prove disease-modifying. These findings have several implications. First, clinical studies of apathy/impulsivity in frontotemporal lobar degeneration syndromes should target patients who present with these symptoms, irrespective of their diagnostic category. Second, data-driven approaches can inform the choice of assessment tools for clinical trials, and their link to neural drivers of apathy and impulsivity. Third, the components and their neural correlates provide a principled means to measure (and interpret) the effects of novel treatments in the context of frontotemporal lobar degeneration.
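As a schematic illustration of the data-driven step (not the PiPPIN analysis itself), the sketch below runs a PCA over a synthetic patient-by-measure matrix; all dimensions, variable groupings and the number of components are assumptions.

```python
# Illustrative PCA over synthetic apathy/impulsivity measures
# (rows = patients, columns = ratings and task scores).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_patients = 120
latent = rng.normal(size=(n_patients, 2))       # hidden traits (assumed)
loadings = rng.normal(size=(2, 10))
X = latent @ loadings + rng.normal(scale=0.5, size=(n_patients, 10))

X = (X - X.mean(0)) / X.std(0)                  # z-score each measure
pca = PCA(n_components=4)
scores = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print("loadings of component 1:", pca.components_[0].round(2))
```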
|
364 |
Vis-Saúde - Uma metodologia para visualização e análise de dados de saúde pública / Vis-Saúde: a methodology for visualization and analysis of public health data
Kaieski, Naira (30 July 2014)
Computer systems provide tools that enable the analysis and extraction of knowledge from large and complex sets of variables. Viewed in purely numerical form, the data set describing an event may not reveal any information relevant enough to support a deeper study. However, the use of visualization techniques together with data analysis methodologies enables a more solid understanding of an event. Analysis methodologies can expose possible correlations between variables, opening new lines of investigation, while data visualization techniques appropriate to the context, the moment and the set of variables can reveal important information about the event that had remained hidden behind the numbers. Vis-Saúde, developed in this work, uses open source technologies to build a methodology for data analysis and visualization in a web environment where, when records are available, they can be submitted for analysis and the result visualized in real time. Efforts to apply analysis and visualization methodologies to public health records are relevant for extracting knowledge from the complex data sets available, with the goal of informing the population and public health managers. The data analysis methodology employed in this work is based on principal component analysis, used to find the dependency relationship between an event of interest and other related variables available for study. The use of data visualization techniques enriches the possibilities for understanding the information under study. Vis-Saúde provides two distinct georeferenced visualizations of the health records, both based on the region where the event occurred and displayed on a geographic map to ease identification of the region of interest. The first presents the information resulting from the dependency analysis between the study variables. The second aims to ease understanding of the dynamics of disease incidence by presenting the concentration of occurrences through heat maps, in which the data are grouped into time series and shown as an animation over time. Public health is a very dynamic segment of government administration, constantly subject to endemic or epidemic situations. This behavior demands constant monitoring by managers in order to identify serious situations toward which actions and resources should be directed. To demonstrate the methodology with real data, records of dengue incidence in Brazil were used. Dengue is present in all states of the federation and represents a serious risk to collective health, since its incidence has increased in recent years.
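The heat-map visualization described above amounts to binning georeferenced records in space and time. A minimal sketch with synthetic coordinates (not the real dengue records) follows; the grid size, region bounds and weekly bins are assumptions.

```python
# Group synthetic georeferenced cases into weekly 2-D intensity grids,
# the frames a heat-map animation would step through.
import numpy as np

rng = np.random.default_rng(3)
n_cases = 5000
lat = rng.normal(-30.0, 0.5, n_cases)        # synthetic latitudes
lon = rng.normal(-51.2, 0.5, n_cases)        # synthetic longitudes
week = rng.integers(0, 12, n_cases)          # week of each case

frames = []
for w in range(12):
    sel = week == w
    grid, _, _ = np.histogram2d(lat[sel], lon[sel], bins=50,
                                range=[[-31.5, -28.5], [-52.7, -49.7]])
    frames.append(grid)                      # one heat-map frame per week
print("frame shape:", frames[0].shape, "cases in week 0:", int(frames[0].sum()))
```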
|
365 |
Uma Abordagem para Detecção Automática de Planos em Modelos Digitais de Afloramentos Baseada em PCA / A PCA-based approach for automatic plane detection in digital outcrop models
Gomes, Róbson Koszeniewski (19 September 2014)
The use of LIDAR (Light Detection and Ranging) systems for gathering spatial data has been extensively adopted in geological studies. This type of high-resolution, high-accuracy digital remote sensing produces 3D digital models that allow a more detailed and quantitative analysis of heterogeneous structures such as outcrops. One such study is the analysis of rock-formation geometry, where the orientation of an inclined plane is a clue to the overall understanding of the structure. This work proposes a method to automatically compute and detect all possible planes in a point cloud, based on the Principal Component Analysis (PCA) technique. A software tool was built to visualize the digital model and select the best planes, and a study was conducted to compare and validate the results of the method against data measured in the field.
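The core of PCA-based plane detection can be sketched as follows: for a patch of points, the plane normal is the eigenvector of the covariance matrix associated with the smallest eigenvalue, and the eigenvalue ratios measure how planar the patch is. The synthetic patch and the planarity score below are illustrative, not the thesis implementation.

```python
# PCA plane fit on a synthetic near-planar patch: z = 0.3x + 0.1y + noise.
import numpy as np

rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, (500, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.01, 500)
points = np.column_stack([xy, z])

centered = points - points.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending eigenvalues
normal = eigvecs[:, 0]                                 # smallest-variance direction
planarity = 1.0 - eigvals[0] / eigvals[1]              # ~1 for a clean plane
print("normal:", normal.round(3), "planarity:", round(float(planarity), 3))
```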
|
366 |
Reamostragem adaptativa para simplificação de nuvens de pontos / Adaptive resampling for point cloud simplification
Silva, Fabrício Müller da (31 August 2015)
This work presents an algorithm for point cloud simplification based on the local inclination of the surface sampled by the input set. The goal is to transform the original point cloud into the smallest possible set while preserving the features and topology of the original surface. The proposed algorithm adaptively resamples the input set, removing redundant points while maintaining a user-defined level of quality in the final set. The process consists of recursively partitioning the input set using Principal Component Analysis (PCA): PCA is applied to define the successive partitions, to obtain a linear (planar) approximation within each partition, and to evaluate the quality of each approximation. Finally, the algorithm makes a simple choice of which points to keep to represent the linear approximation of each partition; these points form the final data set of the simplification process. To evaluate the results, a distance metric between polygon meshes based on the Hausdorff distance was used, comparing the surface reconstructed from the original point cloud with the one reconstructed from the filtered cloud. The algorithm achieved compression rates of up to 95% of the input data set, reducing the total execution time of the reconstruction process while preserving the features and topology of the original model. The quality of the surface reconstructed from the filtered cloud is also attested by the comparison metric.
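A minimal sketch of the recursive PCA partitioning idea is given below; the flatness tolerance, minimum partition size and the rule of keeping only the extreme points of each flat partition are assumptions of the sketch, not the thesis algorithm's exact choices.

```python
# Recursive PCA simplification: split along the principal axis until a
# partition is flat enough, then keep a few representative points.
import numpy as np

def simplify(points, tol=1e-3, min_size=20):
    if len(points) <= min_size:
        return points
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending order
    if eigvals[0] < tol:
        # Nearly planar: keep extreme points along the two largest
        # principal directions as representatives (assumed rule).
        keep = set()
        for axis in (eigvecs[:, 2], eigvecs[:, 1]):
            proj = centered @ axis
            keep.update((int(np.argmin(proj)), int(np.argmax(proj))))
        return points[sorted(keep)]
    proj = centered @ eigvecs[:, 2]          # split along largest-variance axis
    return np.vstack([simplify(points[proj <= 0], tol, min_size),
                      simplify(points[proj > 0], tol, min_size)])

rng = np.random.default_rng(5)
xy = rng.uniform(-1, 1, (5000, 2))
cloud = np.column_stack([xy, np.sin(2 * xy[:, 0]) + 0.005 * rng.normal(size=5000)])
out = simplify(cloud)
print(f"{len(cloud)} -> {len(out)} points "
      f"({100 * (1 - len(out) / len(cloud)):.0f}% reduction)")
```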
|
367 |
Characterization of chemical markers for the discrimination of East Asian handmade papers using pyrolysis, gas chromatography and mass spectrometry / Caractérisation de marqueurs chimiques pour l'identification des papiers traditionnels asiatiques en utilisant la pyrolyse, la chromatographie gazeuse et la spectrométrie de masse
Han, Ung Bin (11 July 2018)
This study explores a new methodology for the characterization and identification of handmade-paper fibers using pyrolysis, gas chromatography and mass spectrometry. It employs an easy sampling process requiring only a minor quantity of sample (on the order of tens of µg). After pyrolysis of the handmade papers and chromatographic separation of the products, characteristic distributions of fiber metabolites (their presence and intensity) were observed within a defined region of interest (ROI) of the chromatograms; these distributions proved specific to papers made from different fiber types and were used as markers to discriminate the botanical origins of fibers commonly used in traditional East Asian papermaking. First, the problems encountered in the investigation of handmade papers are introduced: the origin of papermaking, the inconsistency of fiber identification results sometimes reported by different scholars, the limits of microscopy in identifying fibers from botanically similar species, and the possible imprecision of reference sample labeling. These problems show the need for a new method (i) to make fiber identification of handmade papers more reliable and (ii) to validate or confirm the identification results obtained by microscopy. To this end, modern reference handmade papers were studied first. The results revealed that the different plant fibers used for papermaking show different marker distributions in the ROI: for instance, the Moraceae family shows a characteristic distribution of triterpene compounds and the Thymelaeaceae family a characteristic distribution of stigmastane-type compounds, while fibers from the ma group usually reveal few compounds in the ROI. The common origin of these metabolites was attested by comparing the distributions obtained from the raw plant fibers with the similar distributions found in the corresponding handmade papers. The chosen methodology thus offers promise as a chemotaxonomic method for identifying the fibers of unknown handmade papers. With the examples of applications provided during the experimental work, the coupling of pyrolysis, gas chromatography and mass spectrometry (through the use of Py-GC/MS and Py-GCxGC/MS) showed its ability to distinguish fibers from the same plant family (which may present similar microscopic features) and can thus constitute an effective method for fiber identification as well as for validating the identification results of microscopic observation. The thesis also discusses the features of comprehensive two-dimensional gas chromatography (GCxGC), its benefits for cultural heritage applications, and its help in treating one-dimensional data. The tested Py-GCxGC/MS methodology is proposed for the first time in the cultural heritage field and has the potential to advance research in this domain, enhancing the capacity to handle small quantities of complex samples while providing an exhaustive response on their composition.
|
368 |
Sistemática para seleção de fornecedores na indústria da construção civil / A systematic approach for supplier selection in the construction industry
Denicol, Juliano (January 2014)
Today's industrial environment is characterized by intense globalization, competition between supply chains, focus on core competencies and outsourcing of other services. The management of relationships between the independent agents of a supply chain, and of the procurement process, is therefore a potential source of competitive advantage. In the construction context, the proper selection of business partners is a key element of project success, since a large proportion of activities can be subcontracted and have precedence relationships among them. Supplies represent a significant share of construction costs, around 60%, which shows the potential gains achievable by structuring the supplier selection process in the construction industry. Price-based selection drives the more responsible subcontractors and suppliers out of the competition, contributing to lower performance levels and reduced overall project efficiency, since inefficiencies accumulate along the chain. By structuring the supplier selection process, it is possible to mitigate the supply risks arising from contractor failures over the course of the relationship. The objective of this study was to develop a systematic approach for selecting critical suppliers, considering several qualitative and quantitative criteria beyond price. The approach also aims to remove subjectivity from the process and to identify the best supplier objectively. To that end, competitive dimensions were defined to evaluate suppliers, and two quantitative methods, Fuzzy Set Theory (FST) and Principal Component Analysis (PCA), were then used to select the best supplier among the alternatives based on the evaluations of multiple agents.
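One plausible way to combine the two methods named above is sketched below: linguistic ratings from several evaluators are mapped to triangular fuzzy numbers, defuzzified by centroid, and the resulting decision matrix is weighted by first-principal-component loadings. The linguistic scale, the data and the aggregation rule are all assumptions of the sketch, not the thesis's actual procedure.

```python
# Fuzzy ratings -> crisp decision matrix -> PCA-weighted supplier scores.
import numpy as np

FUZZY = {"poor": (0, 0, 3), "fair": (2, 5, 8), "good": (7, 10, 10)}  # (l, m, u)
def defuzzify(term):
    l, m, u = FUZZY[term]
    return (l + m + u) / 3.0                  # centroid of a triangular number

# ratings[agent][supplier] = linguistic score per criterion (assumed data)
ratings = [
    [["good", "fair", "good"], ["fair", "fair", "poor"], ["good", "good", "fair"]],
    [["good", "good", "fair"], ["poor", "fair", "fair"], ["fair", "good", "good"]],
]
# Average the agents' defuzzified scores -> suppliers x criteria matrix
X = np.mean([[[defuzzify(t) for t in sup] for sup in agent] for agent in ratings],
            axis=0)

Xs = (X - X.mean(0)) / (X.std(0) + 1e-9)      # standardize each criterion
eigvals, eigvecs = np.linalg.eigh(np.cov(Xs.T))
w = np.abs(eigvecs[:, -1])                    # first-PC loadings as weights
scores = Xs @ (w / w.sum())
print("composite scores:", scores.round(2), "-> best supplier:", int(np.argmax(scores)))
```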
|
369 |
Classificador de qualidade de álcool combustível e poder calorífico de gás GLP. / Fuel alcohol quality and LPG calorific power classifier.
Hirayama, Vitor (08 June 2004)
This work presents the results of developing a robust system as an alternative for recognizing the quality of fuel-alcohol vapor and the calorific power of LPG (liquefied petroleum gas) in an electronic nose. Two experimental methodologies were implemented to extract features from the fuel-alcohol vapor and LPG patterns. The first data-processing approach used a Fuzzy Inference System (FIS) and two training algorithms for Artificial Neural Networks (ANNs) to recognize fuel-alcohol vapor patterns: Backpropagation and Learning Vector Quantization. The second approach was to develop an LPG calorific-power recognition system robust to the random loss of one sensor. Three systems were compared. The first implemented a single ANN trained to recognize all data simulating the failure of a random sensor; this system achieved 97% correct responses. The second implemented seven ANNs trained on subsets of the input data, such that six ANNs were each trained with a different sensor in failure and the seventh was trained with data from all sensors working; this system achieved 99% correct responses. The third implemented a static committee machine (ensemble) of ten parallel ANNs; the result was 97% correct responses. The ANNs gave better responses than the FIS. Some ways of implementing the recognition system in hardware on off-the-shelf DSP and microcontroller platforms were also suggested.
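The second, best-performing strategy can be sketched as follows, assuming synthetic six-sensor data in place of the real e-nose readings and scikit-learn's MLPClassifier in place of the original networks: one network is trained per simulated sensor failure, plus one on intact data, and the network matching the detected failure handles the sample.

```python
# One expert network per failure mode: each trained with a different
# sensor zeroed out, plus one trained on intact data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
n, n_sensors = 600, 6
X = rng.normal(size=(n, n_sensors))
y = (X[:, :3].sum(axis=1) > 0).astype(int)       # synthetic class label

def mask_sensor(X, i):
    Xm = X.copy()
    if i is not None:
        Xm[:, i] = 0.0                           # simulate a dead sensor
    return Xm

experts = {}
for failed in [None, 0, 1, 2, 3, 4, 5]:
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(mask_sensor(X, failed), y)
    experts[failed] = clf

# Evaluate: sensor 2 fails on new samples; route to the matching expert.
X_test = rng.normal(size=(200, n_sensors))
y_test = (X_test[:, :3].sum(axis=1) > 0).astype(int)
acc = experts[2].score(mask_sensor(X_test, 2), y_test)
print(f"accuracy with sensor 2 failed: {acc:.2f}")
```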
|
370 |
PREDICTION OF PROTECTED-PERMISSIVE LEFT-TURN PHASING CRASHES BASED ON CONFLICT ANALYSIS
Sagar, Shraddha (01 January 2017)
Left-turn maneuvers are considered the highest-risk movements at intersections, and two-thirds of left-turn-related crashes are reported at signalized intersections. Left-turning vehicles typically encounter conflicts with opposing through traffic. To separate conflicting movements, transportation agencies use protected-only phasing at signalized intersections, in which each movement proceeds alone; however, this can create delays, so protected-permissive phasing has been introduced to balance safety and delay. The permissive part of this phasing scheme retains the safety concerns and can increase the likelihood of conflicts resulting in crashes. This research developed a model that predicts the number of crashes under protected-permissive left-turn phasing, based on traffic volumes and calculated conflicts. A total of 103 intersections with protected-permissive left-turn phasing in Kentucky were simulated, and their left-turn-related conflicts were obtained by post-processing vehicle trajectories through the Surrogate Safety Assessment Model (SSAM). Factors that could affect crash propensity were identified through Principal Component Analysis and incorporated into Negative Binomial Regression models. Nomographs were developed from the models, which traffic engineers can use in left-turn phasing decisions with enhanced safety considerations.
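A hedged sketch of the modeling chain (PCA on correlated exposure and conflict measures, then a negative binomial count model) follows; the synthetic data, the two retained components and the dispersion parameter are assumptions, not the thesis's fitted values.

```python
# PCA of correlated volume/conflict measures feeding a negative binomial
# regression on synthetic crash counts (103 rows, one per intersection).
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n = 103
volume = rng.uniform(200, 2000, n)                 # left-turn volume (assumed)
opposing = rng.uniform(500, 5000, n)               # opposing through volume
conflicts = 0.002 * volume * opposing / 1000 + rng.normal(0, 1, n)
X = np.column_stack([volume, opposing, conflicts])

Z = PCA(n_components=2).fit_transform((X - X.mean(0)) / X.std(0))
mu = np.exp(0.5 + 0.4 * Z[:, 0])                   # synthetic crash mean
crashes = rng.poisson(mu * rng.gamma(2.0, 0.5, n)) # overdispersed counts

model = sm.GLM(crashes, sm.add_constant(Z),
               family=sm.families.NegativeBinomial(alpha=0.5))
result = model.fit()
print(result.params.round(3))
```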
|