1

Apply Fuzzy Cluster Method for Identifying the Spatial Distribution of Pollutants around Kaohsiung Coastal Water

Chang, Dun-Cheng 15 August 2002 (has links)
Abstract: The nearshore water receives pollutants from the land and is heavily polluted. To assess this impact efficiently, the focus of marine environmental monitoring is shifting from measuring pollutants in the water body to measuring pollutants adhered to seabed sediments, with statistical methods then used to analyze and interpret the survey data. For the problem of identifying the spatial distributions of classified pollutants in the water around Kaohsiung harbor, the commonly used K-means cluster analysis gives unsatisfactory results. This research therefore applies the Fuzzy Cluster Method (FCM) to achieve better results. Through an adaptive search, FCM should generate appropriate cluster centers for discerning the pollutants' spatial distribution, which in turn conveys more meaning to support a sound interpretation. The FCM model developed in this research will also help trace the most suspicious or newly introduced pollutant source, with assistance from domain expertise, if an unusual pollutant is found in the study area. The benefit is clear: the authority in charge of the marine environment can respond efficiently and correctly to such a pollution event and take appropriate action. FCM has been applied with great success in computer vision and pattern recognition, and a considerable recent literature in environmental and natural resource management, including geostatistical modeling, pollution mitigation, and groundwater quality management, has examined cluster analysis using FCM. Marine environmental problems are highly complex and inherently uncertain; by introducing advanced analysis techniques such as FCM, the overall efficiency of marine environmental management can be improved.
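To illustrate the fuzzy c-means idea referred to above, here is a minimal sketch in Python. It is not the thesis implementation: the fuzziness exponent, the number of clusters, and the small table of station coordinates and concentrations are placeholder assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2 / (m - 1)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical sediment survey: easting, northing, pollutant concentration per station.
X = np.array([[0.1, 0.2, 3.5], [0.2, 0.1, 3.8], [5.0, 5.1, 0.4],
              [5.2, 4.9, 0.5], [2.5, 2.4, 1.9], [2.6, 2.6, 2.1]])
centers, U = fuzzy_c_means(X, n_clusters=3)
print(centers)           # fuzzy cluster centers
print(U.argmax(axis=1))  # hardened cluster label per station
```

Unlike K-means, each station keeps a graded membership in every cluster, which is what makes the resulting spatial pattern easier to interpret near cluster boundaries.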
2

Desenvolvimento de algoritmo de clusterização para calorímetro frontal do experimento ALICE no LHC / Development of clustering algorithm for foward calorimeter in the ALICE experiment at the LHC

Silva, Danilo Anacleto Arruda da 22 September 2014 (has links)
The Large Hadron Collider (LHC) is a proton and heavy-ion accelerator located at CERN (Conseil Européen pour la Recherche Nucléaire). For one of its experiments, ALICE (A Large Ion Collider Experiment), a detector is being designed that is dedicated to exploring the unique aspects of nucleus-nucleus collisions. The main goal of ALICE is to study the formation of a new state of matter, the quark-gluon plasma. This requires precise measurements of the hadrons, electrons, muons, and photons produced in lead-lead collisions. A Forward Calorimeter (FoCal) has therefore been proposed as an upgrade to ALICE. Its purpose is to study parton distribution functions (PDFs) in the regime of small Bjorken x, where the PDFs are expected to evolve non-linearly because of the high gluon densities, a phenomenon referred to as gluon saturation. Studying this region requires measuring the direct photons produced in the collision; these, however, are masked by the background of photons from pion decay, so the two contributions must be distinguished. This creates an opportunity to apply clustering, a data-mining technique. This work contributed to the initial development of a clustering algorithm for the FoCal calorimeter.
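A common starting point for calorimeter clustering, shown here only as a rough illustration and not as the algorithm developed in this work, is to seed clusters at cells that are local maxima of deposited energy and attach the remaining cells to the nearest seed. The grid size, thresholds, and tower energies below are invented.

```python
import numpy as np

def cluster_cells(energy, seed_threshold=1.0, noise_cut=0.05):
    """Seed clusters at cells that are local maxima above seed_threshold,
    then assign every remaining cell above the noise cut to the nearest seed."""
    ny, nx = energy.shape
    seeds = []
    for y in range(ny):
        for x in range(nx):
            e = energy[y, x]
            if e < seed_threshold:
                continue
            neighbourhood = energy[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if e >= neighbourhood.max():          # cell is a local maximum
                seeds.append((y, x))
    labels = -np.ones_like(energy, dtype=int)     # -1 means "not assigned"
    if not seeds:
        return seeds, labels
    for y in range(ny):
        for x in range(nx):
            if energy[y, x] <= noise_cut:
                continue
            d2 = [(y - sy) ** 2 + (x - sx) ** 2 for sy, sx in seeds]
            labels[y, x] = int(np.argmin(d2))
    return seeds, labels

# Hypothetical 6x6 tower energies (GeV) with two nearby showers.
energy = np.zeros((6, 6))
energy[1, 1], energy[1, 2], energy[2, 1] = 3.2, 0.8, 0.6
energy[4, 4], energy[3, 4], energy[4, 3] = 2.5, 0.7, 0.4
seeds, labels = cluster_cells(energy)
print(seeds)     # [(1, 1), (4, 4)]
print(labels)    # per-cell cluster index, -1 where no energy
```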
3

Détournement d'usage de médicaments psychoactifs : développement d'une approche pharmacoépidémiologique / Abuse of psychoactive prescription drugs : development of a new pharmacoepidemiologic method

Frauger-Ousset, Elisabeth 18 June 2010 (has links)
This work presents the development of a new pharmacoepidemiological method that uses health-insurance prescription databases to characterize and estimate the abuse of psychoactive prescription drugs in real life. The method is based on cluster analysis, a statistical technique that groups subjects, a posteriori, into subgroups. From the subgroups' characteristics, different behaviour profiles can be identified and quantified, including deviant behaviour, yielding the rate of deviant subjects among all subjects who obtain the drug from a pharmacy. We applied this method to several prescription drugs. For each drug, we included all individuals affiliated to the French general health-insurance scheme in two regions of southern France (Provence-Alpes-Côte d'Azur and Corsica) who had the drug reimbursed during the first weeks of the year; their dispensings were then followed over nine months. After a descriptive analysis, a clustering method was applied using four quantitative variables to establish consumer profiles: number of different prescribers, number of different pharmacies, number of dispensings, and quantity dispensed (in defined daily doses, DDD). The characteristics of the resulting subgroups were then examined, particularly those with deviant behaviour. A first study confirmed and quantified the magnitude of abuse of an emerging drug, clonazepam (publication no. 1). We then adapted the method to follow the evolution of this abuse over several years: the proportion of deviant subjects for clonazepam increased from 0.86% in 2001 to 1.38% in 2006 (publication no. 2). We also applied the method to estimate methylphenidate abuse from 2005 to 2008 (publication no. 3); methylphenidate abuse had already been described in other countries, whereas few data were available in France. This study estimated the proportion of subjects with deviant behaviour (0.5% in 2005 and 2006, 2.0% in 2007, and 1.2% in 2008) and assessed its evolution after a specific regulation came into force. Our team had also developed another method based on the same databases, the doctor-shopping indicator, which measures the quantity obtained by doctor shopping out of the overall quantity reimbursed (publication no. 4). Finally, we applied the two methods jointly to high-dosage buprenorphine, a product well known to be diverted in France, and compared their results (publication no. 5). The clustering method is increasingly used to monitor prescription-drug abuse, and its results are beginning to be integrated into the French post-marketing surveillance system for drug abuse, alongside the more traditional pharmacoepidemiological tools (OSIAP, OPPIDUM, OPEMA, ASOS, DRAMES).
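As a sketch of the kind of cluster analysis described above, the snippet below groups invented reimbursement profiles with k-means on the four variables mentioned (prescribers, pharmacies, dispensings, DDD quantity). The specific clustering algorithm and the rule for labelling the deviant subgroup are illustrative assumptions, not the thesis method.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-subject variables over a 9-month window:
# [number of prescribers, number of pharmacies, number of dispensings, quantity in DDD]
subjects = np.array([
    [1, 1,  6,  180],   # typical therapeutic use
    [2, 1,  7,  200],
    [1, 2,  8,  220],
    [9, 7, 30, 2500],   # doctor-shopping-like profile
    [8, 6, 26, 2100],
    [2, 2,  9,  260],
])

X = StandardScaler().fit_transform(subjects)           # put variables on a common scale
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# The "deviant" subgroup is taken here as the cluster with the highest mean DDD quantity.
deviant = max(set(labels), key=lambda c: subjects[labels == c, 3].mean())
rate = (labels == deviant).mean()
print(f"deviant cluster: {deviant}, proportion of subjects: {rate:.1%}")
```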
4

Shluková a regresní analýza mikropanelových dat / Clustering and regression analysis of micro panel data

Sobíšek, Lukáš January 2010 (has links)
The main purpose of panel studies is to analyze changes in the values of studied variables over time. In micro panel research, a large number of elements are observed periodically within a relatively short period of just a few years, and the number of repeated measurements is small. This dissertation deals with contemporary approaches to regression and cluster analysis of micro panel data. One approach to micro panel analysis is to use multivariate statistical models originally designed for cross-sectional data and modify them to account for within-subject correlation. The thesis summarizes the available tools for regression analysis of micro panel data: the known and currently used linear mixed-effects models for a normally distributed dependent variable are recapitulated, and newer approaches for response variables with other than normal distributions are presented, including the generalized marginal linear model, the generalized linear mixed-effects model, and Bayesian modelling. In addition to describing these models, the thesis gives a brief overview of their implementation in the R software. A difficulty with regression models adapted to micro panel data is the ambiguity of their parameter estimation; this thesis proposes a way to improve the estimates through cluster analysis and therefore also reviews methods for clustering micro panel data. Because the supply of such methods is limited, the main goal of the thesis is to devise its own two-step approach to clustering micro panel data. In the first step, the panel data are transformed into a static form using a set of proposed characteristics of dynamics, which capture different features of the time course of the observed variables. In the second step, the elements are clustered by conventional techniques (agglomerative clustering and C-means partitioning) based on a dissimilarity matrix computed from the clustering variables obtained in the first step. Another goal is to find out whether the suggested procedure improves the quality of regression models for this type of data. In a simulation study, the proposed procedure is compared with the procedure implemented in the kml package for R and with the clustering characteristics proposed by Urso (2004); the study showed better results for the proposed combination of clustering variables than for the other combinations currently used. A script written in the R language, available on the attached CD, is another outcome of this thesis and can be used to analyze the reader's own micro panel data.
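The two-step approach can be sketched as follows (in Python rather than the R implementation described above). The choice of dynamic characteristics here — level, linear trend, and variability per subject — and the simulated panel are assumptions for illustration only.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

# Hypothetical micro panel: 20 subjects, 4 repeated measurements each.
n_subjects, n_times = 20, 4
t = np.arange(n_times)
trends = rng.choice([-0.5, 0.0, 0.8], size=n_subjects)        # three latent dynamics
panel = 5 + trends[:, None] * t + rng.normal(0, 0.3, (n_subjects, n_times))

# Step 1: turn each trajectory into static "characteristics of dynamics".
def dynamics_features(y):
    slope = np.polyfit(t, y, 1)[0]             # linear trend over time
    return [y.mean(), slope, y.std(ddof=1)]    # level, trend, variability

features = np.array([dynamics_features(y) for y in panel])

# Step 2: agglomerative clustering on the dissimilarity matrix of those features.
D = pdist(features, metric="euclidean")
labels = fcluster(linkage(D, method="ward"), t=3, criterion="maxclust")
print(labels)   # cluster membership per subject
```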
5

Computational Simulation of Southern Pine Lumber Using Finite Element Analysis

Li, Yali 06 August 2021 (has links)
Finite element analysis is a powerful technique for predicting the response of materials and structures under given loading situations, including applied forces, changing temperature and humidity, and changing boundary conditions. In this work, the mechanical properties of wood were analyzed with an emphasis on bending behavior under a laterally applied force, using finite element simulation in ABAQUS (Dassault Systèmes, 2020 version). Two modeling approaches were implemented in the ABAQUS commercial software, and the modulus of elasticity (MOE) obtained from the computational results was compared with data from the experimental records. The simulation model that took grain patterns into consideration matched the displacements from the third-point bending test more closely within the elastic range. Machine learning methods are widely applied to image-processing tasks such as digit recognition. This work developed a Python script that processes an image of a wood cross section photographed against an ambient background and calculates the latewood proportion using unsupervised machine learning. The GrabCut function and gray-level co-occurrence matrix (GLCM) processing were used to extract the wood section and the wood texture features, respectively. The K-means method was then used to separate latewood from earlywood based on the mean value of the GLCM, and the script computed the latewood ratio from a simple equation. The latewood ratios obtained from the Python script were compared with ratios from the dot-grid method. Statistical models in SPSS version 27 (IBM, Chicago, IL) were used to quantify the relationships between several parameters. Because density, latewood ratio, and number of rings per inch are clearly correlated with one another, a ridge regression model was proposed to study the relationship of MOE and modulus of rupture (MOR) with multiple independent variables. Ridge regression, also known as Tikhonov regularization, addresses the collinearity problems that can introduce statistical bias in stepwise regression analysis.
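The latewood-ratio step can be sketched roughly as below. This is not the thesis script: it clusters per-pixel brightness with k-means instead of GLCM texture features, and a synthetic image stands in for a photographed cross section.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic grayscale cross section: dark bands stand in for latewood rings.
img = rng.normal(200, 10, (120, 120))                 # light earlywood background
for r in range(10, 120, 30):
    img[r:r + 6, :] = rng.normal(90, 10, (6, 120))    # darker latewood bands

# Cluster pixel intensities into two groups (earlywood vs. latewood).
pixels = img.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

# Latewood is taken as the darker cluster; its share of pixels is the latewood ratio.
means = [pixels[labels == c].mean() for c in range(2)]
latewood_cluster = int(np.argmin(means))
latewood_ratio = (labels == latewood_cluster).mean()
print(f"estimated latewood ratio: {latewood_ratio:.2f}")   # roughly 4*6/120 = 0.20
```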
6

Aplicação de máquinas de vetores de suporte na identificação de perfis de alunos de acordo com características da teoria das inteligências múltiplas / Implementation of support vector machines for students’profiles identification according to characteristics of multiple intelligences

Lázaro, Diego Henrique Emygdio [UNESP] 31 May 2016 (has links)
In this dissertation, a classification mechanism was developed to identify a student's profile according to characteristics of the theory of multiple intelligences, based on Support Vector Machines (SVMs), clustering methods, and class balancing. The purpose of this classification is to let the tutors responsible for producing course material in distance-learning support tools direct content to each student in a way that explores his or her predominant multiple intelligence. For the experiments, two SVMs were created using a classification scheme based on k binary problems, which reduces a multi-class problem to a set of binary problems. The results obtained during the SVM training and test phases were expressed as percentages by means of a partitioning clustering algorithm; these percentages help to interpret the profile classification according to the predominant intelligences. In addition, class-balancing methods improved the classifier's performance and thus increased the effectiveness of the mechanism, since its error rates were low.
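A minimal sketch of this kind of pipeline — multi-class SVM classification built from binary subproblems, with class weighting to compensate for imbalance and per-class percentages as output — is shown below. The questionnaire features and intelligence labels are invented, and the dissertation's exact balancing and clustering steps are not reproduced.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical questionnaire scores (rows = students, columns = answer scales) and a
# predominant-intelligence label per student (0 = linguistic, 1 = logical, 2 = spatial).
X = rng.normal(size=(90, 6))
y = np.repeat([0, 1, 2], [50, 30, 10])           # deliberately imbalanced classes
X[y == 1, 1] += 2.0                               # give each class a weak signature
X[y == 2, 4] += 2.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# One binary SVM per class ("k binary problems"); class_weight compensates for imbalance.
clf = OneVsRestClassifier(SVC(kernel="rbf", class_weight="balanced", probability=True))
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)                   # per-class proportions for each student
print(np.round(proba[:3] * 100, 1))               # rows sum to ~100: percentage per intelligence
print("accuracy:", clf.score(X_te, y_te))
```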
7

Methodological proposal for social impact assessment and environmental conflict analysis

Delgado Villanueva, Kiko Alexi 05 October 2016 (has links)
Social impact assessment (SIA) is a part of environmental impact assessment (EIA) characterized by a high level of uncertainty and by the subjective aspects of the methods used to carry it out. In addition, environmental conflict analysis (ECA) has become a key factor for the viability of projects and the welfare of affected populations. In this thesis, an integrated method for SIA and ECA is proposed, combining the grey clustering method and the entropy-weight method. SIA is performed using grey clustering, which allows qualitative information from a stakeholder group to be quantified. ECA, in turn, is performed using the entropy-weight method, which identifies the criteria on which stakeholder groups diverge most, making it possible to establish measures to prevent potential environmental conflicts. The integrated method was applied and tested in two case studies. The first was a mining project in northern Peru, for which three stakeholder groups and seven criteria were identified. The results revealed that for the urban population group and the rural population group the project would have a positive and a negative social impact, respectively, while for the group of specialists the project would have a normal social impact. The criteria most likely to generate environmental conflict were, in order of importance: access to drinking water, poverty, GDP per capita, and employment. The second case study was a hydrocarbon exploration project in the Gulf of Valencia, Spain, for which four stakeholder groups and four criteria were identified. The results revealed that for the group of specialists the project would have a negative social impact, and that the group directly affected by the project and the group of citizens in favour held opposing perceptions. The criteria most likely to generate environmental conflict were the unemployment rate and GDP per capita. The proposed integrated method showed great potential in the cases studied and could be applied in other contexts and to other projects, such as water resources management, industrial projects, and construction projects, as well as to measure social impact and prevent conflicts during the implementation of government policies and programs. / Delgado Villanueva, KA. (2016). Methodological proposal for social impact assessment and environmental conflict analysis [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/64063
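The entropy-weight step mentioned above can be sketched as follows; the grey clustering step is omitted and the stakeholder rating matrix is invented. Criteria on which the groups' ratings diverge most receive the largest weights.

```python
import numpy as np

def entropy_weights(ratings):
    """Entropy-weight method: rows = stakeholder groups, columns = criteria.
    Criteria with more disagreement between groups get higher weights."""
    m, n = ratings.shape
    p = ratings / ratings.sum(axis=0, keepdims=True)     # column-wise proportions
    p = np.clip(p, 1e-12, None)                           # avoid log(0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(m)          # entropy per criterion, in [0, 1]
    d = 1.0 - e                                            # degree of divergence
    return d / d.sum()

# Hypothetical mean ratings (1-5 scale) given by three stakeholder groups
# to four criteria: water access, poverty, GDP per capita, employment.
ratings = np.array([
    [4.5, 3.0, 2.5, 3.0],   # urban population
    [1.5, 2.8, 2.6, 3.1],   # rural population
    [3.0, 3.1, 2.4, 2.9],   # specialists
])
w = entropy_weights(ratings)
print(np.round(w, 3))        # highest weight on the first criterion (most divergence)
```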
8

Identifikace osob pomocí bipedální lokomoce / Person's identification by means of bipedal locomotion

Krzyžanek, Jakub January 2010 (has links)
The aim of this thesis is to recognize a walking person in an image sequence by locating reference points on the body, comparing the course of their movement, and then identifying the recorded person. The k-means and mean-shift methods are used to obtain the person's silhouette; before them, an "environment model estimation" method, a type of difference method, is applied to narrow the search area and shorten the segmentation time. For the reference points, the thesis focuses on three regions: the centre of the head and both ankle joints. These points are determined in the image sequence and compared with the actual locations of the head centre and ankle joints marked by the user. The thesis also compares the movement trajectories of these points and attempts to identify the people whose gait is being recorded. Problematic situations encountered during the process are analyzed at the end. The result of the thesis is an algorithm that can locate a moving person in an image sequence (video), determine the reference points (centre of the head and ankles), compare them, and identify the recorded person.
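A rough sketch of the silhouette-extraction step described above (not the thesis implementation): a per-pixel median over time serves as a simple background model, and k-means then separates foreground from background in the difference image. Real video frames would replace the synthetic ones.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic grayscale video: static background plus a bright moving "person".
frames = rng.normal(60, 5, (20, 80, 60))
for i in range(20):
    frames[i, 20:60, 5 + 2 * i: 15 + 2 * i] += 120   # moving rectangle

# "Environment model estimation": the background is the per-pixel median over time.
background = np.median(frames, axis=0)

frame = frames[10]
diff = np.abs(frame - background).reshape(-1, 1)      # difference image

# k-means on the difference values splits pixels into background and silhouette.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(diff)
fg_cluster = int(np.argmax([diff[labels == c].mean() for c in range(2)]))
silhouette = (labels == fg_cluster).reshape(frame.shape)

# A crude reference point: the centroid of the topmost silhouette rows ("head centre").
ys, xs = np.nonzero(silhouette)
head_y = ys.min()
head_x = int(xs[ys <= head_y + 3].mean())
print("head reference point:", (head_y, head_x))
```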
9

模糊統計分類及其在茶葉品質評定的應用 / Analysis fuzzy statistical cluster and its application in tea quality

林雅慧, Lin, Ya-Hui Unknown Date (has links)
Fuzzy theory originated in the mid-1960s, and research and development in this area have produced substantial results, with especially extensive work on applications to cluster analysis. The fuzzy clustering algorithm proposed by Bezdek is an improvement of Dunn's C-means method, but it still has shortcomings: for example, it does not consider weights and is based mainly on static data. In view of this, this study improves and extends Bezdek's method and proposes a weighted fuzzy clustering method. When the evaluation factors are multivariate, fuzzy weights should be taken into account. In addition, a time factor is incorporated so that the criterion function becomes a dynamic model, transforming traditional fuzzy clustering from a static to a dynamic data form in order to better reflect reality.
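One common way to write down a weighted, time-indexed fuzzy clustering criterion of the kind sketched above — a generic formulation, not necessarily the exact criterion function proposed in the thesis — is:

```latex
% Weighted, time-indexed fuzzy clustering objective:
% U = memberships, C = cluster centres, w_j = feature weights, T = time periods.
\[
J(U, C) = \sum_{t=1}^{T} \sum_{k=1}^{c} \sum_{i=1}^{n} u_{ik}^{m}(t)\, d_{ik}^{2}(t),
\qquad
d_{ik}^{2}(t) = \sum_{j=1}^{p} w_{j} \bigl( x_{ij}(t) - c_{kj}(t) \bigr)^{2},
\qquad
\sum_{k=1}^{c} u_{ik}(t) = 1 .
\]
```

With the feature weights $w_j$ held fixed, the membership and centre updates keep the standard fuzzy c-means form, $u_{ik}(t) = 1 / \sum_{l=1}^{c} \bigl(d_{ik}(t)/d_{il}(t)\bigr)^{2/(m-1)}$ and $c_{kj}(t) = \sum_i u_{ik}^{m}(t)\,x_{ij}(t) / \sum_i u_{ik}^{m}(t)$, only with the weighted distance in place of the plain Euclidean one.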
