About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Apply Fuzzy Cluster Method for Identifying the Spatial Distribution of Pollutants around Kaohsiung Coastal Water

Chang, Dun-Cheng 15 August 2002 (has links)
Abstract: Near-shore waters receive pollutants from the land and are heavily polluted. To assess this impact efficiently, the focus of marine environmental monitoring is shifting from inspecting pollutants in the water body to measuring pollutants adhered to seabed sediments, with statistical methods then used to analyze and interpret the survey data. For the problem of identifying the spatial distributions of classified pollutants in the waters around Kaohsiung harbor, the commonly used K-Means cluster analysis gives unsatisfactory results, so this research applies the Fuzzy Cluster Method (FCM) to achieve better ones. Through an adaptive search, FCM should generate appropriate cluster centers for discerning the pollutants' spatial distribution, which in turn supports a more meaningful interpretation. The FCM model developed in this research can also help trace the most likely or a new pollutant source, with assistance from domain expertise, if an unusual pollutant is found in the study area. The authority in charge of the marine environment can then respond efficiently and correctly to such pollution events and take appropriate actions. FCM has been applied with great success in computer vision and pattern recognition, and a growing body of literature in environmental and natural resource management, including geostatistical modeling, pollution mitigation, and groundwater quality management, has explored cluster analysis using FCM. Marine environmental problems are highly complex and inherently uncertain; nevertheless, introducing advanced analysis techniques such as FCM should improve the overall efficiency of marine environmental management.
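As a rough illustration of how fuzzy c-means differs from hard K-Means in this setting, the following Python sketch implements the standard FCM update equations on made-up station data; the number of clusters, the fuzzifier m, and the synthetic sediment-concentration matrix are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix U (n x c), rows sum to 1
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers as membership-weighted means
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # Membership update of standard FCM
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical sediment-pollutant data: rows = sampling stations, columns = concentrations
X = np.random.default_rng(1).random((50, 4))
centers, U = fuzzy_c_means(X, c=3)
labels = U.argmax(axis=1)  # hard assignment for mapping the spatial distribution
```

Unlike K-Means, each station keeps a graded membership in every cluster, which is what allows the spatial distribution of pollutants to be read in terms of overlapping zones rather than sharp boundaries.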
2

Desenvolvimento de algoritmo de clusterização para calorímetro frontal do experimento ALICE no LHC / Development of a clustering algorithm for the forward calorimeter of the ALICE experiment at the LHC

Silva, Danilo Anacleto Arruda da 22 September 2014 (has links)
The Large Hadron Collider (LHC) is a proton and heavy-ion accelerator located at CERN (Conseil Européen pour la Recherche Nucléaire). In one of its experiments, ALICE (A Large Ion Collider Experiment), a detector dedicated to exploring the unique aspects of nucleus-nucleus collisions is being designed. The main purpose of ALICE is to study the formation of a new state of matter, the quark-gluon plasma. This requires precise measurements of the hadrons, electrons, muons, and photons produced in lead-lead collisions. A Forward Calorimeter (FoCal) is therefore being proposed as an upgrade to ALICE. The purpose of this calorimeter is the study of parton distribution functions (PDFs) in the regime of small Bjorken-x, where the PDFs are expected to behave non-linearly due to gluon saturation. Studying this region requires measuring the direct photons produced in the collision; these are masked by the background of photons from pion decays and therefore need to be identified. This opens an opportunity to use clustering, a data-mining technique. This work contributed to the initial development of a clustering algorithm for the FoCal calorimeter.
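The abstract does not spell out the algorithm itself, but a common starting point for calorimeter clustering is to group contiguous cells above an energy threshold and take an energy-weighted centroid per cluster. The Python sketch below shows that idea on a synthetic tower grid; the grid size, threshold, and injected shower are invented for illustration and are not the FoCal algorithm.

```python
import numpy as np
from scipy.ndimage import label

# Hypothetical 2D grid of tower energies (GeV); values and threshold are illustrative.
rng = np.random.default_rng(0)
energies = rng.exponential(0.05, size=(64, 64))
energies[20, 30] += 5.0   # fake photon shower core
energies[21, 31] += 2.0   # neighbouring deposit

seed_threshold = 0.3                 # only cells above this can join a cluster
mask = energies > seed_threshold
clusters, n_clusters = label(mask)   # connected groups of hot cells

for k in range(1, n_clusters + 1):
    cells = clusters == k
    e_sum = energies[cells].sum()
    # Energy-weighted centroid as a simple cluster position estimate
    iy, ix = np.nonzero(cells)
    cy = np.average(iy, weights=energies[cells])
    cx = np.average(ix, weights=energies[cells])
    print(f"cluster {k}: E = {e_sum:.2f} GeV at ({cy:.1f}, {cx:.1f})")
```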
3

Détournement d'usage de médicaments psychoactifs : développement d'une approche pharmacoépidémiologique / Abuse of psychoactive prescription drugs : development of a new pharmacoepidemiologic method

Frauger-Ousset, Elisabeth 18 June 2010 (has links)
This work presents the development of a new pharmacoepidemiological approach, based on health-insurance reimbursement databases, for characterizing and estimating the abuse of psychoactive prescription drugs in real life. The approach uses a clustering method to group subjects, a posteriori, into subgroups, making it possible to identify, characterize, and quantify different behaviour profiles, including deviant behaviour, and to estimate the proportion of deviant subjects among all subjects who obtain the drug from a pharmacy. We applied this method to several prescription drugs. For each drug, we included all individuals affiliated to the general French health-insurance scheme in two southern regions (Provence-Alpes-Côte d'Azur and Corsica) who had the drug reimbursed during the first weeks of the year, and their dispensings were followed over nine months. After a descriptive analysis, a clustering method was applied using four quantitative variables to profile consumers: number of different prescribers, number of different pharmacies, number of dispensings, and quantity dispensed (in DDD). The characteristics of the resulting subgroups, particularly those with a deviant behaviour, were then analyzed.

A first study confirmed the extent of the abuse of an emerging drug, clonazepam (publication no. 1). The method was then adapted to follow the evolution of this abuse over several years; the second publication on clonazepam showed that the proportion of deviant subjects increased from 0.86% in 2001 to 1.38% in 2006 (publication no. 2). We also applied the method to estimate methylphenidate abuse from 2005 to 2008 (publication no. 3). Methylphenidate abuse had already been described in other countries, whereas few data were available in France; this study estimated the proportion of subjects with a deviant behaviour (from 0.5% in 2005 and 2006 to 2.0% in 2007 and 1.2% in 2008) and assessed its evolution after a specific regulation was introduced. Our team had also developed another method based on the same databases, the doctor-shopping indicator, which measures the quantity obtained by doctor shopping among the overall quantity reimbursed (publication no. 4). Finally, the last publication analyzes and compares the results of these two methods applied to high-dosage buprenorphine, a product well known to be diverted in France (publication no. 5). The clustering method is increasingly used to monitor the evolution of prescription-drug abuse, and its results are beginning to be integrated into the French post-marketing surveillance system for drug abuse alongside the more traditional pharmacoepidemiological tools (OSIAP, OPPIDUM, OPEMA, ASOS, DRAMES).
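A minimal sketch of the kind of clustering step described above, assuming simulated subjects and the four dispensing indicators named in the abstract (prescribers, pharmacies, dispensings, quantity in DDD); the data, the cluster count, and the rule for flagging the "deviant" cluster are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-subject indicators over the 9-month window (one row per subject):
# number of prescribers, number of pharmacies, number of dispensings, quantity in DDD.
rng = np.random.default_rng(42)
X = np.column_stack([
    rng.poisson(2, 1000),      # prescribers
    rng.poisson(2, 1000),      # pharmacies
    rng.poisson(6, 1000),      # dispensings
    rng.gamma(2.0, 30, 1000),  # total DDD
])

X_std = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_std)

# Flag the cluster with the highest mean indicators as the candidate "deviant" profile
cluster_means = np.array([X[km.labels_ == k].mean(axis=0) for k in range(4)])
deviant = cluster_means.sum(axis=1).argmax()
share = (km.labels_ == deviant).mean() * 100
print(f"candidate deviant cluster: {deviant}, {share:.2f}% of subjects")
```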
4

Shluková a regresní analýza mikropanelových dat / Clustering and regression analysis of micro panel data

Sobíšek, Lukáš January 2010 (has links)
The main purpose of panel studies is to analyze changes in the values of studied variables over time. In micro panel research, a large number of elements are observed periodically within a relatively short time period of just a few years, and the number of repeated measurements is small. This dissertation deals with contemporary approaches to the regression and cluster analysis of micro panel data. One approach to micro panel analysis is to use multivariate statistical models originally designed for cross-sectional data and modify them to take the within-subject correlation into account. The thesis summarizes available tools for the regression analysis of micro panel data. The known and currently used linear mixed-effects models for a normally distributed dependent variable are recapitulated, and new approaches for analyzing a response variable with a distribution other than normal are presented, including the generalized marginal linear model, the generalized linear mixed-effects model, and the Bayesian modelling approach. In addition to describing these models, the thesis briefly reviews their implementation in the R software. A difficulty with regression models adjusted for micro panel data is the ambiguity of their parameter estimation, and this thesis proposes a way to improve the estimates through cluster analysis. For this reason, the thesis also describes methods for the cluster analysis of micro panel data. Because the supply of such methods is limited, the main goal of this work is to devise its own two-step approach for clustering micro panel data. In the first step, the panel data are transformed into a static form using a set of proposed characteristics of dynamics, which represent different features of the time course of the observed variables. In the second step, the elements are clustered by conventional spatial clustering techniques (agglomerative clustering and C-means partitioning), based on a dissimilarity matrix of the clustering-variable values calculated in the first step. Another goal is to find out whether the suggested procedure improves the quality of the regression models for this type of data. By means of a simulation study, the proposed procedure is compared to the procedure implemented in the kml package of the R software, as well as to the clustering characteristics proposed by Urso (2004). The simulation study showed better results for the proposed combination of clustering variables than for the other combinations currently used. A corresponding script written in the R language is another outcome of this work; it is available on the attached CD and can be used to analyze readers' own micro panel data.
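The thesis's own implementation is an R script, but the two-step idea can be sketched in Python: derive a few static "characteristics of dynamics" per trajectory, then cluster their dissimilarity matrix with an agglomerative method. The chosen characteristics (slope, level, spread), the simulated panel, the Ward linkage, and the number of clusters are assumptions made for illustration only.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical micro panel: 200 subjects, 5 repeated measurements each
rng = np.random.default_rng(0)
t = np.arange(5)
Y = rng.normal(0, 1, (200, 5)).cumsum(axis=1)

# Step 1: transform each trajectory into static "characteristics of dynamics"
slope = np.polyfit(t, Y.T, 1)[0]   # overall linear trend per subject
level = Y.mean(axis=1)             # average level
spread = Y.std(axis=1)             # within-subject variability
features = np.column_stack([slope, level, spread])
features = (features - features.mean(0)) / features.std(0)

# Step 2: agglomerative clustering on the dissimilarity matrix of those features
D = pdist(features, metric="euclidean")
Z = linkage(D, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(labels))
```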
5

Computational Simulation of Southern Pine Lumber Using Finite Element Analysis

Li, Yali 06 August 2021 (has links)
Finite element analysis is a powerful technique for predicting the response of materials and structures under given loading conditions, including applied forces, changing temperature and humidity, and altered boundary conditions. In this work, the mechanical properties of wood were analyzed, with an emphasis on bending behavior under a lateral applied force, through finite element simulation in ABAQUS (Dassault Systèmes, 2020 version). Two methods were implemented in the ABAQUS commercial software, and the modulus of elasticity (MOE) obtained from the computational results was compared with the experimental records. The simulation model that takes grain patterns into consideration showed more accurate behavior when its displacements were compared with those from the third-point bending test within the elastic range. Machine learning methods are widely applied to image-processing procedures such as recognition tasks. This work developed a Python script that processes images of wood cross sections taken against an environmental background and calculates the latewood proportion based on unsupervised machine learning. The GrabCut function and Gray Level Co-Occurrence Matrix (GLCM) image processing were used to obtain the wood section and the wood texture features, respectively. The K-Means method was used to cluster the latewood and earlywood material based on the mean value of the GLCM matrix, and the script then calculated the latewood ratio with a simple equation. The latewood ratios from the Python script were compared with the ratios from the dot-grid method. Statistical models in SPSS version 27 (IBM, Chicago, IL) were used to quantify the relationships between several parameters. Since density, latewood ratio, and number of rings per inch are clearly correlated with each other, this work proposes a ridge regression model to study the relationship between MOE and modulus of rupture (MOR) and multiple independent variables. Ridge regression, also known as Tikhonov regularization, addresses the collinearity problems that can lead to statistical bias in stepwise regression analysis.
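As a simplified stand-in for the GLCM-plus-K-Means step described above, the following Python sketch clusters patch-level intensity statistics of a synthetic cross-section image into two classes and reports a latewood ratio; the synthetic image, the patch size, and the use of mean intensity instead of full GLCM texture features are assumptions made for brevity, not the thesis's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical grayscale cross-section image (darker bands ~ latewood); in practice
# this would come from a photograph of the lumber cross section.
rng = np.random.default_rng(0)
img = rng.normal(0.7, 0.05, (256, 256))
img[:, ::16] -= 0.3          # fake darker latewood bands

# Split the image into small patches and describe each by its mean intensity
# (a stand-in for the GLCM-derived texture features used in the thesis).
p = 8
patches = img.reshape(256 // p, p, 256 // p, p).mean(axis=(1, 3)).ravel()

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patches.reshape(-1, 1))
dark_cluster = km.cluster_centers_.ravel().argmin()   # latewood taken as the darker class
latewood_ratio = (km.labels_ == dark_cluster).mean()
print(f"estimated latewood ratio: {latewood_ratio:.2%}")
```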
6

Aplicação de máquinas de vetores de suporte na identificação de perfis de alunos de acordo com características da teoria das inteligências múltiplas / Implementation of support vector machines for students’ profiles identification according to characteristics of multiple intelligences

Lázaro, Diego Henrique Emygdio [UNESP] 31 May 2016 (has links)
This dissertation developed a classification mechanism capable of identifying a student's profile according to characteristics of the theory of multiple intelligences, based on Support Vector Machines (SVMs), clustering methods, and class balancing. The goal of this classification is to let the tutors who prepare material for classes in distance-learning support tools direct content to each student in a way that explores his or her predominant multiple intelligence. For the experiments, two SVMs were created using a classification method based on k binary problems, which reduces the multi-class problem to a set of binary problems. The results obtained during the SVM training and test phases were presented as percentages by means of a partitioning clustering algorithm; these percentages help interpret the profile classification according to the predominant intelligences. In addition, using class-balancing methods improved the classifier's performance and thus the effectiveness of the mechanism, since its error rates were low.
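A minimal sketch of the k-binary-problems (one-vs-rest) SVM setup described above, using scikit-learn; the simulated questionnaire features, the three profile classes, and the use of class_weight="balanced" as the balancing step are illustrative assumptions rather than the dissertation's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical questionnaire scores (one column per multiple-intelligence trait)
# and profile labels; the real features and classes come from the dissertation's data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# k binary problems (one-vs-rest), with class_weight="balanced" as a simple
# stand-in for the class-balancing step described in the abstract.
clf = OneVsRestClassifier(SVC(kernel="rbf", class_weight="balanced", probability=True))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
# Per-class membership estimates, which can be reported as percentages per profile
print(clf.predict_proba(X_te[:3]).round(2))
```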
7

Identifikace osob pomocí bipedální lokomoce / Person's identification by means of bipedal locomotion

Krzyžanek, Jakub January 2010 (has links)
The aim of this thesis is to recognize a walking person in a sequence of images by defining reference points on the body, comparing the course of their movement, and thereby identifying the scanned person. The "k-means" and "mean shift" methods are used to obtain the person's silhouette. Before these, an "environment model estimation" method is applied; it is a type of difference method that helps narrow the scanning area and shortens the segmentation time. In the search for reference points, the thesis focuses on three areas: the centre of the head and both ankle joints. These points are determined over the image sequence and compared with the real locations of the centre of the head and ankle joints marked by the user. The thesis also compares the movement courses of these points and attempts to identify the people whose walks are being scanned. Problematic situations that occurred during the process are analyzed at the end. The result of the thesis is an algorithm that can locate a moving person in an image sequence (video), determine the reference points (centre of the head and ankles), compare them, and identify the scanned person.
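To make the segmentation pipeline concrete, a small Python sketch: a per-pixel median over the frames plays the role of the environment model, and K-Means on the difference image separates the silhouette, from which crude head and ankle reference points are read off. The synthetic video, the median background model, and the reference-point heuristics are assumptions for illustration, not the thesis's exact methods.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical grayscale video: 30 frames of 120x160 with a bright moving "person"
rng = np.random.default_rng(0)
frames = rng.normal(0.2, 0.02, (30, 120, 160))
for i in range(30):
    frames[i, 40:100, 10 + 4 * i:30 + 4 * i] += 0.6   # synthetic walker

# Simple environment (background) model: per-pixel median over the sequence
background = np.median(frames, axis=0)

frame = frames[15]
diff = np.abs(frame - background).ravel()

# K-means with two clusters separates foreground from background differences,
# which avoids picking the difference threshold by hand.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(diff.reshape(-1, 1))
fg_cluster = km.cluster_centers_.ravel().argmax()
silhouette = (km.labels_ == fg_cluster).reshape(frame.shape)

# Crude reference points: topmost silhouette pixel ~ head, bottom rows ~ ankles
ys, xs = np.nonzero(silhouette)
print("head (row, col):", ys.min(), xs[ys.argmin()])
print("ankle region row:", ys.max())
```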
8

Methodological proposal for social impact assessment and environmental conflict analysis

Delgado Villanueva, Kiko Alexi 05 October 2016 (has links)
Thesis by compendium of publications.

Social impact assessment (SIA) is a part of environmental impact assessment (EIA) and is characterized by a high level of uncertainty and by the subjective aspects present in the methods used to carry it out. In addition, environmental conflict analysis (ECA) has become a key factor for the viability of projects and the welfare of affected populations. In this thesis, an integrated method for SIA and ECA is proposed, combining the grey clustering method and the entropy-weight method. SIA was performed using the grey clustering method, which enables qualitative information coming from stakeholder groups to be quantified. In turn, ECA was performed using the entropy-weight method, which identifies the criteria on which there is the greatest divergence between stakeholder groups, making it possible to establish measures to prevent potential environmental conflicts. Two case studies were then conducted to apply and test the proposed integrated method. The first was a mining project in northern Peru, for which three stakeholder groups and seven criteria were identified. The results revealed that for the urban population group and the rural population group the project would have a positive and a negative social impact, respectively, while for the group of specialists it would have a normal social impact. The criteria most likely to generate environmental conflicts were, in order of importance: access to drinking water, poverty, GDP per capita, and employment. The second case study was a hydrocarbon exploration project in the Gulf of Valencia, Spain, for which four stakeholder groups and four criteria were identified. The results revealed that for the group of specialists the project would have a negative social impact, and opposite perceptions were found between the group of those directly affected by the project and the group of citizens in favour. The criteria most likely to generate environmental conflict were the unemployment rate and GDP per capita. The integrated method proposed in this thesis showed great potential in the case studies and could be applied to other contexts and other projects, such as water resources management, industrial projects, and construction projects, as well as to measure social impact and prevent conflicts during the implementation of government policies and programs.

Delgado Villanueva, KA. (2016). Methodological proposal for social impact assessment and environmental conflict analysis [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/64063
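A short sketch of the entropy-weight step as it is usually defined, assuming a made-up matrix of stakeholder-group ratings per criterion: criteria whose normalized ratings diverge most across groups get low entropy and hence high weight, flagging them as conflict-prone. The numbers and the criterion labels in the comments are illustrative, not the thesis's data.

```python
import numpy as np

# Hypothetical ratings: rows = stakeholder groups, columns = criteria
# (e.g. access to drinking water, poverty, GDP per capita, employment).
X = np.array([
    [0.8, 0.6, 0.4, 0.5],
    [0.2, 0.9, 0.7, 0.6],
    [0.5, 0.5, 0.5, 0.5],
], dtype=float)

n = X.shape[0]
P = X / X.sum(axis=0)                          # column-wise normalization
P = np.clip(P, 1e-12, None)                    # avoid log(0)
E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy of each criterion
d = 1.0 - E                                    # degree of divergence per criterion
w = d / d.sum()                                # entropy weights

# Criteria with the largest weights show the greatest disagreement between groups
# and are the most likely sources of environmental conflict.
print("entropy weights:", w.round(3))
print("most conflict-prone criterion index:", int(w.argmax()))
```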
9

The Effects of Surface Characteristics and Synoptic Patterns on Tornadic Storms in the United States

Qin Jiang (19183822) 21 July 2024 (has links)
<p dir="ltr">It is known that tornadic storms favor environments characteristic of high values of thermal instability, adequate vertical wind shear, abundant near-surface moisture supply, and strong storm-relative helicity at the lowest 1-km boundary layer. These mesoscale environmental conditions and associated storm behaviors are strongly governed by large-scale synoptic patterns and sensitive to variabilities in near-surface characteristics, which are less known in the current research community. This study aims to advance the relatively underexplored area regarding the interaction between surface characteristics, mesoscale environmental conditions, and large-scale synoptic patterns driving tornadic storms in the U.S. </p><p dir="ltr">We first investigate the impact of surface drag on the structure and evolution of these boundaries, their associated distribution of near-surface vorticity, and tornadogenesis and maintenance. Comparisons between idealized simulations without and with drag introduced in the mature stage of the storm prior to tornadogenesis reveal that the inclusion of surface drag substantially alters the low-level structure, particularly with respect to the number, location, and intensity of surface convergence boundaries. Substantial drag-generated horizontal vorticity induces rotor structures near the surface associated with the convergence boundaries in both the forward and rear flanks of the storm. Stretching of horizontal vorticity and subsequent tilting into the vertical along the convergence boundaries lead to elongated positive vertical vorticity sheets on the ascending branch of the rotors and the opposite on the descending branch. The larger near-surface pressure deficit associated with the faster development of the near-surface cyclone when drag is active creates a downward dynamic vertical pressure gradient force that suppresses vertical growth, leading to a weaker and wider tornado detached from the surrounding convergence boundaries. A conceptual model of the low-level structure of the tornadic supercell is presented that focuses on the contribution of surface drag, with the aim of adding more insight and complexity to previous conceptual models.</p><p dir="ltr">We then examine the behaviors and dynamics of TLVs in response to a range of surface drag strengths in idealized simulations and explore their sensitivities to different storm environments. We find that the contribution of surface drag on TLV development is strongly governed by the interaction between surface rotation, surface convergence boundaries, and the low-level mesocyclone. Surface drag facilitates TLV formation by enhancing near-surface vortices and low-level lifting, mitigating the need for an intense updraft gradient developing close to the ground. As surface drag increases, a wider circulation near the surface blocks the inflow from directly reaching the rotating core, leading to a less tilted structure that allows the TLV position beneath the pressure minima aloft. Further increase in drag strength discourages TLV intensification by suppressing vertical stretching due to a negative vertical pressure perturbation gradient force, and it stops benefiting from the support of surrounding convergence boundaries and the overlying low-level updraft, instead becoming detached from them. 
We hence propose a favorable condition for TLV formation and duration where a TLV forms a less tilted structure directly beneath the low-level mesocyclone but also evolves near surrounding surface boundaries, which scenario strongly depends on underlying surface drag strength. </p><p dir="ltr">Beyond near-surface characteristics, we further explore how these storm-favorable environmental conditions may interact with the larger-scale synoptic patterns and how these interactions may affect the tornadic storm potential in the current warming climate. We employ hierarchical clustering analysis to classify the leading synoptic patterns driving tornadic storms across different geographic regions in the U.S. We find that the primary synoptic patterns are distinguishable across geographic regions and seasonalities. The intense upper-level jet streak described by the high values of eddy kinetic energy (EKE) associated with the dense distribution of Z500 contours dominates the tornado events in the southeast U.S. in the cold season (November-March). Late Spring and early Summer Tornado events in the central and south Great Plains are dominated by deep trough systems to the west axes of the tornado genesis position, while more summer events associated with weak synoptic forcing are positioned closer to the lee side of Rocky Mountain. Moreover, the increasing trend in tornado frequency in the southeastern U.S. is mainly driven by synoptic patterns with intense forcing, and the decreasing trends in portions of the Great Plains are associated with weak synoptic forcing. This finding indicates that the physical mechanisms driving the spatial trends of tornado occurrences differ across regions in the U.S.</p>
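A minimal sketch of the hierarchical-clustering step on synthetic "event-day" fields, assuming each tornado day is represented by a flattened grid of standardized Z500/EKE anomalies; the grid size, the number of synoptic types, and the Ward linkage are illustrative choices, not the study's actual configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical event-day fields: 500 tornado days, each a flattened 10x12 grid of
# standardized Z500 (and/or EKE) anomalies centered on the tornado location.
rng = np.random.default_rng(0)
fields = rng.normal(size=(500, 10 * 12))

# Ward hierarchical clustering of the fields; the number of synoptic types (here 4)
# would normally be chosen from the dendrogram or a cluster-validity index.
Z = linkage(fields, method="ward")
types = fcluster(Z, t=4, criterion="maxclust")

# Composite (mean) pattern for each synoptic type
composites = np.array([fields[types == k].mean(axis=0) for k in range(1, 5)])
print("events per type:", np.bincount(types)[1:])
print("composite shape:", composites.shape)
```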
