401 |
Application of a Natural-Resonance-Based Feature Extraction Technique to Small-Scale Aircraft Modeled by Conducting Wires for Electromagnetic Target Classification. Ersoy, Mehmet Okan, 01 October 2004
The problem studied in this thesis is the classification of small-scale
aircraft targets using a natural-resonance-based electromagnetic feature extraction
technique. The aircraft targets are modeled by perfectly conducting, thin wire
structures. The electromagnetic back-scattered data used in the classification process
are numerically generated for five aircraft models.
A contemporary signal processing tool, the Wigner-Ville distribution (WD), is
employed in this study in addition to the principal components analysis
technique to extract target features, mainly from late-time target responses. The
WD is applied to the electromagnetic back-scattered
responses from different aspects. Then, feature vectors are extracted from suitably
chosen late-time portions of the WD outputs, which include natural-resonance-related
information, for every target and aspect to decrease aspect dependency. The database
of the classifier is constructed by the feature vectors extracted at only a few reference
aspects. Principal components analysis is also used to fuse the feature vectors and/or
late-time aircraft responses extracted from reference aspects of a given target into a
single characteristic feature vector of that target to further reduce aspect dependency.
Consequently, an almost aspect-independent classifier is designed for small-scale
aircraft targets, achieving a high correct-classification rate.
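The abstract's pipeline (a WD of the back-scattered responses, then feature vectors) can be illustrated with a toy discrete Wigner-Ville distribution. This sketch assumes a periodic (circularly indexed) complex signal to stay short; it illustrates the transform itself, not the thesis' implementation:

```python
import cmath

def wigner_ville(x):
    """Periodic discrete Wigner-Ville distribution of a complex signal.

    W[n][k] = |sum_m x[(n+m) % N] * conj(x[(n-m) % N]) * exp(-2j*pi*k*m/N)|
    Circular indexing is an assumption made to keep the sketch short.
    """
    N = len(x)
    W = [[0.0] * N for _ in range(N)]
    for n in range(N):
        # instantaneous autocorrelation kernel at time n
        r = [x[(n + m) % N] * x[(n - m) % N].conjugate() for m in range(N)]
        for k in range(N):
            W[n][k] = abs(sum(r[m] * cmath.exp(-2j * cmath.pi * k * m / N)
                              for m in range(N)))
    return W

# A pure tone at frequency bin f0 concentrates at bin 2*f0 on this grid.
N, f0 = 16, 2
x = [cmath.exp(2j * cmath.pi * f0 * n / N) for n in range(N)]
W = wigner_ville(x)
peak = max(range(N), key=lambda k: W[N // 2][k])
print(peak)  # → 4
```

For a resonance-damped return the energy of W spreads along the decaying mode, which is what the late-time windows in the abstract exploit.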
|
402 |
一種基於函數型資料主成分分析的曲線對齊方式 / A Curve Alignment Method Based on Functional PCA. Lin, Yu-Hang (林昱航), Unknown Date
Functional data analysis deals with sets of curve data, usually defined over a time interval; typical examples are height records of a population during the growth period, or climate statistics. A key feature of functional data is that the curves usually share a common trend, while individual curves express that trend with differences in timing and amplitude. This study uses the idea proposed by Kneip and Ramsay, which combines an alignment procedure with principal component analysis, as the model framework for analyzing the characteristics of functional data. In the alignment step, time-warping functions are used to resolve timing differences among the observed curves; principal component analysis then helps the researcher explore the main characteristics of the data. Since functional data are expected to share a common trend, this feature can serve as a basis for classifying various types of data. In addition, several methods for selecting the number of principal components are discussed and compared. / In this thesis, a procedure combining curve alignment and functional principal component analysis is studied. The procedure was proposed by Kneip and Ramsay. In functional principal component analysis, if the data curves are roughly linear combinations of k basis curves, then the data curves are expected to be explained well by the principal component curves. The goal of this study is to examine whether this property still holds when curves need to be aligned. It is found that, if the aligned data curves can be approximated well by k basis curves, then applying Kneip and Ramsay's procedure to the unaligned curves gives k principal components that can explain the aligned curves well. Several approaches for selecting the number of principal components are proposed and compared.
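A minimal illustration of the alignment idea: register curves by a landmark (the peak) before any principal component step. This is a crude integer-shift warp on synthetic data, not Kneip and Ramsay's smooth warping functions:

```python
import math

def align_by_peak(curves, target_idx=None):
    """Landmark registration: circularly shift each sampled curve so its peak
    lands at a common index. A crude stand-in for a warping function h_i(t);
    the full Kneip-Ramsay procedure estimates smooth warps jointly with PCA.
    """
    peaks = [max(range(len(c)), key=c.__getitem__) for c in curves]
    target = peaks[0] if target_idx is None else target_idx
    aligned = []
    for c, p in zip(curves, peaks):
        shift = p - target
        aligned.append([c[(i + shift) % len(c)] for i in range(len(c))])
    return aligned

# Two bump-like curves whose peaks differ in timing only.
n = 32
c1 = [math.exp(-((i - 10) ** 2) / 20.0) for i in range(n)]
c2 = [math.exp(-((i - 18) ** 2) / 20.0) for i in range(n)]
a1, a2 = align_by_peak([c1, c2])
print(max(range(n), key=a1.__getitem__), max(range(n), key=a2.__getitem__))  # → 10 10
```

After registration the curves share a common time axis, so a subsequent functional PCA captures amplitude variation rather than phase variation.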
|
403 |
Crop decision planning under yield and price uncertainties. Kantanantha, Nantachai, 25 June 2007
This research focuses on developing a crop decision planning model to help farmers make decisions for an upcoming crop year. The decisions consist of which crops to plant, the amount of land to allocate to each crop, when to grow, when to harvest, and when to sell. The objective is to maximize the overall profit subject to available resources under yield and price uncertainties.
To help achieve this objective, we develop yield and price forecasting models to estimate the probable outcomes of these uncertain factors. The outputs from both forecasting models are incorporated into the crop decision planning model, which enables farmers to investigate and analyze possible scenarios and eventually determine the appropriate decisions for each situation.
This dissertation has three major components: yield forecasting, price forecasting, and crop decision planning. For yield forecasting, we propose a crop-weather regression model under a semiparametric framework. We use temperature and rainfall information during the cropping season and a GDP macroeconomic indicator as predictors in the model. We apply a functional principal components analysis technique to reduce the dimensionality of the model and to extract meaningful information from the predictors. We compare the prediction results from our model with a series of other yield forecasting models. For price forecasting, we develop a futures-based model which predicts the cash price from the futures price and the commodity basis. We focus on forecasting the commodity basis rather than the cash price because of the availability of futures price information and the low uncertainty of the commodity basis. We adopt a model-based approach to estimate the density function of the commodity basis distribution, which is further used to estimate confidence intervals for the commodity basis and the cash price. Finally, for crop decision planning, we propose a stochastic linear programming model, which provides the optimal policy. We also develop three heuristic models that generate a feasible solution at a low computational cost. We investigate the robustness of the proposed models to the uncertainties and prior probabilities. A numerical study of the developed approaches is performed for the case of a representative farmer who grows corn and soybean in Illinois.
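The scenario idea behind the stochastic program can be sketched by brute force: choose a land split that maximizes expected profit over discrete price/yield scenarios. All crop names and figures below are invented for illustration; this is not the thesis' model or data:

```python
from itertools import product

def best_allocation(land, crops, scenarios, step=10):
    """Pick acreage per crop maximizing expected profit over price/yield
    scenarios. Brute force over a coarse grid stands in for the stochastic
    linear program; `crops` maps crop name -> cost per acre.
    """
    names = list(crops)
    grid = range(0, land + 1, step)
    best = (float("-inf"), None)
    for alloc in product(grid, repeat=len(names)):
        if sum(alloc) != land:
            continue  # use all available land
        profit = 0.0
        for prob, prices, yields in scenarios:
            profit += prob * sum(a * yields[c] * prices[c] - a * crops[c]
                                 for a, c in zip(alloc, names))
        best = max(best, (profit, alloc))
    return best

# Illustrative cost per acre and two (probability, price $/bu, yield bu/acre) scenarios.
crops = {"corn": 300, "soybean": 200}
scenarios = [
    (0.5, {"corn": 4.0, "soybean": 10.0}, {"corn": 160, "soybean": 45}),
    (0.5, {"corn": 3.0, "soybean": 9.0},  {"corn": 150, "soybean": 40}),
]
profit, alloc = best_allocation(100, crops, scenarios)
print(alloc, profit)  # → (100, 0) 24500.0
```

A real stochastic LP would also carry planting/harvest timing decisions and resource constraints, but the expected-value objective over scenarios is the same shape.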
|
404 |
Near infrared hyperspectral imaging as detection method for pre-germination in whole wheat, barley and sorghum grains. Engelbrecht, Paulina, 03 1900
Thesis (MSc Food Sc)--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: The use of near infrared (NIR) hyperspectral imaging for distinguishing between pre-germinated
and non pre-germinated barley, wheat and sorghum kernels, and the effect of kernel shape on
hyperspectral images, has been investigated.
Two sample sets were imaged. The first sample set was divided into six subsets; these
subsets were treated with water and left to pre-germinate for different times (0, 6, 9, 12, 18 and 24
hrs). Subset viability was determined with the tetrazolium test. The second sample set was divided
into seven subsets, treated with water and left to pre-germinate for 0, 3, 6, 9, 12, 18, 24 or 30 hrs.
Individual kernel viability was determined with the tetrazolium test.
NIR hyperspectral images were acquired using two different SisuCHEMA hyperspectral
imaging systems. The first system acquired images with a 150 μm spatial resolution (first sample
set) and the second system acquired images with a 30 μm spatial resolution (second sample set).
Principal component analysis (PCA) was performed and a distinction between pre-germinated and
non pre-germinated kernels was illustrated in PCA score images. Loading line plots showed that
the main compounds contributing to spectral variation were starch, water and protein. These
compounds were related to starch and protein hydrolysis. The distinction between pre-germinated
and non pre-germinated kernels observed in the 30 μm spatial resolution images indicated NIR
hyperspectral imaging was perhaps sensing incomplete endosperm degradation. Some kernels
determined as pre-germinated by the tetrazolium test had the same chemical composition
according to the score image as non pre-germinated kernels in the 30 μm spatial resolution
images.
A partial least squares discriminant analysis (PLS-DA) model with two classes (pre-
germinated and non pre-germinated) was developed for each of the cultivars of the first sample
set. The two classes were assigned in principal component (PC) 1 vs. PC 5 score plots. The model
created for the barley cultivars resulted in excessive false positives and false negatives. The
prediction results of wheat cultivars revealed that the model had a classification rate of 81% for the
non pre-germinated class and 93% for the pre-germinated class. The sorghum prediction results
revealed that the model correctly predicted 97% of the non pre-germinated class and 93% of the
pre-germinated class.
Two different PLS-DA models were developed for one image of each cultivar of the 30 μm
spatial resolution images. The first model was developed by assigning each kernel in the score
image and the second model was developed by assigning pixels in the score plot to either the pre-
germinated or non pre-germinated class. Model 1 resulted in excessive false negatives. Model 2
resulted in excessive false positives.
The differences between pre-germinated and non pre-germinated kernels were only observed
in the higher-order PCs (PCs 5 and 6) of the 150 μm spatial resolution images. The lower-order PCs (PCs 1 to 4) of each commodity were subsequently examined with the aid of classification
gradients. Kernel shape effects were observed in these PCs.
The use of NIR hyperspectral imaging for distinguishing between pre-germinated and non
pre-germinated grain kernels shows promise. / AFRIKAANSE OPSOMMING: The use of near infrared (NIR) hyperspectral imaging was evaluated to distinguish between pre-germinated and non pre-germinated barley, wheat and sorghum kernels. The effect of kernel shape on hyperspectral images was also evaluated. The first set of grain samples was used for 150 μm spatial resolution images and the second set for 30 μm spatial resolution images. The first cultivar set was divided into six subsets and treated with distilled water for 0, 6, 9, 12, 18 and 24 hrs. Subset viability was determined with the tetrazolium test. Each cultivar in the second set was divided into seven subsets and incubated for 0, 3, 6, 9, 12, 18, 24 or 30 hrs. Individual kernel viability was determined with the tetrazolium test. NIR hyperspectral images were acquired using two different SisuCHEMA cameras, which were used to obtain images at different resolutions (30 and 150 μm spatial resolution). Principal component analysis (PCA) was performed and a difference between pre-germinated and non pre-germinated kernels was observed in the 150 μm spatial resolution images. PC loading plots singled out water, starch and protein as the compounds contributing to spectral variation. A difference between pre-germinated and non pre-germinated kernels was also seen in the 30 μm spatial resolution images. It was, however, also observed that some kernels determined as pre-germinated by the tetrazolium test had the same chemical composition, according to the score image, as non pre-germinated kernels. Incomplete endosperm hydrolysis is a possible explanation for this phenomenon. The compounds that contributed to the variation were water, starch and protein. A partial least squares discriminant analysis (PLS-DA) model with two classes was developed for each cultivar of the 150 μm spatial resolution images. The classes were assigned in the score plot. The model with the highest variation in Y was chosen to predict the other cultivars of the same commodity. The PLS-DA results for the barley cultivars showed that the model yielded false positives and false negatives. The wheat PLS-DA model gave a classification rate of 81% for the non pre-germinated class and 93% for the pre-germinated class. The PLS-DA results for sorghum showed that the model gave a classification rate of 97% for the non pre-germinated class and 93% for the pre-germinated class. Two different PLS-DA models were developed for each image of each cultivar of the 30 μm spatial resolution images. The first model was developed by assigning each kernel in the score image to one of two classes, and the second model by assigning the pixels in the score plot to one of two classes. Model 1 yielded false negatives and Model 2 false positives. The differences between non pre-germinated and pre-germinated kernels were only explained in higher-order PCs of the 150 μm spatial resolution images. The lower-order PCs were therefore examined for their contribution to spectral variation with the aid of classification gradients. Kernel shape effects were observed. The use of NIR hyperspectral imaging to distinguish between pre-germinated and non pre-germinated grain kernels looks promising.
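A toy version of the PCA step on pixel spectra: the first principal component by power iteration, with made-up three-band "spectra" standing in for NIR pixels. The separation of score signs mimics how score images split pre-germinated from non pre-germinated pixels:

```python
def first_principal_component(rows, iters=200):
    """First PC of mean-centred data via power iteration on the covariance.
    A toy stand-in for the PCA score images computed on hyperspectral pixels.
    """
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - mean[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, then renormalize
        Xv = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(x[j] * v[j] for j in range(d)) for x in X]
    return v, scores

# Invented 3-band "spectra": germinated pixels shifted along one direction.
germ  = [[1.0, 0.8, 0.2], [1.1, 0.9, 0.3], [1.0, 0.9, 0.2]]
sound = [[0.4, 0.3, 0.8], [0.5, 0.3, 0.9], [0.4, 0.4, 0.8]]
v, scores = first_principal_component(germ + sound)
# PC1 scores separate the two groups by sign (the sign itself is arbitrary).
```

Real NIR cubes have hundreds of bands per pixel, but the score-image idea is the same: project each pixel's spectrum onto the leading loading vector.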
|
405 |
Transformada wavelet e redes neurais artificiais na análise de sinais relacionados à qualidade da energia elétrica / Wavelet transform and artificial neural networks in power quality signal analysis. Pozzebon, Giovani Guarienti, 10 February 2009
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents a method for power quality signal classification using principal component analysis (PCA) associated with the wavelet transform (WT). The standard deviation of the detail coefficients and the average of the approximation coefficients from the WT are combined to extract discriminating characteristics from the disturbances. PCA was used to condense the information of those characteristics, so that a smaller group of uncorrelated characteristics was generated. These were processed by a probabilistic neural network (PNN) to carry out the classifications. In the first application of the algorithm, seven classes of signals representing different types of disturbances were classified: voltage sag and interruption, flicker, oscillatory transients, harmonic distortions, notching and the normal sine waveform. In the second, four more situations that usually occur in distributed generation systems connected to distribution grids through converters were added: connection of the distributed generation, connection of a local load, normal operation and islanding occurrence. In this case, the voltage at the point of common coupling between the DG and the grid was obtained by simulation and analyzed by the proposed algorithm. In both cases, the signals were decomposed into nine resolution levels by the wavelet transform, being represented by detail and approximation coefficients. The application of the WT generates many variations in the coefficients, so the standard deviation at the different resolution levels can quantify the magnitude of those variations. To take into account the features originating from the low-frequency components contained in the signals, the average of the approximation coefficients was calculated. The standard deviations of the detail coefficients and the average of the approximation coefficients composed a feature vector containing 10 variables for each signal. Before classification, these vectors were processed by the principal component analysis algorithm to reduce the dimension of the feature vectors, which contained correlated variables; consequently, the processing time of the neural network was also reduced. The principal components, which are uncorrelated, were ordered so that the first few components account for most of the variation of the original variables, and the first three components were chosen. Thus, the number of variables in the feature vector was reduced to 3. These 3 variables were fed to a neural network for the classification of the disturbances; the output of the neural network indicates the type of disturbance. / This work presents a method for classifying disturbances in electrical signals with the aim of analyzing power quality (PQ). To this end, principal component analysis (PCA) and the wavelet transform (WT) are associated. The standard deviation of the detail coefficients and the mean of the approximation coefficients of the WT are combined to extract discriminating characteristics of the disturbances. PCA is used to condense the information of those characteristics, yielding a smaller set of uncorrelated characteristics, which are processed by a probabilistic neural network (PNN) to perform the classifications. In applying the algorithm, pure sinusoids and six classes of signals representing the different types of disturbances were used initially: voltage sags and interruptions, flicker, oscillatory transients, harmonic distortions and notching. Then four more situations occurring in distributed generation (DG) systems connected to distribution grids through converters were added: connection of the distributed generation, connection of a local load, normal operation and islanding occurrence. In this case, the voltage signals at the point of common coupling (PCC) between the DG and the grid are measured and analyzed by the algorithm. In both cases, the signals are decomposed into nine resolution levels by the wavelet transform, being represented by detail and approximation coefficients. Applying the discrete wavelet transform generates many variations in the coefficients, so the standard deviation at the different resolution levels is able to quantify the magnitude of these variations. To account for the characteristics originated by the low-frequency components contained in the signals, the use of the mean of the signal's approximation coefficients is proposed. The standard deviations of the detail coefficients and the mean of the approximation compose a feature vector containing 10 variables for each analyzed signal. Before classification, these vectors pass through a principal component analysis algorithm to reduce the dimension of the feature vectors, which contained correlated variables, and consequently to reduce the processing time of the neural network. The uncorrelated principal components are ordered so that the first components contain most of the information of the original variables; the first three components are chosen, since they represent about 90% of the information related to the signal under study. Thus, a new set of variables is generated from the principal components, reducing the number of variables in the feature vector from 10 to 3. Finally, these 3 variables are fed into a neural network for the classification of the disturbances, whose output indicates the type of disturbance present in the analyzed signal.
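The feature extraction described (standard deviation of the detail coefficients plus the mean of the approximation) can be sketched with a Haar DWT. The thesis uses nine decomposition levels (10 features); this minimal sketch uses three levels (4 features) and a Haar wavelet purely for brevity:

```python
import math

def haar_dwt_features(signal, levels=3):
    """Feature vector from a Haar DWT: std of the detail coefficients at each
    level plus the mean of the final approximation. A 3-level Haar stand-in
    for the 9-level decomposition used in the thesis.
    """
    approx, features = list(signal), []
    s2 = math.sqrt(2.0)
    for _ in range(levels):
        pairs = zip(approx[0::2], approx[1::2])
        approx, detail = [], []
        for a, b in pairs:
            approx.append((a + b) / s2)   # low-pass half
            detail.append((a - b) / s2)   # high-pass half
        m = sum(detail) / len(detail)
        features.append(math.sqrt(sum((d - m) ** 2 for d in detail) / len(detail)))
    features.append(sum(approx) / len(approx))
    return features

# A clean sinusoid vs. one with a notch: the disturbance inflates the
# detail-coefficient standard deviation at the finest level.
clean = [math.sin(2 * math.pi * k / 16) for k in range(64)]
notched = clean[:]
notched[20] -= 0.5  # synthetic notch
f_clean, f_notched = haar_dwt_features(clean), haar_dwt_features(notched)
```

With nine levels and both statistics the same code yields the 10-variable vector the abstract describes, ready for PCA compression.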
|
406 |
Readjusting Historical Credit Ratings: Using Ordered Logistic Regression and Principal Component Analysis. Cronstedt, Axel; Andersson, Rebecca, January 2018
Readjusting Historical Credit Ratings using Ordered Logistic Regression and Principal Component Analysis. The introduction of the Basel II Accord as a regulatory document for credit risk presented new concepts of credit risk management and credit risk measurements, such as enabling international banks to use internal estimates of probability of default (PD), exposure at default (EAD) and loss given default (LGD). These three measurements are the foundation of the regulatory capital calculations and are all in turn based on the bank's internal credit ratings. It has hence been of increasing importance to build sound credit rating models that possess the capability to provide accurate measurements of the credit risk of borrowers. These statistical models are usually based on empirical data, and the goodness-of-fit of the model mainly depends on the quality and statistical significance of the data. Therefore, one of the most important aspects of credit rating modeling is to have a sufficient number of observations to be statistically reliable, making the success of a rating model heavily dependent on the data collection and development stage. The main purpose of this project is to, in a simple but efficient way, create a longer time series of homogeneous data by readjusting the historical credit rating data of one of Svenska Handelsbanken AB's credit portfolios. This readjustment is done by developing ordered logistic regression models that use independent variables consisting of macroeconomic data in separate ways. One model uses macroeconomic variables compiled into principal components, generated through a Principal Component Analysis, while all other models use the same macroeconomic variables separately in different combinations. The models are tested to evaluate their ability to readjust the portfolio as well as their predictive capabilities.
/ Readjusting historical credit ratings using ordered logistic regression and principal component analysis. When Basel II was implemented, new guidelines for financial institutions' risk management and credit risk calculation were also introduced, such as the possibility for banks to use internal estimates of Probability of Default (PD), Exposure at Default (EAD) and Loss Given Default (LGD), all rooted in each borrower's probability of default. These three measures form the basis for calculating the capital requirements banks are expected to meet, and are in turn based on the banks' internal credit rating systems. It is therefore of great importance for banks to build stable credit rating models capable of generating reliable estimates of counterparties' credit risk. These models are usually based on empirical data, and a model's goodness-of-fit depends largely on the quality and statistical significance of the available data. One of the most important aspects of credit rating models is therefore having enough observations to train the model on, which makes the model's development stage and the amount of data decisive for its success. The main purpose of this project is to, in a simple but efficient way, create a longer homogeneous time series by readjusting historical credit rating data in a portfolio of corporate loans provided by Svenska Handelsbanken AB. The readjustment is done by developing ordered logistic regression models whose independent variables consist of macroeconomic variables, used in different ways. One of the models uses macroeconomic variables in the form of principal components created through a principal component analysis, while the other models use the macroeconomic variables individually in different combinations. The models are tested to evaluate both their ability to readjust the portfolio's historical credit ratings and their predictive capabilities.
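A sketch of the cumulative-logit (proportional odds) model underlying ordered logistic regression: P(Y ≤ j | x) = σ(θ_j − x·β). The coefficients, cutpoints and variable names below are illustrative, not fitted values from the bank's portfolio:

```python
import math

def ordered_logit_probs(x, beta, cutpoints):
    """Class probabilities of a cumulative-logit (proportional odds) model.

    P(Y <= j | x) = sigmoid(theta_j - x . beta) with ordered cutpoints theta;
    class probabilities are successive differences of the cumulative curve.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    eta = sum(xi * bi for xi, bi in zip(x, beta))
    cum = [sigmoid(t - eta) for t in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cum:
        probs.append(c - prev)
        prev = c
    return probs

# Three hypothetical macro variables (e.g. GDP growth, unemployment change,
# rate change) and four rating classes via three ordered cutpoints.
p = ordered_logit_probs([0.5, -0.2, 0.1],
                        beta=[1.2, -0.8, 0.5],
                        cutpoints=[-1.0, 0.0, 1.5])
```

Because the cutpoints are shared across observations and only shifted by x·β, the model respects the ordering of the rating classes, which is exactly why it suits readjusting ordinal credit ratings.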
|
407 |
[en] DEVELOPMENT OF A METHODOLOGY FOR THE DETERMINATION OF METALS IN SEDIMENTS APPLYING CLOSED FLASK MICROWAVES FURNACE AND MULTIVARIATE STATISTICAL ANALYSIS OF METALS IN BACIA DE CAMPOS SEDIMENTS / [pt] METODOLOGIA PARA DETERMINAÇÃO DE METAIS EM SEDIMENTOS UTILIZANDO MICROONDAS COM FRASCO FECHADO E ANÁLISE ESTATÍSTICA MULTIVARIADA DAS CONCENTRAÇÕES DE METAIS EM SEDIMENTOS DA BACIA DE CAMPOS. MARIA LUCIA TEIXEIRA GUERRA DE MENDONCA, 02 August 2006
[pt] The optimization of the acid digestion of sediment samples was studied, aiming at the determination of metals (Al, Fe, Mn, Cr, Ni, V, Cu, Zn and Pb) using closed-vessel microwave digestion. To this end, the recovery of these metals was verified using certified reference materials: MESS-3, marine sediment, from the National Research Council of Canada (NRCC), and SRM 1645, river sediment, from the National Institute of Standards and Technology (NIST), with the determination performed by inductively coupled plasma optical emission spectrometry (ICP-OES). The closed-system microwave acid digestion process was optimized using a factorial design with three variables and two levels. As a result, the following operational conditions were obtained: maximum power of 600 W, total digestion time of 40 minutes, and an acid mixture of 2 mL HNO3 + 6 mL HCl for the digestion of 250 mg of sample. Based on the developed methodology, these elements were determined in 163 sediment samples from the oil-producing region of Bacia de Campos in the state of Rio de Janeiro, Brazil. The results were evaluated using univariate and multivariate statistical techniques such as linear regression, multiple regression, principal component analysis (PCA) and cluster analysis (CA). The data were also compared with the results of the environmental diagnosis of the oil exploration and production areas of Bacia de Campos, Santos and Espírito Santo (2002) carried out by the laboratory contracted by Petrobras. / [en] The optimization of acid digestion of sediment samples was studied with the purpose of determining the metals (Al, Fe, Mn, Cr, Ni, V, Cu, Zn and Pb) using a closed-vessel microwave system. For that, the recovery of these metals was verified using certified reference materials: MESS-3, marine sediment, from the National Research Council of Canada (NRCC), and SRM 1645, river sediment, from the National Institute of Standards and Technology (NIST), with the determination performed by inductively coupled plasma optical emission spectrometry (ICP-OES). The acid digestion process with a closed microwave system was optimized using factorial planning with three variables and two levels. As a result, the following operational conditions were achieved: maximum power of 600 W, total digestion time of 40 minutes, and an acid mixture of 2 mL HNO3 + 6 mL HCl for 250 mg of sample. The determination of these elements in 163 sediment samples from the oil region of Bacia de Campos in the state of Rio de Janeiro, Brazil, was based on the methodology developed. The results were evaluated employing univariate and multivariate statistical techniques such as linear regression, multiple regression, Principal Component Analysis (PCA) and Cluster Analysis (CA). The data were compared with the results from the environmental diagnosis of oil exploration and production areas in Bacia de Campos, Santos and Espírito Santo (2002) by the laboratory contracted by Petrobras.
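The two-level, three-variable factorial planning mentioned above can be illustrated in coded units with a simulated response. The factor names follow the abstract (power, time, acid mixture); the response function is invented for the example:

```python
from itertools import product

def main_effects(factors, response):
    """Main effects from a full 2^k factorial design in coded units (-1/+1):
    effect = mean(y at level +1) - mean(y at level -1).
    """
    runs = list(product((-1, 1), repeat=len(factors)))
    y = [response(dict(zip(factors, r))) for r in runs]
    effects = {}
    for i, f in enumerate(factors):
        hi = [yi for r, yi in zip(runs, y) if r[i] == 1]
        lo = [yi for r, yi in zip(runs, y) if r[i] == -1]
        effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Simulated recovery (%) depending mostly on power and acid mixture;
# the coefficients are made up, not the thesis' measurements.
def sim(x):
    return 80 + 5 * x["power"] + 0 * x["time"] + 3 * x["acid"]

eff = main_effects(["power", "time", "acid"], sim)
print(eff)  # → {'power': 10.0, 'time': 0.0, 'acid': 6.0}
```

Ranking the absolute effects tells the experimenter which digestion variables matter, which is how the 600 W / 40 min / 2 mL HNO3 + 6 mL HCl conditions would be screened.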
|
408 |
[en] MAPPING SEISMIC EVENTS USING CLUSTERING-BASED METHODOLOGIES / [pt] MAPEAMENTO DE EVENTOS SÍSMICOS BASEADO EM ALGORITMOS DE AGRUPAMENTO DE DADOS. AURELIO MORAES FIGUEIREDO, 29 June 2016
[pt] In this work we present methodologies based on data-clustering algorithms for processing 3D seismic data. In this processing, the input voxels of the volume are replaced by feature vectors representing the local neighborhood of the voxel within its seismic trace. These vectors are processed by data-clustering algorithms, and the resulting set of clusters is then used to generate a new representation of the input seismic volume. This strategy makes it possible to model the global structure of the seismic signal along its lateral neighborhood, significantly reducing the impact of noise and other anomalies present in the original data. The post-processed data are then used for two main purposes: the automatic mapping of horizons throughout the volume, and the production of visualization volumes intended to emphasize possible discontinuities present in the input seismic data, particularly geological faults. Regarding horizon mapping, the fact that the input samples of the clustering processes carry no information about their 3D location in the volume allows an unbiased classification of the voxels into clusters. Consequently, the methodology performs robustly even in complicated cases, and the method proved capable of mapping most of the interfaces present in the tested data. The visualization attributes, in turn, are built through a self-adaptive function that uses cluster-neighborhood information and is capable of emphasizing the regions of the input data where faults or other discontinuities exist. We applied these methodologies to real data. The results obtained demonstrate the ability of the methods to map even interfaces severely interrupted by seismic faults, salt domes and other discontinuities, and to produce visualization attributes that proved quite useful in the process of identifying discontinuities present in the data. / [en] We present clustering-based methodologies used to process 3D seismic data. The approach first replaces the volume voxels by corresponding feature samples representing the local behavior in the seismic trace. After this step the samples are used as entries to clustering procedures, and the resulting cluster maps are used to create a new representation of the original volume data. This strategy captures the global structure of the seismic signal and strongly reduces the impact of noise and small disagreements found in the voxels of the input volume. These clustered versions of the input seismic data can then be used in two different applications: to map 3D horizons automatically and to produce visual attribute volumes in which seismic faults and any discontinuities present in the data are highlighted. Concerning the horizon mapping, as the method does not use any lateral similarity measure to organize horizon voxels into clusters, the methodology is very robust when mapping difficult cases and is capable of mapping a great portion of the seismic interfaces present in the data. The visualization attribute, in turn, is constructed by applying a self-adaptive function that uses voxel-neighborhood information through a specific measurement that globally highlights the fault regions and other discontinuities present in the original volume. We apply the methodologies to real seismic data, mapping even seismic horizons severely interrupted by various discontinuities and producing visualization attributes in which discontinuities are adequately highlighted.
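The clustering step can be illustrated with plain k-means on small feature vectors; deterministic farthest-first seeding keeps the toy reproducible. This is the generic algorithm on invented mini "trace windows", not the thesis' specific feature construction:

```python
def kmeans(samples, k, iters=20):
    """Plain k-means with deterministic farthest-first seeding.
    Each sample is a feature vector (here, a tiny trace-neighborhood window).
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centers = [samples[0]]
    while len(centers) < k:  # seed with the point farthest from current seeds
        centers.append(max(samples, key=lambda s: min(dist2(s, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in samples:
            groups[min(range(k), key=lambda j: dist2(s, centers[j]))].append(s)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return [min(range(k), key=lambda j: dist2(s, centers[j])) for s in samples]

# Two synthetic "reflector" signatures: positive vs. negative polarity windows.
pos = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.0], [1.1, 0.8, 0.2]]
neg = [[-1.0, -0.9, 0.1], [-0.9, -1.1, 0.0], [-1.0, -1.0, 0.1]]
labels = kmeans(pos + neg, k=2)
print(labels)  # → [0, 0, 0, 1, 1, 1]
```

Mapping each voxel to its cluster id is what produces the "clustered version" of the volume from which horizons and fault attributes are derived.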
|
409 |
Interdomain and Intradomain Interactions in the Motor Subunit of EcoR124I: A Computational Study (Interdoménové a intradoménové interakce u motorové podjednotky EcoR124I: Výpočetní studie). SINHA, Dhiraj, January 2016
EcoR124I is a Type I restriction-modification (RM) enzyme and as such forms multifunctional pentameric complexes with DNA cleavage and ATP-dependent DNA translocation activities located on the motor subunit HsdR. When non-methylated invading DNA is recognized by the complex, the two HsdR endonuclease/motor subunits begin to translocate dsDNA, without strand-separation activity, by up to thousands of base pairs towards the stationary enzyme while consuming about one molecule of ATP per base pair advanced. Whenever translocation stalls, the HsdR subunits cleave the dsDNA nonspecifically, far from the recognition site. The X-ray crystal structure of the HsdR subunit of EcoR124I bound to ATP gave a first insight into the structure/function correlation in the HsdR subunit; the four domains within the subunit were found to be in a square-planar arrangement. Computational modeling, including molecular dynamics in combination with crystallography, point mutations, and in vivo and in vitro assays, reveals how interactions between these four domains contribute to ATP-dependent DNA translocation, DNA cleavage, and inter-domain communication between the translocase and endonuclease activities.
|
410 |
Ecodesign of large-scale photovoltaic (PV) systems with multi-objective optimization and Life-Cycle Assessment (LCA). Perez Gallardo, Jorge Raúl, 25 October 2013
Because of the increasing demand for energy worldwide and the numerous damages caused by the heavy use of fossil sources, the contribution of renewable energies to the global energy mix has been increasing significantly, with the aim of moving towards more sustainable development. In this context, this work develops a general methodology for designing PV systems based on ecodesign principles, taking into account both techno-economic and environmental considerations simultaneously. To evaluate the environmental performance of PV systems, an environmental assessment technique based on Life Cycle Assessment (LCA) was used. The environmental model was successfully coupled with the design-stage model of a PV grid-connected system (PVGCS). The PVGCS design model involves the estimation of the solar radiation received at a specific geographic location, the calculation of the annual energy generated from that radiation, the characteristics of the different components, and the evaluation of techno-economic criteria through Energy PayBack Time (EPBT) and PayBack Time (PBT). The performance model was then embedded in an outer multi-objective genetic algorithm optimization loop based on a variant of NSGA-II. A set of Pareto solutions was generated representing the optimal trade-offs between the objectives considered in the analysis. A multivariable statistical method (Principal Component Analysis, PCA) was then applied to detect and omit redundant objectives that could be left out of the analysis without disturbing the main features of the solution space. Finally, a decision-making tool based on M-TOPSIS was used to select the alternative providing the best compromise among all the objective functions investigated.
The results showed that while PV modules based on c-Si perform better in energy generation, their environmental impact is what relegates them to the last positions. Thin-film (TF) PV modules present the best trade-off in all scenarios under consideration. Special attention was paid to the recycling process of PV modules, even though not enough information is currently available for all the technologies evaluated; the main cause of this lack of information is the long lifetime of PV modules. Data on the recycling processes for m-Si and CdTe PV technologies were introduced into the ecodesign optimization procedure. By considering energy production and EPBT as criteria in bi-objective optimization cases, the importance of the benefits of PV module end-of-life management was confirmed. An economic study of the recycling strategy must still be carried out in order to obtain a more comprehensive view for decision making.
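The Pareto step of the optimization can be sketched as a non-dominated filter plus a simplified ideal-point selection (M-TOPSIS proper also uses the anti-ideal solution and normalization); the design points below are illustrative, not results from the thesis:

```python
def pareto_front(points):
    """Non-dominated subset for minimization objectives; NSGA-II's elitism
    preserves exactly such fronts generation after generation.
    """
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (energy payback time [yr], impact score): both minimized.
designs = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (4.0, 2.0), (5.0, 1.0),
           (3.0, 4.0), (4.5, 2.5)]
front = pareto_front(designs)
print(front)  # → [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (4.0, 2.0), (5.0, 1.0)]

# Crude ideal-point pick: the front member closest to the per-objective minima.
ideal = tuple(min(p[i] for p in front) for i in (0, 1))
best = min(front, key=lambda p: sum((p[i] - ideal[i]) ** 2 for i in (0, 1)))
print(best)  # → (3.0, 3.0)
```

The dominated designs (3.0, 4.0) and (4.5, 2.5) drop out of the front, and the compromise pick balances the two objectives the way an M-TOPSIS ranking would.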
|