1

Novel variable influence on projection (VIP) methods in OPLS, O2PLS, and OnPLS models for single- and multi-block variable selection : VIPOPLS, VIPO2PLS, and MB-VIOP methods

Galindo-Prieto, Beatriz January 2017 (has links)
Multivariate and multiblock data analysis involves useful methodologies for analyzing large data sets in chemistry, biology, psychology, economics, sensory science, and industrial processes; among these methodologies, partial least squares (PLS) and orthogonal projections to latent structures (OPLS®) have become popular. Due to increasingly computerized instrumentation, a data set can consist of thousands of input variables that contain latent information valuable for research and industrial purposes. When a large number of data sets (blocks) are analyzed simultaneously, the number of variables and the underlying connections between them grow considerably; at this point, reducing the number of variables while keeping high interpretability becomes a much-needed strategy. The main direction of research in this thesis is the development of a variable selection method, based on variable influence on projection (VIP), to improve the model interpretability of OnPLS models in multiblock data analysis. This new method is called multiblock variable influence on orthogonal projections (MB-VIOP), and its novelty lies in the fact that it is the first multiblock variable selection method for OnPLS models. Several milestones needed to be reached in order to successfully create MB-VIOP. The first milestone was the development of a single-block variable selection method able to handle orthogonal latent variables in OPLS models, i.e. VIP for OPLS (denoted VIPOPLS or OPLS-VIP in Paper I), which proved to increase the interpretability of PLS and OPLS models and was afterwards successfully extended to multivariate time series analysis (MTSA) aimed at process control (Paper II). The second milestone was the development of the first multiblock VIP approach for the enhancement of O2PLS® models, i.e. VIPO2PLS for two-block multivariate data analysis (Paper III). Finally, the third milestone, and the main goal of this thesis, was the development of the MB-VIOP algorithm for improving OnPLS model interpretability when analyzing a large number of data sets simultaneously (Paper IV). The results of this thesis and its enclosed papers show that the VIPOPLS, VIPO2PLS, and MB-VIOP methods successfully assess the most relevant variables for model interpretation in PLS, OPLS, O2PLS, and OnPLS models. In addition, predictability, robustness, dimensionality reduction, and other variable selection goals can potentially be improved or achieved by using these methods.
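For readers unfamiliar with VIP, the classical single-block formulation that VIPOPLS builds on can be sketched in a few lines. The snippet below is a minimal, illustrative implementation of standard PLS VIP scores, not the VIPOPLS/VIPO2PLS/MB-VIOP algorithms of the thesis; the function name and the single-response assumption are mine.

```python
import numpy as np

def vip_scores(T, W, q):
    """Classical VIP for a single-block, single-response PLS model with A components.
    T: (n, A) scores, W: (p, A) weights, q: (A,) y-loadings.
    Returns one score per variable; VIP > 1 is the usual influence cut-off."""
    p, A = W.shape
    # Sum of squares of y explained by each component
    ssy = np.array([q[a] ** 2 * (T[:, a] @ T[:, a]) for a in range(A)])
    # Squared, column-normalized weights
    w2 = (W / np.linalg.norm(W, axis=0)) ** 2
    return np.sqrt(p * (w2 @ ssy) / ssy.sum())
```

Ranking variables by such scores is the kind of selection that the thesis extends to orthogonal components and to multiple blocks.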
2

"Geometria das singularidades de projeções" / Geometry of singularities of projections

Dias, Fabio Scalco 16 September 2005 (has links)
Neste trabalho estudamos as singularidades de projeções no plano de curvas genéricas, introduzindo uma nova relação de equivalência para germes e multigermes de curvas planas, denominada A_h-equivalência. / In this work, singularities of projections of generic curves to the plane are studied. We introduce a new equivalence relation for germs and multigerms of plane curves, called A_h-equivalence.
3

Latent variable based computational methods for applications in life sciences : Analysis and integration of omics data sets

Bylesjö, Max January 2008 (has links)
With the increasing availability of high-throughput systems for parallel monitoring of multiple variables, e.g. levels of large numbers of transcripts in functional genomics experiments, massive amounts of data are being collected even from single experiments. Extracting useful information from such systems is a non-trivial task that requires powerful computational methods to identify common trends and to help detect the underlying biological patterns. This thesis deals with the general computational problems of classifying and integrating high-dimensional empirical data using a latent variable based modeling approach. The underlying principle of this approach is that a complex system can be characterized by a few independent components that characterize the systematic properties of the system. Such a strategy is well suited for handling noisy, multivariate data sets with strong multicollinearity structures, such as those typically encountered in many biological and chemical applications. The main foci of the studies this thesis is based upon are applications and extensions of the orthogonal projections to latent structures (OPLS) method in life science contexts. OPLS is a latent variable based regression method that separately describes systematic sources of variation that are related and unrelated to the modeling aim (for instance, classifying two different categories of samples). This separation of sources of variation can be used to pre-process data, but also has distinct advantages for model interpretation, as exemplified throughout the work. For classification cases, a probabilistic framework for OPLS has been developed that allows the incorporation of both variance and covariance into classification decisions. This can be seen as a unification of two historical classification paradigms based on either variance or covariance. In addition, a non-linear reformulation of the OPLS algorithm is outlined, which is useful for particularly complex regression or classification tasks. The general trend in functional genomics studies in the post-genomics era is to perform increasingly comprehensive characterizations of organisms in order to study the associations between their molecular and cellular components in greater detail. Frequently, abundances of all transcripts, proteins and metabolites are measured simultaneously in an organism at a current state or over time. In this work, a generalization of OPLS is described for the analysis of multiple data sets. It is shown that this method can be used to integrate data in functional genomics experiments by separating the systematic variation that is common to all data sets considered from sources of variation that are specific to each data set. / Funktionsgenomik är ett forskningsområde med det slutgiltiga målet att karakterisera alla gener i ett genom hos en organism. Detta inkluderar studier av hur DNA transkriberas till mRNA, hur det sedan translateras till proteiner och hur dessa proteiner interagerar och påverkar organismens biokemiska processer. Den traditionella ansatsen har varit att studera funktionen, regleringen och translateringen av en gen i taget. Ny teknik inom fältet har dock möjliggjort studier av hur tusentals transkript, proteiner och små molekyler uppträder gemensamt i en organism vid ett givet tillfälle eller över tid. Konkret innebär detta även att stora mängder data genereras även från små, isolerade experiment. 
Att hitta globala trender och att utvinna användbar information från liknande data-mängder är ett icke-trivialt beräkningsmässigt problem som kräver avancerade och tolkningsbara matematiska modeller. Denna avhandling beskriver utvecklingen och tillämpningen av olika beräkningsmässiga metoder för att klassificera och integrera stora mängder empiriskt (uppmätt) data. Gemensamt för alla metoder är att de baseras på latenta variabler: variabler som inte uppmätts direkt utan som beräknats från andra, observerade variabler. Detta koncept är väl anpassat till studier av komplexa system som kan beskrivas av ett fåtal, oberoende faktorer som karakteriserar de huvudsakliga egenskaperna hos systemet, vilket är kännetecknande för många kemiska och biologiska system. Metoderna som beskrivs i avhandlingen är generella men i huvudsak utvecklade för och tillämpade på data från biologiska experiment. I avhandlingen demonstreras hur dessa metoder kan användas för att hitta komplexa samband mellan uppmätt data och andra faktorer av intresse, utan att förlora de egenskaper hos metoden som är kritiska för att tolka resultaten. Metoderna tillämpas för att hitta gemensamma och unika egenskaper hos regleringen av transkript och hur dessa påverkas av och påverkar små molekyler i trädet poppel. Utöver detta beskrivs ett större experiment i poppel där relationen mellan nivåer av transkript, proteiner och små molekyler undersöks med de utvecklade metoderna.
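The core OPLS step referred to above, splitting the variation in X into a part related to y and a Y-orthogonal part, can be sketched as follows. This is a minimal single-y, one-orthogonal-component sketch in the spirit of the published OPLS algorithm, not the probabilistic or multiblock generalizations developed in the thesis; names are illustrative.

```python
import numpy as np

def opls_one_orthogonal_component(X, y):
    """Single-y OPLS sketch: extract and remove one Y-orthogonal component from X."""
    w = X.T @ y / (y @ y)                # covariance-based weight vector
    w /= np.linalg.norm(w)
    t = X @ w                            # predictive scores
    p = X.T @ t / (t @ t)                # X-loadings of the predictive component
    w_o = p - (w @ p) * w                # part of p orthogonal to w
    w_o /= np.linalg.norm(w_o)
    t_o = X @ w_o                        # Y-orthogonal scores
    p_o = X.T @ t_o / (t_o @ t_o)        # Y-orthogonal loadings
    X_filtered = X - np.outer(t_o, p_o)  # X with the orthogonal variation removed
    return X_filtered, t_o, p_o
```

The filtered X can then be modeled against y, which is what gives OPLS its interpretation advantage over plain PLS.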
4

[en] INVERSION OF GEOPHYSICAL PARAMETERS IN THREE DIMENSIONS FROM SEISMIC REFLECTION DATA BY HYBRID GENETIC ALGORITHMS / [pt] INVERSÃO DE PARÂMETROS GEOFÍSICOS EM TRÊS DIMENSÕES A PARTIR DE DADOS DE REFLEXÃO SÍSMICA POR ALGORITMOS GENÉTICOS HÍBRIDOS

SAMUEL GUSTAVO HUAMAN BUSTAMANTE 27 February 2009 (has links)
[pt] Este trabalho tem por objetivo investigar um método para auxiliar na quantificação de características sísmicas do subsolo. O modelo sísmico bidimensional de reflexão usa a equação Normal Move Out (NMO) para calcular os tempos de trânsito das ondas sísmicas, tipo P, refletidas em camadas isotrópicas e inclinadas. Essa equação usa a velocidade raiz quadrática média RMS como valor representativo das velocidades intervalares das camadas unidas. No processo de inversão para múltiplas camadas, as velocidades RMS representam o problema principal para estimar as velocidades intervalares. Conseqüentemente, o método proposto estima sequencialmente os parâmetros do modelo sísmico, para resolver esse problema a partir dos tempos de trânsito com Algoritmos Genéticos Híbridos (algoritmo genético e algoritmo Nelder Mead Simplex). Os tempos de trânsito são sintéticos e a estimação de parâmetros é tratada como um problema de minimização. Com o método proposto foi obtido um alto grau de exatidão, além de reduzir o tempo de computação em 98,4 % em comparação com um método de estimação simultânea de parâmetros. Para aliviar a complexidade e a demora na geração de um modelo em três dimensões, constrói-se um modelo sísmico em três dimensões formado com modelos bidimensionais, sob cada unidade retangular da malha de receptores do levantamento sísmico, para camadas isotrópicas curvadas, com variações suaves das pendentes e sem descontinuidades. Os modelos bidimensionais formam polígonos que representam as superfícies de interface que são projetadas sob os retângulos da malha. Dois conjuntos de superfícies poligonais são gerados para auxiliar na localização das camadas. / [en] The objective of the present work is to investigate a method to help quantify subsurface seismic characteristics. The two-dimensional seismic reflection model employs the Normal Move-Out (NMO) equation to calculate the travel times of P-waves reflected on inclined, isotropic layers. This equation uses the root-mean-square (RMS) velocity as a representative value of the combined layer velocities. In the inversion process for multiple layers, the RMS velocities are the main obstacle to estimating the interval velocities. Consequently, to solve that problem, the proposed method sequentially estimates the parameters of the seismic model from the travel times using hybrid genetic algorithms (a genetic algorithm combined with the Nelder-Mead simplex algorithm). The travel times are synthetic and the parameter estimation is treated as a minimization problem. The proposed method achieved a high degree of accuracy and reduced computing time by 98.4% compared with a simultaneous parameter estimation method. To reduce the complexity and the time needed to generate three-dimensional models, a three-dimensional seismic model is constructed from two-dimensional models, one under each rectangular cell of the receiver mesh of the seismic survey, for curved isotropic layers with smooth slope variations and no discontinuities. The two-dimensional models form polygons representing the interface surfaces, which are projected under the rectangles of the mesh. Two sets of polygonal surfaces are generated to assist in the geometric localization of the layers.
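As a concrete illustration of the forward model mentioned above: for a flat reflector the NMO travel time is t(x) = sqrt(t0^2 + (x/v_RMS)^2), and interval velocities can be recovered from RMS velocities with, for example, the Dix equation. The sketch below uses standard textbook formulas and synthetic numbers only; it is not the hybrid genetic-algorithm inversion of the thesis, and the dipping-layer NMO used there includes an additional dip-dependent factor.

```python
import numpy as np

def nmo_traveltime(offset, t0, v_rms):
    """Hyperbolic NMO travel time for a flat reflector: t(x) = sqrt(t0^2 + (x/v_rms)^2).
    (For dipping layers the thesis uses a dip-dependent variant.)"""
    return np.sqrt(t0**2 + (offset / v_rms) ** 2)

def dix_interval_velocities(t0, v_rms):
    """Dix conversion from RMS to interval velocities; t0 and v_rms are given
    per reflector from top to bottom (zero-offset times and RMS velocities)."""
    t0, v_rms = np.asarray(t0, float), np.asarray(v_rms, float)
    num = np.diff(v_rms**2 * t0, prepend=0.0)
    den = np.diff(t0, prepend=0.0)
    return np.sqrt(num / den)

# Illustrative synthetic example (times in s, velocities in m/s)
offsets = np.array([0.0, 500.0, 1000.0])
print(nmo_traveltime(offsets, t0=0.8, v_rms=2000.0))          # travel-time curve
print(dix_interval_velocities([0.8, 1.5], [2000.0, 2400.0]))  # interval velocities
```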
5

Produto interno e ortogonalidade / Inner product and orthogonality

Paulo Rafael de Lima e Souza 13 August 2013 (has links)
Neste trabalho, consideramos o produto interno de vetores de um espaço vetorial com especiais aplicações no Ensino Médio através de conceitos como Matrizes, Sistemas Lineares e Operações com Vetores no ℝ2 e ℝ3. Verificamos, também, características de operadores lineares definidos por projeções ortogonais. Também estabelecemos relações entre vetores e matrizes formadas por bases do ℝ2 com o intuito de melhorar e fortalecer os conhecimentos dos professores do ensino básico, proporcionando-lhes mais segurança e clareza ao ministrar suas aulas, como também procuramos incentivar os professores a se atualizarem e fazer com que os seus alunos se motivem para o ensino superior, em áreas em que a Matemática, em particular a Álgebra Linear, está presente. Conhecendo a definição de produtos internos e espaços vetoriais, acreditamos que o professor poderá compreender melhor as técnicas e operações algébricas dos conteúdos por ele ensinados. Acreditamos que o não conhecimento desta estrutura de álgebra faz com que o professor a exponha de forma limitada e sem motivação futura, em termos de outros estudos por parte dos seus alunos no ensino médio, e é claro que esta visão ou esta abordagem não é interessante; é preciso melhorar esta visão em sala de aula, é preciso que o professor tenha uma visão panorâmica daquilo que ensina. Assim, pretendemos com este trabalho apresentar os conceitos de produto interno e de espaços vetoriais expondo-os de forma didática, mostrando que de algum modo estão associados aos conceitos estudados no ensino básico, através de exercícios aplicados. / In this work, we consider the inner product of vectors in a vector space, with special applications to high school teaching through concepts such as matrices, linear systems and vector operations in ℝ2 and ℝ3. We also examine characteristics of linear operators defined by orthogonal projections. We further establish relationships between vectors and matrices formed from bases of ℝ2, aiming to improve and strengthen the knowledge of basic education teachers, giving them more confidence and clarity when teaching their classes, and we also seek to encourage teachers to keep up to date and to motivate their students toward higher education in areas where mathematics, and Linear Algebra in particular, is present. Knowing the definition of inner products and vector spaces, we believe the teacher can better understand the techniques and algebraic operations behind the content he or she teaches. We believe that lack of awareness of this algebraic structure leads the teacher to present the material in a limited way, without motivating further studies by high school students, and this view or approach is clearly not desirable; it is necessary to improve this view in the classroom, and the teacher needs a panoramic view of what he or she teaches. Thus, in this work we intend to present the concepts of inner product and vector spaces in a didactic way, showing through applied exercises how they relate to the concepts studied in basic education.
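The central construction of this work, orthogonal projection defined through the inner product, can be illustrated with a tiny numeric sketch (the vectors below are arbitrary examples):

```python
import numpy as np

def project(u, v):
    """Orthogonal projection of u onto the line spanned by v: (<u, v> / <v, v>) v."""
    return (u @ v) / (v @ v) * v

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])
p = project(u, v)        # -> [3., 0.]
r = u - p                # component of u orthogonal to v
print(p, r, p @ r)       # the inner product p . r is zero
```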
6

Estimação de estado em sistemas elétricos de potência: composição de erros de medidas / State estimation in power systems: measurement error composition

Piereti, Saulo Augusto Ribeiro 10 August 2011 (has links)
Bretas et al. (2009) prova matematicamente, e através da interpretação geométrica, que o erro de medida se divide em componentes detectáveis e não-detectáveis. Demonstra ainda que as metodologias até então utilizadas para o processamento de Erros Grosseiros (EGs) consideram apenas a componente detectável do erro. Assim, dependendo da amplitude das componentes do erro, essas metodologias podem falhar. Face ao exposto, neste trabalho é proposta uma nova metodologia para processar as medidas portadoras de EGs. Essa proposição será obtida decompondo o erro da medida em duas componentes: a primeira, ortogonal ao espaço da imagem da matriz jacobiana, cuja amplitude é igual ao resíduo da medida; a outra, pertencente ao espaço da imagem da matriz jacobiana e que, por conseguinte, não contribui para o resíduo da medida. A relação entre as normas dessas componentes, aqui denominada Índice de Inovação (II), provê uma nova informação, isto é, informação não contida nas outras medidas. Usando o II, calcula-se um valor limiar (TV) para cada medida; esse limiar será utilizado para inferir se a medida é ou não suspeita de possuir EG. Em seguida, com as medidas suspeitas em mãos, desenvolve-se um índice de filtragem (FI) que será utilizado para identificar qual daquelas medidas tem maior probabilidade de possuir EG. Os sistemas de 14 e 30 barras do IEEE e o sistema sul reduzido do Brasil de 45 barras serão utilizados para mostrar a eficiência da metodologia proposta. Os testes realizados com os sistemas acima são: i) o teste de nível de detecção de EG, que consiste em encontrar o valor mínimo de EG que seja detectado usando o TV da medida; ii) o teste onde é adicionado EG de 10 desvios padrões em cada medida, uma por vez; nesse teste o FI da medida é usado para identificar qual medida possui o erro e, em seguida, a medida com erro é corrigida através do erro normalizado composto (ENC); iii) o teste de EG simples. / Bretas et al. (2009) proved, on geometric grounds, that the measurement error can be decomposed into two components: a detectable and an undetectable one. They also demonstrated that the methodologies currently used for gross error (GE) processing consider only the detectable component of the error. Thus, depending on the magnitude of the undetectable error components, such methods may fail. In view of this, a new methodology for processing measurements containing GEs is proposed in this work. It is obtained by decomposing each measurement error into two components: one orthogonal to the Jacobian range space, whose magnitude equals the measurement residual, and another contained in that space, which does not contribute to the residual. The ratio between the norms of those components was proposed as the measurement Innovation Index (II), which expresses the new information a measurement contains with respect to the other measurements. Using the II, a threshold value (TV) is computed for each measurement so that one can declare a measurement suspicious of containing a GE. A filtering index (FI) is then proposed to single out, from the suspicious measurements, the one most likely to contain a GE. The IEEE 14-bus and 30-bus systems and the reduced 45-bus system of southern Brazil are used to demonstrate the accuracy and efficiency of the proposed methodology.
The tests conducted with these systems were: i) the GE detection level test, which consists in finding the minimum GE value that can be detected using the measurement TV; ii) a test in which a GE of 10 standard deviations is added to each measurement, one at a time, the measurement FI is used to identify which measurement contains the error, and the composed normalized error (CNE) is then used to correct the measurement value; iii) the single GE test.
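A rough sketch of the decomposition described above: the undetectable part of an error vector is its (weighted) projection onto the range space of the measurement Jacobian H, and the detectable part is the remainder, which is what appears in the residual. The code below is an illustrative reading of the abstract; the orientation of the Innovation Index ratio and the use of the weight matrix W are assumptions, not necessarily the exact definitions used in the thesis.

```python
import numpy as np

def error_components(H, e, W=None):
    """Split an error vector e into the part lying in range(H) (invisible to the
    residual, hence undetectable) and the remaining, detectable part.
    W is an optional weight matrix (e.g. inverse measurement covariance)."""
    if W is None:
        W = np.eye(H.shape[0])
    # Hat matrix of the weighted least-squares estimator
    K = H @ np.linalg.inv(H.T @ W @ H) @ H.T @ W
    undetectable = K @ e
    detectable = e - undetectable
    return detectable, undetectable

def innovation_index(H, i, W=None):
    """Illustrative innovation index of measurement i: ratio of the norms of the
    detectable and undetectable parts of a unit error on that measurement
    (the orientation of this ratio is an assumption)."""
    e = np.zeros(H.shape[0])
    e[i] = 1.0
    detectable, undetectable = error_components(H, e, W)
    return np.linalg.norm(detectable) / np.linalg.norm(undetectable)
```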
7

Método de Projeções Ortogonais / Method of Orthogonal Projections

Araujo, Francinario Oliveira de 15 December 2011 (has links)
The problem treated in this dissertation is to establish boundedness of the iterates of an iterative algorithm in ℝ^d that applies at each step an orthogonal projection onto a straight line in ℝ^d, chosen from a given (possibly infinite) family of lines, with arbitrary order allowed in applying the projections. This problem was analyzed in a paper by Barany et al. in 1994, which found a necessary and sufficient condition in the case d = 2 and further analyzed the case d > 2 under some technical conditions. However, that paper uses non-trivial intuitive arguments and its proofs lack sufficient rigor. In this dissertation we discuss and strengthen the results of that paper, in order to complete and simplify its proofs. / O problema abordado nesta dissertação é a prova da propriedade de limitação para os iterados de um algoritmo iterativo em ℝ^d que aplica em cada passo uma projeção ortogonal sobre uma reta em ℝ^d, indexada em uma família de retas dada (possivelmente infinita), permitindo ordem arbitrária na aplicação das várias projeções. Este problema foi abordado em um artigo de Barany et al. em 1994, que encontrou uma condição necessária e suficiente para o caso d = 2 e analisou também o caso d > 2 sob algumas condições técnicas. Porém, este artigo usa argumentos intuitivos não triviais e nas suas demonstrações nos parece faltar rigor. Nesta dissertação detalhamos e completamos as demonstrações do artigo de Barany, fortalecendo e clareando algumas de suas proposições, bem como propiciando pontos de vista complementares em alguns aspectos do artigo em tela.
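The iteration studied here is simple to state in code: starting from a point of ℝ^d, repeatedly replace it by its orthogonal projection onto the next line of the family, in an arbitrary order. The sketch below represents each line by a point and a direction; whether such iterates remain bounded for every order is precisely the question analyzed in the dissertation. Names and the example data are illustrative.

```python
import numpy as np

def project_onto_line(x, a, u):
    """Orthogonal projection of x onto the line {a + s*u : s in R}, with u a unit vector."""
    return a + ((x - a) @ u) * u

def iterate_projections(x0, lines, order):
    """Apply, in the given order, orthogonal projections onto lines from the family."""
    x = np.asarray(x0, dtype=float)
    for k in order:
        a, u = lines[k]
        x = project_onto_line(x, np.asarray(a, float), u / np.linalg.norm(u))
    return x

# Illustrative example in the plane: two lines through the origin
lines = [(np.zeros(2), np.array([1.0, 0.0])),
         (np.zeros(2), np.array([1.0, 1.0]))]
print(iterate_projections([2.0, 5.0], lines, order=[0, 1, 0, 1]))
```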
8

Multivariate Synergies in Pharmaceutical Roll Compaction : The quality influence of raw materials and process parameters by design of experiments

Souihi, Nabil January 2014 (has links)
Roll compaction is a continuous process commonly used in the pharmaceutical industry for dry granulation of moisture and heat sensitive powder blends. It is intended to increase bulk density and improve flowability. Roll compaction is a complex process that depends on many factors, such as feed powder properties, processing conditions and system layout. Some of the variability in the process remains unexplained. Accordingly, modeling tools are needed to understand the properties and the interrelations between raw materials, process parameters and the quality of the product. It is important to look at the whole manufacturing chain from raw materials to tablet properties. The main objective of this thesis was to investigate the impact of raw materials, process parameters and system design variations on the quality of intermediate and final roll compaction products, as well as their interrelations. In order to do so, we have conducted a series of systematic experimental studies and utilized chemometric tools, such as design of experiments, latent variable models (i.e. PCA, OPLS and O2PLS) as well as mechanistic models based on the rolling theory of granular solids developed by Johanson (1965). More specifically, we have developed a modeling approach to elucidate the influence of different brittle filler qualities of mannitol and dicalcium phosphate and their physical properties (i.e. flowability, particle size and compactability) on intermediate and final product quality. This approach allows the possibility of introducing new fillers without additional experiments, provided that they are within the previously mapped design space. Additionally, this approach is generic and could be extended beyond fillers. Furthermore, in contrast to many other materials, the results revealed that some qualities of the investigated fillers demonstrated improved compactability following roll compaction. In one study, we identified the design space for a roll compaction process using a risk-based approach. The influence of process parameters (i.e. roll force, roll speed, roll gap and milling screen size) on different ribbon, granule and tablet properties was evaluated. In another study, we demonstrated the significant added value of the combination of near-infrared chemical imaging, texture analysis and multivariate methods in the quality assessment of the intermediate and final roll compaction products. Finally, we have also studied the roll compaction of an intermediate drug load formulation at different scales and using roll compactors with different feed screw mechanisms (i.e. horizontal and vertical). The horizontal feed screw roll compactor was also equipped with an instrumented roll technology allowing the measurement of normal stress on ribbon. Ribbon porosity was primarily found to be a function of normal stress, exhibiting a quadratic relationship. A similar quadratic relationship was also observed between roll force and ribbon porosity of the vertically fed roll compactor. A combination of design of experiments, latent variable and mechanistic models led to a better understanding of the critical process parameters and showed that scale up/transfer between equipment is feasible.
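As an illustration of the kind of empirical relationship reported above, the quadratic dependence of ribbon porosity on normal stress (or roll force) can be captured by an ordinary second-order polynomial fit. The numbers below are hypothetical, and the fit is a sketch rather than the calibrated design-of-experiments model of the thesis.

```python
import numpy as np

# Hypothetical normal-stress (MPa) and ribbon-porosity (%) observations
stress = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
porosity = np.array([38.0, 30.0, 25.0, 22.0, 21.0])

# Second-order (quadratic) response-surface fit: porosity ~ b2*stress^2 + b1*stress + b0
b2, b1, b0 = np.polyfit(stress, porosity, deg=2)
fitted = np.polyval([b2, b1, b0], stress)
print(np.round([b2, b1, b0], 4))
print(np.round(fitted, 1))
```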
