551

Integrated Hydrological Modeling in Glaciated Mountain Basins: A Case Study in the Tien-Shan Mountains of Kyrgyzstan / 氷河山地流域における統合水文モデリング:キルギスの天山山脈における事例研究

Sadyrov, Sanjar 25 March 2024 (has links)
Kyoto University / New-system doctoral degree by coursework / Doctor of Philosophy (Engineering) / Degree Registry No. Kō 25261 / Engineering Doctorate No. 5220 / Department of Urban Management, Graduate School of Engineering, Kyoto University / Examining committee: Professor Kenji Tanaka (chair), Professor Takahiro Sayama, Professor Yutaka Ichikawa / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
552

Scheduling optimal maintenance times for a system based on component reliabilities

Rao, Naresh Krishna 04 May 2006 (has links)
This dissertation extends work on single-component maintenance planning to a multi-component series system. A function is developed that represents the expected cost rate (cost per unit time) of any maintenance plan. Three increasingly complex cases are considered. The first and simplest case assumes that a component is restored to an "as good as new" condition after a maintenance operation. The second case allows for occasional imperfect maintenance operations; after such an operation the component's failure rate is higher, so the likelihood of failure is greater until the component is properly maintained in a subsequent operation. The final case assumes that the component deteriorates even after maintenance, so the system must eventually be replaced. Models for all three cases are developed, and cost rate functions are constructed from them. The cost rate functions reflect the cost of maintaining a component at a particular time and also account for the savings obtained by maintaining components simultaneously. A series of approximations is made to keep the cost rate functions mathematically tractable. Finally, an algorithmic procedure for optimizing the cost rate functions in all three cases is given. / Ph. D.
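A minimal numerical sketch of the cost-rate idea follows. It is not the dissertation's model: it assumes a generic periodic-maintenance policy with minimal repair between maintenance actions, a Weibull failure law, and hypothetical cost figures, purely to illustrate how a cost-per-unit-time function is formed and minimized.

```python
import numpy as np

# Generic periodic-maintenance cost-rate sketch (not the dissertation's exact model).
# Assumes minimal repair between maintenance actions and a Weibull failure law,
# so expected failures in [0, T] equal the cumulative hazard H(T) = (T / eta) ** beta.
def cost_rate(T, c_pm, c_fail, eta, beta):
    """Expected cost per unit time when maintaining every T time units."""
    expected_failures = (T / eta) ** beta
    return (c_pm + c_fail * expected_failures) / T

# Hypothetical parameters: maintenance cost 50, failure cost 400, Weibull eta=100, beta=2.
T_grid = np.linspace(1, 300, 3000)
rates = cost_rate(T_grid, c_pm=50.0, c_fail=400.0, eta=100.0, beta=2.0)
T_opt = T_grid[np.argmin(rates)]
print(f"optimal maintenance interval ~ {T_opt:.1f}, minimum cost rate ~ {rates.min():.3f}")
```

With these assumed numbers the grid search recovers the analytic optimum T* = eta * sqrt(c_pm / c_fail), illustrating how the dissertation's more elaborate cost-rate functions would be optimized once constructed.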
553

Principal components based techniques for hyperspectral image data

Fountanas, Leonidas 12 1900 (has links)
Approved for public release; distribution is unlimited. / PC and MNF transforms are two widely used methods for applications such as dimensionality reduction, data compression, and noise reduction. In this thesis, an in-depth study of these two methods is conducted to estimate their performance on hyperspectral imagery. First, the PCA and MNF methods are examined for their effectiveness in image enhancement. The various methods are also studied to evaluate their ability to determine the intrinsic dimension of the data. Results indicate that, in most cases, the scree test gives the best measure of the number of retained components, compared to the cumulative variance, Kaiser, and CSD methods. The applicability of PCA and MNF to image restoration is then considered using two types of noise, Gaussian and periodic. Hyperspectral images are corrupted by noise using a combination of ENVI and MATLAB software, while the performance metrics used to evaluate the retrieval algorithms are visual interpretation, rms correlation coefficient spectral comparison, and classification. For Gaussian noise, the images retrieved using the inverse transforms indicate that the basic PC and MNF transforms perform comparably. For periodic noise, the MNF transform shows less sensitivity to variations in the number of lines and the gain factor. / Lieutenant, Hellenic Navy
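The PC-based restoration idea (forward transform, retain a few components, inverse transform) can be sketched briefly. This is a generic illustration on a synthetic cube, not the ENVI/MATLAB pipeline used in the thesis, and the cube dimensions and retained-component count are placeholders.

```python
import numpy as np

# PC-transform denoising sketch on a synthetic "hyperspectral" cube (pixels x bands).
rng = np.random.default_rng(0)
rows, cols, bands = 64, 64, 50
signal = np.outer(np.linspace(0, 1, rows * cols), np.linspace(1, 2, bands))
cube = signal + 0.1 * rng.standard_normal((rows * cols, bands))  # add Gaussian noise

mean = cube.mean(axis=0)
X = cube - mean
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort eigenvectors by decreasing variance
eigvecs = eigvecs[:, order]

k = 3                                      # retained components (e.g. chosen from a scree test)
scores = X @ eigvecs[:, :k]                # forward PC transform
restored = scores @ eigvecs[:, :k].T + mean  # inverse transform discards noisy trailing PCs
print("residual RMS after restoration:", np.sqrt(np.mean((restored - signal) ** 2)))
```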
554

Risk Measurement of Mortgage-Backed Security Portfolios via Principal Components and Regression Analyses

Motyka, Matt 29 April 2003 (has links)
Risk measurement of mortgage-backed security portfolios presents a very involved task for analysts and portfolio managers of such investments. A strong predictive econometric model that can account for the variability of these securities in the future would prove a very useful tool for anyone in this financial market sector due to the difficulty of evaluating the risk of mortgage cash flows and prepayment options at the same time. This project presents two linear regression methods that attempt to explain the risk within these portfolios. The first study involves a principal components analysis on absolute changes in market data to form new sets of uncorrelated variables based on the variability of original data. These principal components then serve as the predictor variables in a principal components regression, where the response variables are the day-to-day changes in the net asset values of three agency mortgage-backed security mutual funds. The independence of each principal component would allow an analyst to reduce the number of observable sets in capturing the risk of these portfolios of fixed income instruments. The second idea revolves around a simple ordinary least squares regression of the three mortgage funds on the sets of the changes in original daily, weekly and monthly variables. While the correlation among such predictor variables may be very high, the simplicity of utilizing observable market variables is a clear advantage. The goal of either method was to capture the largest amount of variance in the mortgage-backed portfolios through these econometric models. The main purpose was to reduce the residual variance to less than 10 percent, or to produce at least 90 percent explanatory power of the original fund variances. The remaining risk could then be attributed to the nonlinear dependence of the changes in these net asset values on the explanatory variables. The primary cause of this nonlinearity is the prepayment put option inherent in these securities.
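A compact sketch of the principal-components-regression step is shown below on synthetic data. The "market variable changes" and "fund NAV changes" here are random stand-ins, not the project's data; the point is only the mechanics of regressing a response on uncorrelated PC scores and checking explained variance against the 90 percent target.

```python
import numpy as np

# Principal components regression (PCR) sketch with synthetic stand-ins for the
# daily changes in market variables (X) and one fund's NAV changes (y).
rng = np.random.default_rng(1)
n_days, n_vars = 500, 12
X = rng.standard_normal((n_days, n_vars)) @ rng.standard_normal((n_vars, n_vars))
y = 0.5 * X[:, 0] - 0.2 * X[:, 3] + 0.1 * rng.standard_normal(n_days)

Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4                                   # number of retained principal components
Z = Xc @ Vt[:k].T                       # uncorrelated predictor scores

design = np.column_stack([np.ones(n_days), Z])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"explained variance with {k} PCs: {r2:.2%}")   # project target: at least 90%
```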
555

Pareto atsitiktinių dydžių geometrinis maks stabilumas / Geometric max stability of Pareto random variables

Juozulynaitė, Gintarė 30 August 2010 (has links)
Šiame darbe nagrinėjau vienmačių ir dvimačių Pareto atsitiktinių dydžių geometrinį maks stabilumą. Įrodžiau, kad vienmatis Pareto skirstinys yra geometriškai maks stabilus, kai alfa=1. Tačiau nėra geometriškai maks stabilus, kai alfa nelygu 1. Naudodamasi geometrinio maks stabilumo kriterijumi dvimačiams Pareto atsitiktiniams dydžiams, įrodžiau, kad dvimatė Pareto skirstinio funkcija nėra geometriškai maks stabili, kai vektoriaus komponentės nepriklausomos (kai alfa=1, beta=1 ir alfa nelygu 1, beta nelygu 1). Taip pat dvimatė Pareto skirstinio funkcija nėra geometriškai maks stabili, kai vektoriaus komponentės priklausomos (kai alfa=1, beta=1 ir alfa nelygu 1, beta nelygu 1). Dvimačių Pareto skirstinių tyrimas pateikė nelauktus rezultatus. Gauta, kad dvimatė Pareto skirstinio funkcija nėra geometriškai maks stabili, kai alfa=1, beta=1. Tačiau vienmatės marginaliosios Pareto skirstinio funkcijos yra geometriškai maks stabilios, kai alfa=1, beta=1. / In this work I analyzed the geometric max stability of univariate and bivariate Pareto random variables. I proved that the univariate Pareto distribution is geometrically max stable when alpha = 1, but not geometrically max stable when alpha ≠ 1. Using the criterion of geometric max stability for bivariate Pareto random variables, I proved that the bivariate Pareto distribution function is not geometrically max stable when the vector's components are independent (for alpha = 1, beta = 1 and for alpha ≠ 1, beta ≠ 1), and likewise not geometrically max stable when the components are dependent (for alpha = 1, beta = 1 and for alpha ≠ 1, beta ≠ 1). The study of bivariate Pareto distributions yielded an unexpected result: the bivariate Pareto distribution function is not geometrically max stable when alpha = 1, beta = 1, yet the univariate marginal Pareto distribution functions are geometrically max stable when alpha = 1, beta = 1.
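The univariate alpha = 1 case can be verified directly. The calculation below assumes the Pareto type II (Lomax) parameterization F(x) = 1 - (1+x)^(-alpha), x ≥ 0; the thesis may use a different form of the Pareto law, so this is an illustrative check rather than a reproduction of its proof.

```latex
% Geometric max stability check for a Pareto law with \alpha = 1, assuming the
% type II (Lomax) form F(x) = 1 - (1+x)^{-\alpha}, x \ge 0. If N is geometric
% with success probability p, the maximum of X_1,\dots,X_N has distribution
%   G_p(x) = \frac{p\,F(x)}{1-(1-p)\,F(x)}.
% With \alpha = 1, F(x) = x/(1+x), and the normalization a(p) = 1/p gives
\[
  \frac{p\,F(x/p)}{1-(1-p)\,F(x/p)}
  = \frac{p\,\frac{x/p}{1+x/p}}{1-(1-p)\,\frac{x/p}{1+x/p}}
  = \frac{\frac{px}{p+x}}{\frac{p+x-(1-p)x}{p+x}}
  = \frac{px}{p+px}
  = \frac{x}{1+x}
  = F(x),
\]
% so this Pareto law is geometrically max stable for \alpha = 1; for \alpha \ne 1
% no normalizing constants a(p), b(p) make the identity hold.
```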
556

Rastreamento de componentes conexas em vídeo 3D para obtenção de estruturas tridimensionais / Tracking of connected components from 3D video in order to obtain tridimensional structures

Pires, David da Silva 17 August 2007 (has links)
Este documento apresenta uma dissertação sobre o desenvolvimento de um sistema de integração de dados para geração de estruturas tridimensionais a partir de vídeo 3D. O trabalho envolve a extensão de um sistema de vídeo 3D em tempo real proposto recentemente. Esse sistema, constituído por projetor e câmera, obtém imagens de profundidade de objetos por meio da projeção de slides com um padrão de faixas coloridas. Tal procedimento permite a obtenção, em tempo real, tanto do modelo 2,5 D dos objetos quanto da textura dos mesmos, segundo uma técnica denominada luz estruturada. Os dados são capturados a uma taxa de 30 quadros por segundo e possuem alta qualidade: resoluções de 640 x 480 pixeis para a textura e de 90 x 240 pontos (em média) para a geometria. A extensão que essa dissertação propõe visa obter o modelo tridimensional dos objetos presentes em uma cena por meio do registro dos dados (textura e geometria) dos diversos quadros amostrados. Assim, o presente trabalho é um passo intermediário de um projeto maior, no qual pretende-se fazer a reconstrução dos modelos por completo, bastando para isso apenas algumas imagens obtidas a partir de diferentes pontos de observação. Tal reconstrução deverá diminuir a incidência de pontos de oclusão (bastante comuns nos resultados originais) de modo a permitir a adaptação de todo o sistema para objetos móveis e deformáveis, uma vez que, no estado atual, o sistema é robusto apenas para objetos estáticos e rígidos. Até onde pudemos averiguar, nenhuma técnica já foi aplicada com este propósito. Este texto descreve o trabalho já desenvolvido, o qual consiste em um método para detecção, rastreamento e casamento espacial de componentes conexas presentes em um vídeo 3D. A informação de imagem do vídeo (textura) é combinada com posições tridimensionais (geometria) a fim de alinhar partes de superfícies que são vistas em quadros subseqüentes. Esta é uma questão chave no vídeo 3D, a qual pode ser explorada em diversas aplicações tais como compressão, integração geométrica e reconstrução de cenas, dentre outras. A abordagem que adotamos consiste na detecção de características salientes no espaço do mundo, provendo um alinhamento de geometria mais completo. O processo de registro é feito segundo a aplicação do algoritmo ICP---Iterative Closest Point---introduzido por Besl e McKay em 1992. Resultados experimentais bem sucedidos corroborando nosso método são apresentados. / This document presents an MSc thesis focused on the development of a data integration system to generate tridimensional structures from 3D video. The work involves the extension of a recently proposed real-time 3D video system. This system, composed of a video camera and a projector, obtains range images of recorded objects by projecting slides with a coloured stripe pattern. This procedure captures, in real time, both the objects' texture and their 2.5D model, using a technique called structured light. The data are acquired at 30 frames per second and are of high quality: the resolutions are 640 x 480 pixels for texture and 90 x 240 points (on average) for geometry. The extension proposed in this thesis aims at obtaining the tridimensional model of the objects present in a scene through data matching (texture and geometry) of the various sampled frames. The current work is thus an intermediary step of a larger project whose goal is a complete reconstruction from only a few images obtained from different viewpoints.
Such reconstruction will reduce the incidence of occlusion points (very common in the original results), making it possible to adapt the whole system to moving and deformable objects; in its current state, the system is robust only for static and rigid objects. To the best of our knowledge, no method has fully solved this problem. This text describes the developed work, which consists of a method to detect, track, and spatially match connected components present in a 3D video. The video image information (texture) is combined with tridimensional positions (geometry) in order to align surface portions seen in subsequent frames. This is a key step in 3D video that can be exploited in several applications such as compression, geometric integration, and scene reconstruction, among others. Our approach consists of detecting salient features in both image and world spaces for further alignment of texture and geometry. The matching process is accomplished by applying the ICP (Iterative Closest Point) algorithm introduced by Besl and McKay in 1992. Successful experimental results corroborating our method are shown.
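The ICP registration step named in the abstract can be sketched in a few lines: alternate nearest-neighbour correspondence search with a least-squares rigid alignment (Kabsch/SVD). This is a minimal point-to-point illustration of the Besl–McKay algorithm on a toy point cloud, not the thesis implementation, which also uses texture cues.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def icp(src, dst, iters=30):
    """Point-to-point ICP with brute-force nearest-neighbour correspondences."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]   # closest destination point for each source point
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Toy usage: recover a small rotation and translation of a random point cloud.
rng = np.random.default_rng(2)
dst = rng.standard_normal((200, 3))
angle = 0.1
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
src = (dst - 0.05) @ Rz.T
aligned = icp(src, dst)
print("mean alignment error:", np.linalg.norm(aligned - dst, axis=1).mean())
```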
557

The Effect of Mismatch of Total Knee Replacement Components with Knee Joint: A Finite Element Analysis

Kanyal, Rahul January 2016 (has links) (PDF)
The need for total knee replacement surgery has been increasing in the Asian region. A total knee replacement is a permanent surgical solution for a patient with debilitating knee pain due to arthritis: the knee joint is replaced with components made of bio-compatible materials, after which the patient can resume normal day-to-day activities. The Western population has a larger build than the Asian population, and most total knee replacement prostheses are designed for Western patients. When these prostheses are used for Asian patients, the resulting mismatch leads to clinical complications such as reduced range of motion and pain. Previous studies have been limited to the clinical complications caused by the mismatch. To address this limitation, the current study aims to determine the mechanical implications, such as stress distribution, maximum stresses, and maximum displacements, caused by the mismatch of total knee replacement components with the knee. A surgeon selects total knee components for a patient based on critical dimensions of the femur and tibia, and a method to accurately calculate these dimensions from a real knee was developed in the current study. The method captures boundary points whose curvature, computed with the standard curvature formula, exceeds a threshold chosen from the radius of curvature; within the captured points, the highest point is taken as the point of maximum height from a line drawn between the initial and final points, and extreme points are identified from sign changes in slope, giving multiple points on the boundary of the bones extracted from a patient's MRI image. The distance between the two farthest of these points in a specified direction is the basis for selecting the TKR components. The total knee replacement components were modeled in Geomagic Studio 12, the bones in Rhinoceros 5, the assembly of bones and components in SolidWorks 2013, and the finite element model of the assembly in HyperMesh 11; stress analysis and post-processing were done in ABAQUS 6.13. A static, implicit, nonlinear analysis was performed. Simulations were run for two conditions: standing (0° of flexion) and hyper-flexed (120° of flexion). To determine whether the mismatch has mechanical implications, a full model consisting of the femur, tibia, and fibula assembled with the total knee replacement components and a reduced model consisting of only the components were simulated separately; the results are discussed in this thesis. The effect of the change in ligament length at 120° of flexion was also studied in detail. This study brought out various aspects of the contact mechanics and kinematics between the components of the total knee replacement prosthesis.
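The boundary-point selection idea (keep high-curvature points, then take the farthest pair along a direction as a sizing dimension) can be sketched on a 2D contour. Everything below is hypothetical: the ellipse stands in for a bone boundary from an MRI slice, and the threshold and direction are illustrative choices, not the thesis procedure.

```python
import numpy as np

def discrete_curvature(pts):
    """Curvature |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2) estimated with finite differences."""
    x, y = pts[:, 0], pts[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Hypothetical contour: an ellipse standing in for a bone boundary extracted from MRI.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
contour = np.column_stack([40 * np.cos(t), 25 * np.sin(t)])   # units: mm

kappa = discrete_curvature(contour)
salient = contour[kappa > np.percentile(kappa, 90)]            # high-curvature candidates
i, j = salient[:, 0].argmin(), salient[:, 0].argmax()          # farthest pair along x
width = salient[j, 0] - salient[i, 0]
print(f"candidate sizing dimension along x: {width:.1f} mm")
```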
559

Evolving Legacy System's Features into Fine-grained Components Using Regression Test-Cases

Mehta, Alok 11 December 2002 (has links)
"Because many software systems used for business today are considered legacy systems, the need for software evolution techniques has never been greater. We propose a novel evolution methodology for legacy systems that integrates the concepts of features, regression testing, and Component-Based Software Engineering (CBSE). Regression test suites are untapped resources that contain important information about the features of a software system. By exercising each feature with its associated test cases using code profilers and similar tools, code can be located and refactored to create components. The unique combination of Feature Engineering and CBSE makes it possible for a legacy system to be modernized quickly and affordably. We develop a new framework to evolve legacy software that maps the features to software components refactored from their feature implementation. In this dissertation, we make the following contributions: First, a new methodology to evolve legacy code is developed that improves the maintainability of evolved legacy systems. Second, the technique describes a clear understanding between features and functionality, and relationships among features using our feature model. Third, the methodology provides guidelines to construct feature-based reusable components using our fine-grained component model. Fourth, we bridge the complexity gap by identifying feature-based test cases and developing feature-based reusable components. We show how to reuse existing tools to aid the evolution of legacy systems rather than re-writing special purpose tools for program slicing and requirement management. We have validated our approach on the evolution of a real-world legacy system. By applying this methodology, American Financial Systems, Inc. (AFS), has successfully restructured its enterprise legacy system and reduced the costs of future maintenance. "
560

Multivariate Quality Control Using Loss-Scaled Principal Components

Murphy, Terrence Edward 24 November 2004 (has links)
We consider a principal components based decomposition of the expected value of the multivariate quadratic loss function (MQL). The principal components are formed by scaling the original data by the contents of the loss constant matrix, which defines the economic penalty associated with specific variables being off their desired target values. We demonstrate the extent to which a subset of these "loss-scaled principal components" (LSPC) accounts for the two components of expected MQL, namely the trace-covariance term and the off-target vector product. We employ the LSPC to solve a robust design problem of full and reduced dimensionality with deterministic models that approximate the true solution, and demonstrate comparable results in less computational time. We also employ the LSPC to construct a test statistic, called loss-scaled T^2, for multivariate statistical process control. We show for one case that the proposed test statistic detects shifts in location faster than Hotelling's T^2 for variables with high weighting in the MQL. In addition, we introduce a principal component based decomposition of Hotelling's T^2 to diagnose the variables responsible for driving the location and/or dispersion of a subgroup of multivariate observations out of statistical control. We demonstrate the accuracy of this diagnostic technique on a data set from the literature and show its potential for diagnosing the loss-scaled T^2 statistic as well.
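A compact sketch of the loss-scaling idea follows: scale centred observations by a square root of the loss matrix, take the leading principal components of the scaled data, and monitor a T^2 statistic on those scores. The loss matrix, data, and retained-component count are hypothetical, and the dissertation's exact construction of the loss-scaled T^2 may differ.

```python
import numpy as np

# Loss-scaled principal components T^2 sketch (illustrative; not the dissertation's
# exact construction). The loss matrix L encodes economic weights per variable.
rng = np.random.default_rng(3)
n, p = 300, 4
L = np.diag([4.0, 1.0, 1.0, 0.25])            # hypothetical weights: variable 1 matters most
X = rng.standard_normal((n, p))               # in-control reference data

S = np.linalg.cholesky(L)                     # "square root" of the loss matrix
Z = (X - X.mean(axis=0)) @ S.T                # loss-scaled, centred observations

cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1][:2]         # keep two loss-scaled PCs
W, lam = eigvecs[:, order], eigvals[order]

def loss_scaled_t2(x_new):
    """T^2 of a new observation in the retained loss-scaled PC subspace."""
    z = (x_new - X.mean(axis=0)) @ S.T
    scores = z @ W
    return float(np.sum(scores**2 / lam))

shifted = X.mean(axis=0) + np.array([1.0, 0.0, 0.0, 0.0])  # shift in the heavily weighted variable
print("in-control T^2:", loss_scaled_t2(X.mean(axis=0)))
print("shifted    T^2:", loss_scaled_t2(shifted))
```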
