41

Hypersurfaces with Vanishing Hessian in P^4

Freitas, Gersica Valesca Lima de 15 August 2013 (has links)
Hesse claimed in [9] that an irreducible projective hypersurface in P^n defined by an equation with vanishing hessian determinant is necessarily a cone. Gordan and Noether proved in [6] that this is true for n ≤ 3 and constructed counterexamples for every n ≥ 4. Gordan-Noether and Franchetta gave a classification of the hypersurfaces in P^4 with vanishing hessian which are not cones; see [6] and [3]. Here we give a geometric approach to the classification proposed by Gordan-Noether, providing a classification of hypersurfaces with vanishing hessian in P^4, following the lines of Garbagnati-Repetto in [4].
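As an illustration of the counterexamples mentioned above, a short symbolic sketch (assuming SymPy; the example is the classical Perazzo cubic, not necessarily the one treated in the thesis) checks that its hessian determinant vanishes identically:

```python
# Symbolic check that the Perazzo cubic in P^4 has identically vanishing hessian.
# Illustrative sketch only; the thesis may use different notation.
import sympy as sp

x = sp.symbols('x0:5')                                  # homogeneous coordinates on P^4
f = x[0]*x[3]**2 + x[1]*x[3]*x[4] + x[2]*x[4]**2        # Perazzo cubic, not a cone

H = sp.hessian(f, x)                                    # 5x5 matrix of second partials
print(sp.simplify(H.det()))                             # prints 0: hessian vanishes

# The mechanism behind Gordan-Noether's construction is that the partial
# derivatives are algebraically dependent: f_x0 * f_x2 - f_x1**2 == 0.
print(sp.simplify(sp.diff(f, x[0])*sp.diff(f, x[2]) - sp.diff(f, x[1])**2))
```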
42

Anisotropic mesh construction and error estimation in the finite element method

Kunert, Gerd 13 January 2000 (has links)
In an anisotropic adaptive finite element algorithm one usually needs an error estimator that yields not only the error size but also the stretching directions and stretching ratios of the elements of a (quasi) optimal anisotropic mesh. However, these last two ingredients cannot be extracted from any of the known anisotropic a posteriori error estimators. Therefore a heuristic approach is pursued here: the desired information is provided by the so-called Hessian strategy. This strategy produces favourable anisotropic meshes which result in a small discretization error. The focus of this paper is on error estimation on anisotropic meshes. It is known that such error estimation is reliable and efficient only if the anisotropic mesh is aligned with the anisotropic solution. The main result here is that the Hessian strategy produces anisotropic meshes that show the required alignment with the anisotropic solution. The corresponding inequalities are proven, and the underlying heuristic assumptions are given in a stringent yet general form. Hence the analysis provides further insight into a particular aspect of anisotropic error estimation.
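The Hessian strategy can be sketched as follows (a minimal sketch under the usual interpolation-error heuristic; the function names and the boundary-layer example are invented): the eigenvectors of the Hessian of the solution give the stretching directions, and the element sizes are chosen inversely proportional to the square roots of the eigenvalue magnitudes.

```python
# Sketch of the Hessian strategy for anisotropic mesh design (illustrative only).
import numpy as np

def element_anisotropy(H, eps=1e-12):
    """H: 2x2 symmetric Hessian of the solution u at a point.
    Returns (directions, ratios): eigenvectors as columns, and relative
    element sizes h_i ~ 1/sqrt(|lambda_i|), normalized to the smallest."""
    lam, V = np.linalg.eigh(H)                 # stretching directions = eigenvectors
    sizes = 1.0 / np.sqrt(np.abs(lam) + eps)   # equidistribute interpolation error
    return V, sizes / sizes.min()              # stretch most where curvature is small

# Example: a boundary-layer solution u = exp(-y/delta) has a Hessian dominated
# by u_yy, so the element should be long in x and thin in y.
delta = 1e-2
H = np.array([[0.0, 0.0], [0.0, 1.0 / delta**2]])   # Hessian of exp(-y/delta) at y=0
V, ratios = element_anisotropy(H)
print(ratios)   # very large stretching ratio in x, ratio 1 in y
```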
44

Static And Transient Voltage Stability Assessment Of Hybrid Ac/Dc Power Systems

Lin, Minglan 10 December 2010 (has links)
Voltage stability is a challenging problem in the design and operation of terrestrial and shipboard power systems. DC links can be integrated into AC systems to increase transmission capacity or to enhance distribution performance. However, DC links introduce voltage stability issues related to the reactive power shortage caused by power converters, and multi-infeed DC systems make this phenomenon more complicated. In addition, shipboard power systems have unique characteristics, and some concepts and methodologies developed for terrestrial power systems need to be investigated and modified before they are extended to shipboard power systems. One goal of this work was to develop a systematic method for voltage stability assessment of hybrid AC/DC systems, independent of system configuration. Static and dynamic approaches have been used as complementary methods to address different aspects of voltage stability. The other goal was to develop or apply voltage stability indicators. Two classical indicators (the minimum eigenvalue and the loading margin) and an improvement (the 2nd-order performance indicator) have been used jointly to predict voltage stability, providing information on the system state and on the proximity to and mechanism of instability. The eliminated variable method has been introduced to calculate the partial derivatives of AC/DC systems for modal analysis. These methodologies and the associated indicators have been applied to an integrated shipboard power system including a DC zonal arrangement. The voltage stability assessment procedure has been performed for three test systems: the WSCC 3-machine 9-bus system, the benchmark integrated shipboard power system, and the modified IEEE RTS-96. The static simulation results identify the critical locations and the factors contributing to voltage instability, and screen the critical contingencies for dynamic simulation. The results obtained from the various static methods have been compared. The dynamic simulation results demonstrate the dynamic response of system components and benchmark the static simulation results.
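The minimum-eigenvalue indicator used in modal analysis can be sketched as follows (a minimal sketch with invented toy Jacobian blocks; the eliminated variable method itself is not reproduced here):

```python
# Minimal sketch of modal analysis for static voltage stability (illustrative).
# The reduced Jacobian J_R relates incremental reactive power to voltage:
#   dQ = J_R dV,  J_R = J_QV - J_Qth @ inv(J_Pth) @ J_PV
# A near-zero minimum eigenvalue of J_R signals proximity to voltage collapse.
import numpy as np

def reduced_jacobian(J_Pth, J_PV, J_Qth, J_QV):
    return J_QV - J_Qth @ np.linalg.solve(J_Pth, J_PV)

def min_mode(J_R):
    lam, V = np.linalg.eig(J_R)
    i = np.argmin(lam.real)
    return lam[i].real, V[:, i].real   # eigenvalue and participating bus pattern

# Toy 2-bus example with hypothetical Jacobian blocks:
J_Pth = np.array([[10.0, -2.0], [-2.0, 8.0]])
J_PV  = np.array([[1.0, 0.2], [0.3, 1.1]])
J_Qth = np.array([[0.5, 0.1], [0.2, 0.6]])
J_QV  = np.array([[4.0, -1.0], [-1.0, 0.9]])

lam_min, mode = min_mode(reduced_jacobian(J_Pth, J_PV, J_Qth, J_QV))
print(lam_min)   # the closer to zero, the closer the system is to instability
print(mode)      # buses with the largest entries participate most in the weak mode
```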
45

Comparative study of table layout analysis : Layout analysis solutions study for Swedish historical hand-written document

Liang, Xusheng January 2019 (has links)
Background. Information retrieval systems have become increasingly popular; they help people retrieve information more efficiently and accelerate daily tasks. Within this context, image processing technology plays an important role in transcribing the content of printed or handwritten documents into digital data for information retrieval systems. This transcription procedure is called document digitization. In it, image processing techniques such as layout analysis and word recognition are employed to segment the document content and transcribe the image content into words. A Swedish company (ArkivDigital® AB) needs to transcribe its document data into digital data. Objectives. The aim of this study is to find an effective solution for extracting the document layout of Swedish handwritten historical documents, which are characterized by tabular forms containing handwritten content. The outcomes of applying OCRopus, OCRfeeder, traditional image processing techniques, and machine learning techniques to Swedish historical handwritten documents are compared and studied. Methods. Implementation and experimentation are used to develop three comparative solutions: Hessian filtering with a mask operation; Gabor filtering with a morphological open operation; and Gabor filtering with machine learning classification. In the last solution, different alternatives were explored to build a document layout extraction pipeline: first, the Hessian filter and the Gabor filter are evaluated; second, images are filtered with the better of the two and the filtered image is refined with the Hough line transform; third, transfer-learning features and custom features are extracted; fourth, a classifier is fed with the extracted features and the result is analyzed. After implementing all the solutions, a sample set of Swedish historical handwritten documents is processed with each of them and their performance is compared by means of a survey. Results. Both open-source OCR systems, OCRopus and OCRfeeder, fail to deliver usable output because they are designed to handle general document layouts rather than table layouts. The traditional image processing solutions work in more than half of the cases, but not well. Combining traditional image processing and machine learning gives the best result, but at great time cost. Conclusions. The results show that existing OCR systems cannot carry out the layout analysis task on these Swedish historical handwritten documents. Traditional image processing techniques are capable of extracting the general table layout in these documents. By introducing machine learning, a better and more accurate table layout can be extracted, but at a higher time cost. / Scalable resource-efficient systems for big data analytics
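The Hessian filtering step can be sketched as follows (a minimal sketch assuming dark table rules on a light background; the scale and threshold values are invented):

```python
# Sketch: Hessian-based ridge filtering to enhance table rule lines (illustrative).
# Line-like structures have one large-magnitude Hessian eigenvalue and one near
# zero; thresholding the dominant eigenvalue highlights the table grid.
import numpy as np
from scipy import ndimage

def hessian_ridge_response(img, sigma=2.0):
    """img: 2D grayscale array, dark lines on a light background."""
    Hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    Hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    # Eigenvalues of the 2x2 Hessian, computed in closed form per pixel.
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    lam1 = (Hxx + Hyy) / 2.0 + tmp          # larger eigenvalue
    # Dark ridges give a large positive second derivative across the line.
    return np.maximum(lam1, 0.0)

# Usage sketch: threshold the response, then apply a mask/morphological step.
img = np.ones((64, 64)); img[32, :] = 0.0   # toy image with one dark rule
resp = hessian_ridge_response(img, sigma=1.5)
mask = resp > 0.5 * resp.max()              # crude line mask
print(mask[32].sum(), mask[10].sum())       # line row fires, background does not
```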
46

A Comparative Study of Four Dimension Reduction Methods: SIR, SAVE, SIR-II, and pHd

方悟原, Fang, Wu-Yuan Unknown Date (has links)
The focus of this study is dimension reduction and an overview of four methods frequently cited in the literature: SIR, SAVE, SIR-II, and pHd. The definitions of dimension reduction proposed by Li (1991) (y = g(x, ε) = g1(βx, ε)) and by Cook (1994) (the conditional density satisfies f(y | x) = f(y | βx)) are briefly reviewed, along with Cook's (1994) discussion of the minimum dimension reduction subspace. In addition, we propose a possible definition suited to pHd (E(y | x) = E(y | βx), i.e., the conditional expectation of y is the same before and after the reduction), and we find that the subspace induced by this definition is contained in the subspace defined following Cook (1994). We then take a closer look at the basic ideas behind the four methods and supplement them with further explanations and proofs where necessary. Equivalent conditions related to the four methods, which can be used to locate the "right" directions, are presented and compared to establish the relationships among the methods. Two models (y = bx + ε and y = |z| + ε) are used to demonstrate the methods and to test whether they recover the correct directions. We learn that when the explanatory variable x is multivariate normal, none of the four methods will retain directions that can be reduced away, while directions that should be retained are not always retained. SAVE, however, retains at least as many of the "right" directions as any of the other three used alone, and applying SIR together with SIR-II is exactly equivalent to using SAVE. We also find that the prerequisite that E(y | x) be twice differentiable does not seem to be necessary when pHd is applied.
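For concreteness, here is a minimal sketch of SIR, the first of the four methods, assuming equal-count slices of the response (illustrative, not the thesis's implementation):

```python
# Sketch of Sliced Inverse Regression (Li, 1991), illustrative only.
# SIR estimates the dimension-reduction directions from the eigenvectors of
# Cov(E[z | y]) over slices of y, where z is the standardized predictor.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    n, p = X.shape
    mu, C = X.mean(axis=0), np.cov(X, rowvar=False)
    Ci = np.linalg.inv(np.linalg.cholesky(C))      # whitening: z = Ci @ (x - mu)
    Z = (X - mu) @ Ci.T
    order = np.argsort(y)
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):  # slice observations by y
        m = Z[chunk].mean(axis=0)                  # slice mean of z
        M += (len(chunk) / n) * np.outer(m, m)     # weighted cov of slice means
    lam, V = np.linalg.eigh(M)
    B = Ci.T @ V[:, ::-1][:, :n_dirs]              # map back to x-scale directions
    return B / np.linalg.norm(B, axis=0)

# Toy check on y = b'x + noise: the leading SIR direction should align with b.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
b = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ b + 0.1 * rng.normal(size=2000)
print(sir_directions(X, y).ravel())   # roughly proportional to b (up to sign)
```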
47

Uncalibrated robotic visual servo tracking for large residual problems

Munnae, Jomkwun 17 November 2010 (has links)
In visually guided control of a robot, a large-residual problem occurs when the robot configuration is not in the neighborhood of the target acquisition configuration. Most existing uncalibrated visual servoing algorithms use quasi-Gauss-Newton methods, which are effective for small-residual problems. The solution used in this study switches between a full quasi-Newton method for the large-residual case and a quasi-Gauss-Newton method for the small-residual case. Visual servoing that handles large-residual problems while tracking a moving target has not previously appeared in the literature. For large-residual problems, various Hessian approximations are introduced, including an approximation of the entire Hessian matrix, the dynamic BFGS (DBFGS) algorithm, and two distinct approximations of the residual term, the modified BFGS (MBFGS) algorithm and the dynamic full Newton method with BFGS (DFN-BFGS) algorithm. Because the quasi-Gauss-Newton method has the advantage of fast convergence, the quasi-Gauss-Newton step is used when the iterate is sufficiently near the desired solution. A switching algorithm combines a full quasi-Newton method and a quasi-Gauss-Newton method: switching occurs if the image error norm is less than a heuristically selected switching criterion. An adaptive forgetting factor called the dynamic adaptive forgetting factor (DAFF) is presented. The DAFF method is a heuristic scheme that determines the forgetting factor value based on the image error norm. Compared to other existing adaptive forgetting factor schemes, the DAFF method yields the best performance in both convergence time and RMS error. Simulation results verify the validity of the proposed switching algorithms with the DAFF method for large-residual problems. The switching MBFGS algorithm with the DAFF method significantly improves tracking performance in the presence of noise. This work is the first model-independent, vision-guided control scheme capable of stably tracking a moving target with a robot in the large-residual case.
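The switching rule can be sketched as follows (a minimal sketch assuming a least-squares image error 0.5·||e||²; the threshold, regularization, and names are stand-ins, not the dissertation's exact algorithm):

```python
# Sketch of switching between a Gauss-Newton step and a BFGS quasi-Newton step
# based on the image error norm (illustrative; constants are invented).
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of Hessian approximation B, with step s and
    gradient difference y = g_new - g_old."""
    if abs(s @ y) < 1e-12:
        return B                                   # skip on tiny curvature pairs
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

def servo_step(e, J, B, switch_tol=0.1):
    """e: image error, J: estimated image Jacobian, B: quasi-Newton Hessian
    approximation of 0.5*||e||^2. Returns the joint-space step dq."""
    g = J.T @ e                                    # gradient of 0.5 * e'e
    if np.linalg.norm(e) < switch_tol:
        # Small residual: Gauss-Newton, dropping the second-order residual term.
        return -np.linalg.solve(J.T @ J + 1e-9 * np.eye(J.shape[1]), g)
    # Large residual: full quasi-Newton step using B, which is kept current
    # between iterations via bfgs_update(B, dq, g_new - g_old).
    return -np.linalg.solve(B, g)
```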
48

Design of nearly linear-phase recursive digital filters by constrained optimization

Guindon, David Leo 24 December 2007 (has links)
The design of nearly linear-phase recursive digital filters using constrained optimization is investigated. The proposed design technique is expected to be useful in applications where both magnitude and phase response specifications must be satisfied. The overall constrained optimization method is formulated as a quadratic programming problem based on Newton's method. The objective function, its gradient vector and Hessian matrix, and a set of linear constraints are derived. In this analysis, the independent variables are assumed to be the transfer function coefficients. The filter stability issue and convergence efficiency, as well as a 'real axis attraction' problem, are addressed by integrating the corresponding bounds into the linear constraints of the optimization method. Two initialization techniques for providing efficient starting points for the optimization are investigated, and the relation between the zero and pole positions and the group delay is examined. Based on these ideas, a new objective function is formulated in terms of the zeros and poles of the transfer function expressed in polar form and integrated into the optimization process. The coefficient-based and polar-based objective functions are tested and compared, and it is shown that designs using the polar-based objective function produce improved results. Finally, several other modern methods for the design of nearly linear-phase recursive filters are compared with the proposed method: an elliptic design combined with an optimal equalization technique that uses a prescribed group delay, an optimal design method with robust stability using conic-quadratic-programming updates, and an unconstrained optimization technique that uses parameterization to guarantee filter stability. The proposed method generates similar or improved results in all comparative examples, suggesting that it is an attractive alternative for nearly linear-phase recursive filters of orders up to about 30.
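The kind of objective such a design minimizes can be sketched as follows (a minimal sketch assuming SciPy's freqz and group_delay; the grid, targets, and weighting are invented, and the starting point is a plain elliptic design rather than the thesis's initialization techniques):

```python
# Sketch: a combined magnitude/phase-linearity objective for a recursive (IIR)
# filter given its transfer-function coefficients (illustrative only).
import numpy as np
from scipy import signal

def design_objective(b, a, w_pass, mag_target, tau_target, weight=1.0):
    """b, a: transfer-function coefficients; w_pass: passband grid (rad/sample).
    Penalizes magnitude error plus group-delay deviation from a constant,
    the 'nearly linear phase' requirement."""
    _, h = signal.freqz(b, a, worN=w_pass)
    _, tau = signal.group_delay((b, a), w=w_pass)
    mag_err = np.abs(h) - mag_target
    gd_err = tau - tau_target
    return np.sum(mag_err**2) + weight * np.sum(gd_err**2)

# Toy usage, evaluating a 4th-order elliptic lowpass as a starting point:
b, a = signal.ellip(4, 0.5, 40, 0.3)          # cutoff at 0.3 * Nyquist
w = np.linspace(0.01, 0.25 * np.pi, 128)      # passband frequency grid
print(design_objective(b, a, w, mag_target=1.0, tau_target=6.0))
```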
50

Detection and tracking of leukocytes in intravital microscopy images via spatio-temporal processing

Silva, Bruno César Gregório da 19 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) / Over the last few years, a large number of researchers have directed their efforts and interests toward the in vivo study of the cellular and molecular mechanisms of leukocyte-endothelial interactions in the microcirculation of many tissues under different inflammatory conditions. The main goal of these studies is to develop more effective therapeutic strategies for the treatment of inflammatory and autoimmune diseases. Nowadays, the analysis of leukocyte-endothelial interactions in small animals is performed by visual assessment of intravital microscopy image sequences. Besides being time consuming, this procedure may cause visual fatigue in the observer and therefore generate false statistics. In this context, this work aims to study and develop computational techniques for the automatic detection and tracking of leukocytes in intravital video microscopy. To that end, results from frame-by-frame processing (2D, spatial analysis) are combined with those from the three-dimensional analysis (3D = 2D+t, spatio-temporal analysis) of the volume formed by stacking the video frames. The main technique used in both processing stages is based on the analysis of the eigenvalues of the local Hessian matrix. While the 2D image processing targets leukocyte detection without regard to tracking, the 2D+t processing supports the dynamic analysis of cell movement (tracking) and is able to predict cell movements in cases of occlusion, for example. The intravital video microscopy used in this work was obtained from a study of multiple sclerosis in mice. Noise reduction and image registration techniques comprise the preprocessing step, and techniques for the definition and analysis of cellular paths comprise the postprocessing step. Results of the 2D and 2D+t processing steps, compared with conventional visual analysis, demonstrate the effectiveness of the proposed approach. / FAPESP: 2013/26171-6
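The spatio-temporal Hessian analysis can be sketched as follows (a minimal sketch; the Gaussian scale, the blobness measure, and the toy "cell" are invented): bright, roughly blob-like cells give all-negative eigenvalues of the local Hessian of the smoothed 2D+t volume.

```python
# Sketch: Hessian-eigenvalue blobness on a 2D+t volume (illustrative only).
import numpy as np
from scipy import ndimage

def hessian_eigvals_3d(vol, sigma=2.0):
    """vol: 3D array (t, y, x). Returns per-voxel sorted Hessian eigenvalues."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1                          # mixed Gaussian derivatives
            H[..., i, j] = ndimage.gaussian_filter(vol, sigma, order=order)
    return np.linalg.eigvalsh(H)                   # ascending eigenvalues per voxel

# Toy volume: one bright "cell" that brightens, moves one pixel per frame, fades.
t, y, x = np.mgrid[0:16, 0:32, 0:32]
vol = np.exp(-((t - 8.0) ** 2 / 18.0 + ((y - 16.0) ** 2 + (x - 8.0 - t) ** 2) / 8.0))
lam = hessian_eigvals_3d(vol, sigma=2.0)
blob = np.where(lam[..., 2] < 0, -lam.sum(axis=-1), 0.0)   # all eigvals negative
print(np.unravel_index(np.argmax(blob), vol.shape))        # near the cell center
```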
