141 |
Factors affecting store brand purchase in the Greek grocery market. Sarantidis, Paraskevi. January 2012 (has links)
This study is an in-depth investigation of the factors that affect store brand purchases. It aims to help both retailers and manufacturers predict store brand purchases through an improved understanding of the effects of three latent variables: customer satisfaction, loyalty to the store (expressed through word-of-mouth), and trust in store brands. An additional aim is to explore variations in the level of store brand adoption and the inter-relationships between the selected constructs. Data were collected through a telephone survey of those responsible for household grocery shopping who shop at the nine leading grocery retailers in Greece. A total of 904 respondents completed the questionnaire, based on a quota of 100 respondents for each of the nine retailers. Data were analyzed through chi-square tests, analysis of variance and partial least squares. The proposed model was tested by partial least squares path modeling, which related the latent variables to the dependent manifest variable: store brand purchases. The findings provide empirical support that store brand purchases are positively influenced by consumers' perceived level of trust in store brands. The consumer decision-making process for store brands is complex, and establishing customer satisfaction and loyalty to the store does not appear to influence store brand purchases or the level of trust in the retailer's store brands in the specific context under study. Consequently, the most appropriate way to influence store brand purchases in the Greek market is to increase the level of trust in the retailer's store brands. It is suggested that retailers should therefore invest in trust-building strategies for their own store brands and try to capitalize on their brand equity by using a family brand policy. Theoretical and managerial implications of the findings are discussed and opportunities for further research are suggested.
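To make the modeling step concrete, here is a minimal sketch of relating indicator blocks for the three constructs to a purchase outcome. It uses scikit-learn's PLSRegression only as a rough stand-in for PLS path modeling (which estimates latent scores iteratively in dedicated PLS-SEM software); the item blocks, effect sizes and simulated data are hypothetical, not the study's.

    # A loose illustration, not the study's model: PLSRegression stands in
    # for PLS path modeling. Item blocks and effects are hypothetical.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n = 904                                 # respondents, as in the survey
    trust = rng.normal(size=(n, 4))         # hypothetical 4-item trust block
    satisfaction = rng.normal(size=(n, 3))  # hypothetical satisfaction items
    loyalty_wom = rng.normal(size=(n, 3))   # hypothetical word-of-mouth items
    X = np.hstack([trust, satisfaction, loyalty_wom])
    # Simulate an outcome in which only trust drives purchases, echoing the finding.
    y = trust @ np.array([0.5, 0.4, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

    pls = PLSRegression(n_components=2).fit(X, y)
    print("R^2:", pls.score(X, y))
    print("weights:", pls.coef_.ravel())    # trust items should dominate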
|
142 |
Multivariate design of molecular docking experiments: An investigation of protein-ligand interactions. Andersson, David. January 2010 (has links)
To be able to make informed decisions regarding the research of new drug molecules (ligands), it is crucial to have access to information regarding the chemical interaction between the drug and its biological target (protein). Computer-based methods have an established role in drug research today and, by using methods such as molecular docking, it is possible to investigate the way in which ligands and proteins interact. Despite the acceleration in computer power experienced in the last decades, many problems persist in modelling these complicated interactions. The main objective of this thesis was to investigate and improve molecular modelling methods aimed at estimating protein-ligand binding. In order to do so, we have utilised chemometric tools, e.g. design of experiments (DoE) and principal component analysis (PCA), in the field of molecular modelling. More specifically, molecular docking was investigated as a tool for reproduction of ligand poses in protein 3D structures and for virtual screening. Adjustable parameters in two docking programs were varied using DoE, and parameter settings were identified which led to improved results. In an additional study, we explored the nature of ligand-binding cavities in proteins, since they are important factors in protein-ligand interactions, especially in the prediction of the function of newly found proteins. We developed a strategy, comprising a new set of descriptors and PCA, to map proteins based on the physicochemical properties of their cavities. Finally, we applied our developed strategies to design a set of glycopeptides which were used to study autoimmune arthritis. A combination of docking and statistical molecular design, synthesis and biological evaluation led to new binders for two different class II MHC proteins and recognition by a panel of T-cell hybridomas. New and interesting SAR conclusions could be drawn, and the results will serve as a basis for selection of peptides to include in in vivo studies.
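As a hedged illustration of the DoE idea applied to docking parameters, the sketch below enumerates a two-level full factorial design over three parameters and ranks the settings by a placeholder response. The parameter names and the scoring function are invented; in a real study each design point would launch a docking run and the response would be, for example, pose RMSD.

    # Illustrative two-level full factorial screen of docking parameters.
    # score_pose() is a hypothetical stand-in for an actual docking run.
    import itertools

    factors = {                       # hypothetical parameter levels
        "exhaustiveness": (4, 16),
        "grid_spacing":   (0.375, 1.0),
        "num_poses":      (10, 50),
    }

    def score_pose(settings):         # placeholder response: lower is better
        return (16 - settings["exhaustiveness"]) * 0.1 \
             + settings["grid_spacing"] + 10.0 / settings["num_poses"]

    designs = [dict(zip(factors, levels))
               for levels in itertools.product(*factors.values())]
    best = min(designs, key=score_pose)
    print(best, score_pose(best))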
|
143 |
以基因演算法優化最小二乘支持向量機於坐標轉換之研究 / Coordinate Transformation Using Genetic Algorithm Based Least Square Support Vector Machine. 黃鈞義. Unknown Date (has links)
There are two coordinate systems based on different geodetic datums in use in the Taiwan region: TWD67 (Taiwan Datum 1967) and TWD97 (Taiwan Datum 1997). In practice, coordinates must be transformed between the two datums, and many transformation methods are available, such as the six-parameter transformation and the support vector machine (SVM). The least squares support vector machine (LSSVM) is an SVM variant and a non-linear model. LSSVM requires only a few parameters, can cope with small samples, non-linearity, high dimensionality and local minima, and has been successfully applied in fields such as image classification and statistical regression.

This study applies LSSVM with different kernel functions, linear (LIN), polynomial (POLY) and radial basis function (RBF), to coordinate transformation between TWD97 and TWD67. A genetic algorithm is used to tune the system parameters of the LSSVM with the RBF kernel (hereafter RBF+GA), searching for a better parameter combination for the transformation. Simulated and real cadastral data sets are used to test the transformation accuracies of the LSSVM methods and of the six-parameter transformation.

The results show that the transformation accuracy of RBF+GA is better than that of RBF without parameter optimization in every test area, and also better than that of the six-parameter transformation. After parameter optimization, the accuracy improvement rates of RBF+GA over RBF are as follows: (1) simulated test areas: with reference-to-check point ratios of 1:1, 2:1, 3:1, 1:2 and 1:3, the improvement rates are 15.2%, 21.9%, 33.2%, 12.0% and 11.7%, respectively; (2) real test areas: the improvement rates for the Hualien, Taichung and Taipei data sets are 20.1%, 32.4% and 22.5%, respectively.
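As a concrete illustration of the approach, the sketch below implements a minimal LSSVM regressor with an RBF kernel, solving the standard LSSVM linear system, and tunes (gamma, sigma) with a deliberately simple evolutionary loop of selection and mutation (no crossover). The coordinates, parameter ranges and population settings are synthetic stand-ins, not the thesis's data or GA configuration.

    # Minimal LSSVM regression with an RBF kernel plus a toy evolutionary
    # search over (gamma, sigma). Data are synthetic stand-ins for
    # TWD67 inputs and TWD97 targets.
    import numpy as np

    def rbf(A, B, sigma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_fit(X, y, gamma, sigma):
        n = len(X)
        K = rbf(X, X, sigma) + np.eye(n) / gamma      # regularized kernel matrix
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K]])
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        b, alpha = sol[0], sol[1:]
        return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 1000, size=(60, 2))            # synthetic "TWD67" points
    y = 0.3 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(scale=0.05, size=60)
    tr, ck = np.arange(40), np.arange(40, 60)         # reference / check points

    def fitness(g, s):                                # RMSE on the check points
        pred = lssvm_fit(X[tr], y[tr], g, s)(X[ck])
        return np.sqrt(((pred - y[ck]) ** 2).mean())

    pop = rng.uniform([1, 10], [1000, 500], size=(20, 2))
    for _ in range(30):                               # evolve: keep best, mutate
        pop = pop[np.argsort([fitness(g, s) for g, s in pop])]
        pop[10:] = pop[:10] * rng.normal(1.0, 0.2, size=(10, 2))
        pop = np.clip(pop, [1, 10], [1000, 500])
    gamma, sigma = pop[0]
    print("best (gamma, sigma):", gamma, sigma, "RMSE:", fitness(gamma, sigma))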
|
144 |
多期最適資產配置:一般化最小平方法之應用 / Multi-period optimal asset allocation: An application of the generalized least squares method. 劉家銓. Unknown Date (has links)
This paper deals with asset-liability management for insurers and pension funds. It extends Huang (2004), which derives a closed-form solution for multi-period optimal asset allocation but leaves two open issues: short selling is allowed, and funds can be injected only at the start of the planning horizon rather than over multiple periods. Both restrictions are problematic in practical market operations, so this paper releases them and lets the model solve a more general asset-liability management problem.

We use the standard asset classes held by pension and insurance funds: short-term bonds, consols, index-linked gilts (ILG) and equities. Four thousand Monte Carlo simulation paths of the Wilkie investment model (1995) are generated for the annual returns of the four assets and the annual growth rate of liabilities, and these simulated values are used to find the optimal investment proportions and contribution amounts. The problem is formulated as a quadratic function of the decision variables and solved by generalized least squares (GLS); the main advantages of this approach are that GLS has a unique solution and is computed very quickly by software, making it highly efficient. Two settings are examined: a single-contribution problem, where funds are injected only at the beginning, and a multi-period contribution problem, where funds can be injected in each period of the planning horizon. In both cases the objective function can be written in least-squares form, so in addition to producing a reasonable asset allocation and solving the multi-period contribution problem, this paper focuses on providing a fast and accurate method for the asset allocation problem.
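The least-squares formulation can be sketched in a stripped-down, single-period form: choose asset weights so that simulated terminal wealth tracks simulated liabilities, which is a linear least-squares problem with a unique solution. The returns and liabilities below are synthetic stand-ins for the Wilkie-model simulations, and the practical constraints (budget, no short selling) and multi-period contribution decisions of the thesis are omitted.

    # Stripped-down least-squares asset allocation over simulated scenarios.
    # Returns and liabilities are synthetic; constraints are omitted.
    import numpy as np

    rng = np.random.default_rng(0)
    n_scenarios, n_assets = 4000, 4          # bonds, consols, ILGs, equities
    gross_returns = 1 + rng.normal(          # hypothetical one-period returns
        loc=[0.03, 0.04, 0.035, 0.08],
        scale=[0.02, 0.05, 0.04, 0.18],
        size=(n_scenarios, n_assets))
    initial_fund = 100.0
    liability = 100.0 * (1 + rng.normal(0.05, 0.03, size=n_scenarios))

    # Minimize sum_s (initial_fund * r_s . w - liability_s)^2 over weights w.
    A = initial_fund * gross_returns
    w, *_ = np.linalg.lstsq(A, liability, rcond=None)
    print("weights:", w, "sum:", w.sum())    # unconstrained: may not sum to 1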
|
145 |
Feature Selection under Multicollinearity & Causal Inference on Time Series. Bhattacharya, Indranil. January 2017 (has links) (PDF)
In this work, we study and extend algorithms for Sparse Regression and Causal Inference problems. Both problems are fundamental in the area of Data Science.
The goal of the regression problem is to find the "best" relationship between an output variable and input variables, given samples of the input and output values. We consider sparse regression under a high-dimensional linear model with strongly correlated variables, a situation which cannot be handled well by many existing model selection algorithms. We study the performance of popular feature selection algorithms such as LASSO, Elastic Net, BoLasso, Clustered Lasso as well as Projected Gradient Descent under this setting, in terms of their running time, stability and consistency in recovering the true support. We also propose a new feature selection algorithm, BoPGD, which first clusters the features based on their sample correlation and then performs sparse estimation using a bootstrapped variant of the projected gradient descent method, with projection onto the non-convex L0 ball. We attempt to characterize the efficiency and consistency of our algorithm by performing a host of experiments on both synthetic and real world datasets.
Discovering causal relationships, beyond mere correlation, is widely recognized as a fundamental problem. Causal inference problems use observations to infer the underlying causal structure of the data generating process. The input to these problems is either a multivariate time series or i.i.d. sequences, and the output is a feature causal graph, where the nodes correspond to the variables and the edges capture the direction of causality. For high dimensional datasets, determining the causal relationships becomes a challenging task because of the curse of dimensionality. Graphical modeling of temporal data based on the concept of "Granger causality" has gained much attention in this context. The blend of Granger methods with model selection techniques, such as LASSO, enables efficient discovery of a "sparse" subset of causal variables in high dimensional settings. However, these temporal causal methods use an input parameter, L, the maximum time lag: the maximum gap in time between the occurrence of the output phenomenon and the causal input stimulus. In many situations of interest, the maximum time lag is not known, and indeed, finding the range of causal effects is an important problem. In this work, we propose and evaluate a data-driven and computationally efficient method for Granger causality inference in the Vector Auto Regressive (VAR) model without foreknowledge of the maximum time lag. We present two algorithms, Lasso Granger++ and Group Lasso Granger++, which not only construct the hypothesis feature causal graph, but also simultaneously estimate a value of the maximum lag (L) for each variable by balancing the trade-off between goodness of fit and model complexity.
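A hedged sketch of the general Lasso-Granger idea (not the Lasso Granger++ algorithm itself): build a lagged design matrix for a target series, fit a Lasso, read nonzero coefficient groups as candidate causal parents, and choose the maximum lag with a simple BIC-style fit/complexity trade-off. The data-generating process and penalty value below are invented for illustration.

    # Lasso-Granger-style lag and parent selection on a toy VAR series.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    T, p = 500, 3
    X = rng.normal(size=(T, p))
    for t in range(2, T):                   # series 0 driven by series 1 at lag 2
        X[t, 0] += 0.8 * X[t - 2, 1]

    def lagged_design(X, target, L):        # row t holds lags 1..L of all series
        rows = [X[t - L:t][::-1].ravel() for t in range(L, len(X))]
        return np.asarray(rows), X[L:, target]

    best = None
    for L in range(1, 6):                   # candidate maximum lags
        Z, y = lagged_design(X, target=0, L=L)
        model = Lasso(alpha=0.05).fit(Z, y)
        rss = ((y - model.predict(Z)) ** 2).sum()
        k = (model.coef_ != 0).sum()
        bic = len(y) * np.log(rss / len(y)) + k * np.log(len(y))
        if best is None or bic < best[0]:
            best = (bic, L, model.coef_.reshape(L, p))
    print("chosen maxlag:", best[1])
    print("nonzero (lag-1, series):", list(zip(*np.nonzero(best[2]))))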
|
146 |
Métodos sem malha: aplicações do Método de Galerkin sem elementos e do Método de Interpolação de Ponto em casos estruturais / Meshless methods: applications of the element-free Galerkin method and the point interpolation method in structural cases. Franklin Delano Cavalcanti Leitão. 19 February 2010 (links)
Although meshless methods are intensively studied in many countries at the forefront of scientific knowledge, they are still little explored by Brazilian universities. In order to promote their wider diffusion, or, for many readers, to provide an introduction, this dissertation seeks to build an understanding of meshless methods based on applications in solid mechanics. To that end, the primary concepts of meshless methods are presented, together with their historical development from their origin in the smoothed particle hydrodynamics method up to the partition of unity method, their most general form. Within this context, the most traditional meshless method, the element-free Galerkin method, was investigated in detail, along with a distinct alternative, the point interpolation method. Through applications to the analysis of bars and plates in plane stress, the characteristics, strengths and weaknesses of these methods are presented in comparison with traditional methods such as the finite element method. A study is also carried out in an important application area of meshless methods, fracture mechanics, seeking to understand how the crack is represented computationally, in particular by means of the visibility and diffraction criteria. Using these criteria and the concepts of fracture mechanics, the stress intensity factor is computed via the J-integral concept.
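At the core of the element-free Galerkin method is the moving least squares (MLS) approximation, which the 1D sketch below illustrates: every evaluation point gets its own weighted least-squares fit over nearby nodes, with no mesh connecting them. The linear basis, cubic spline weight and support radius are common textbook choices, not taken from the dissertation.

    # 1D moving least squares approximation of a test function from nodes.
    import numpy as np

    nodes = np.linspace(0.0, 1.0, 11)
    u_nodes = np.sin(2 * np.pi * nodes)           # nodal values of a test function
    support = 0.35                                # weight-function support radius

    def weight(r):                                # cubic spline weight, r = |x-xi|/d
        r = np.clip(r, 0.0, 1.0)                  # zero weight beyond the support
        return np.where(r <= 0.5,
                        2/3 - 4*r**2 + 4*r**3,
                        4/3 - 4*r + 4*r**2 - (4/3)*r**3)

    def mls(x):
        w = weight(np.abs(x - nodes) / support)
        P = np.column_stack([np.ones_like(nodes), nodes])   # linear basis [1, x]
        A = P.T @ (w[:, None] * P)                # moment matrix
        B = P.T @ (w * u_nodes)
        a = np.linalg.solve(A, B)                 # local fit coefficients
        return a[0] + a[1] * x

    xs = np.linspace(0, 1, 5)
    print([round(mls(x), 3) for x in xs])
    print([round(v, 3) for v in np.sin(2 * np.pi * xs)])  # reference values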
|
147 |
Caractérisation du rayonnement acoustique d'un rail à l'aide d'un réseau de microphones / Spatial characterization of the wheel/rail contact noise by a multi-sensor method. Faure, Baldrik. 22 September 2011 (has links)
In France, railway transport has been boosted by the expansion of the high-speed rail network and the reintroduction of tram networks in many city centres. In this context, the reduction of noise pollution has become a crucial issue for its development. In order to act effectively at the source, it is necessary to precisely identify and study the sources responsible for this nuisance at train pass-by. Among the possible approaches, microphone arrays and the associated signal processing techniques are particularly well suited to the characterization of omnidirectional, uncorrelated moving point sources.

For speeds up to 300 km/h, rolling noise is the main railway noise source. It arises from the acoustic radiation of elements such as the wheels, the rail and the sleepers. The rail, whose contribution to rolling noise dominates at mid frequencies (from approximately 500 Hz to 1000 Hz), is an extended coherent source for which classical array processing methods are inappropriate.

The characterization method proposed in this thesis is an inverse parametric optimization method that uses the acoustic signals measured by a microphone array. The unknown parameters of a vibro-acoustic model are estimated by minimizing a least-squares criterion applied to the entries of the measured and modelled spectral matrices at the array. In the vibro-acoustic model, the rail is treated as a cylindrical monopole whose lengthwise amplitude distribution follows that of the vibratory velocities. The models proposed for computing these velocities describe vibration waves propagating along the rail on both sides of each forcing point. Each wave is characterized by an amplitude at the forcing point, a real structural wavenumber and a decay rate. These parameters are estimated by minimizing the criterion and are then used to reconstruct the acoustic field radiated by the rail.

First, simulations are performed to assess the performance of the proposed method in the case of vertical point excitations. In particular, its robustness to additive noise and to uncertainties in the model parameters assumed to be known is tested, and the effect of using simplified models is also investigated. The results show that the method is efficient and robust for estimating the amplitudes of the excitations nearest to the array, whereas the estimation of the other parameters improves when the array is shifted away from the contact points. The wavenumber is generally well estimated over the entire frequency range studied, and when the decay rate is low, a classical plane-wave beamforming technique may be sufficient. For the decay rate itself, the low sensitivity of the criterion limits the efficiency of the proposed method.

Finally, measurements were performed to verify some of the simulation results. Exciting an experimental rail with an impact hammer first validated the vibratory model for vertical flexural waves. The parametric optimization method was then tested by exciting the rail vertically with a modal shaker. The main simulation results were reproduced, and particular behaviours due to the presence of several waves in the rail were observed, opening the prospect of generalizing the vibratory model used.
|
148 |
Analysis, Diagnosis and Design for System-level Signal and Power Integrity in Chip-package-systems. Ambasana, Nikita. January 2017 (links) (PDF)
The Internet of Things (IoT) has ushered in an age where low-power sensors generate data that are communicated to a back-end cloud for massive data computation tasks. From the hardware perspective, this implies the co-existence of several power-efficient sub-systems for sensing and communication at the sensor nodes and high-speed processors in the cloud back-end. The package-board system-level design plays a crucial role in determining the performance of such low-power sensors and high-speed computing and communication systems. Although several commercial solutions exist for electromagnetic and circuit analysis and verification, problem diagnosis and design tools are lacking, leading to longer design cycles and non-optimal system designs. This work aims at developing methodologies for faster analysis, sensitivity-based diagnosis and multi-objective design for the system-level signal integrity and power integrity of such package-board layouts.
The first part of this work develops a methodology that enables faster and more exhaustive design-space analysis. Electromagnetic analysis of packages and boards can be performed in the time domain, resulting in metrics like eye height/width, and in the frequency domain, resulting in metrics like s-parameters and z-parameters. Generating eye height/width at very low bit error rates requires long bit sequences in time-domain circuit simulation, which is compute-intensive. This work explores learning-based modelling techniques that rapidly map relevant frequency-domain metrics, like differential insertion loss and crosstalk, to eye height/width, thereby facilitating a full-factorial design-space sweep. Numerical results obtained with an artificial neural network as well as a least squares support vector machine on SATA 3.0 and PCIe Gen 3 interfaces show less than 2% average error, with an order-of-magnitude speed-up in eye height/width computation.
Accurate power distribution network design is crucial for low-power sensors as well as for cloud server boards that require multiple supply levels. Achieving target power-ground noise levels for complex low-power power distribution networks requires several design and analysis cycles. Although various classes of analysis tools, 2.5D and 3D, are commercially available, design tools remain limited. In the second part of the thesis, a frequency-domain mesh-based sensitivity formulation for DC and AC impedance (z-parameters) is proposed. This formulation enables diagnosis of the layout regions with maximum impact on achieving target specifications. The sensitivity information is also used for linear approximation of impedance-profile updates under small mesh variations, enabling faster analysis.
To enable the design of power delivery networks that achieve a target impedance, a mesh-based decoupling capacitor sensitivity formulation is presented. This analytical gradient is used in gradient-based optimization techniques to obtain an optimal set of decoupling capacitors, with appropriate values and placement in the package/board, for a given target impedance profile. Gradient-based techniques are far less expensive than the state-of-the-art evolutionary optimization techniques presently used for decoupling capacitor network design. In the last part of this work, the functional similarities between package-board design and radio-frequency imaging are explored. Qualitative inverse-solution methods common in the radio-frequency imaging community, like Tikhonov regularization and Landweber iterations, are applied to solve multi-objective, multi-variable signal integrity package design problems. Consequently, a novel Hierarchical Search Linear Back Projection algorithm is developed for efficient solution in the design space using piecewise-linear approximations. The presented algorithm is demonstrated to converge to the desired signal integrity specifications with minimal full-wave 3D solver iterations.
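The learning-based mapping of the first part can be sketched as follows: train a small regressor to predict eye height from frequency-domain features such as differential insertion loss and crosstalk sampled at a few frequencies. Everything below is synthetic; a real flow would extract the features from the s-parameters of candidate layouts and the eye heights from circuit simulation.

    # Train a small neural network mapping frequency-domain features to eye height.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 400
    insertion_loss = rng.uniform(-12, -1, size=(n, 3))   # dB at 3 sample freqs
    crosstalk = rng.uniform(-60, -20, size=(n, 2))       # dB at 2 sample freqs
    X = np.hstack([insertion_loss, crosstalk])
    # Hypothetical response: the eye closes with more loss and crosstalk (in mV).
    eye_height = 300 + 15 * insertion_loss.sum(1) + 2 * crosstalk.sum(1) \
               + rng.normal(scale=5, size=n)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(X[:300], eye_height[:300])
    pred = model.predict(X[300:])
    err = np.abs(pred - eye_height[300:]) / np.abs(eye_height[300:])
    print("mean relative error: %.2f%%" % (100 * err.mean()))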
|
150 |
AJUSTAMENTO E CONTROLES DE QUALIDADE DAS LINHAS POLIGONAIS / LEAST-SQUARES ADJUSTMENT METHOD AND QUALITY CONTROLS OF TRAVERSES. Stringhini, Mário. 03 March 2005 (links)
Brazil is a country of great size that still holds unexplored, sparsely populated areas. These lands are state-owned, but they have been appropriated as private property, especially in the Amazon basin. The need for more effective governmental territorial management was formally addressed through the establishment of a geodetically referenced register of rural landed property, a system that will integrate the public administration and the land registry services on a common database. Geodetic science and advanced technological resources, such as the Global Positioning System, which receives radio signals transmitted by satellites orbiting the Earth, and the electronic tachymeter, which captures infrared signals reflected at points on the Earth's surface, make reliable geodetic referencing possible for the graphic database. The Brazilian government expects to assemble a comprehensive database within 10 years, in order to control the undue appropriation of land at the agricultural frontier, as well as to manage the territorial organization of states with a consolidated agrarian structure.

The present work aims to improve topographic survey quality by means of least-squares adjustment and survey quality estimation, so as to serve the principles of geodetic referencing. A traverse of 9 points, one of which was used as a control point, was established on the campus of the Federal University of Santa Maria with an electronic tachymeter, and the data obtained were processed in a digital spreadsheet. The chi-square test of the quadratic form of the misclosures was applied to evaluate the survey as a whole. The adjustment by the least-squares method was then performed, using observation equations obtained by variation of coordinates. The a-posteriori variance of unit weight was computed for the chi-square test of the quadratic form of the residuals, which evaluates adjustment quality. The variance-covariance matrix of the adjusted coordinates enabled point-by-point estimation for the traverse through the calculation of the geometric parameters of the standard error ellipse, the confidence error ellipse, the mean squared position (radial) error and the circular error probable. Baarda's data snooping test was also applied to identify observations that could present measurement problems. All the procedures were organized in a flowchart to make computer programming easier.

In this research, instrumental systematic errors were detected, but they did not impair the presentation of the method. The adjustment converged after few iterations, the chi-square tests fell within the acceptance region of the hypotheses, and the data snooping test identified the most reliable observations, while the point estimates were quantified and presented graphically. The author hopes this work contributes an accessible, scientifically grounded alternative for surveyors, under the new paradigm faced by Topography.
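The adjustment and quality-control pipeline described above can be sketched compactly: solve a linear least-squares adjustment, test the a-posteriori variance factor with a chi-square test, and derive standard error ellipse parameters from the covariance matrix of the adjusted coordinates. The observation equations below are a generic linearized stand-in, not the traverse model of the thesis.

    # Least-squares adjustment with a global chi-square test and error ellipse.
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(0)
    m, u = 12, 4                              # observations, unknowns
    sigma0_sq = 1.0e-4                        # a-priori variance of unit weight
    A = rng.normal(size=(m, u))               # linearized design matrix
    P = np.diag(rng.uniform(0.5, 2.0, m))     # weight matrix
    x_true = rng.normal(size=u)
    noise = rng.normal(size=m) * np.sqrt(sigma0_sq / np.diag(P))
    l = A @ x_true + noise                    # simulated observations

    N = A.T @ P @ A                           # normal equations
    x_hat = np.linalg.solve(N, A.T @ P @ l)
    v = A @ x_hat - l                         # residuals
    dof = m - u
    s0_sq = (v @ P @ v) / dof                 # a-posteriori variance factor

    lo, hi = chi2.ppf([0.025, 0.975], dof) / dof
    print("global chi-square test passes:", lo <= s0_sq / sigma0_sq <= hi)

    Qx = s0_sq * np.linalg.inv(N)             # covariance of adjusted unknowns
    Qp = Qx[:2, :2]                           # 2x2 block of one point (E, N)
    eigval, eigvec = np.linalg.eigh(Qp)       # ascending eigenvalues
    a, b = np.sqrt(eigval[::-1])              # semi-major / semi-minor axes
    theta = np.degrees(np.arctan2(eigvec[1, 1], eigvec[0, 1]))  # major-axis angle
    print("standard error ellipse: a=%.5f b=%.5f angle=%.1f deg" % (a, b, theta))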
|