81

Dynamische Rissdetektion mittels photogrammetrischer Verfahren – Entwicklung und Anwendung optimierter Algorithmen / Dynamic crack detection using photogrammetric methods – development and application of optimized algorithms

Hampel, Uwe, Maas, Hans-Gerd 03 June 2009 (has links) (PDF)
Digital close-range photogrammetry enables efficient acquisition of three-dimensional object surfaces in experimental investigations. Photogrammetric methods are, subject to appropriate boundary conditions, well suited in principle to area-wide deformation measurement and crack detection in particular. Drawing on current investigations of textile-reinforced concrete specimens, this contribution addresses the problem of crack detection and gives an overview of the state of development and the achievable accuracy potential. With regard to the practical application of the presented methods, various optimization options are discussed in conclusion.
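The displacement-discontinuity idea behind such crack detection can be sketched in a few lines. The following is a hypothetical illustration, not the authors' algorithm: photogrammetrically tracked surface points are compared between two load states, and segments whose strain jumps above a threshold are flagged as crack candidates. The function name, grid, and threshold are all assumptions.

```python
import numpy as np

def crack_candidates(x0, x1, threshold=5e-3):
    """x0, x1: positions (mm) of tracked points along a measurement line,
    before and after loading. Returns indices of segments whose strain
    exceeds `threshold` (candidate crack locations) and the strain field."""
    gauge = np.diff(x0)                      # undeformed segment lengths
    strain = (np.diff(x1) - gauge) / gauge   # engineering strain per segment
    return np.flatnonzero(strain > threshold), strain

# Toy example: a 0.1 mm opening localized between points 5 and 6
x0 = np.linspace(0.0, 100.0, 11)   # undeformed 10 mm point grid
x1 = x0.copy()
x1[6:] += 0.1                      # rigid shift of all points past the crack
idx, strain = crack_candidates(x0, x1)
print(idx)                         # -> [5]
```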
82

On error-robust source coding with image coding applications

Andersson, Tomas January 2006 (has links)
This thesis treats the problem of source coding in situations where the encoded data is subject to errors. The typical scenario is a communication system, where source data such as speech or images should be transmitted from one point to another. A problem is that most communication systems introduce some sort of error in the transmission. A wireless communication link is prone to introduce individual bit errors, while in a packet based network, such as the Internet, packet losses are the main source of error.

The traditional approach to this problem is to add error correcting codes on top of the encoded source data, or to employ some scheme for retransmission of lost or corrupted data. The source coding problem is then treated under the assumption that all data that is transmitted from the source encoder reaches the source decoder on the receiving end without any errors. This thesis takes another approach to the problem and treats source and channel coding jointly under the assumption that there is some knowledge about the channel that will be used for transmission. Such joint source-channel coding schemes have potential benefits over the traditional separated approach. More specifically, joint source-channel coding can typically achieve better performance using shorter codes than the separated approach. This is useful in scenarios with constraints on the delay of the system.

Two different flavors of joint source-channel coding are treated in this thesis: multiple description coding and channel optimized vector quantization. Channel optimized vector quantization is a technique to directly incorporate knowledge about the channel into the source coder. This thesis contributes to the field by using channel optimized vector quantization in a couple of new scenarios. Multiple description coding is the concept of encoding a source using several different descriptions in order to provide robustness in systems with losses in the transmission. One contribution of this thesis is an improvement to an existing multiple description coding scheme and another contribution is to put multiple description coding in the context of channel optimized vector quantization. The thesis also presents a simple image coder which is used to evaluate some of the results on channel optimized vector quantization.
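The channel-optimized VQ idea summarized above can be illustrated with the standard encoding rule: the encoder picks the index that minimizes the *expected* distortion over the channel transition probabilities, rather than the distortion of the nearest codevector alone. This is a minimal sketch, not the thesis's coder; the codebook, channel matrix, and index-error probability are illustrative assumptions.

```python
import numpy as np

def covq_encode(x, codebook, P):
    """x: (d,) source vector; codebook: (N, d) codevectors; P: (N, N) with
    P[i, j] = probability that index j is received when i is sent."""
    d2 = np.sum((codebook - x) ** 2, axis=1)  # distortion to each codevector
    expected = P @ d2                          # expected distortion per sent index
    return int(np.argmin(expected))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 2))
eps = 0.1                                      # assumed total index-error probability
P = np.full((4, 4), eps / 3) + (1 - eps - eps / 3) * np.eye(4)  # rows sum to 1
print(covq_encode(np.array([0.5, -0.2]), codebook, P))
```

With a noiseless channel (P = identity) this reduces to ordinary nearest-neighbor quantization, which is why COVQ can be seen as a generalization of plain VQ.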
83

Estimation of Urban-Enhanced Infiltration and Groundwater Recharge, Sierra Vista Subbasin, Southeast Arizona USA

Stewart, Anne M. January 2014 (has links)
This dissertation reports on the methods and results of a three-phase investigation to estimate the annual volume of ephemeral-channel-focused groundwater recharge attributable to urbanization (urban-enhanced groundwater recharge) in the Sierra Vista subwatershed of southeastern Arizona, USA. Results were used to assess a prior estimate. The first research phase focused on establishing a study area, installing a distributed network of runoff gages, gaging for stage, and transforming 2008 stage data into time series of volumetric discharge using the continuous slope-area method. Stage data were collected for water years 2008-2011. The second research phase used 2008 distributed runoff data with NWS Doppler radar data to optimize a rainfall-runoff computational model, with the aim of identifying optimal site-specific distributed hydraulic conductivity values and model-predicted infiltration. The third research phase used the period-of-record runoff stage data to identify study-area ephemeral flow characteristics and to estimate channel-bed infiltration of flow events. Design-storm modeling was used to identify the study area's predevelopment ephemeral flow characteristics given the same storm event; the difference between the infiltration volumes calculated for the two cases was attributed to urbanization. Estimated evapotranspiration was abstracted, and the final result was equated with study-area-scale urban-enhanced groundwater recharge. These results were scaled up to the Sierra Vista subwatershed: the urban-enhanced contribution to groundwater recharge is estimated to range between 3270 and 3635 cubic decameters (between 2650 and 2945 acre-feet) per year for the period of study. Evapotranspiration losses were developed from estimates made elsewhere in the subwatershed. This and other sources of uncertainty in the estimates are discussed and quantified where possible.
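The quoted unit conversion can be checked quickly, assuming 1 dam³ = 1000 m³ and 1 acre-foot ≈ 1233.48 m³:

```python
# Sanity-check the dam^3 <-> acre-feet range quoted above.
for dam3 in (3270, 3635):
    acre_ft = dam3 * 1000 / 1233.48   # m^3 per dam^3 over m^3 per acre-foot
    print(dam3, "dam^3 ~=", round(acre_ft), "acre-ft")
# -> 2651 and 2947 acre-ft, consistent (to rounding) with the
#    2650-2945 acre-feet range in the abstract.
```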
84

House Prices, Capital Inflows and Macroprudential Policy

Mendicino, Caterina, Punzi, Maria Teresa 08 1900 (has links) (PDF)
This paper evaluates monetary and macroprudential policies that mitigate the procyclicality arising from the interlinkages between current account deficits and financial vulnerabilities. We develop a two-country dynamic stochastic general equilibrium (DSGE) model with heterogeneous households and collateralised debt. The model predicts that external shocks are important in driving current account deficits that are coupled with run-ups in house prices and household debt. In this context, optimal policy features an interest-rate response to credit and an LTV ratio that responds countercyclically to house price dynamics. By allowing an interest-rate response to changes in financial variables, the monetary policy authority improves social welfare, because of the large welfare gains accrued to the savers. The additional use of a countercyclical LTV ratio that responds to house prices increases the ability of borrowers to smooth consumption over the cycle and is Pareto improving. Domestic and foreign shocks account for a similar fraction of the welfare gains delivered by such a policy. (authors' abstract) / Series: Department of Economics Working Paper Series
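A schematic rendering of the two instruments discussed above, in assumed notation (not taken from the paper): a Taylor-type rate rule augmented with a credit-growth response, and an LTV cap that tightens when house prices rise above steady state.

```latex
% Assumed notation, for illustration only:
% R_t: policy rate, pi_t: inflation, B_t: household credit,
% q_t: house prices, "ss": steady-state values, phi_* > 0: coefficients.
\begin{align}
  R_t &= R_{ss}\left(\frac{\pi_t}{\pi_{ss}}\right)^{\phi_\pi}
         \left(\frac{B_t}{B_{t-1}}\right)^{\phi_B}
         && \text{(rate rule with credit response)} \\
  \mathrm{LTV}_t &= \mathrm{LTV}_{ss}
         \left(\frac{q_t}{q_{ss}}\right)^{-\phi_q}
         && \text{(countercyclical LTV cap)}
\end{align}
```

With $\phi_q > 0$, a house-price boom ($q_t > q_{ss}$) lowers the admissible loan-to-value ratio, which is the countercyclical mechanism the abstract describes.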
85

COPS: Cluster optimized proximity scaling

Rusch, Thomas, Mair, Patrick, Hornik, Kurt January 2015 (has links) (PDF)
Proximity scaling (i.e., multidimensional scaling and related methods) is a versatile statistical method whose general idea is to reduce the multivariate complexity in a data set by employing suitable proximities between the data points and finding low-dimensional configurations where the fitted distances optimally approximate these proximities. The ultimate goal, however, is often not only to find the optimal configuration but to infer statements about the similarity of objects in the high-dimensional space based on the similarity in the configuration. Since these two goals are somewhat at odds, it can happen that the resulting optimal configuration makes inferring similarities rather difficult. In that case the solution lacks clusteredness in the configuration (which we call "c-clusteredness"). We present a version of proximity scaling, coined cluster optimized proximity scaling (COPS), which solves the conundrum by introducing a more clustered appearance into the configuration while adhering to the general idea of multidimensional scaling. In COPS, an arbitrary MDS loss function is parametrized by monotonic transformations and combined with an index that quantifies the c-clusteredness of the solution. This index, the OPTICS cordillera, has intuitively appealing properties with respect to measuring c-clusteredness. This combination of MDS loss and index is called "cluster optimized loss" (coploss) and is minimized to push any configuration towards a more clustered appearance. The effect of the method is illustrated with various examples: assessing similarities of countries based on the history of banking crises in the last 200 years, scaling Californian counties with respect to the projected effects of climate change and their social vulnerability, and preprocessing a data set of handwritten digits for subsequent classification by nonlinear dimension reduction. (authors' abstract) / Series: Discussion Paper Series / Center for Empirical Research Methods
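The coploss construction can be sketched schematically. The code below is an illustration of the idea only: it combines a normalized MDS stress with a clusteredness reward, but substitutes an ordinary silhouette score for the OPTICS cordillera, so it is not the published COPS objective; the function name, weight v, and cluster count k are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def coploss_sketch(X, delta, v=0.5, k=3):
    """X: (n, 2) low-dimensional configuration; delta: condensed vector of
    target proximities (same layout as pdist output); v: weight on the
    clusteredness reward; k: assumed number of clusters."""
    d = pdist(X)
    stress = np.sum((d - delta) ** 2) / np.sum(delta ** 2)  # normalized stress
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return stress - v * silhouette_score(X, labels)          # lower is better

rng = np.random.default_rng(1)
Y = rng.normal(size=(30, 5))               # toy high-dimensional data
print(coploss_sketch(Y[:, :2], pdist(Y)))  # score a (poor) 2-D configuration
```

Minimizing such a combined loss over configurations trades fidelity to the proximities against a more clustered appearance, which is the tension the abstract describes.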
86

Nouvelles utilisations des mesures de bassins de déflexion pour caractériser l’état structurel des chaussées / New uses of deflection bowls measurements to characterize the structural conditions of pavements

Le Boursicaud, Vinciane 08 November 2018 (has links)
The evaluation of the structural condition of pavements underpins the optimization of their maintenance, and deflection measurement is a key component of this evaluation. Currently, only the maximum deflection and the radius of curvature are analyzed; yet the curviameter and the deflectograph record the whole deflection bowl, from which parameters more sensitive to pavement damage could be extracted. At present the interpretation of the measurements is only qualitative, and no back-calculation of the structural damage state is performed. This thesis aims to improve the interpretation of deflection measurements. A study of the working principle of these devices showed that their measurement assumptions introduce measurement biases; to overcome this, a correction procedure was developed, and comparison with theoretical deflection bowls showed that the correction method is satisfactory. A numerical study was then conducted to determine the sensitivity of the deflection bowl to different types of defects; it showed that the classical deflection indicators are not very sensitive to damage within the pavement. A methodology was therefore developed to construct indicators optimized for a specific type of defect, and the study of theoretical cases yielded conclusive results. All of this work was then validated on experimental sites, both through repeatability measurements and on a site with confirmed defects. Finally, the thesis considers the application of this work to real measurements collected at network level.
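For orientation, the classical bowl indicators mentioned above can be computed from a sampled bowl as in this hypothetical sketch; the offsets, deflection values, and the surface curvature index used here (SCI = D0 − D300, a common deflection-bowl parameter) are illustrative assumptions, not measurements from the thesis.

```python
import numpy as np

r = np.array([0, 300, 600, 900, 1200, 1500])        # offset from load, mm
d = np.array([520, 420, 290, 190, 120, 80]) * 1e-3  # deflection, mm (placeholder bowl)

d0   = d[0]                                 # maximum deflection at the load point
sci  = (d[0] - d[1]) * 1e3                  # surface curvature index D0 - D300, um
h    = r[1] - r[0]                          # sampling step, mm
curv = (d[2] - 2 * d[1] + d[0]) / h ** 2    # second difference near the peak, 1/mm
print(f"D0 = {d0:.3f} mm, SCI = {sci:.0f} um, R ~ {abs(1 / curv) / 1000:.0f} m")
```

Indicators of this kind summarize the bowl in one or two numbers; the thesis's point is precisely that richer, defect-specific functionals of the full bowl are more sensitive to damage.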
87

Comparação entre os métodos convencional e com bocal modificado de aplicação de fluido de corte no processo de retificação cilíndrica interna / Comparison between the conventional and modified-nozzle methods of cutting fluid application in the internal cylindrical grinding process

Biscioni, Ricardo Pio Barakat [UNESP] 02 August 2010 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Precision internal cylindrical grinding is widely used in the metalworking industry to manufacture critical components, for example bearing rings. Advances in machine tools have improved this process with respect to positioning and the rigidity of the machine-workpiece-tool system, but lubrication and cooling remain major problems, especially regarding the use of cutting fluids. This work studies the behavior of high-speed internal cylindrical plunge grinding in the finishing of hardened SAE 52100 steel, using a conventional grinding wheel and two cooling methods: the conventional method and a modified nozzle. The motivation for the modified nozzle is to find a viable alternative to the conventional method, which consumes large amounts of fluid; over recent decades cutting fluids have become a major problem for industry because of the enormous environmental and human damage they cause. To compare the cooling methods, data on surface roughness, roundness errors, and diametral wheel wear were analyzed, along with microhardness and SEM analyses of the ground samples. The results show that the breaking of the aerodynamic barrier and the penetration of cutting fluid into the contact region between wheel and workpiece were more effective with the modified nozzle at a flow rate of 21 l/min (25 m/s); the roughness, roundness error, and wheel wear results were always...
88

Análise de risco de obras subterrâneas em maciços rochosos fraturados / Risk analysis of underground structures in fractured rock masses

Gian Franco Napa García 11 June 2015 (has links)
In this thesis the author establishes a systematic method for quantifying risk in underground structures in fractured rock masses, using structural reliability concepts in an efficient way. The method is applied to the case study of the underground cavern of the Paulo Afonso IV hydroelectric power station (UHE-PAIV), and a risk-based design optimization study is also presented to show the potential of the method. Risk was estimated following the recommendations of the United Nations Disaster Relief Organization (UNDRO), under which risk can be expressed as the convolution of the hazard, vulnerability, and loss functions. Reliability was quantified with the FORM and SORM approximation methods, using direct coupling and quadratic polynomial response surfaces; Monte Carlo simulation was also used for the UHE-PAIV cavern because of the presence of multiple simultaneous failure modes. Three threats were evaluated: excessive wall convergence, collapse of the excavation face, and block falls. Hazard functions were built as functions of threat intensity, such as wall convergence ratio or block volume. For excessive wall convergence, a deep circular tunnel was studied to compare the quality of the numerical approximation (FLAC3D with direct coupling) against the exact solution; errors below 0.1% were found in the estimate of the reliability index β. For face stability, two limit-analysis solutions were compared against the numerical estimate. For block stability, it was verified that the sequential excavation recommended by the Q geomechanical classification system considerably increases the safety of the excavation, raising it to the standards of advanced practice, e.g. from β = 2.04 for full-section excavation to β = 4.43 for the recommended span. In the case study, the safety of the UHE-PAIV cavern against block falls was analyzed with the software Unwedge. The probability of failure of individual blocks was integrated along the length of the cavern, and the concept of a structural system was used to estimate the global probability of failure. The cavern showed a global probability of failure of 3.11% to 3.22% and a risk of 7.22x10^-3 x C to 7.29x10^-3 x C, where C is the cost of failure of a large block; the most critical block had β = 3.63. The optimization study used two design variables, the shotcrete thickness and the number of rock bolts per square meter; the optimal configuration was found as the pair [t, nb] that minimizes the total cost function, and a sensitivity analysis assessed the influence of several parameters on the optimal excavation design. Finally, the results suggest that quantitative risk analyses, as a basis for risk assessment and management, can and should be considered a guide for geotechnical engineering practice, since they reconcile the basic design concepts of mechanical efficiency, safety, and financial feasibility. Risk quantification is thus fully feasible.
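The Monte Carlo step of such a reliability analysis can be sketched generically: sample the random inputs, evaluate a limit-state function g, estimate the failure probability pf as the fraction of samples with g ≤ 0, and express risk as pf × C. The limit state and distributions below are placeholders, not the UHE-PAIV model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000
resistance = rng.lognormal(mean=1.0, sigma=0.15, size=n)  # placeholder capacity
load       = rng.normal(loc=2.0, scale=0.35, size=n)      # placeholder demand
g  = resistance - load                                    # g <= 0 means failure
pf = float(np.mean(g <= 0.0))                             # Monte Carlo failure probability
beta = -norm.ppf(pf)                                      # equivalent reliability index
print(f"pf ~ {pf:.4f}, beta ~ {beta:.2f}, risk ~ {pf:.4f} x C")
```

FORM/SORM replace this brute-force count with an approximation around the most probable failure point, which is why the thesis uses them where a single limit state dominates and reserves simulation for the multi-mode cavern case.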
89

Novas metodologias para a análise de dados em ciências ômicas e para o controle de qualidade de amostras de biodiesel-diesel / New methodologies for data analysis in omics sciences and for quality control of biodiesel-diesel samples

Sousa, Samuel Anderson Alves de, 1983- 25 August 2018 (has links)
Advisors: Márcia Miguel Castro Ferreira, Alvicler Magalhães / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Química / In this work, two new multivariate methodologies are presented. In the first, a tool named optimized bucketing is developed to correct misalignments in 1H NMR spectra. Interval principal component analysis (iPCA) is used to explore 1H and 13C NMR spectra, and multiscale principal component analysis (MSPCA) is used to denoise the 13C spectra. The iPCA models are built for two classes of samples, metropolitan and non-metropolitan, together and separately, complementing each other in the detection of out-of-specification samples. In this context, the spectral profiles pointed out samples previously rejected on the physical-chemical parameters used in the biofuels field; additionally, the models identified samples with distinct spectral profiles that had not been rejected by those parameters. In general, the iPCA models using 1H NMR spectra performed well. One exception was the detection of out-of-specification samples for biodiesel content, where the differences between spectral profiles did not allow discrimination of samples whose content was close to the allowed limit; nevertheless, a small extension of the range adopted by Brazilian legislation was enough to produce a clear improvement. The models built from the 13C NMR spectra performed worse than those cited above. The second study presents a novel method named multilevel individual differences scaling (ML-INDSCAL) to analyze within-individual variation in omics data, focusing on the changing covariances within experimental groups and revealing the between-variable relationships (BVRs). Since only the within-individual variation is used to reveal the BVRs associated with dynamic changes, the interpretation of the phenomena underlying the treatment is improved. A simulated data set is explored to demonstrate the strength of the method, which is also applied to a real data set from a study of expression profiles in cell lines expressing wild-type and two mutated (R80A and F72A/R73A) forms of the viral protein R (Vpr). A version of the jack-knife procedure is used to validate the ML-INDSCAL models. ML-INDSCAL is the first method in the literature to combine exploration of the multilevel structure of a data set with the investigation of BVRs, and it can provide valuable insights in the field of feature selection. / Doctorate in Sciences (Physical Chemistry)
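For context, conventional equal-width bucketing (the baseline that the thesis's optimized variant improves upon) amounts to integrating spectral intensity over fixed ppm bins; the bucket width and the synthetic spectrum below are assumptions for illustration.

```python
import numpy as np

def bucket(ppm, intensity, width=0.04):
    """Integrate `intensity` over consecutive ppm bins of the given width."""
    edges = np.arange(ppm.min(), ppm.max() + width, width)
    idx = np.digitize(ppm, edges) - 1
    return np.array([intensity[idx == b].sum() for b in range(len(edges) - 1)])

ppm = np.linspace(0.5, 9.5, 8192)                  # synthetic chemical-shift axis
spec = np.exp(-0.5 * ((ppm - 3.7) / 0.01) ** 2)    # a single synthetic peak at 3.7 ppm
print(bucket(ppm, spec).argmax())                  # -> the bucket containing ~3.7 ppm
```

Fixed bins like these are exactly what peak misalignment defeats: a peak that drifts across a bin edge between samples splits its intensity between buckets, which is the problem an optimized, alignment-aware bucketing addresses.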
90

OPNET simulation of voice over MPLS With Considering Traffic Engineering

Radhakrishna, Deekonda, Keerthipramukh, Jannu January 2010 (has links)
Multiprotocol Label Switching (MPLS) is an emerging technology which ensures reliable delivery of Internet services with high transmission speed and low delay. The key feature of MPLS is its Traffic Engineering (TE), which is used to manage networks effectively for efficient utilization of network resources. Its low network delay, efficient forwarding mechanism, scalability, and predictable performance make MPLS well suited to real-time applications such as voice and video. In this thesis the performance of a Voice over Internet Protocol (VoIP) application is compared between an MPLS network and a conventional Internet Protocol (IP) network. OPNET Modeler 14.5 is used to simulate both networks, and the comparison is made on performance metrics such as voice jitter, voice packet end-to-end delay, voice delay variation, and voice packets sent and received. Analysis of the simulation results shows that the MPLS-based solution provides better performance for the VoIP application. In this thesis, the voice packet end-to-end delay metric is also used to estimate the number of VoIP calls that can be maintained with acceptable quality in the MPLS and conventional IP networks. This approach can help network operators or designers determine the number of VoIP calls that a given network can sustain by imitating the real network in the OPNET simulator.
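One way such a call-count estimate could be operationalized is sketched below: from simulated mean end-to-end delays at each call load, keep the largest load whose delay stays under an acceptability threshold (150 ms one-way is the commonly cited ITU-T G.114 guideline). The delay figures are made-up placeholders, not OPNET output.

```python
# Hypothetical per-load mean one-way delays (ms) from a simulation campaign.
delays_ms = {10: 62, 20: 74, 30: 95, 40: 128, 50: 149, 60: 183}

def max_calls(delays, threshold_ms=150.0):
    """Largest simultaneous-call count whose delay stays acceptable."""
    ok = [n for n, d in sorted(delays.items()) if d < threshold_ms]
    return max(ok) if ok else 0

print(max_calls(delays_ms))   # -> 50
```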
