61

Deterministic and Probabilistic Assessment of Tunnel Face Stability / Evaluation déterministe et probabiliste de la stabilité du front de taille des tunnels

Pan, Qiujing 21 July 2017 (has links)
The main work of Qiujing Pan's PhD thesis is to develop stability analyses for underground structures, in two parts: a deterministic model and a probabilistic analysis. The first year of the PhD research was mainly devoted to the deterministic model; in the second year, a probabilistic model for high-dimensional problems was developed. / In contemporary society, the utilization and exploitation of underground space has become an inevitable and necessary measure to relieve urban congestion. One of the most important requirements for successful design and construction in tunnels and underground engineering is to maintain the stability of the soils surrounding the works. Stability analysis, however, requires engineers to have a clear idea of the earth pressure, the pore water pressure, seismic effects, and soil variability. The research therefore aims at employing an available theory for the design of tunnels and underground structures, an issue of high engineering significance. Among the approaches available for this problem, limit analysis is a powerful tool for stability analysis and has been widely used in real geotechnical works. This research undertakes further work on the application of the upper-bound theorem to the stability analysis of tunnels and underground engineering. The approach will then be compared with three-dimensional analyses and available experimental data. The final goal is to validate new simplified mechanisms, based on limit analysis, for designing the collapse and blow-out pressures at the tunnel face. These deterministic models will then be used in a probabilistic framework. The collocation-based stochastic response surface methodology will be used, and generalized in order to make possible, at a limited computational cost, a complete parametric study of the probabilistic properties of the input variables. The uncertainty propagation through the models of stability and ground movements will be evaluated, and some methods of reliability-based design will be proposed. The spatial variability of the soil will be taken into account using random field theory and applied to the tunnel face collapse. This model will be developed so as to account for this variability at a much smaller computational cost than numerical models, will be validated numerically, and will be submitted to extensive random sampling. The effect of spatial variability will be evaluated.
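As a rough sketch of the collocation-based stochastic response surface idea, the Python below fits a Hermite polynomial-chaos surrogate to a deterministic model at collocation points and then runs a cheap Monte Carlo on the surrogate to estimate a failure probability. The one-variable limit-state function, its coefficients, and the polynomial order are all assumptions standing in for the (expensive) tunnel-face limit-analysis model, which in the thesis lives in a higher-dimensional standard-normal space.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite basis

rng = np.random.default_rng(0)

def limit_state(x):
    """Hypothetical collapse-pressure margin (kPa); a stand-in for one
    run of the deterministic limit-analysis model."""
    return 35.0 - 20.0 * x - 3.0 * x**2

# 1. Evaluate the model at Gauss-Hermite collocation points.
order = 4
pts, _ = He.hermegauss(order + 1)
y = limit_state(pts)

# 2. Least-squares fit of the polynomial-chaos coefficients.
V = np.stack([He.hermeval(pts, np.eye(order + 1)[k]) for k in range(order + 1)], axis=1)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# 3. Monte Carlo on the surrogate: each evaluation is now trivial,
#    so a large sample is affordable.
xi = rng.standard_normal(1_000_000)
g = He.hermeval(xi, coef)
print("estimated failure probability:", (g < 0.0).mean())
```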
62

Magnetização remanente em sistemas antiferromagnéticos / Remanent magnetization in antiferromagnetic systems

Zulmara Virgínia de Carvalho 17 March 2006 (has links)
In the context of low anisotropy, magnetization measurements were made to find the magnetic effects induced by the substitution of Mn+2 by Cu+2 ions in the quasi-one-dimensional Heisenberg-like antiferromagnets CsMn1-xCuxA3.2H2O (A = Cl, Br). In the diluted samples of the Br derivative, we observe the appearance of a remanent magnetization (Mr) below TN when they are cooled in a small axial magnetic field applied along the easy axis. This does not occur in the diluted samples of the Cl derivative. The intra-chain exchange in both the Cl and Br compounds is antiferromagnetic; however, the inter-chain exchange along the easy axis is antiferromagnetic in the chloride compound and ferromagnetic in the bromide. This fact seems to be decisive in the appearance of the net moments below TN in the bromide. Moreover, magnetization measurements on single crystals of the site-diluted antiferromagnet A2Fe1-xInxCl5.H2O (A = Cs) were carried out at low magnetic fields (H) applied along the easy axis. The data revealed that an Mr develops below the Néel temperature TN. This Mr(T) is parallel to the easy axis, saturates for H ~ 1 Oe, and increases with decreasing T. It also has the same temperature dependence as other diluted systems of the same family (A = K, Rb). For all these systems, the normalized Mr(t)/Mr(t = 0.3), where t = T/TN is the reduced temperature, is independent of x and follows a universal curve.
In the context of high anisotropy, the temperature dependence of the excess magnetization at low and high fields was investigated for the diluted 3D Ising antiferromagnet FexZn1-xF2 (x = 0.72, 0.46 and 0.31), as well as for the pure system FeF2. It was found that Mr appears either along the easy axis or perpendicular to it. The size of Mr for very low fields (H < 1 Oe) depends on H but saturates for fields of the order of a few Oersted. The expected random-field (RF) behavior is observed when H is applied along the easy axis at higher fields.
63

"Segmentação de imagens e validação de classes por abordagem estocástica" / Image segmentation and class validation in a stochastic approach

Leandro Cavaleri Gerhardinger 13 April 2006 (has links)
An important stage of the automatic image analysis process is segmentation, which aims to split an image into regions whose pixels exhibit a certain degree of similarity. Texture is known as an efficient feature that provides enough discriminant power to differentiate pixels from distinct regions; it is usually defined as a random combination of pixel intensities. A considerable amount of research has been done on non-supervised techniques for image segmentation based on stochastic models, in which textures are defined as Markov random fields. An important method in this category is EM/MPM, an iterative algorithm that combines the maximum-likelihood parameter estimation technique EM with the MPM segmentation algorithm, whose aim is to minimize the number of misclassified pixels in the image. This work carried out a study on stochastic models for segmentation and presents an implementation of the EM/MPM algorithm, together with a multiresolution approach. A new threshold-based scheme for the estimation of initial parameters for the EM/MPM model is proposed, and the work shows how to incorporate the concept of annealing into the EM/MPM algorithm in order to improve segmentation. Additionally, a study on the class-validity problem (the search for the correct number of classes) was carried out, reviewing the most important techniques available in the literature; as a consequence, a new approach based on the gray-level distribution of the classes was devised. Finally, an extension of the traditional EM/MPM technique for segmenting meshes in two and three dimensions was developed.
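As a simplified illustration of the EM/MPM idea (not the thesis's implementation; the Potts weight beta, the class count, and the threshold initialization below are assumptions), the Python sketch alternates an EM-style re-estimation of Gaussian class parameters with an MPM-style Gibbs relabeling under a Potts prior:

```python
import numpy as np

rng = np.random.default_rng(1)

def em_mpm(img, k=2, beta=1.5, sweeps=10):
    """Toy EM/MPM segmentation: Gaussian likelihood + Potts prior."""
    h, w = img.shape
    # Threshold-based initialization (echoing the thesis's proposal):
    # split the intensity range at its k-quantiles.
    labels = np.digitize(img, np.quantile(img, np.linspace(0, 1, k + 1)[1:-1]))
    for _ in range(sweeps):
        # EM-style step: re-estimate class means/variances from current labels.
        mu = np.array([img[labels == c].mean() for c in range(k)])
        var = np.array([img[labels == c].var() + 1e-6 for c in range(k)])
        # MPM-style step: one Gibbs sweep, sampling each pixel's label
        # from its conditional posterior given the 4-neighborhood.
        for i in range(h):
            for j in range(w):
                ll = -0.5 * (img[i, j] - mu) ** 2 / var - 0.5 * np.log(var)
                agree = np.zeros(k)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        agree[labels[ni, nj]] += 1
                logp = ll + beta * agree
                p = np.exp(logp - logp.max())
                labels[i, j] = rng.choice(k, p=p / p.sum())
    return labels

# Tiny synthetic image: two noisy intensity regions.
img = np.r_[np.full((8, 16), 0.2), np.full((8, 16), 0.8)] + 0.1 * rng.standard_normal((16, 16))
print(em_mpm(img))
```

(The real MPM estimator averages over many Gibbs samples and keeps each pixel's most frequent label; a single evolving sweep is kept here for brevity.)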
64

Segmentation et identification audiovisuelle de personnes dans des journaux télévisés / Audiovisual segmentation and identification of persons in broadcast news

Gay, Paul 25 March 2015 (has links)
This PhD thesis is about speaker and face identification in broadcast news. The identification relies on names automatically extracted from the overlaid texts commonly used to announce speakers. Since those names appear sparsely in the video, identification performance depends on the diarization performance, i.e., the capacity to detect and cluster together all the moments when a given person appears or speaks. However, intra-person variability in the video signal makes this task difficult. In the audio modality, this variability comes from overlapping speech and background noise. In the video modality, it consists mainly of head-pose variations in studio scenes, with lighting variations in addition (especially in report scenes). A context-aware model is proposed to optimize the diarization for a better identification. Firstly, a Conditional Random Field (CRF) model is proposed to perform the diarization jointly over the speech segments and the face tracks. Secondly, an identification system is designed, based on the combination of a naming CRF at cluster level with the diarization CRF developed previously. In particular, context information extracted from the image background and the names extracted from the overlaid texts are integrated into the diarization CRF at segment level. These elements improve the diarization and yield significant identification gains, especially in studio scenes.
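A heavily simplified sketch of the joint-clustering idea follows: segments (speech turns or face tracks) carry embeddings, a pairwise CRF-style energy rewards giving the same label to similar segments, and ICM minimizes the energy. The features, the similarity threshold, the weights, and the optimizer are all assumptions; the thesis's actual CRF is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(2)

def icm_cluster(feats, n_clusters=2, w_pair=1.0, iters=20):
    """Toy pairwise-CRF clustering by ICM: unary = distance to the cluster
    centroid, pairwise = disagreement with highly similar segments."""
    n = len(feats)
    norms = np.linalg.norm(feats, axis=1)
    sim = feats @ feats.T / (norms[:, None] * norms[None, :])
    neighbors = sim > 0.8                 # "same person" evidence (assumed threshold)
    np.fill_diagonal(neighbors, False)
    labels = rng.integers(n_clusters, size=n)
    for _ in range(iters):
        centroids = np.array([
            feats[labels == c].mean(axis=0) if np.any(labels == c) else feats.mean(axis=0)
            for c in range(n_clusters)
        ])
        for i in range(n):
            unary = np.linalg.norm(feats[i] - centroids, axis=1)
            pair = np.array([w_pair * np.sum(neighbors[i] & (labels != c))
                             for c in range(n_clusters)])
            labels[i] = np.argmin(unary + pair)
    return labels

# Hypothetical 8 segments with 4-dim embeddings from two people.
feats = rng.standard_normal((8, 4)) + np.repeat(np.eye(4)[:2] * 4, 4, axis=0)
print(icm_cluster(feats))
```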
65

Numerical Methods For Solving The Eigenvalue Problem Involved In The Karhunen-Loeve Decomposition

Choudhary, Shalu 02 1900 (has links) (PDF)
In structural analysis and design it is important to consider the effects of uncertainties in loading and material properties in a rational way. Uncertainty in material properties, such as heterogeneity in elastic and mass properties, can be modeled as a random field. For computational purposes, it is essential to discretize and represent the random field. For a field with known second-order statistics, such a representation can be achieved by the Karhunen-Loève (KL) expansion. Accordingly, the random field is represented as a truncated series expansion using a few eigenvalues and associated eigenfunctions of the covariance function, with corresponding random coefficients. The eigenvalues and eigenfunctions of the covariance kernel are obtained by solving a Fredholm integral equation of the second kind. A closed-form solution of the integral equation, especially for arbitrary domains, may not always be available, so an approximate solution is sought. In finding an approximate solution, it is important to consider both the accuracy of the solution and the cost of computing it. This work explores a few numerical methods for estimating the solution of this integral equation. Three different methods are implemented and studied numerically: (i) using finite element bases (Method 1), (ii) mid-point approximation (Method 2), and (iii) the Nyström method (Method 3). The methods and results are compared in terms of accuracy, computational cost, and difficulty of implementation. In the first method an eigenfunction is represented as a linear combination of a set of finite element bases; the resulting error in the integral equation is then minimized in the Galerkin sense, which results in a generalized matrix eigenvalue problem. In the second method, the domain is partitioned into a finite number of subdomains; the covariance function is discretized by approximating its value locally over each subdomain, thereby transforming the integral equation into a matrix eigenvalue problem. In the third method the Fredholm integral equation is approximated by a quadrature rule, which also results in a matrix eigenvalue problem. The first part of the numerical study compares these three methods, first in a one-dimensional domain. For the study in two dimensions, a simple rectangular domain (referred to as Domain 1) is taken, with an uncertain material property modeled as a Gaussian random field. For the chosen covariance model and domain, the analytical solutions are known, which allows verifying the accuracy of the numerical solutions. The three numerical methods are thus studied and compared for a chosen target accuracy and different correlation lengths of the random field. It was observed that Methods 2 and 3 are much faster than Method 1. On the other hand, for Methods 2 and 3, the additional cost of discretizing the domain into nodes should be considered, whereas for a mechanics-related problem Method 1 can reuse the finite element mesh already available for solving the mechanics problem. The second part of the work studies the effect of the geometry of the model on realizations of the random field. The objective is to see whether the random field for a complicated domain can be generated from the KL expansion for a simpler domain.
For this purpose, two KL decompositions are obtained: one on Domain 1, and another on the same rectangular domain modified with a rectangular hole inside it (referred to as Domain 2). The random process is generated and realizations are compared. It was observed that the probability density functions at the nodes on both domains, that is, on Domain 1 and Domain 2, are similar. This observation suggests that a complicated domain can be replaced by a corresponding simpler domain, thereby reducing the computational cost.
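As a minimal sketch of the Nyström route (Method 3) on a case with a known analytical answer, the Python below approximates the KL eigenpairs of the exponential covariance C(x, y) = exp(-|x - y|/l) on [0, 1] using Gauss-Legendre quadrature; the kernel, correlation length, and quadrature order are assumptions chosen for illustration.

```python
import numpy as np

def kl_nystrom(n=64, corr_len=0.5):
    """Nystrom discretization of the Fredholm eigenproblem
    integral_0^1 C(x, y) phi(y) dy = lam * phi(x)."""
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1].
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = 0.5 * (x + 1.0), 0.5 * w
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Symmetrize K = W^{1/2} C W^{1/2} so a symmetric eigensolver applies;
    # the eigenvalues are unchanged by this transform.
    s = np.sqrt(w)
    lam, V = np.linalg.eigh(s[:, None] * C * s[None, :])
    phi = V / s[:, None]                  # eigenfunction values at the nodes
    order = np.argsort(lam)[::-1]
    return lam[order], phi[:, order], x

lam, phi, x = kl_nystrom()
print("five largest KL eigenvalues:", np.round(lam[:5], 4))
```

The quadrature rule turns the integral equation directly into an n × n matrix eigenvalue problem, which is why Method 3 is fast; its accuracy hinges on how well the rule resolves the kernel at the given correlation length.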
66

Generalized Survey Propagation

Tu, Ronghui January 2011 (has links)
Survey propagation (SP) has recently been discovered to be an efficient algorithm for solving classes of hard constraint-satisfaction problems (CSPs). Powerful as it is, SP is still a heuristic algorithm, and further understanding of its algorithmic nature, improving its effectiveness, and extending its applicability are highly desirable. Prior to the work in this thesis, Maneva et al. introduced a Markov Random Field (MRF) formalism for k-SAT problems, under which SP may be viewed as a special case of the well-known belief propagation (BP) algorithm. This result has sometimes been interpreted as saying that "SP is BP", and it allows a rigorous extension of SP to a "weighted" version, or a family of algorithms, for k-SAT problems. SP has also been generalized, in a non-weighted fashion, for solving non-binary CSPs. That generalization, however, is presented in the language of statistical physics and is somewhat difficult for a more general audience to access. This thesis generalizes SP both in terms of its applicability to non-binary problems and in terms of introducing "weights" and extending SP to a family of algorithms. Under a generic formulation of CSPs, we first present an understanding of non-weighted SP for arbitrary CSPs in terms of "probabilistic token passing" (PTP). We then show that this probabilistic interpretation of non-weighted SP makes it naturally generalizable to a weighted version, which we call weighted PTP. Another main contribution of this thesis is a disproof of the folk belief that "SP is BP". We show that the fact that SP is a special case of BP for k-SAT problems is rather incidental; for more general CSPs, SP and generalized SP do not reduce to BP. We also establish the conditions under which generalized SP may reduce to special cases of BP. To explore the benefit of generalizing SP to a wide family of algorithms and to arbitrary, particularly non-binary, problems, we devised a simple weighted-PTP-based algorithm for solving 3-COL problems. Experimental results, compared against an existing non-weighted SP-based algorithm, reveal the potential performance gain that generalized SP may bring.
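For readers unfamiliar with message passing, here is plain sum-product BP on a tiny tree-structured pairwise model, where it is exact. SP passes messages over a richer, survey-valued state space, so this is only the BP baseline against which SP is contrasted; the potentials are made up.

```python
import numpy as np

# Chain x1 - x2 - x3 of binary variables with an "agreement" pairwise
# potential on each edge and assumed unary potentials.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])           # favors equal neighboring states
phi = [np.array([1.0, 1.0]),
       np.array([3.0, 1.0]),           # x2 biased toward state 0
       np.array([1.0, 1.0])]

# Sum-product messages toward x2; exact on trees.
m12 = psi.T @ phi[0]                   # message x1 -> x2
m32 = psi.T @ phi[2]                   # message x3 -> x2
b2 = phi[1] * m12 * m32                # belief at x2
print("BP marginal of x2 :", b2 / b2.sum())

# Brute-force check over all 8 configurations.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
joint = np.array([phi[0][a] * phi[1][b] * phi[2][c] * psi[a, b] * psi[b, c]
                  for a, b, c in states])
marg = np.array([joint[[i for i, s in enumerate(states) if s[1] == v]].sum()
                 for v in (0, 1)])
print("brute-force check :", marg / marg.sum())
```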
67

Computation of High-Dimensional Multivariate Normal and Student-t Probabilities Based on Matrix Compression Schemes

Cao, Jian 22 April 2020 (has links)
The first half of the thesis focuses on the computation of high-dimensional multivariate normal (MVN) and multivariate Student-t (MVT) probabilities. Chapter 2 generalizes the bivariate conditioning method to a d-dimensional conditioning method and combines it with a hierarchical representation of the n × n covariance matrix. The resulting two-level hierarchical-block conditioning method requires Monte Carlo simulations to be performed only in d dimensions, with d ≪ n, and allows the dominant complexity term of the algorithm to be O(n log n). Chapter 3 improves the block reordering scheme from Chapter 2 and integrates it into Quasi-Monte Carlo simulation under a tile-low-rank representation of the covariance matrix. Simulations up to dimension 65,536 suggest that this method can improve the run time by one order of magnitude compared with the hierarchical Monte Carlo method. The second half of the thesis discusses a novel matrix compression scheme based on Kronecker products, an R package that implements the methods described in Chapter 3, and an application study with the probit Gaussian random field. Chapter 4 studies the potential of the sum of Kronecker products (SKP) as a compressed covariance matrix representation. Experiments show that this SKP representation can reduce the memory footprint by one order of magnitude compared with the hierarchical representation for covariance matrices from large grids, and that Cholesky factorization in one million dimensions can be achieved within 600 seconds. Chapter 5 introduces an R package that implements the methods in Chapter 3 and shows how the package improves the accuracy of the computed excursion sets. Chapter 6 derives the posterior properties of the probit Gaussian random field, based on which model selection and posterior prediction are performed. With the tlrmvnmvt package, the computation becomes feasible in tens of thousands of dimensions, where prediction errors are significantly reduced.
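The conditioning and Quasi-Monte Carlo methods above build on the classic separation-of-variables estimator of Genz; a bare-bones Python version for a generic MVN probability P(a ≤ X ≤ b) is sketched below. The AR(1)-style covariance and the integration limits are made-up examples, and none of the thesis's hierarchical or tile-low-rank machinery is shown.

```python
import numpy as np
from scipy.stats import norm

def mvn_prob(a, b, cov, n_samples=20_000, seed=None):
    """Genz separation-of-variables Monte Carlo estimate of
    P(a <= X <= b) for X ~ N(0, cov)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)
    n = len(a)
    est = np.empty(n_samples)
    for s in range(n_samples):
        y = np.zeros(n)
        p = 1.0
        for i in range(n):
            t = L[i, :i] @ y[:i]
            d = norm.cdf((a[i] - t) / L[i, i])
            e = norm.cdf((b[i] - t) / L[i, i])
            p *= e - d                          # width of the i-th conditional slab
            y[i] = norm.ppf(d + rng.uniform() * (e - d))
        est[s] = p
    return est.mean(), est.std() / np.sqrt(n_samples)

# Hypothetical 4-d example with an AR(1)-style covariance.
idx = np.arange(4)
cov = 0.5 ** np.abs(idx[:, None] - idx[None, :])
p, se = mvn_prob(np.full(4, -1.0), np.full(4, 1.0), cov)
print(f"P = {p:.4f} +/- {se:.4f}")
```

Reordering the variables, as in Chapters 2 and 3, is known to reduce the variance of exactly this kind of estimator by shrinking the conditional widths e - d early in the product.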
68

Analýza výskytu extremálních hodnot v čase a prostoru / Analysis of occurrence of extremal values in time and space

Starý, Ladislav January 2015 (has links)
This thesis describes and compares methods for statistical modeling of spatio-temporal data. The methods are illustrated with examples and numerical studies on real-world data. The basic point of interest is the statistical analysis of spatial data with unknown correlation structure and known positions in space. Further analysis focuses on spatial data with a temporal component, i.e., spatio-temporal data. Finally, extremal values and their occurrences are discussed. The main aspiration of the thesis is to provide statistical tools for spatio-temporal data and for the analysis of extremal values of predictions.
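As a small, self-contained illustration of the extreme-value side (the thesis's data and models are not reproduced; everything below is simulated), annual block maxima can be fitted with a generalized extreme value (GEV) distribution and turned into a return-level estimate:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Hypothetical daily observations over 50 "years"; keep annual maxima.
daily = rng.gumbel(loc=20.0, scale=5.0, size=(50, 365))
annual_max = daily.max(axis=1)

# Fit a GEV; scipy's shape parameter c corresponds to -xi in the
# usual extreme-value notation.
c, loc, scale = genextreme.fit(annual_max)
print(f"GEV fit: c={c:.3f} loc={loc:.2f} scale={scale:.2f}")

# 100-year return level: exceeded with probability 1/100 in any year.
print("100-year return level:", genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale))
```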
69

Extracting Particular Information from Swedish Public Procurement Using Machine Learning

Waade, Eystein January 2020 (has links)
The Swedish procurement process has a yearly value of 706 billion SEK across approximately 18,000 procurements. Each process comes with many documents, written in different formats, that must be understood in order to submit a competitive tender. With the development of new technology in the age of machine learning, it is of great interest to investigate how this knowledge can be used to enhance the way we procure. The goal of this project was to investigate whether public procurements written in Swedish in PDF format can be parsed and segmented into a structured format. The process was divided into three parts: pre-processing, annotation, and training/evaluation. Pre-processing was accomplished using an open-source PDF parser called pdfalto, which produces structured XML files with layout and lexical information. The annotation process consisted of generalizing a procurement into high-level segments applicable to different document structures, as well as finding relevant features. This was accomplished by identifying frequent document formats so that many documents could be annotated using deterministic rules. Finally, a linear-chain Conditional Random Field was trained and tested to segment the documents; a sketch of this step follows below. The models showed high performance when tested on documents of the same formats they were trained on. However, the data from five different document formats were not sufficient or general enough for the model to make reliable predictions on a sixth format it had not seen before. The best result was a total accuracy of 90.6%, where two of the labels had an F1-score above 95% and the other two labels had F1-scores of 51.8% and 63.3%.
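A minimal sketch of training a linear-chain CRF for line-level segmentation, using the sklearn-crfsuite package as one possible implementation (the thesis does not name its toolkit, and the features, labels, and toy documents below are invented):

```python
import sklearn_crfsuite
from sklearn_crfsuite import metrics

def line_features(lines, i):
    """Features for one line of a parsed procurement document (assumed set)."""
    text = lines[i]
    feats = {
        "prefix": text.lower()[:20],
        "is_upper": text.isupper(),
        "is_numeric": text.replace(" ", "").isdigit(),
        "n_tokens": len(text.split()),
    }
    if i > 0:
        feats["prev_is_upper"] = lines[i - 1].isupper()
    return feats

# Invented training data: each document is a list of lines with segment labels.
docs = [["KRAV PÅ LEVERANTÖREN", "Leverantören ska ...", "2.1 Referenser"],
        ["ADMINISTRATIVA FÖRESKRIFTER", "Anbud lämnas senast ..."]]
labels = [["HEADING", "REQUIREMENT", "HEADING"],
          ["HEADING", "ADMIN"]]

X = [[line_features(d, i) for i in range(len(d))] for d in docs]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)

pred = crf.predict(X)
print(metrics.flat_f1_score(labels, pred, average="weighted", labels=crf.classes_))
```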
70

An AI-based System for Assisting Planners in a Supply Chain with Email Communication

Dantu, Sai Shreya Spurthi, Yadlapalli, Akhilesh January 2023 (has links)
Background: Communication plays a crucial role in supply chain management (SCM), as it facilitates the flow of information, materials, and goods across the various stages of the supply chain. In the context of supply planning, each planner manages thousands of supply chain entities and spends a lot of time reading and responding to high volumes of emails related to part orders, delays, and backorders, which can lead to information overload and hinder workflow and decision-making. Streamlining communication and enhancing email management are therefore essential for optimizing supply chain efficiency. Objectives: This study aims to create an automated system that can summarize email conversations between planners, suppliers, and other stakeholders. The goal is to increase communication efficiency by using Natural Language Processing (NLP) algorithms to extract important information from lengthy conversations. Additionally, the study explores the effectiveness of using conditional random fields (CRF) to filter out irrelevant content during preprocessing. Methods: We chose four advanced pre-trained abstractive dialogue summarization models (BART, PEGASUS, T5, and CODS) and two evaluation metrics (ROUGE and BERTScore) to compare their performance in summarizing our email conversations. We used a CRF to preprocess raw data from around 400 planner-supplier email conversations, extracting important sentences in a dialogue format and labeling them with specific dialogue-act tags. We then manually summarized the 400 conversations and fine-tuned the four chosen models. Finally, we evaluated the models using the ROUGE and BERTScore metrics to determine their similarity to the human references. Results: The results show that the performance of the summarization models improved significantly after fine-tuning with domain-specific data. The BART model achieved the highest scores, with a ROUGE-1 of 0.65, a ROUGE-L of 0.56, and a BERTScore of 0.95. Additionally, CRF-based preprocessing proved crucial for extracting essential information and minimizing unnecessary detail for the summarization step. Conclusions: This study shows that advanced NLP techniques can make supply chain communication workflows more efficient. The BART-based email summarization tool we created showed great potential for providing important insights and helping planners deal with information overload.
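A small sketch of the evaluation step, using the rouge-score package together with a generic Hugging Face summarization pipeline (the model name, dialogue, and reference below are placeholders, not the thesis's fine-tuned checkpoint or data):

```python
from transformers import pipeline
from rouge_score import rouge_scorer

# Placeholder model; the thesis fine-tunes BART on ~400 annotated conversations.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

dialogue = ("Planner: Order 4711 for part X is delayed, can you confirm a new ETA? "
            "Supplier: The new ETA is May 14 due to a material shortage. "
            "Planner: Understood, please prioritize the backorder quantity.")
reference = ("Supplier confirms order 4711 is delayed to May 14; "
             "planner asks to prioritize the backorder.")

candidate = summarizer(dialogue, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)
print(candidate)
print({name: round(s.fmeasure, 3) for name, s in scores.items()})
```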
