221

Ambiguities in one-dimensional phase retrieval from Fourier magnitudes

Beinert, Robert 16 December 2015 (has links)
No description available.
222

Top polarization measurement in single top quark production with the ATLAS detector / Mesure de la polarisation dans la production électrofaible de quark top avec le détecteur ATLAS

Sun, Xiaohu 01 October 2013 (has links)
The top quark polarization in electroweak single top t-channel production allows testing the structure of the Wtb vertex: the left-handed vector coupling of the Standard Model (SM), as well as the anomalous couplings, namely the right-handed vector, the left-handed tensor, and the right-handed tensor couplings. The 4.7 fb-1 of data recorded by the ATLAS detector at the LHC at a center-of-mass energy of 7 TeV in 2011 provides an opportunity to measure the top polarization.
This thesis discusses the measurement of the top polarization by studying the polarized angular distributions in specific bases with t-channel single top events. The thesis first introduces the theoretical context of top quark production via the strong and electroweak interactions at the LHC. Then the detector, the reconstruction performance, and the event selection for a single top t-channel signature are described. To measure the top polarization, unfolding and folding methods are constructed and tested in different configurations. Finally, the measured results are examined together with the estimated uncertainties from theory, detector response and modeling, and statistics. This is the first measurement of the top polarization with the ATLAS detector. The results are compatible with the SM predictions and contribute significantly to constraining the anomalous couplings of the Wtb vertex.
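The polarized angular distributions mentioned above can be illustrated with a toy sketch (not the thesis's actual analysis): a generic spin-analyzer distribution has the form (1/Γ)dΓ/dcosθ = ½(1 + α·P·cosθ), so the forward-backward asymmetry A_FB equals α·P/2 and gives a simple polarization estimator. The values of α and P below are hypothetical.

```python
import numpy as np

# Toy extraction of a polarization P from the forward-backward asymmetry of
# the angular distribution f(c) = 0.5 * (1 + alpha * P * c), c = cos(theta).
# alpha (spin-analyzing power) and P are assumed values, for illustration only.

rng = np.random.default_rng(0)

def sample_costheta(k, n, rng):
    """Rejection-sample cos(theta) from f(c) = 0.5*(1 + k*c) on [-1, 1]."""
    out = np.empty(0)
    fmax = 0.5 * (1.0 + abs(k))
    while out.size < n:
        c = rng.uniform(-1.0, 1.0, n)
        u = rng.uniform(0.0, fmax, n)
        out = np.concatenate([out, c[u < 0.5 * (1.0 + k * c)]])
    return out[:n]

alpha, P = 1.0, 0.9                      # hypothetical analyzer power, polarization
c = sample_costheta(alpha * P, 200_000, rng)
a_fb = (np.sum(c > 0) - np.sum(c < 0)) / c.size   # forward-backward asymmetry
P_hat = 2.0 * a_fb / alpha               # since A_FB = alpha * P / 2
```

With these inputs the estimator recovers P up to statistical fluctuations of order 1/sqrt(N).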
223

[en] SEISMIC ABSORPTION AND CORRECTION METHODS / [pt] ABSORÇÃO SÍSMICA E MÉTODOS DE CORREÇÃO

KARINE RIBEIRO PEREIRA 22 January 2016 (has links)
[en] This work analyzes the problem of absorption losses in seismic reflection data and tests three correction methods available in the literature. We use the absorption model presented by Romanelli Rosa, based on the concept of instantaneous frequency, and analyze the following correction methods: Q compensation, Varela's method, and Duarte's method, which is a recursive filter. We observe that Duarte's method is computationally faster than the others.
Moreover, the Fourier transform can make it faster still in cases where the recursion is stopped at a step M smaller than the number of samples N in the seismic data and greater than ln N. Finally, we test the performance of the methods on a marine seismic reflection line from the Sergipe-Alagoas Basin, provided by the Agência Nacional do Petróleo, Gás Natural e Biocombustíveis (ANP). The line was reprocessed for each method studied, with the correction of absorption losses applied before stacking. For comparison, the data was also processed without absorption correction. All tested methods increased the resolution of the subsurface geological layers compared with the uncorrected data, and Duarte's method again proved the fastest.
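A heavily simplified sketch of the Q-compensation idea (amplitude only, at a single reference frequency; the methods compared in the thesis are more elaborate, and f0 and Q below are assumed values): a wave attenuated as A(t) = exp(-π·f·t/Q) can have its losses restored by the time-variant gain exp(+π·f0·t/Q).

```python
import numpy as np

# Minimal amplitude-only Q compensation: apply an exponential, time-variant
# gain that cancels absorption losses at an assumed reference frequency f0.

def q_compensate(trace, dt, q, f0):
    """Apply the gain exp(pi * f0 * t / q) sample by sample."""
    t = np.arange(trace.size) * dt
    return trace * np.exp(np.pi * f0 * t / q)

# Synthetic example: three spikes attenuated by absorption are restored
# to unit amplitude by the compensating gain.
dt, q, f0 = 0.004, 80.0, 30.0
t = np.arange(500) * dt
trace = np.zeros(500)
idx = [50, 200, 400]
trace[idx] = np.exp(-np.pi * f0 * t[idx] / q)   # absorbed amplitudes
restored = q_compensate(trace, dt, q, f0)
```

The later a spike arrives, the larger the applied gain, which is also why real inverse-Q filtering must limit the gain to avoid amplifying late noise.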
224

[en] VISUALIZING FLOW IN BLACK-OIL RESERVOIRS USING VOLUMETRIC LIC / [pt] VISUALIZAÇÃO DE FLUXO EM RESERVATÓRIOS DE PETRÓLEO USANDO LIC VOLUMÉTRICO

ALLAN WERNER SCHOTTLER 13 December 2018 (has links)
[en] In the oil industry, clear and unambiguous visualization of vector fields resulting from numerical simulations of black-oil reservoirs is essential. In this dissertation, we study the use of line integral convolution (LIC) for imaging 3D steady vector fields and apply the results in a GPU-based volume rendering algorithm. Due to the density of information present in volume renderings of LIC images, we study the use of sparse textures as input to the LIC algorithm and apply transfer functions that assign color and opacity to scalar fields, in order to encode visual information in voxels and alleviate the occlusion problem. Additionally, we address the problem of encoding flow orientation, inherent to LIC, using an extension of the algorithm, Oriented LIC (OLIC). Finally, we present a method for animating the volume to emphasize the flow orientation further. We then compare results obtained with LIC and with OLIC.
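The core LIC idea can be shown in a minimal 2D sketch (the thesis works with a volumetric GPU variant; this is only the principle): smear a noise texture along streamlines of the vector field, so that pixel intensity becomes correlated along the flow direction.

```python
import numpy as np

# Minimal 2D line integral convolution: for each pixel, trace a short
# streamline in both directions and average the noise samples along it.

def lic_2d(vx, vy, noise, length=10):
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            acc, cnt = 0.0, 0
            for sign in (1.0, -1.0):         # trace both directions
                x, y = float(j), float(i)
                for _ in range(length):
                    yi, xi = int(round(y)) % h, int(round(x)) % w
                    acc += noise[yi, xi]
                    cnt += 1
                    u, v = vx[yi, xi], vy[yi, xi]
                    n = np.hypot(u, v)
                    if n < 1e-12:            # stop at critical points
                        break
                    x += sign * u / n        # unit step along the field
                    y += sign * v / n
            out[i, j] = acc / max(cnt, 1)
    return out

rng = np.random.default_rng(1)
noise = rng.random((32, 32))
vx = np.ones((32, 32))                       # uniform horizontal flow
vy = np.zeros((32, 32))
img = lic_2d(vx, vy, noise)                  # rows become smeared along x
```

Averaging along streamlines lowers the contrast of the input noise, which is exactly the visual cue that encodes flow direction (and motivates OLIC for encoding orientation as well).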
225

Lung-segmentering : Förbehandling av medicinsk data vid predicering med konvolutionella neurala nätverk / Lung-segmentation : A pre-processing technique for medical data when predicting with convolutional neural networks

Gustavsson, Robin, Jakobsson, Johan January 2018 (has links)
In 2017, the Swedish National Board of Health and Welfare (Socialstyrelsen) reported that lung cancer is the most common cancer-related cause of death among women in Sweden and the second most common among men. One way to find out whether a patient has lung cancer is for a doctor to study a computed tomography scan of the patient's lungs. This introduces the chance of human error and could have fatal consequences. To prevent such mistakes it is possible to use computers and advanced algorithms: a network model can be trained to detect details and deviations in a scan, a technique known as deep structural learning.
Creating such a model is both time-consuming and highly challenging, which underscores the importance of proper training, and many studies cover this subject. What these studies fail to emphasize is the significance of the preprocessing technique called lung segmentation. We therefore asked: how are the accuracy and loss of a convolutional network model affected when lung segmentation is applied to the model's training and test data? To answer this, several models were trained and evaluated on data with lung segmentation applied and on data without it. Comparing the models' results showed that the technique counteracts overfitting. We believe this study can ease further research within the same and similar problem areas.
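A hypothetical, minimal version of lung segmentation on a single 2D slice (real pipelines for CT volumes are more involved; the threshold and phantom below are assumptions): threshold the scan at an air-like intensity, then discard air connected to the image border, leaving the internal cavities as the lung mask.

```python
import numpy as np
from collections import deque

# Toy lung segmentation: air = intensity below threshold; flood-fill from the
# border removes outside air, keeping only enclosed (lung-like) cavities.

def segment_lungs(slice_hu, threshold=-400):
    air = slice_hu < threshold
    keep = air.copy()
    h, w = air.shape
    seen = np.zeros_like(air, dtype=bool)
    queue = deque()
    for i in range(h):                       # seed from all border pixels
        for j in (0, w - 1):
            if air[i, j] and not seen[i, j]:
                seen[i, j] = True
                queue.append((i, j))
    for j in range(w):
        for i in (0, h - 1):
            if air[i, j] and not seen[i, j]:
                seen[i, j] = True
                queue.append((i, j))
    while queue:                             # flood-fill outside air
        i, j = queue.popleft()
        keep[i, j] = False
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and air[ni, nj] and not seen[ni, nj]:
                seen[ni, nj] = True
                queue.append((ni, nj))
    return keep

# Synthetic phantom: body (0 HU) with two air cavities (-800 HU) inside,
# surrounded by outside air (-1000 HU).
img = np.full((40, 40), -1000.0)
img[5:35, 5:35] = 0.0                        # body
img[12:20, 10:18] = -800.0                   # "left lung"
img[12:20, 22:30] = -800.0                   # "right lung"
mask = segment_lungs(img)                    # True only inside the cavities
```

Masking out everything but the lungs is what lets the network spend its capacity on the region of interest instead of the surrounding anatomy.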
226

Improved Temporal Resolution Using Parallel Imaging in Radial-Cartesian 3D functional MRI

Ahlman, Gustav January 2011 (has links)
MRI (Magnetic Resonance Imaging) is a medical imaging method that uses magnetic fields to retrieve images of the human body. This thesis revolves around a novel acquisition method for 3D fMRI (functional MRI) called PRESTO-CAN, which uses a radial pattern to sample the (kx,kz)-plane of k-space (the frequency domain) and a Cartesian sampling pattern in the ky-direction. The radial pattern allows a denser sampling of the central parts of k-space, which contain the most basic frequency information about the structure of the recorded object. This yields a higher temporal resolution than other sampling methods, since fewer total samples are needed to retrieve enough information about how the object has changed over time. Since fMRI is mainly used for monitoring blood flow in the brain, increased temporal resolution means that fast changes in brain activity can be tracked more efficiently.
The temporal resolution can be further improved by reducing the scan time, which in turn can be achieved by applying parallel imaging. One such parallel imaging method is SENSE (SENSitivity Encoding). The scan time is reduced by decreasing the sampling density, which causes aliasing in the recorded images. SENSE removes the aliasing by exploiting the extra information provided by the multiple receiver coils, with differing sensitivities, used during the acquisition. By measuring the sensitivities of the respective receiver coils and solving an equation system involving the aliased images, it is possible to calculate how the images would have looked without aliasing.
In this master's thesis, SENSE has been successfully implemented in PRESTO-CAN. By using normalized convolution to refine the sensitivity maps of the receiver coils, images of satisfactory quality could be reconstructed when the k-space sampling rate was reduced by a factor of 2, and images of relatively good quality when it was reduced by a factor of 4. In this way, the thesis contributes to improving the temporal resolution of the PRESTO-CAN method.
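The equation system behind SENSE unfolding can be sketched for the simple Cartesian case with reduction factor R = 2 (not the radial PRESTO-CAN sampling of the thesis): each aliased pixel is the sensitivity-weighted sum of two true pixels half a field of view apart, and with several coils this gives an overdetermined linear system per pixel pair.

```python
import numpy as np

# Toy SENSE unfolding, R = 2: solve a small least-squares system per aliased
# pixel using the (here synthetic, exactly known) coil sensitivity maps.

def sense_unfold(aliased, sens):
    """aliased: (C, H/2, W) coil images, sens: (C, H, W) -> (H, W) image."""
    C, h2, w = aliased.shape
    out = np.zeros((2 * h2, w))
    for y in range(h2):
        for x in range(w):
            # sensitivity matrix for the two pixels folding onto (y, x)
            A = np.stack([sens[:, y, x], sens[:, y + h2, x]], axis=1)
            b = aliased[:, y, x]
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            out[y, x], out[y + h2, x] = m
    return out

rng = np.random.default_rng(2)
H, W, C = 16, 8, 4
truth = rng.random((H, W))
sens = 0.5 + rng.random((C, H, W))           # positive synthetic sensitivities
# Simulate R = 2 aliasing: the top and bottom halves fold onto each other.
aliased = (sens * truth)[:, : H // 2] + (sens * truth)[:, H // 2 :]
recon = sense_unfold(aliased, sens)          # recovers the true image
```

With exact sensitivities the system is consistent and the reconstruction is exact; in practice the quality of the measured sensitivity maps (hence the normalized-convolution refinement above) limits the result.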
227

Numerické metody registrace obrazů s využitím nelineární geometrické transformace / Numerical Method of Image Registration Using Nonlinear Geometric Transform

Stodola, Jakub Unknown Date (has links)
The goal of this thesis is to find a general nonlinear geometric transformation that compensates for irregular deformation of images so that they can be registered. The introductory part presents the necessary mathematical tools, especially convolution, correlation, and the Fourier transform. The next part presents the phase correlation method, followed by the algorithms used to find the geometric transformation. These algorithms are implemented in a computer program, which is included.
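The phase-correlation step the thesis builds on can be sketched directly: the cross-power spectrum of two translated images has unit magnitude and a pure phase ramp, so its inverse FFT is a delta peak at the translation. (The nonlinear registration in the thesis goes further; this sketch only recovers a global integer shift.)

```python
import numpy as np

# Phase correlation: locate the translation between two images as the peak
# of the inverse FFT of the normalized cross-power spectrum.

def phase_correlate(a, b):
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak                              # (dy, dx), modulo image size

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, 12), axis=(0, 1))
dy, dx = phase_correlate(shifted, img)       # recovers (5, 12)
```

For non-rigid deformations this is typically applied locally or iteratively, since a single global peak only models a pure translation.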
228

The Weighted Space Odyssey

Křepela, Martin January 2017 (has links)
The common topic of this thesis is boundedness of integral and supremal operators between weighted function spaces. The first type of result is a characterization of boundedness of a convolution-type operator between general weighted Lorentz spaces. Weighted Young-type convolution inequalities are obtained and an optimality property of the involved domain spaces is proved. The results also include an overview of basic properties of some new function spaces appearing in the proven inequalities. In the next part, product-based bilinear and multilinear Hardy-type operators are investigated. It is characterized when a bilinear Hardy operator inequality holds either for all nonnegative functions, or for all nonnegative and nonincreasing functions, on the real semiaxis. The proof technique is based on reducing the bilinear problems to linear ones to which known weighted inequalities apply. Further objects of study are iterated supremal and integral Hardy operators, a basic Hardy operator with a kernel, and applications of these to more complicated weighted problems and to embeddings of generalized Lorentz spaces. Several open problems related to missing cases of parameters are solved, thus completing the theory of the involved fundamental Hardy-type operators. / Article 9 is published in the thesis as a manuscript with the same title.
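The classical (unweighted) Hardy inequality underlying these operators can be illustrated numerically: for p > 1 and f ≥ 0, the L^p norm of the Hardy average (1/x)∫₀ˣ f is bounded by p/(p-1) times the L^p norm of f, with p/(p-1) the optimal constant. (The thesis characterizes far more general weighted and iterated versions; this only checks the basic bound.)

```python
import numpy as np

# Discretized check of Hardy's inequality for p = 2 and f(x) = exp(-x):
# || (1/x) int_0^x f ||_2  <=  (p/(p-1)) * || f ||_2  on (0, 20].

p = 2.0
x = np.linspace(1e-4, 20.0, 200_000)
dx = x[1] - x[0]
f = np.exp(-x)                          # smooth nonnegative test function

Hf = np.cumsum(f) * dx / x              # Hardy operator (1/x) * int_0^x f
lhs = (np.sum(Hf ** p) * dx) ** (1 / p)
rhs = (p / (p - 1)) * (np.sum(f ** p) * dx) ** (1 / p)
```

For this f the left side stays comfortably below the bound; the constant p/(p-1) is approached only by functions concentrating near a power-law profile.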
229

Semantic Segmentation : Using Convolutional Neural Networks and Sparse dictionaries

Andersson, Viktor January 2017 (has links)
The two main bottlenecks when using deep neural networks are data dependency and training time. This thesis proposes a novel method for initializing the weights of the convolutional layers in a convolutional neural network: using sparse dictionaries. A sparse dictionary optimized on domain-specific data can be seen as a set of intelligent feature-extracting filters, and this thesis investigates the effect of using such filters as kernels in the convolutional layers of the network. How do they affect the training time and final performance? The dataset used is the Cityscapes dataset, a library of 25000 labeled road-scene images. The sparse dictionary was acquired using the K-SVD method. The filters were added to two networks, one much deeper than the other, whose performance was tested individually. The results, presented for both networks, show that filter initialization is an important aspect that should be taken into consideration when training deep networks for semantic segmentation.
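The kernel-initialization idea can be sketched without external dependencies. The thesis uses K-SVD on domain data; as a stand-in, the sketch below builds a PCA dictionary (top SVD components of random image patches) and reshapes the atoms into conv-layer kernels. The patch count and sizes are assumptions for illustration.

```python
import numpy as np

# Build a patch dictionary from images and reshape its atoms into kernels
# suitable for initializing a convolutional layer (PCA stand-in for K-SVD).

def patch_dictionary_kernels(images, k=8, size=5, rng=None):
    rng = rng or np.random.default_rng(0)
    patches = []
    for img in images:
        for _ in range(200):                       # sample random patches
            y = rng.integers(0, img.shape[0] - size)
            x = rng.integers(0, img.shape[1] - size)
            patches.append(img[y : y + size, x : x + size].ravel())
    P = np.array(patches)
    P -= P.mean(axis=0)                            # center before SVD
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[:k].reshape(k, size, size)           # k kernels of size x size

rng = np.random.default_rng(4)
images = [rng.random((32, 32)) for _ in range(4)]
kernels = patch_dictionary_kernels(images, k=8, size=5, rng=rng)
```

The resulting atoms are orthonormal feature detectors adapted to the patch statistics; copying them into the first convolutional layer replaces random initialization with data-driven filters.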
230

Constructions de sous-variétés legendriennes dans les espaces de jets d'ordre un de fonctions et fonctions génératrices / Constructions of Legendrian submanifolds in spaces of 1-jets of functions and generating functions

Limouzineau, Maÿlis 21 October 2016 (has links)
This thesis concerns two types of fundamental objects of contact topology: Legendrian submanifolds in spaces of 1-jets of functions defined on a manifold M, denoted J1(M;R), and the closely related notion of generating functions. We study "operations" on these objects, that is, procedures that (generically) build new Legendrian submanifolds from old ones. In particular, we define the sum and convolution operations on Legendrian submanifolds, which are conjugate under a form of the Legendre transform. We show that these operations are faithfully reflected in the world of generating functions.
This second point of view leads us to ask what effect our operations have on the selector, a classical notion of symplectic geometry whose construction we adapt to this context. Finally, we focus on the three-dimensional space J1(R;R) and on Legendrian knots that admit a (global) generating function. This is a strong condition on Legendrian submanifolds, which we choose to study through several explicit constructions. We conclude by studying the naturally associated notions of Legendrian cobordism, in which the sum operation mentioned above plays a central role.
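The classical correspondence behind sum and convolution operations being conjugate under the Legendre transform can be checked numerically (this illustrates the underlying convex-analysis fact, not the thesis's contact-geometric construction): for convex functions, the Legendre transform turns pointwise sum into inf-convolution. For f = a·x²/2 one has f*(p) = p²/(2a), so (f+g)* = p²/(2(a+b)) must equal the inf-convolution of f* and g*.

```python
import numpy as np

# Check numerically that (f + g)* equals the inf-convolution of f* and g*
# for the quadratics f = a x^2/2, g = b x^2/2 (both sides equal p^2/(2(a+b))).

x = np.linspace(-10, 10, 4001)
p = np.linspace(-3, 3, 121)

def legendre(f_vals, p):
    """Discrete Legendre transform f*(p) = max_x (p*x - f(x))."""
    return np.max(p[:, None] * x[None, :] - f_vals[None, :], axis=1)

a, b = 1.0, 2.0
f = a * x**2 / 2
g = b * x**2 / 2

lhs = legendre(f + g, p)                     # transform of the sum

fstar = lambda q: q**2 / (2 * a)             # known closed-form transforms
gstar = lambda q: q**2 / (2 * b)
# inf-convolution (f* # g*)(p) = min_q f*(q) + g*(p - q), over the x grid
rhs = np.array([np.min(fstar(x) + gstar(pi - x)) for pi in p])
```

Both sides agree with p²/6 up to discretization error, matching the exact identity for a = 1, b = 2.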
