171

Study of analytical methods for electron track detection from heavy quark decays generated by sqrt(s)=8TeV pp collisions at ALICE / Estudo de métodos analíticos para detecção de traços de elétrons oriundos do decaimento de quarks pesados por colisões pp a raiz(s)=8TeV no ALICE

Marco Aurelio Luzio 03 May 2017 (has links)
A study of the use of ALICE's time-of-flight detector (TOF), time projection chamber (TPC), and electromagnetic calorimeter (EMCal), aimed at detecting and separating electrons and positrons (e±) originating from different sources, was carried out. To accomplish the objectives of the research, data gathered during the 2012 proton-proton (pp) collision run were used. At a center-of-mass energy of √s = 8 TeV, the collisions of the proton beams produce heavy quarks, charm and bottom, with lifetimes of approximately 10^-13 s and 10^-12 s, respectively. The e± generated through weak semileptonic heavy-flavor decays are of interest for studying these quarks, and this served as the motivation and incentive for the research carried out and described herein. The introduction of carefully selected cuts, with the purpose of partially separating the data collected in the three detectors, permitted an understanding of their effect on the results. Furthermore, because the TOF was not designed to separate e± from the other, heavier particles, only the general effects of introducing a simple cut on the β = v/c values were analyzed. The more specific cuts were applied only to the data from events detected by the TPC and the EMCal. A combination of cuts based on the particle's energy loss as a function of traveled distance (dE/dx) and on its energy-to-momentum ratio (E/p) was adopted to enable the separation process, thus allowing the isolation of e± from the other particles, namely π±, K±, and p/p̄. The analysis was performed for total particle momenta in the range 0 ≤ p ≤ 6 GeV/c. A comparison of the raw data with the results obtained by applying this procedure indicated a substantial increase in the e± yield and efficiency, reaching average values above 90% over the entire momentum range.
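The selection strategy described above combines a TOF β cut with TPC dE/dx and EMCal E/p requirements. The following is a minimal, hypothetical sketch (not taken from the thesis) of how such combined cuts might be applied to a table of reconstructed tracks; the field names, thresholds, and synthetic data are illustrative assumptions only.

```python
import numpy as np

def select_electron_candidates(tracks):
    """Illustrative e+/- selection on a structured array of reconstructed tracks.

    Assumed (hypothetical) fields:
      p        -- total momentum in GeV/c
      beta     -- TOF velocity ratio v/c
      nsig_e   -- TPC dE/dx deviation from the electron hypothesis, in sigma
      e_over_p -- EMCal cluster energy divided by track momentum
    All thresholds below are placeholders, not the values used in the thesis.
    """
    in_range  = (tracks["p"] >= 0.0) & (tracks["p"] <= 6.0)    # analysis momentum window
    tof_cut   = tracks["beta"] > 0.97                          # loose TOF beta cut
    tpc_cut   = np.abs(tracks["nsig_e"]) < 3.0                 # dE/dx compatible with e+/-
    emcal_cut = (tracks["e_over_p"] > 0.8) & (tracks["e_over_p"] < 1.2)  # E/p near unity
    return tracks[in_range & tof_cut & tpc_cut & emcal_cut]

# Example with synthetic tracks
rng = np.random.default_rng(0)
n = 1000
tracks = np.zeros(n, dtype=[("p", float), ("beta", float),
                            ("nsig_e", float), ("e_over_p", float)])
tracks["p"] = rng.uniform(0, 8, n)
tracks["beta"] = rng.uniform(0.8, 1.0, n)
tracks["nsig_e"] = rng.normal(0, 2, n)
tracks["e_over_p"] = rng.normal(1.0, 0.3, n)
print(len(select_electron_candidates(tracks)), "candidates selected")
```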
172

Modern Stereo Correspondence Algorithms : Investigation and Evaluation

Olofsson, Anders January 2010 (has links)
Many different approaches have been taken towards solving the stereo correspondence problem, and great progress has been made within the field during the last decade. This is mainly thanks to newly evolved global optimization techniques and better ways to compute pixel dissimilarity between views. The most successful algorithms are based on approaches that explicitly model smoothness assumptions made about the physical world, with image segmentation and plane fitting being two frequently used techniques. Within the project, a survey of state-of-the-art stereo algorithms was conducted and the theory behind them is explained. Techniques found interesting were implemented for experimental trials, and an algorithm aiming to achieve state-of-the-art performance was implemented and evaluated. For several cases, state-of-the-art performance was reached. To keep down the computational complexity, an algorithm relying on local winner-take-all optimization, image segmentation and plane fitting was compared against minimizing a global energy function formulated at the pixel level. Experiments show that the local approach can in several cases match the global approach, but that problems sometimes arise, especially when large areas that lack texture are present. Such problematic areas are better handled by the explicit modeling of smoothness in global energy minimization. Lastly, disparity estimation for image sequences was explored and some ideas on how to use temporal information were implemented and tried; these mainly relied on motion detection to determine which parts of a sequence of frames are static. Stereo correspondence for sequences is a rather new research field, and there is still a lot of work to be done.
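As a point of reference for the local approach discussed above, the following is a minimal sketch (not the algorithm evaluated in the thesis) of winner-take-all block matching with a sum-of-absolute-differences cost; the window radius and disparity range are arbitrary illustrative choices, and segmentation, plane fitting or global energy minimization would be layered on top of, or replace, this baseline.

```python
import numpy as np

def _box_sum(img, radius):
    """Sum of values in a (2*radius+1)^2 window around each pixel, via an integral image."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    c = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def wta_disparity(left, right, max_disp=32, radius=3):
    """Simplified winner-take-all block matching with an SAD cost.

    left, right: 2D grayscale float arrays of equal shape.
    Returns an integer disparity map; only the local baseline is implemented.
    """
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, : w - d])   # pixel dissimilarity at shift d
        costs[d][:, d:] = _box_sum(diff, radius)         # aggregate cost over the window
    return np.argmin(costs, axis=0)                      # winner takes all per pixel
```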
173

Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms / A priori de structure pour la segmentation multi-objet d'images médicales 3d par partition d'images et coupure de graphes

Kéchichian, Razmig 02 July 2013 (has links)
We develop a generic graph-cut-based semiautomatic multiobject image segmentation method, principally for use in routine medical applications ranging from tasks involving a few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can easily be reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image, which achieves a good balance between cluster compactness and boundary adherence criteria. Comprehensive qualitative and quantitative evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries, and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method against competing ones confirms its benefits in terms of runtime and quality of the produced partitions. Importantly, compared to segmentation directly on voxels, the clustering step improves both the overall runtime and the memory footprint of the segmentation process by up to an order of magnitude, virtually without compromising segmentation quality.
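For orientation, the sketch below shows the kind of labeling energy that graph-cut segmentation minimizes over a region adjacency graph, here with a plain Potts pairwise term rather than the shortest-path vicinity prior proposed in the thesis; the function and its arguments are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def segmentation_energy(labels, data_cost, edges, weights, lam=1.0):
    """Energy of a labelling under a standard Potts model on a region graph.

    labels    : (n_regions,) integer label assigned to each region (e.g. Voronoi cluster)
    data_cost : (n_regions, n_labels) cost of assigning each label to each region
    edges     : (n_edges, 2) index pairs of adjacent regions
    weights   : (n_edges,) edge weights (e.g. boundary length or contrast)
    A vicinity prior would replace the constant Potts penalty with label-pair-specific
    costs derived from shortest paths on the object adjacency graph.
    """
    unary = data_cost[np.arange(len(labels)), labels].sum()
    li, lj = labels[edges[:, 0]], labels[edges[:, 1]]
    pairwise = lam * np.sum(weights * (li != lj))   # penalise label discontinuities
    return unary + pairwise
```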
174

[en] CONVEX ANALYSIS AND LIFT-AND-PROJECT METHODS FOR INTEGER PROGRAMMING / [es] ANÁLISIS CONVEXA Y MÉTODOS LIFT-AND-PROJECT PARA PROGRAMACIÓN ENTERA / [pt] ANÁLISE CONVEXA E MÉTODOS LIFT-AND-PROJECT PARA PROGRAMAÇÃO INTEIRA

PABLO ANDRES REY 06 August 2001 (has links)
Algorithms for general 0-1 mixed integer programs can be successfully developed by using lift-and-project methods to generate cuts. The cuts are generated by solving a cut-generation program that depends on a certain normalization. From a theoretical point of view, the good numerical behavior of these cuts is not yet completely understood, especially concerning the choice of normalization. We consider a general normalization given by an arbitrary closed convex set, extending the theory developed in the 1990s. We present a theoretical framework covering a wide group of already known normalizations; we also introduce new normalizations and analyze the properties of the associated cuts. In this work, we also propose a new updating rule for the prox parameter of a variant of the proximal bundle methods, making use of all the information available at each iteration. Proximal bundle methods are well known for their efficiency in nondifferentiable optimization. Finally, we introduce a way to eliminate redundant solutions (due to geometric symmetries) of combinatorial integer programs. This can be done by using information about the problem's symmetry to generate inequalities which, added to the formulation of the problem, eliminate this symmetry without affecting the solution of the integer problem.
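For context, one common form of the lift-and-project cut-generation program for a fractional point x̄ of the relaxation P = {x : Ax ≥ b, 0 ≤ x ≤ 1} (with the bounds included among the rows of Ax ≥ b) and the disjunction x_j ≤ 0 ∨ x_j ≥ 1 is sketched below, using the classical multiplier normalization; the thesis studies replacing this last constraint with an arbitrary closed convex normalization set.

$$
\begin{aligned}
\min_{\alpha,\,\beta,\,u,\,u_0,\,v,\,v_0}\quad & \alpha^{\top}\bar{x} - \beta\\
\text{s.t.}\quad & \alpha = A^{\top}u - u_0\,e_j, \qquad \beta = b^{\top}u,\\
& \alpha = A^{\top}v + v_0\,e_j, \qquad \beta = b^{\top}v + v_0,\\
& u \ge 0,\ v \ge 0,\ u_0 \ge 0,\ v_0 \ge 0,\\
& \textstyle\sum_i u_i + u_0 + \sum_i v_i + v_0 = 1 \quad \text{(normalization)}.
\end{aligned}
$$

Any feasible (α, β) yields a cut α⊤x ≥ β valid for both sides of the disjunction, and a negative optimal value certifies that the cut separates x̄.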
175

Essays on business taxation

Zeida, Teega-Wendé Hervé 09 1900 (has links)
This thesis explores the macroeconomic and distributional effects of taxation in the U.S. economy. The first three chapters consider the interplay between entrepreneurship and wealth distribution, while the last one discusses the trade-offs involved in financing a corporate tax cut under revenue neutrality. Specifically, Chapter 1 provides evidence, using the Panel Study of Income Dynamics (PSID), that occupation-specific human capital, or business experience, is quantitatively important in explaining income and wealth disparities among individuals over their life cycle. To capture these data patterns, I build on the Cagetti and De Nardi (2006) occupational choice model, modified to feature life-cycle dynamics. I also introduce managerial skill accumulation, which leads entrepreneurs to become more productive with experience. I then calibrate two versions of the model (with and without accumulation of business experience) to the same U.S. data. The results show that the model with the business-experience margin fits the data more closely. Chapter 2's research question is timely given the recent tax reform enacted in the US, a major change to the tax code since the 1986 Tax Reform Act. The Tax Cuts and Jobs Act (TCJA) of December 2017 significantly altered how business income is taxed in the US. I use the dynamic general equilibrium model of entrepreneurship developed in Chapter 1 to provide a quantitative assessment of the macroeconomic effects of the TCJA, both in the short run and in the long run. The TCJA is modeled through its three key provisions: a new 20-percent deduction for pass-throughs, a drop in the statutory corporate tax rate from 35% to 21%, and a reduction of the top marginal individual tax rate from 39.6% to 37%. I find that the economy experiences a GDP growth rate of 0.90% over a ten-year fiscal window, and the average capital stock increases by 2.12%. These results are consistent with estimates by the Congressional Budget Office and the Joint Committee on Taxation. With temporary provisions, the TCJA delivers a reduction in wealth and income inequality, but the opposite occurs once the provisions are made permanent. In both scenarios, the population suffers a welfare loss and expresses little support for the reform. Chapter 3 answers the normative question: should entrepreneurs be taxed differently? Accordingly, I quantitatively investigate the desirability of occupation-based taxation in the entrepreneurship model of Chapter 1, with transitional cohorts explicitly taken into account. The main experiment moves from a single federal progressive tax on both labor income and business profit at the individual level to a differential tax regime in which business income faces a proportional tax rate while labor income remains subject to the progressive scheme. I find that a proportional tax rate of 40% on business income is optimal. More generally, the optimal tax rate varies between 15% and 50%, increasing with the planner's aversion to inequality and decreasing with its relative valuation of future generations' welfare. In the context of business tax reform, Chapter 4 assesses the revenue-neutral trade-offs involved in financing a corporate tax cut. To meet revenue neutrality, the policymaker uses three instruments to balance the government budget: the labor income tax, the dividend tax, and the capital gains tax. I construct a parsimonious general equilibrium model to derive balanced-budget fiscal multipliers associated with corporate tax reform. Using a standard calibration of the U.S. economy, I show that both the labor income tax and dividend tax multipliers are negative, suggesting a trade-off between a corporate tax cut and these two tax rates. On the other hand, the multiplier related to the capital gains tax is positive, which suggests coordinating a double cut in both the corporate and capital gains tax rates. Moreover, the welfare gains across the different scenarios are mixed.
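As a purely mechanical illustration of the three TCJA provisions listed above (not of the equilibrium model itself), the toy computation below compares statutory tax bills on a unit of business profit before and after the reform; the profit level is arbitrary and behavioral responses, credits, and other deductions are ignored.

```python
def corporate_tax(profit, rate):
    """Statutory corporate tax on profit (no credits or other deductions modelled)."""
    return rate * profit

def pass_through_tax(profit, marginal_rate, deduction=0.0):
    """Individual-level tax on pass-through business income with a fractional deduction."""
    return marginal_rate * (1.0 - deduction) * profit

profit = 100.0  # arbitrary units

# Pre-TCJA: 35% statutory corporate rate, 39.6% top individual rate, no deduction
print(corporate_tax(profit, 0.35), pass_through_tax(profit, 0.396))
# Post-TCJA: 21% corporate rate, 37% top individual rate, 20% pass-through deduction
print(corporate_tax(profit, 0.21), pass_through_tax(profit, 0.37, deduction=0.20))
# -> 35.0 39.6  before the reform, then 21.0 29.6 after
```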
176

Phase Unwrapping MRI Flow Measurements / Fasutvikning av MRT-flödesmätningar

Liljeblad, Mio January 2023 (has links)
Magnetic resonance images (MRI) are acquired by sampling the current induced by an electromotive force (EMF). The EMF is induced by the flux of the net magnetic field from coherent nuclear spins with intrinsic magnetic dipole moments; the spins are excited by (non-ionizing) radio-frequency electromagnetic radiation in conjunction with stationary and gradient magnetic fields. These images reveal detailed internal morphological structures as well as enable functional assessment of the body, which can help diagnose a wide range of medical conditions. The aim of this project was to unwrap phase-contrast cine magnetic resonance images, targeting the great vessels. Velocity is encoded in the angular phase, which is limited to the range [-π, π] radians and scaled by the maximum encoded velocity (venc); if the venc is set too low by the MRI personnel, the measurements may alias. Aliased images yield inaccurate cardiac stroke volume measurements and therefore require acquisition retakes. The retakes might be avoided if the images could instead be unwrapped in post-processing. Using computer vision, the angular phase of flow measurements, as well as of retrospectively wrapped image sets, was unwrapped. The performance of three algorithms was assessed: the Laplacian algorithm, sequential tree-reweighted message passing, and iterative graph cuts. The associated energy formulation was also evaluated. Iterative graph cuts proved the most robust with respect to the number of wraps, and its energies correlated with the errors. This thesis shows that there is potential to reduce the number of acquisition retakes, although the MRI personnel still need to verify that the unwrapping performance is satisfactory. Given the promising results of iterative graph cuts, it would next be valuable to investigate the performance of a globally optimal surface estimation algorithm.
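As a minimal illustration of the aliasing problem described above (not of the Laplacian, message-passing, or graph-cut methods evaluated in the thesis), the sketch below encodes a synthetic velocity profile as phase, lets it wrap where the velocity exceeds an assumed venc, and recovers it with a simple 1D numpy unwrap; the venc value and velocity profile are made up.

```python
import numpy as np

def phase_to_velocity(phase, venc):
    """Map phase in [-pi, pi] radians to velocity: v = venc * phase / pi."""
    return venc * phase / np.pi

# Synthetic example: true velocities exceed venc, so the encoded phase wraps.
venc = 100.0                              # cm/s, deliberately chosen too low
true_v = np.linspace(0.0, 180.0, 50)      # cm/s along a profile through a vessel
phase = np.angle(np.exp(1j * np.pi * true_v / venc))     # wrapped phase in (-pi, pi]

aliased_v   = phase_to_velocity(phase, venc)             # wrong where |v| > venc
unwrapped_v = phase_to_velocity(np.unwrap(phase), venc)  # 1D unwrap along the profile

print(np.max(np.abs(aliased_v - true_v)))    # large aliasing error
print(np.max(np.abs(unwrapped_v - true_v)))  # ~0 after unwrapping
```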
177

Graphic revolt! : Scandinavian artists' workshops, 1968-1975 : Røde Mor, Folkets Ateljé and GRAS

Glomm, Anna Sandaker January 2012 (has links)
This thesis examines the relationship between the three artists' workshops Røde Mor (Red Mother), Folkets Ateljé (The People's Studio) and GRAS, which worked between 1968 and 1975 in Denmark, Sweden and Norway. Røde Mor was from the outset an articulated Communist graphic workshop loosely organised around collective exhibitions. It developed into a highly productive and professionalised group of artists that made posters on commission for political and social movements, and its artists developed a familiar and popular artistic language characterised by imaginative realism and socialist imagery. Folkets Ateljé, which has never been studied before, was a close-knit underground group which created quick and immediate responses to current political issues. The group was founded on the example of the Atelier Populaire in France and is strongly related to its practices; within this comparative study it is the group that comes closest to collective practices around 1968 outside Scandinavia, namely the democratic assembly. The silkscreen workshop GRAS stemmed from the idea of economic and artistic freedom; although socially motivated and politically involved, the group never implemented any doctrine for participation. The aim of this transnational study is to reveal common denominators in the three groups' poster art as it was produced in connection with a Scandinavian experience of 1968. By '1968' is meant the period from the late 1960s to the end of the 1970s. The thesis examines the socio-political conditions under which the groups flourished and shows how they operated in conjunction with the political environment of 1968. It explores the relationship between political movements and the collective art-making process as it appeared in Scandinavia. To present a comprehensible picture of the impact of 1968 on these groups, their artworks, manifestos and activities outside of the collective space are discussed. What emerges is that even though these groups had very similar ideological stances, their posters and techniques differ; this affected the artists involved to different degrees, yet made it possible to express the same political goals. This is suggested to be linked to the Scandinavian social democracies and the common experience of the radicalisation that took place mostly in the aftermath of 1968 proper. Comparing the three groups has shown that, even under the same socio-political circumstances and with the same ideological stance, divergent styles developed to address these issues.
