171.
A global strategic financial analysis of the luxury retail industry
LaVan, Lauren, 01 May 2013 (has links)
A global strategic financial analysis of the luxury retail industry was conducted. The research entailed comprehensive analyses and forecasts of the global economy, the luxury retail industry and four of the most prominent multinational luxury goods firms in the world: Coach, Michael Kors, Tiffany & Co. and LVMH Moët Hennessy, which market some of the world's finest personal luxury goods, from handbags, clothing and accessories to diamonds, jewelry, watches, fragrances, cosmetics and wines. The macroeconomic analysis focused on factors pertinent to the luxury goods industry, such as: (1) the lasting effects of the global financial crisis, the gradual emergence from the Great Recession and the impact these conditions have had on consumer spending and confidence; (2) the generational shift of consumers from the retiring baby boomers to the technologically savvy Generation Z, with their unique demands for products as well as experiences; and (3) the growth in demand from emerging economies, especially China, whose consumers are the world's top luxury nationality, accounting for 25% of all luxury purchases worldwide. Comprehensive financial ratio analyses, SWOT assessments, technical trends and forecasts of revenues, earnings and share prices for the four companies resulted in recommendations to investors and advice to the top management of the four firms. Luxury retail is a fascinating, recession-resilient industry that is expected to reach €1 trillion within the next five years. However, regardless of how successful firms in this industry have been in the past, to survive and continue to succeed it is imperative that they remain flexible and adaptable in this ever-changing world.
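The ratio analysis mentioned above can be illustrated in a few lines. The figures and the particular selection of ratios below are hypothetical examples, not data from the study; only the ratio definitions themselves are standard.

```python
def ratios(revenue, net_income, assets, equity,
           current_assets, current_liabilities):
    """Return a few of the standard ratios such an analysis relies on."""
    return {
        "net_margin": net_income / revenue,                     # profitability on sales
        "roa": net_income / assets,                             # return on assets
        "roe": net_income / equity,                             # return on equity
        "current_ratio": current_assets / current_liabilities,  # liquidity
    }

# Hypothetical firm: 100M revenue, 10M net income, 200M assets,
# 50M equity, 30M current assets, 15M current liabilities.
r = ratios(100.0, 10.0, 200.0, 50.0, 30.0, 15.0)
print(r["roe"], r["current_ratio"])  # 0.2 2.0
```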
172.
Neural processing of chemosensory information from the locust ovipositor
Tousson, Ehab, 03 May 2001 (has links)
No description available.
173.
Iterative Methods for the Reconstruction of Tomographic Images with Unconventional Source-detector Configurations
Mukkananchery, Abey, 01 January 2005 (has links)
X-ray computed tomography (CT) plays a critical role in current medical practice for the evaluation of patients, particularly in the emergency department and intensive care units. Expensive high-resolution stationary scanners are available in the radiology departments of most hospitals. In many situations, however, a small, inexpensive, portable CT unit would be of significant value. Several mobile or miniature CT scanners are available, but none of these systems has the range, flexibility or overall physical characteristics of a truly portable device. The main challenge is the design of a geometry that optimally trades image quality for system size. The goal of this work has been to develop analysis tools to help simulate and evaluate novel system geometries. To test the tools we have developed, three geometries are considered in the thesis: parallel projections, clam-shell and parallel plate. The parallel projections geometry is commonly used in the reconstruction of images by the filtered back projection technique. A clam-shell structure consists of two semi-cylindrical braces that fold together over the patient's body and connect at the top. A parallel plate structure uses two fixed flat or curved plates on either side of the patient's body and images with fixed sources and detectors that are gated on and off so as to step the X-ray field through the body. The parallel plate geometry was found to be the least reliable of the three geometries investigated, and the parallel projections geometry the most reliable. For the targeted application, the clam-shell geometry seems to be the solution with the best chances of short-term success. We implemented the Van Cittert iterative technique for the reconstruction of images from projections.
The thesis discusses a number of variations on the algorithm, such as the use of the Conjugate Gradient Method, several choices for the initial guess, and the incorporation of a priori information to handle the reconstruction of images with metal inserts.
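The Van Cittert scheme named above can be sketched in a few lines. This is a hedged, generic illustration of the classic iteration f_{k+1} = f_k + β(g − H f_k), not the thesis implementation; the toy system matrix H, data g and step β below are assumptions for demonstration.

```python
import numpy as np

def van_cittert(g, H, n_iter=200, beta=1.0):
    """Classic Van Cittert iteration: f <- f + beta * (g - H f).

    g : measured data, H : square system/blurring matrix (assumed known).
    Converges when the eigenvalues of H lie in (0, 2/beta).
    """
    f = g.copy()                      # a common initial guess: the data itself
    for _ in range(n_iter):
        f = f + beta * (g - H @ f)    # add back the unexplained residual
    return f

# Toy example: a spike train blurred by a small smoothing kernel.
n = 32
H = 0.5 * np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
f_true = np.zeros(n); f_true[10] = 1.0; f_true[20] = 2.0
g = H @ f_true
f_rec = van_cittert(g, H)
print(np.max(np.abs(g - f_true)), np.max(np.abs(f_rec - f_true)))
```

The smoothing matrix here has eigenvalues in (0, 1), so the iteration is contractive; high-frequency components are recovered more slowly, which is the usual behavior of this method.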
174.
THRESHOLD
Finos, Marisa, 06 May 2014 (has links)
Are the threshold experiences encountered between waking and sleeping similar to the liminal space between life and death? The sights, sounds, and bodily sensations experienced in the unconscious void blur the lines between the unknown and our conscious existence. Using the figure, I portray how the body might exist in these transitional moments. Through my investigations into sleep paralysis, dream states, and notions of an afterlife and the soul, I explore how we perceive the self in these altered states of consciousness.
175.
[18F]Flutemetamol PET image processing, visualization and quantification targeting clinical routine
Lilja, Johan, January 2017 (has links)
Alzheimer’s disease (AD) is the leading cause of dementia, alone responsible for 60-70% of all cases. Though AD shares clinical symptoms with other types of dementia, its hallmarks are the abundance of extracellular deposits of β-amyloid (Aβ) plaques, intracellular neurofibrillary tangles of hyperphosphorylated tau protein and synaptic depletion. The onset of these physiological hallmarks may precede clinical symptoms by a decade or more, and once clinical symptoms occur it may be difficult to separate AD from other types of dementia on clinical grounds alone. Since the introduction of radiolabeled Aβ tracer substances for positron emission tomography (PET) imaging, it is possible to image Aβ depositions in vivo, strengthening confidence in the diagnosis. Because the accumulation of Aβ may begin years before the first clinical symptoms appear, and may even reach a plateau, Aβ PET imaging may not be suitable for monitoring disease progression. However, a negative scan may be used to rule out AD as the underlying cause of the clinical symptoms. It may also be used to evaluate the risk of developing AD in patients with mild cognitive impairment (MCI) and to monitor potential effects of anti-amyloid drugs. Though currently validated for dichotomous visual assessment only, there is evidence to suggest that quantification of Aβ PET images may reduce inter-reader variability and aid in the monitoring of treatment effects of anti-amyloid drugs. The aim of this thesis was to refine existing methods, and develop new ones, for processing, quantification and visualization of Aβ PET images to aid in the diagnosis and monitoring of potential treatment of AD in clinical routine. Specifically, the focus has been on fully automatic quantification and visualization of a patient’s Aβ PET image, presented in a uniform way that shows how it relates to what is considered normal.
To achieve this aim, the following were developed and evaluated: registration algorithms that bring a patient’s Aβ PET image into a common stereotactic space while avoiding the bias of the different uptake patterns of Aβ- and Aβ+ images; a suitable region atlas; and a 3-dimensional stereotactic surface projections (3D SSP) method capable of projecting cortical activity onto the surface of a 3D model of the brain without sampling white matter. The material for development and testing comprised 724 individual amyloid PET brain images from six distinct cohorts, ranging from healthy volunteers to definite AD. The new methods could be implemented in a fully automated workflow and were found to be highly accurate when tested against standards of truth, such as regional uptake defined from PET images co-registered to magnetic resonance images, post-mortem histopathology and the visual consensus diagnosis of imaging experts.
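As a rough illustration of what region-based quantification of an amyloid PET image involves, the sketch below computes regional SUVRs (uptake normalized to a reference region) and z-scores against a normal database. The toy labels, the cerebellar reference choice and the normal-database values are assumptions for illustration; this is not the pipeline developed in the thesis.

```python
import numpy as np

def regional_suvr(pet, labels, region_ids, ref_id):
    """Standardized uptake value ratio per region:
    mean uptake in the region divided by mean uptake in a reference region."""
    ref_mean = pet[labels == ref_id].mean()
    return {r: pet[labels == r].mean() / ref_mean for r in region_ids}

def z_scores(suvr, normal_mean, normal_sd):
    """How many standard deviations each regional SUVR lies above the mean
    of a (hypothetical) amyloid-negative normal database."""
    return {r: (suvr[r] - normal_mean[r]) / normal_sd[r] for r in suvr}

# Toy 1-D "image": label 1 = a cortical ROI, label 2 = cerebellar reference.
pet = np.array([2.0, 2.2, 1.0, 1.0])
labels = np.array([1, 1, 2, 2])
suvr = regional_suvr(pet, labels, region_ids=[1], ref_id=2)
z = z_scores(suvr, normal_mean={1: 1.3}, normal_sd={1: 0.2})
```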
176.
Fostering of spatial imagination of students using three projections - different approaches in Czech and Danish education
Petráková, Barbora, January 2014 (has links)
TITLE: Fostering of spatial imagination of students using three projections - different approaches in Czech and Danish education SUMMARY: This master thesis describes the different conceptions of teaching geometry in selected Czech and Danish mathematics textbooks and the possibilities for fostering spatial imagination at elementary school. Using the example of one particular class from an elementary school in Prague, we take an inside view of the advantages and disadvantages of using three projections in mathematics education and their influence on the development of spatial imagination. In this class, a diagnostic interview was conducted to determine the level of spatial imagination development and to find out how these children can work with three projections. KEY WORDS: pupils of primary school age, spatial imagination, three projections, conceptions of geometry teaching
177.
Möbius Transformations and Riemann Sphere Projections
Raiz, Caio Eduardo Martins, 06 November 2018 (has links)
In this dissertation we explore the geometric effects of Möbius transformations on C using projections onto the Riemann sphere. As an application, we present the action of some transformations applied to conics in the plane. A didactic activity about Möbius transformations using GeoGebra, aimed at high school students, is also presented.
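The two objects the dissertation combines can be sketched directly: a Möbius transformation acting on C, and the stereographic projection that lifts the result onto the Riemann sphere. A minimal illustration, assuming the unit sphere with projection from the north pole (0, 0, 1):

```python
import numpy as np

def mobius(z, a, b, c, d):
    """Möbius transformation T(z) = (a z + b) / (c z + d), with ad - bc != 0."""
    return (a * z + b) / (c * z + d)

def to_sphere(z):
    """Stereographic projection of z = x + iy onto the unit Riemann sphere:
    (2x, 2y, |z|^2 - 1) / (|z|^2 + 1)."""
    x, y = z.real, z.imag
    d = x * x + y * y + 1.0
    return np.array([2 * x / d, 2 * y / d, (d - 2.0) / d])

w = mobius(2 + 0j, 0, 1, 1, 0)   # the inversion z -> 1/z sends 2 to 0.5
p = to_sphere(w)                 # its image on the sphere
```

Every projected point has unit norm, and the origin of C maps to the south pole (0, 0, -1), which is a quick sanity check on the formula.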
178.
State estimation in power systems: measurement error composition
Piereti, Saulo Augusto Ribeiro, 10 August 2011 (has links)
Bretas et al. (2009) proved, using a geometric interpretation, that the measurement error can be decomposed into a detectable and an undetectable component. They also demonstrated that the methodologies used so far for gross error (GE) processing consider only the detectable component of the error; thus, depending on the magnitude of the undetectable component, those methods may fail. In view of this, a new methodology for processing measurements with GEs is proposed in this work. It is obtained by decomposing each measurement error into two components: the first, orthogonal to the range space of the Jacobian matrix, whose magnitude equals the measurement residual; the other, contained in that space, which does not contribute to the measurement residual. The ratio between the norms of those components, here called the Innovation Index (II), measures the new information a measurement carries with respect to the other measurements. Using the II, a threshold value (TV) is computed for each measurement, from which one can declare a measurement suspicious of containing a GE. A filtering index (FI) is then proposed to identify, among the suspicious measurements, the one most likely to contain a GE. The IEEE 14-bus and 30-bus systems and the reduced 45-bus system of southern Brazil are used to demonstrate the accuracy and efficiency of the proposed methodology. The tests performed were: i) the GE detection level test, which consists in finding the minimum GE value that can be detected using the measurement TV; ii) a test in which a GE of 10 standard deviations is added to each measurement, one at a time, the measurement FI is used to identify which measurement contains the error, and the erroneous measurement is then corrected using the composed normalized error (CNE); iii) the simple GE test.
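The geometry behind the Innovation Index can be sketched numerically. Under the usual linearized model z = Hx + e, the hat matrix projects onto the range space of H; for a unit error on measurement i, the ratio between the norms of the undetectable (range-space) and detectable (residual-space) components reduces to sqrt((1 - S_ii)/S_ii), where S is the residual sensitivity matrix. This is an illustrative reading of the index in Bretas et al. (2009), not the thesis code; the unweighted least-squares form is an assumption.

```python
import numpy as np

def innovation_indices(H):
    """II_i = ||undetectable part|| / ||detectable part|| of a unit error
    on measurement i, for the linear model z = H x + e (unweighted)."""
    m = H.shape[0]
    K = H @ np.linalg.inv(H.T @ H) @ H.T   # hat matrix: projector onto range(H)
    S = np.eye(m) - K                      # residual sensitivity matrix
    s = np.clip(np.diag(S), 1e-12, 1.0)    # guard critical (zero-residual) rows
    return np.sqrt((1.0 - s) / s)

# Two identical measurements of one state variable: each is exactly half
# redundant, so both Innovation Indices equal 1.
H = np.array([[1.0], [1.0]])
ii = innovation_indices(H)
```

A large II means most of a measurement's error lies in the range space and never shows up in the residual, which is precisely why residual-only GE tests can miss it.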
179.
An incremental space for visual mining of dynamic document collections
Pinho, Roberto Dantas de, 05 June 2009 (has links)
Visual representations are often adopted to explore document collections, assisting in knowledge extraction and avoiding the thorough analysis of thousands of documents. Document maps present individual documents in visual spaces in such a way that their placement reflects similarity relations or connections between them. Building these maps requires, among other tasks, placing each document and identifying interesting areas or subsets. A current challenge is to visualize dynamic data sets. In information visualization, adding and removing data elements can strongly impact the underlying visual space. That can prevent a user from preserving a mental map that could assist her/him in understanding the content of a growing collection of documents or tracking changes in the underlying data set. This thesis presents a novel algorithm to create dynamic document maps, capable of maintaining a coherent disposition of elements, even for completely renewed sets. The process is inherently incremental, has low complexity and places elements on a 2D grid, analogous to a chess board. Consistent results were obtained in comparison with (non-incremental) multidimensional scaling solutions, even when applied to domains other than document collections. Moreover, the corresponding visualization is not susceptible to occlusion. To assist users in identifying interesting subsets, a topic extraction technique based on association rule mining was also developed. Together, they create a visual space where topics and interesting subsets are highlighted and constantly updated as the data set changes.
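The incremental idea can be sketched as follows. This is a hedged stand-in, not the thesis algorithm itself: each arriving item is dropped into the free grid cell closest to its most similar already-placed item, so earlier placements are never disturbed and no two items share a cell.

```python
import numpy as np

def place_incremental(docs, grid_size, sim):
    """Place items one at a time on a grid_size x grid_size board.
    docs : list of feature vectors; sim : similarity function."""
    pos = {}                                    # doc index -> (row, col)
    free = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    for i, d in enumerate(docs):
        if not pos:                             # first item: center of the board
            target = (grid_size // 2, grid_size // 2)
        else:                                   # anchor at most similar neighbor
            best = max(pos, key=lambda j: sim(docs[j], d))
            target = pos[best]
        # Nearest free cell to the anchor; one item per cell -> no occlusion.
        cell = min(free, key=lambda rc: (rc[0] - target[0]) ** 2
                   + (rc[1] - target[1]) ** 2)
        free.remove(cell)
        pos[i] = cell
    return pos

cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
docs = [np.array([1.0, 0.0]), np.array([1.0, 0.1]), np.array([0.0, 1.0])]
placement = place_incremental(docs, grid_size=8, sim=cos)
```

Adding a fourth document moves nothing that is already placed, which is the property that preserves the user's mental map.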
180.
Uncertainties in land change modeling
Krüger, Carsten, 13 May 2016 (has links)
Human influence has led to substantial changes to the Earth’s surface. Land change models are widely applied to analyze land change processes and to give recommendations for decision-making. These models are affected by uncertainties which have to be taken into account when interpreting their results. However, approaches that examine different sources of uncertainty, their interdependencies and their influence on projected land change are rarely applied. The first objective of this thesis is therefore to develop a systematic approach that identifies major sources of uncertainty and traces their propagation to the resulting land change map. Another challenge in land change modeling is estimating the reliability of land change projections when no reference data are available. Bayesian Belief Networks were identified as a useful technique to reach the first objective. Moreover, the modeling steps of model structure definition, data selection and data preprocessing were found to be relevant sources of uncertainty. To address the second objective, a set of probability-based measures was developed. They quantify uncertainty by means of a single projected land change map, without using a reference map. These measures additionally make it possible to separate uncertainty into its spatial and quantitative components, which is especially useful in spatial applications such as land change modeling. However, even a completely certain model can be wrong and therefore useless. An approach is therefore suggested that estimates the relationship between disagreement and uncertainty in known time steps, so that this relationship can be used in future time steps. Together, these approaches give important information for understanding the reliability of modeled future development paths of land change.
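As an illustration of a probability-based uncertainty measure that needs no reference map, the sketch below computes the normalized Shannon entropy of each cell's predicted class probabilities and averages it over the map. This is a generic stand-in for the measures developed in the thesis, whose exact definitions are not reproduced here.

```python
import numpy as np

def mean_map_entropy(prob):
    """prob: (n_cells, n_classes) array of predicted class probabilities.
    Normalized Shannon entropy per cell (0 = certain, 1 = maximally
    uncertain), averaged over the whole map."""
    p = np.clip(prob, 1e-12, 1.0)                        # avoid log(0)
    h = -(p * np.log(p)).sum(axis=1) / np.log(prob.shape[1])
    return float(h.mean())

certain = np.array([[1.0, 0.0], [0.0, 1.0]])   # model is sure everywhere
unsure = np.array([[0.5, 0.5], [0.5, 0.5]])    # model is guessing everywhere
```

As the caveat above notes, a map can score near zero on such a measure and still be entirely wrong, which is why relating uncertainty to observed disagreement in known time steps matters.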