551 |
Arithmetic Aspects of Point Counting and Frobenius Distributions. Shieh, Yih-Dar, 17 December 2015.
This thesis consists of two parts. Part 1 studies the decomposition of cohomology groups induced by automorphisms for a family of non-hyperelliptic genus 3 curves with an involution, and investigates the benefit of such a decomposition when computing Frobenius with Kedlaya's algorithm. The involution of a curve C in this family induces a degree-2 map to an elliptic curve E, which gives a decomposition (up to isogeny) of the Jacobian of C into E and an abelian surface A, from which the Frobenius on C can be recovered. On E, the characteristic polynomial of Frobenius can be computed with an algorithm that is efficient and fast in practice. By working with the cohomology subgroup V of $H^1_{MW}(C)$, we get a constant-factor speed-up over a straightforward application of Kedlaya's method to C. To my knowledge, this is the first use in Kedlaya's algorithm of a decomposition of the cohomology induced by an isogeny decomposition of the Jacobian. In Part 2, I propose a new approach to Frobenius distributions and Sato-Tate groups that uses the orthogonality relations of the irreducible characters of the compact Lie group USp(2g) and its subgroups. To this end, I first present a simple method to compute the irreducible characters of USp(2g), and then develop an algorithm based on the Brauer-Klimyk formula. The advantages of this new approach to Sato-Tate groups are examined in detail; the results show that the error grows slowly. I also use the family of genus 3 curves studied in Part 1 as a case study. The analyses and comparisons show that the character-theory approach is a more intrinsic and very promising tool for studying Sato-Tate groups.
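As a concrete illustration of the character-orthogonality idea (not code from the thesis), the sketch below works in the simplest case g = 1, where USp(2) = SU(2): the irreducible characters are chi_n(theta) = sin((n+1)theta)/sin(theta), the Sato-Tate measure is (2/pi) sin^2(theta) dtheta, and both orthogonality and equidistribution can be checked numerically. All parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Irreducible characters of SU(2) = USp(2) on the conjugacy class with
# eigenvalues e^{+i theta}, e^{-i theta}: chi_n(theta) = sin((n+1)theta)/sin(theta).
def chi(n, theta):
    return np.sin((n + 1) * theta) / np.sin(theta)

# Sato-Tate (Weyl) measure on [0, pi]: (2/pi) sin^2(theta) dtheta.
def st_weight(theta):
    return (2.0 / np.pi) * np.sin(theta) ** 2

# Orthogonality relations: <chi_m, chi_n> should be the Kronecker delta.
for m in range(4):
    row = [quad(lambda t: chi(m, t) * chi(n, t) * st_weight(t), 0, np.pi)[0]
           for n in range(4)]
    print([round(v, 6) for v in row])

# Equidistribution test: if Frobenius angles follow the Sato-Tate law, the
# sample mean of chi_n is ~0 for every n >= 1 (error ~ 1/sqrt(N)).
rng = np.random.default_rng(0)

def sample_st(size):
    """Rejection-sample angles from the Sato-Tate density."""
    out = np.empty(0)
    while out.size < size:
        t = rng.uniform(0.0, np.pi, size)
        u = rng.uniform(0.0, 2.0 / np.pi, size)
        out = np.concatenate([out, t[u < st_weight(t)]])
    return out[:size]

angles = sample_st(20_000)
for n in range(1, 4):
    print(n, chi(n, angles).mean())
```

For angles coming from an actual curve, one would replace the sampler with normalized Frobenius angles computed from point counts; the projections onto the characters then test which Sato-Tate group the data matches.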
|
552 |
Dynamic Hedging for Unit-linked Life Insurance Policies: Risk Minimization Strategy and Applications. Chen, Yi-Chiu, Unknown Date.
Traditional life insurance contracts are priced by the equivalence principle: the present value of the premiums the insurer collects equals the present value of the insurer's future liabilities (benefit payments). Unit-linked policies, however, combine the risks of traditional products (interest-rate risk, mortality risk) with financial risk, which makes their valuation considerably harder. Earlier studies derived closed-form prices from the Black-Scholes (1973) formula under the assumptions of a constant valuation interest rate and given mortality rates. The Black-Scholes formula presupposes a complete market, an assumption that unit-linked products do not satisfy, so this thesis relaxes market completeness in order to re-price and hedge these products.

Several methods have been developed in the financial literature for pricing and hedging contingent claims in incomplete markets. This thesis applies the risk-minimization concept derived from mean-variance hedging (Föllmer & Sondermann, 1986) to price and hedge this insurance derivative, and uses a risk measure (Møller, 1996, 1998a, 2000) to assess how much risk the issuing insurer must bear. Actuarial equivalence and no-arbitrage pricing theory underlie the pricing and valuation. A counting process characterizes the transition pattern of the policyholder, the linked assets are modeled by geometric Brownian motions, and equivalent martingale measures are used to derive the pricing formulas. Since the benefit payments depend on the performance of the underlying portfolios and on the health status of the policyholder, a mean-variance minimization criterion is employed to evaluate the financial risk. Finally, pricing and hedging issues are examined through numerical illustrations: the Monte Carlo method approximates market premiums according to the payoff structures of the policies. We show that the risk-minimization criterion can be used to determine hedging strategies and to assess the minimal intrinsic risk borne by the insurer.
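A minimal sketch of the kind of Monte Carlo premium computation described above, for a hypothetical unit-linked endowment with a maturity guarantee; all parameters (fund dynamics, guarantee level, survival probability) are illustrative assumptions, not the thesis's calibration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions, not the thesis's calibration):
S0, r, sigma, T = 100.0, 0.03, 0.2, 10.0   # fund value, short rate, vol, horizon
G = 100.0                                   # maturity guarantee
p_survive = 0.92                            # prob. the insured survives to T
n_paths = 200_000

# Terminal fund values under the risk-neutral geometric Brownian motion:
# S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z).
Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Unit-linked endowment with guarantee: pay max(S_T, G) at T if alive.
# Assuming mortality independent of the fund, the single premium is the
# survival probability times the discounted risk-neutral expectation.
payoff = np.maximum(ST, G)
premium = p_survive * np.exp(-r * T) * payoff.mean()
stderr = p_survive * np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)

print(f"Monte Carlo single premium: {premium:.2f} +/- {stderr:.2f}")
```

Under the risk-minimization framework the mortality component cannot be hedged away by trading the fund, so a premium like this always carries a residual intrinsic risk, which is exactly what the criterion quantifies.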
|
553 |
Spectral Mammography with X-Ray Optics and a Photon-Counting Detector. Fredenberg, Erik, January 2009.
Early detection is vital to successfully treating breast cancer, and mammography screening is the most efficient and widespread method to reach this goal. Imaging low-contrast targets while minimizing the radiation exposure to a large population is, however, a major challenge, so optimizing the image quality per unit radiation dose is essential. In this thesis, two optimization schemes with respect to x-ray photon energy have been investigated: filtering the incident spectrum with refractive x-ray optics (spectral shaping), and utilizing the transmitted spectrum with energy-resolved photon-counting detectors (spectral imaging). Two types of x-ray lenses were experimentally characterized and modeled using ray tracing, field propagation, and geometrical optics. Spectral shaping reduced dose by approximately 20% compared to an absorption-filtered reference system with the same signal-to-noise ratio, scan time, and spatial resolution. In addition, a focusing pre-object collimator based on the same type of optics reduced divergence of the radiation and improved photon economy by about 50%. A photon-counting silicon detector was investigated in terms of energy resolution and its feasibility for spectral imaging. Contrast-enhanced tumor imaging with a system based on the detector was characterized and optimized with a model that took anatomical noise into account. An improvement in an ideal-observer detectability index by a factor of 2 to 8 over conventional absorption imaging was found for different levels of anatomical noise and breast density, and the increased conspicuity was confirmed by experiment. Further, the model was extended to include imaging of unenhanced lesions. Detectability of microcalcifications increased no more than a few percent, whereas the ability to detect large tumors might improve on the order of 50% despite the low attenuation difference between glandular and cancerous tissue. It is clear that including anatomical noise and the imaging task in spectral optimization may yield completely different results than an analysis based solely on quantum noise.
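The following toy calculation (not from the thesis) illustrates why energy-resolved detection can outperform plain absorption imaging: for independent Poisson energy bins, the ideal-observer detectability index is d'^2 = sum_b dS_b^2 / sigma_b^2, whereas summing the bins first lets opposite-signed bin contrasts partially cancel. The bin counts below are invented for illustration.

```python
import numpy as np

# Two energy bins; expected background counts and contrast signal per bin.
# Numbers are purely illustrative, not measured mammography data.
bg = np.array([1000.0, 400.0])      # mean counts without the target
ds = np.array([-30.0, 5.0])         # mean count change caused by the target

# Conventional absorption imaging sums the bins first; opposite-signed
# bin contrasts then partially cancel.
d2_sum = ds.sum() ** 2 / bg.sum()

# Spectral (ideal-observer) imaging weights each Poisson bin optimally:
# d'^2 = sum_b (dS_b)^2 / sigma_b^2, with sigma_b^2 = bg_b (quantum noise).
d2_spectral = np.sum(ds ** 2 / bg)

print(f"d'^2 summed:   {d2_sum:.3f}")
print(f"d'^2 spectral: {d2_spectral:.3f}")
print(f"gain factor:   {d2_spectral / d2_sum:.1f}")
```

With anatomical noise in the model, the bin variances gain a structured component and the optimal weights shift, which is why the thesis finds task-dependent results that pure quantum-noise analysis misses.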
|
554 |
Applications of Generating Functions. Tseng, Chieh-Mei, 26 June 2007.
Generating functions express a sequence as the coefficients of a power series. They have many applications in combinatorics and probability. In this paper, we investigate the important properties of four kinds of generating functions in one variable: the ordinary generating function, the exponential generating function, the probability generating function, and the moment generating function. Many examples with applications in combinatorics and probability are discussed. Finally, some well-known contest problems related to generating functions are addressed.
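A short sketch of the objects in play, using SymPy to extract coefficients; the examples (the Fibonacci ordinary generating function, a simple exponential-generating-function identity, and the Poisson probability generating function) are standard illustrations chosen here, not necessarily those in the paper:

```python
import sympy as sp

x = sp.symbols('x')

# Ordinary generating function of the Fibonacci numbers:
# F(x) = x / (1 - x - x^2) = sum_{n>=0} F_n x^n.
F = x / (1 - x - x**2)
print(sp.series(F, x, 0, 10))  # x + x**2 + 2*x**3 + 3*x**4 + 5*x**5 + ...

# Exponential generating function: with a_n = n!, sum a_n x^n / n!
# collapses to the geometric series 1/(1-x).
n = sp.symbols('n', nonnegative=True, integer=True)
egf = sp.Sum(sp.factorial(n) * x**n / sp.factorial(n), (n, 0, sp.oo)).doit()
print(egf)  # 1/(1 - x) for |x| < 1

# Probability generating function of a Poisson(lambda) variable:
# G(s) = E[s^X] = exp(lambda (s - 1)); G'(1) recovers the mean.
s, lam = sp.symbols('s lambda', positive=True)
G = sp.exp(lam * (s - 1))
print(sp.diff(G, s).subs(s, 1))  # lambda
```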
|
555 |
Multivariate Mixed Poisson Processes. Zocher, Mathias, 19 November 2005.
Multivariate mixed Poisson processes are special multivariate counting processes whose coordinates are, in general, dependent. The first part of this thesis is devoted to properties which multivariate counting processes may possess. Such properties are, for example, the Markov property, the multinomial property and regularity. With regard to regularity we study the properties of transition probabilities and intensities. The second part of this thesis restricts the class of all multivariate counting processes by additional assumptions leading to different types of multivariate mixed Poisson processes which, however, are connected with each other. Using a multivariate version of the Bernstein-Widder theorem, it is shown that multivariate mixed Poisson processes are characterized by the multinomial property. Furthermore, regularity of multivariate mixed Poisson processes and properties of their moments are studied in detail. Throughout this thesis, two types of stability of properties of multivariate counting processes are studied: It is shown that most properties of a multivariate counting process are stable under certain linear transformations including the selection of single coordinates and summation of all coordinates. It is also shown that the different types of multivariate mixed Poisson processes under consideration are in a certain sense stable in time.
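As an illustration of these properties (an assumption-laden sketch, not from the thesis), one can simulate a bivariate mixed Poisson process with a Gamma mixing variable and check two of them numerically: overdispersion of the coordinates, and the multinomial property, under which the split of a given total count is binomial regardless of the mixing law.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bivariate mixed Poisson: conditionally on a Gamma mixing variable L,
# the two coordinates are independent Poisson(L * a_i * t).
# All parameter values are illustrative.
shape, rate = 2.0, 2.0          # Gamma mixing distribution, E[L] = 1
a = np.array([0.7, 0.3])        # coordinate intensities
t, n = 10.0, 100_000

L = rng.gamma(shape, 1.0 / rate, n)
counts = rng.poisson(np.outer(L * t, a))   # shape (n, 2)

# Mixing makes each coordinate overdispersed: Var > Mean.
print("mean:", counts.mean(axis=0), "var:", counts.var(axis=0))

# Multinomial property: conditionally on the total N1 + N2 = k, the split
# is Binomial(k, a1 / (a1 + a2)), independently of the mixing law.
total = counts.sum(axis=1)
k = 10
sel = counts[total == k, 0]
print("empirical E[N1 | total=10]:", sel.mean(), "expected:", k * a[0])
```

The second check also hints at the stability results: summing the coordinates gives a univariate mixed Poisson process, and selecting one coordinate does too.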
|
556 |
Digital image analysis for measuring fractal properties of soil structure. Dathe, Annette, 27 June 2001.
No description available.
|
557 |
Low-flux optical spectro-imaging and comparison of the Hα and HI kinematics of a sample of nearby galaxies. Daigle, Olivier, 02 1900.
A new EMCCD (electron-multiplying charge-coupled device) controller is presented. It allows the EMCCD to be used for photon counting by drastically reducing its dominant source of noise in that regime: clock-induced charges. A new scientific EMCCD camera was built with this controller, characterized in the laboratory, and tested at the Observatoire du mont Mégantic. Compared with the previous generation of photon-counting cameras based on intensifier tubes, this camera makes observations of galaxy kinematics by Fabry-Perot integral field spectroscopy in Hα light much faster and allows fainter galaxies to be observed: the integration time required to reach a given signal-to-noise ratio is about 4 times shorter than with the older cameras. Many applications could benefit from such a camera: fast, faint-flux photometry; spectroscopy at high spectral and temporal resolution; diffraction-limited imaging from ground-based telescopes (lucky imaging); and more. Technically, the camera is shot-noise dominated for fluxes above 0.002 photon/pixel/image.
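A toy noise-budget calculation (illustrative numbers only, not the camera's measured values) showing how the clock-induced-charge rate sets the flux below which photon counting stops being shot-noise limited:

```python
import numpy as np

# Toy photon-counting budget for an EMCCD (illustrative numbers only).
cic = 0.001                      # clock-induced charges per pixel per frame
fluxes = np.array([1e-4, 5e-4, 2e-3, 1e-2])  # photons per pixel per frame
n_frames = 100_000

# In photon-counting mode each frame yields a binary detection, and
# spurious CIC events are indistinguishable from photons, so the false-
# event rate sets a noise floor.  For these low rates the counting is
# approximately Poisson: signal ~ flux * n_frames, noise ~ sqrt of the
# total (photon + CIC) event count.
signal = fluxes * n_frames
noise = np.sqrt((fluxes + cic) * n_frames)
snr = signal / noise

for f, s in zip(fluxes, snr):
    regime = "shot-noise-dominated" if f > cic else "CIC-dominated"
    print(f"flux {f:.0e}: SNR = {s:6.1f}  ({regime})")
```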
The 21 cm emission line of neutral hydrogen (HI) is often used to map galaxy kinematics. Neutral hydrogen has the advantage of being detectable in significant quantities well beyond the optical disk of a galaxy. However, the spatial resolution of such observations is lower than that of their optical equivalents, and when HI data were compared with higher-resolution data, some differences were simply attributed to the beam smearing caused by this lower resolution. The THINGS project (The HI Nearby Galaxy Survey) observed many galaxies of the SINGS (Spitzer Infrared Nearby Galaxies Survey) sample. The kinematic data of THINGS are compared here with kinematic data obtained in Hα light, to determine whether the difference in spatial resolution alone can explain the observed differences. The results show that intrinsic dissimilarities between the kinematical tracers used (neutral versus ionized hydrogen) are responsible for some important disagreements. Understanding these differences is of high importance: the dark matter distribution, inferred from the rotation of galaxies, provides a test of some cosmological models.
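To see how beam smearing alone flattens an inner velocity gradient, here is a one-dimensional toy model (the rotation curve, light profile, and beam sizes are all illustrative assumptions, not THINGS or Hα data):

```python
import numpy as np

# 1-D toy model of beam smearing along a galaxy's major axis.
r = np.linspace(-30.0, 30.0, 601)            # radius in arcsec
v = 200.0 * np.tanh(r / 5.0)                 # model rotation curve (km/s)
flux = np.exp(-np.abs(r) / 10.0)             # exponential light profile

def smear(profile, r, fwhm):
    """Convolve a profile with a Gaussian beam of the given FWHM."""
    sigma = fwhm / 2.3548
    kernel = np.exp(-0.5 * ((r - r.mean()) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

# The observed velocity is the flux-weighted, beam-convolved velocity.
for fwhm in (1.0, 6.0, 12.0):                # e.g. optical vs radio beams
    v_obs = smear(flux * v, r, fwhm) / smear(flux, r, fwhm)
    slope = (v_obs[302] - v_obs[300]) / (r[302] - r[300])
    print(f'beam {fwhm:4.1f}": inner slope = {slope:6.1f} km/s/arcsec')
```

A model like this predicts how much flattening resolution alone can produce; residual disagreements beyond that, as the thesis argues, must come from the tracers themselves.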
|
558 |
Statistical properties of parasite density estimators in malaria and field applications. Hammami, Imen, 24 June 2013.
Malaria is a devastating global health problem that affected 219 million people and caused 660,000 deaths in 2010. Inaccurate estimation of the level of infection may have adverse clinical and therapeutic implications for patients, and for epidemiological endpoint measurements. The level of infection, expressed as the parasite density (PD), is classically defined as the number of asexual parasites per microliter of blood. Microscopy of Giemsa-stained thick blood smears (TBSs) is the gold standard for parasite enumeration: parasites are counted in a predetermined number of high-power fields (HPFs) or against a fixed number of leukocytes. PD estimation methods usually involve threshold values, either the number of leukocytes counted or the number of HPFs read. Most of these methods assume that (1) the distribution of the thickness of the TBS, and hence the distribution of parasites and leukocytes within the TBS, is homogeneous; and that (2) parasites and leukocytes are evenly distributed in TBSs and thus can be modeled through a Poisson distribution. Violating these assumptions commonly results in overdispersion. First, we studied the statistical properties (mean error, coefficient of variation, false-negative rates) of PD estimators for commonly used threshold-based counting techniques and assessed the influence of the thresholds on the cost-effectiveness of these methods. Second, we constituted and published the first dataset of parasite and leukocyte counts per HPF. Two sources of overdispersion in the data were investigated: latent heterogeneity and spatial dependence. We accounted for unobserved heterogeneity by considering more flexible models that allow for overdispersion, in particular the negative binomial (NB) model and mixture models, and modeled the dependence structure with hidden Markov models (HMMs). We found evidence that assumptions (1) and (2) are inconsistent with the observed parasite and leukocyte distributions, and that the NB-HMM is the closest model to the unknown distribution that generates the data. Finally, we devised a reduced reading procedure for the PD that aims at better operational optimization and a practical assessment of the heterogeneity in the distribution of parasites and leukocytes in TBSs. A patent application has been filed, and development of a prototype counter is in progress.
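A small sketch of the overdispersion issue discussed above: simulated per-HPF counts drawn from a Gamma-mixed Poisson (i.e., negative binomial) model fail a standard Poisson dispersion test. Parameter values are illustrative, not estimates from the published dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated parasite counts per high-power field (HPF).  A Gamma-mixed
# Poisson (negative binomial) mimics latent heterogeneity in thick-blood-
# smear thickness; parameters are illustrative.
n_hpf, mean, k = 200, 8.0, 1.5            # k = NB dispersion parameter
lam = rng.gamma(k, mean / k, n_hpf)       # field-to-field intensity
counts = rng.poisson(lam)

# Overdispersion check: a Poisson sample has Var/Mean ~ 1; here the
# theoretical ratio is 1 + mean/k, far above 1.
ratio = counts.var(ddof=1) / counts.mean()
print(f"variance-to-mean ratio: {ratio:.2f}")

# Fisher dispersion test: (n-1) * Var/Mean ~ chi^2_{n-1} under the
# Poisson null hypothesis.
statistic = (n_hpf - 1) * ratio
p_value = stats.chi2.sf(statistic, df=n_hpf - 1)
print(f"Poisson dispersion test p-value: {p_value:.2e}")
```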
|
559 |
Compiler optimizations on the IA-64 architecture. Valiukas, Tadas, 01 July 2014.
As performance gains in the traditional x86 architecture approached their limits, Intel began developing the new IA-64 architecture, based on EPIC (Explicitly Parallel Instruction Computing). Its defining feature allows up to six instructions to be executed in a single CPU cycle, and the architecture includes further features that address code-optimization problems that traditional architectures could only solve inefficiently. Compiler optimization algorithms, however, have long been tuned for traditional architectures only, so exploiting the new architecture requires ways of improving existing compilers. One such way, and the aim of this work, is tuning the values of the compiler's internal optimization parameters for IA-64. To reach this aim, the features of IA-64 are analyzed, applied experimentally to real-world code samples, and evaluated for their impact on execution speed. Based on these results, the compiler's internal parameters are examined, and a dedicated compiler-benchmarking program is used to find the best set of parameter values for this architecture. The resulting set is then tested against application programs; its values should allow the compiler to generate more efficient code for IA-64.
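A sketch of the kind of parameter-search harness the work describes; the benchmark file bench.c and the flag shortlist are placeholders (the flags shown are standard GCC options, not the thesis's IA-64 parameter set):

```python
import itertools
import subprocess
import time

# Compile a benchmark with different flag subsets and keep the fastest
# binary.  "bench.c" is a placeholder benchmark source.
TOGGLES = ["-funroll-loops", "-fomit-frame-pointer", "-fno-inline"]

def measure(flags):
    """Compile with the given extra flags, then time one benchmark run."""
    subprocess.run(["gcc", "-O2", *flags, "bench.c", "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

results = []
for r in range(len(TOGGLES) + 1):
    for combo in itertools.combinations(TOGGLES, r):
        results.append((measure(list(combo)), combo))

best_time, best_combo = min(results)
print(f"fastest: -O2 {' '.join(best_combo)}  ({best_time:.3f}s)")
```

Exhaustive search is only feasible for a short flag list; for the hundreds of internal `--param` values a real compiler exposes, one would switch to a heuristic search over the same measurement loop.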
|
560 |
New concepts for managing diabetes mellitus. Keet, Fred, January 2003.
Preface -
Biotechnology is generally considered to be the wave of the future. To facilitate accurate and rapid development of medication and treatments, it is critical that we are able to simulate the human body. One section of this complex model would be the human energy system.
Pharmaceutical companies are currently pouring vast amounts of capital into research on the general simulation of cellular structures, protein structures, and bodily processes. Their aim is to develop treatments and medication for major diseases, some of which are epidemics: cancer, cardiovascular disease, stress, obesity, and others. One of the most important causes of these diseases is poor blood glucose control.
Current management methods for insulin-dependent diabetes are limited to trial-and-error systems, which are clearly ineffective and prone to error. It is critical that better management systems be developed to ease the diabetes epidemic.
The blood glucose control system is one of the major systems in the body, as we are in constant need of energy for the optimal functioning of the human body. This study makes use of a simulation model developed for the human energy system to ease the management of diabetes mellitus, which is a malfunction of the human energy system.
This dissertation is presented in two parts: the first discusses the human energy simulation model and its verification, while the second presents possible applications of this model to ease the management of diabetes.
The human energy system simulation model -
This section discusses the development and verification of the model. It also touches on the causes of diabetes and the current methods of managing it, as well as the functioning of the human energy system. The human energy model is approached with the conservation of energy in mind: a top-down model is developed, using data from independent studies to verify it.
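The dissertation's own model is not reproduced here; as an illustration of what a glucose-focused energy simulation looks like, the sketch below integrates the classic Bergman minimal model with textbook-style parameter values, all of which are assumptions:

```python
import numpy as np

# Bergman minimal model (illustrative stand-in, not the dissertation's model):
#   dG/dt = -p1 (G - Gb) - X G + D(t)      glucose (mg/dl)
#   dX/dt = -p2 X + p3 (I - Ib)            remote insulin action (1/min)
p1, p2, p3 = 0.03, 0.02, 1.3e-5   # 1/min, 1/min, 1/min per (uU/ml)
Gb, Ib = 90.0, 10.0               # basal glucose and insulin levels

def simulate(minutes=300, dt=0.5, meal_at=60, meal_size=3.0):
    """Forward-Euler integration of glucose after a meal disturbance."""
    G, X = Gb, 0.0
    trace = []
    for i in range(int(minutes / dt)):
        t = i * dt
        D = meal_size if meal_at <= t < meal_at + 30 else 0.0  # glucose input
        I = Ib + (30.0 if G > 140 else 0.0)  # crude insulin response / dosing
        dG = -p1 * (G - Gb) - X * G + D
        dX = -p2 * X + p3 * (I - Ib)
        G, X = G + dt * dG, X + dt * dX
        trace.append((t, G))
    return trace

peak = max(g for _, g in simulate())
print(f"peak glucose after the meal: {peak:.0f} mg/dl")
```

A management tool built on such a model would replace the crude threshold rule with the patient's actual insulin dosing, which is precisely the use case the applications below target.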
Application of human energy simulation model -
The human energy simulation model is of little use if its intended audience cannot use it: people suffering from malfunctioning energy systems, including those struggling with obesity, diabetes, or cardiovascular disease. To facilitate this, we need to provide a variety of products usable by this group of people. We propose a variety of ways in which the model can be used: cellular phone applications, personal digital assistant (PDA) applications, and computer software. By making use of current technology, we generate a basic proof-of-concept application to demonstrate the intended functionality. / MIng (Mechanical Engineering), North-West University, Potchefstroom Campus, 2004
|