541 |
Techniques for Uncertainty Quantification, Risk Minimization, with Applications to Risk-Averse Decision Making
Ashish Chandra (12975932), 27 July 2022 (has links)
<p>Optimization under uncertainty is the field of optimization in which the data or the optimization model itself carries uncertainty. Such problems are commonly referred to as stochastic optimization problems, and they capture the broad idea of making optimal decisions under uncertainty. An important class of these problems is chance-constrained optimization, where the decision maker seeks the best decision such that the probability of violating a set of uncertain constraints stays within a predefined risk threshold. Such stochastic optimization problems have attracted considerable interest in the service industry, as service providers must satisfy a minimum service level agreement (SLA) with their customers. Satisfying an SLA in the presence of uncertainty, in the form of probabilistic infrastructure failures, poses many interesting and challenging questions. In this thesis, we answer a few of these questions.</p>
<p>We first explore the problem of quantifying uncertainties that adversely impact the service provider's infrastructure, thereby hurting the service level agreements. In particular, we address the probability quantification problem: given an uncertainty set, the goal is to quantify the probability of the event that the optimal value of an optimization problem exceeds a predefined threshold. The techniques we propose draw on and develop ideas from diverse literatures, including mixed-integer nonlinear programming, chance-constrained programming, approximate sampling and counting techniques, and large-deviation bounds. Our approach yields the first polynomial-time approximation scheme for the specific probability quantification problem we consider.</p>
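As a hedged sketch of the problem just described (the notation ξ, v, θ and the linear objective are illustrative choices of ours, not necessarily the thesis's formulation), the quantity being estimated can be written as

```latex
% Sketch only: a generic formalization of the probability-quantification task
% described above; symbols are illustrative, not taken from the thesis.
\[
  p(\theta) \;=\; \Pr_{\xi}\bigl[\, v(\xi) \ge \theta \,\bigr],
  \qquad
  v(\xi) \;=\; \min_{x \in X(\xi)} \; c(\xi)^{\top} x ,
\]
```

that is, the probability, over the uncertainty ξ, that the optimal value of the embedded optimization problem exceeds the threshold θ; an approximation scheme for this quantity estimates p(θ) to within a prescribed error in polynomial time.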
<p>Our next line of work is inspired by risk-averse decision making. Here, we focus on the problem of minimizing risk functions. As a special case, we also explore minimizing the Value at Risk (VaR), a well-known non-convex problem. For more than a decade, the best known convex approximation to this problem has been obtained by minimizing the Conditional Value at Risk (CVaR). We propose a new two-stage model that formulates these risk functions and leads to a bilinear optimization problem, a special case of which is the VaR minimization problem. We develop enhancements to this bilinear formulation and use convexification techniques to obtain tighter convex lower and upper approximations to the problem. We also show that the approximation obtained by CVaR minimization is a special case of our method. The overestimates we construct allow us to develop tighter convex inner approximations for chance-constrained optimization problems.</p>
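For context, the risk measures mentioned above are commonly written in the standard Rockafellar-Uryasev form (a reference sketch, not the thesis's bilinear reformulation):

```latex
\[
  \mathrm{VaR}_{\alpha}(Z) \;=\; \min\bigl\{\, t \in \mathbb{R} \;:\; \Pr[\,Z \le t\,] \ge \alpha \,\bigr\},
  \qquad
  \mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{t \in \mathbb{R}} \Bigl\{\, t + \tfrac{1}{1-\alpha}\,\mathbb{E}\bigl[(Z - t)_{+}\bigr] \Bigr\}.
\]
```

Since CVaR is an upper bound on VaR at the same level, replacing the chance constraint Pr[Z(x) > 0] <= 1 - alpha by the convex constraint CVaR_alpha(Z(x)) <= 0 gives the standard conservative (inner) convex approximation that tighter overestimates of the kind described above are compared against.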
|
542 |
Temporal Event Modeling of Social Harm with High-Dimensional and Latent Covariates
Xueying Liu (13118850), 09 September 2022 (has links)
<p>Counting processes are fundamental to many real-world problems involving event data. The Poisson process, often used as the background intensity of the Hawkes process, is the most commonly used point process. The Hawkes process, a self-exciting point process, can be fitted to temporal event data, spatio-temporal event data, and event data with covariates. We study a Hawkes process fitted to heterogeneous drug overdose data via a novel semi-parametric approach. Counting processes are also related to survival data, since both concern the occurrence of events over time. We fit a Cox model to temporal event data with a large corpus processed into high-dimensional covariates, and we study the significant features that influence the intensity of events.</p>
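For reference, the two intensity models mentioned above are typically written as follows (our notation; a sketch rather than the thesis's exact specification):

```latex
\[
  \lambda(t) \;=\; \mu(t) \;+\; \sum_{t_i < t} \phi(t - t_i)
  \qquad\text{(Hawkes: background rate $\mu$ plus self-excitation kernel $\phi$),}
\]
\[
  \lambda(t \mid x) \;=\; \lambda_0(t)\, \exp\!\bigl(\beta^{\top} x\bigr)
  \qquad\text{(Cox proportional hazards: baseline hazard $\lambda_0$, covariate vector $x$).}
\]
```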
|
543 |
Development of Time-Resolved Diffuse Optical Systems Using SPAD Detectors and an Efficient Image Reconstruction Algorithm
Alayed, Mrwan, January 2019 (has links)
Time-resolved diffuse optics is a powerful and safe technique for quantifying the optical properties (OP) of highly scattering media such as biological tissues. The OP values are correlated with the composition of the measured objects, especially tissue chromophores such as hemoglobin. The OP, chiefly the absorption and reduced scattering coefficients, can be quantified for highly scattering media using Time-Resolved Diffuse Optical Spectroscopy (TR-DOS) systems, and their spatial distribution in the measured media can be reconstructed using Time-Resolved Diffuse Optical Imaging (TR-DOI) systems. Therefore, TR-DOS and TR-DOI can be used for functional monitoring of the brain and muscles, and to diagnose some diseases, such as the detection and localization of breast cancer and blood clots. In general, TR-DOI systems are non-invasive, reliable, and have high temporal resolution.
TR-DOI systems have been known for their complexity, bulkiness, and costly equipment, such as light sources (picosecond pulsed lasers) and detectors (single-photon counters). They also acquire a large amount of data and suffer from the computational cost of the image reconstruction process. These limitations hinder the use of TR-DOI in widespread potential applications such as clinical measurement.
The goals of this research project are to investigate approaches that eliminate two main limitations of TR-DOI systems. The first is to build TR-DOS systems using custom-designed free-running (FR) and time-gated (TG) SPAD detectors fabricated in low-cost standard CMOS technology, instead of costly photon-counting and timing detectors. The FR-TR-DOS prototype demonstrated performance comparable (for measurements of homogeneous objects) to reported TR-DOS prototypes that use commercial, expensive detectors. The TG-TR-DOS prototype acquired raw data with a low noise level and high dynamic range, which enables it to measure multilayered objects such as human heads. The second is to build and evaluate a TR-DOI prototype that uses a computationally efficient algorithm to reconstruct high-quality 3D tomographic images by analyzing a small part of the acquired data.
This work indicates the possibility of exploiting recent advances in silicon detector technology and in computation to build low-cost, compact, portable TR-DOI systems. Such systems can expand the applications of TR-DOI and TR-DOS into several fields, such as oncology and neurology. / Thesis / Doctor of Philosophy (PhD)
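For reference, the two optical properties referred to above are commonly defined as follows (standard definitions, not specific to this thesis):

```latex
\[
  \mu_a \;[\mathrm{cm}^{-1}] \quad\text{(absorption coefficient)},
  \qquad
  \mu_s' \;=\; \mu_s\,(1 - g) \;[\mathrm{cm}^{-1}] \quad\text{(reduced scattering coefficient)},
\]
```

where μ_s is the scattering coefficient and g the scattering anisotropy factor; time-resolved measurements typically recover μ_a and μ_s' by fitting the measured photon time-of-flight distribution to a diffusion model of light transport.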
|
544 |
Lattice Point Counting through Fractal Geometry and Stationary Phase for Surfaces with Vanishing Curvature
Campolongo, Elizabeth Grace, 02 September 2022 (has links)
No description available.
|
545 |
Early Detection of Udder Health Disorders by Describing Cell Size Ratios Using a Modified Coulter Counter Method
Elsholz, Sabrina, 04 January 2017 (has links)
The aim of this study was to examine the suitability of the Coulter Counter for udder health monitoring. As a precondition for using the Coulter Counter in near-process milk analysis, a sample preparation method was developed. It allows samples to be prepared within a few minutes so that they can be measured in the Coulter Counter, using high-speed inverted (overhead) centrifugation (1600 × g; 4 °C; 15 minutes). This centrifugation ensures that the cells of both the cell pellet and the supernatant can be measured, without fat residues in the sample. The particle size distribution was found to change markedly in the case of mastitis. Based on the particle size distribution data, algorithms to differentiate the udder health status were developed on a training data set. In the test data set, the health status was classified with a specificity of 100 % for the status "healthy", 96.9 % for the status "subclinically diseased", and 91.2 % for the status "diseased".
|
546 |
Sparse instances of hard problems
Dell, Holger, 01 September 2011 (has links)
In this thesis, we use and refine methods of computational complexity theory to analyze the complexity of sparse instances, such as graphs with few edges or formulas with few constraints of bounded width. Two natural questions arise in this context: (a) Is there an efficient algorithm that reduces arbitrary instances of an NP-hard problem to equivalent, sparse instances? (b) Is there an algorithm that solves sparse instances of an NP-hard problem significantly faster than general instances can be solved? We formalize these questions for different problems and show that positive answers to these formalizations would lead to consequences in complexity theory that are considered unlikely. Question (a) is modeled as a communication process in which two players want to cooperatively decide an NP-hard language while communicating as little as possible. Under the complexity-theoretic hypothesis that coNP is not contained in NP/poly, our results imply surprisingly tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. We study question (b) for counting problems in the exponential-time setting. Assuming (variants of) the exponential time hypothesis (ETH), we obtain asymptotically tight, exponential lower bounds for well-studied #P-hard problems: computing the number of satisfying assignments of a 2-CNF formula, computing the number of independent sets in a graph, computing the permanent of a matrix with entries 0 and 1, and evaluating the Tutte polynomial at fixed evaluation points.
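For orientation, the hypothesis referred to above is usually stated roughly as follows (standard paraphrase, not a quotation from the thesis):

```latex
% Standard (paraphrased) statements of the hypotheses, not taken from the thesis:
\[
  \textbf{ETH:}\;\; \text{3-SAT on $n$ variables cannot be solved in time } 2^{o(n)};
  \qquad
  \textbf{\#ETH:}\;\; \text{\#3-SAT cannot be solved in time } 2^{o(n)}.
\]
```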
|
547 |
Complexity of Normal Forms on Structures of Bounded Degree
Heimberg, Lucas, 04 June 2018 (has links)
Normal forms express semantic properties of a logic by means of syntactic restrictions. They allow algorithms to benefit from limitations of the expressive power of the logic. An example is the locality of first-order logic (FO), which implies that properties like reachability or connectivity cannot be defined in FO. Gaifman's local normal form expresses the satisfaction condition of an FO formula as a Boolean combination of local statements. Gaifman normal form serves as a first step in fixed-parameter model-checking algorithms, parameterised by the size of the formula, on sparse graph classes. However, it is known that, in general, there are non-elementary lower bounds on the cost of transforming a formula into Gaifman normal form. This leads to an enormous parameter dependence of the aforementioned algorithms. Similar non-elementary lower bounds also hold for Feferman-Vaught decompositions and for the preservation theorems of Lyndon, Łoś, and Tarski.
This thesis investigates the complexity of these normal forms when attention is restricted to classes of structures of bounded degree, for which the non-elementary lower bounds are known to fail. Under this restriction, the thesis provides algorithms with elementary and even worst-case optimal running time for the construction of Gaifman normal forms and Feferman-Vaught decompositions. For the preservation theorems, algorithmic versions with elementary running time and non-matching lower bounds are provided.
Crucial for these results is the notion of Hanf normal form. It is shown that an extension of FO by unary counting quantifiers admits Hanf normal forms if, and only if, all counting quantifiers are ultimately periodic, and, in these cases, how Hanf normal forms can be computed in elementary and worst-case optimal time. This leads to model-checking algorithms for such extensions of FO and to generalisations of the constructions for Feferman-Vaught decompositions and the preservation theorems.
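For readers unfamiliar with the terminology, a sentence in Gaifman normal form is a Boolean combination of basic local sentences of the following shape (standard formulation, not taken from the thesis):

```latex
\[
  \exists x_1 \cdots \exists x_k
  \Bigl(
    \bigwedge_{1 \le i < j \le k} \mathrm{dist}(x_i, x_j) > 2r
    \;\wedge\;
    \bigwedge_{i=1}^{k} \psi^{(r)}(x_i)
  \Bigr),
\]
```

where ψ^{(r)}(x) is r-local, i.e. its quantifiers range only over the radius-r neighbourhood of x in the Gaifman graph; the non-elementary blow-up mentioned above arises in the translation from an arbitrary FO sentence into such a combination.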
|
548 |
Low-Flux Optical Spectro-Imaging and Comparison of the Hα and HI Kinematics of a Sample of Nearby Galaxies
Daigle, Olivier, 02 1900 (has links)
A new EMCCD (electron-multiplying charge-coupled device) controller is presented. It allows the EMCCD to be used for photon counting by drastically reducing its dominant source of noise in this regime: clock-induced charges. Using this controller, a new scientific EMCCD camera was built, characterized in the laboratory, and tested at the Observatoire du Mont-Mégantic. Compared with the previous generation of photon-counting cameras based on intensifier tubes, this camera makes observations of galaxy kinematics by integral field spectroscopy with a Fabry-Perot interferometer in Hα light much faster, and allows fainter galaxies to be observed. The integration time required to reach a given signal-to-noise ratio is about 4 times shorter than with the older cameras. Many applications could benefit from such a camera: fast, faint-flux photometry; spectroscopy at high spectral and temporal resolution; diffraction-limited imaging from ground-based telescopes (lucky imaging); and more. Technically, the camera is shot-noise dominated for fluxes above 0.002 photon/pixel/frame.
The 21 cm emission line of neutral hydrogen (HI) is often used to map galaxy kinematics. One reason this line is used so often is that neutral hydrogen is found in detectable quantities well beyond the optical disk of galaxies. However, the spatial resolution of these observations is lower than that of their optical counterparts. When comparing HI data with higher-resolution data, some differences have simply been attributed to the beam smearing caused by the lower resolution of the HI observations. The THINGS project (The HI Nearby Galaxy Survey) observed many galaxies of the SINGS (Spitzer Infrared Nearby Galaxies Survey) sample. The THINGS kinematic data are compared with kinematic data obtained in Hα light, in order to determine whether the difference in spatial resolution alone can explain the observed discrepancies. The results show that differences intrinsic to the tracers used (neutral versus ionized hydrogen) are responsible for important dissimilarities. Understanding these particularities matters because the dark matter distribution, derived from the rotation of galaxies, is a test of certain cosmological models.
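A back-of-the-envelope relation may help situate the two figures quoted above (our simplification, ignoring excess and background noise; the thesis accounts for the detector's full noise model): in the photon-shot-noise-limited regime,

```latex
\[
  \mathrm{SNR} \;\approx\; \sqrt{\eta\, \Phi\, t}
  \quad\Longrightarrow\quad
  t_{\mathrm{req}} \;\approx\; \frac{\mathrm{SNR}^2}{\eta\, \Phi},
\]
```

so, all else being equal, a roughly fourfold reduction in the integration time needed for a given SNR corresponds to a roughly fourfold gain in the effective photon-detection efficiency η at incident photon rate Φ.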
|
549 |
Role of Ectodermal-neural cortex 1 protein in human glioma progression, identification of a peptide internalized by human glioblastoma cells and development of an alternative method to generate growth curves of adherent cultures
Pereira, Túlio Felipe, 14 November 2018 (links)
Gliomas are the most common form of primary intracranial malignancy, among which astrocytomas are the most frequent. Ectodermal-neural cortex 1 (ENC1), also known as Nuclear Restricted Protein/Brain (NRP/B), was first characterized as a protein that interacts with the cytoskeleton by binding actin through Kelch-like domains, and it is related to neural fate specification during development of the nervous system. The first chapter of this thesis confirms the tumor-suppressive properties of ENC1 through a genome-editing approach, analyzes ENC1 expression in a set of patient glioma samples, and correlates these data with overall survival and progression-free survival, concluding that ENC1 expression may constitute a biomarker of glioma aggressiveness. The second chapter presents the identification and in vitro characterization of the LHTNELQ peptide, selected by phage display using human glioblastoma cells. This new peptide is internalized by these cells and constitutes a new tool for the development of glioma therapeutics. The third chapter reports an alternative method to generate growth curves of adherent cell cultures, based on the decay of CFSE fluorescence over time; it shows smaller variation among technical replicates than counting-based methods.
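The CFSE-based growth-curve idea can be illustrated with a short sketch (assumptions: the dye is diluted roughly twofold at each cell division and mean fluorescence is measured at each time point; the function and variable names are ours, not the thesis's):

```python
# Hedged sketch, not the thesis's actual pipeline: estimate a relative growth
# curve from the decay of CFSE mean fluorescence intensity (MFI), assuming the
# dye is diluted ~2x per cell division.
import numpy as np

def growth_curve_from_cfse(mfi, mfi_initial, n0=1.0):
    """Return relative cell numbers for a series of CFSE MFI measurements.

    mfi         : MFI values measured over time (array-like)
    mfi_initial : MFI of the undivided starting population
    n0          : initial (relative) cell number
    """
    mfi = np.asarray(mfi, dtype=float)
    divisions = np.log2(mfi_initial / mfi)   # generations elapsed at each time point
    return n0 * 2.0 ** divisions             # relative population size

# Example with hypothetical measurements:
if __name__ == "__main__":
    mfi_series = [1000.0, 510.0, 260.0, 130.0]
    print(growth_curve_from_cfse(mfi_series, mfi_initial=1000.0))
    # -> roughly [1.0, 1.96, 3.85, 7.69]
```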
|
550 |
Novel fabrication and testing of light confinement devices
Ring, Josh, January 2016 (links)
The goal of this project is to study novel nanoscale excitation volumes, sensitive enough to study individual chromophores, and to go on to study new and exciting self-assembly approaches to this problem. Small excitation volumes may be engineered using light confinement inside apertures in metal films. These apertures enhance fluorescence emission rates and quantum yields, decrease fluorescence quenching, enable higher signal-to-noise ratios, and, by restricting the excitation volume, allow single-chromophore fluorescence to be studied at higher concentrations. Excitation volumes are characterized through the chromophores' fluorescence using fluorescence correlation spectroscopy, which monitors fluctuations in fluorescence intensity. From the correlation in time we can find the residence time, the number of chromophores, the volume in which they are diffusing, and therefore the fluorescence emission efficiency. Fluorescence properties are a probe of the local environment, a particularly powerful tool because high-brightness (high quantum yield) fluorescent dyes and sensitive photo-detection equipment (such as avalanche photodiodes and photomultiplier tubes) are readily available. Novel materials combining the properties of conducting and non-conducting materials at scales much smaller than the incident wavelength are known as metamaterials. These allow combinations of properties not usually possible in natural materials at optical frequencies. The properties reported so far include negative refraction, negative phase velocity, fluorescence emission enhancement, and lensing, and therefore light confinement has also been proposed to be possible. Instead of expensive and slow lithography methods, many of these materials may be fabricated with self-assembly techniques, which are truly nanoscopic and otherwise inaccessible even with the most sophisticated equipment.
It was found that nanoscale volumes from ZMWs (zero-mode waveguides) and HMMs (hyperbolic metamaterials) based on NW (nanowire) arrays were all inefficient at enhancing fluorescence. The primary cause was the reduced fluorescence lifetime lowering the fluorescence efficiency, which runs contrary to some commentators in the literature. NW-based lensing was found to be possible in the blue region of the optical spectrum in an HMM, without the background fluorescence normally associated with a PAA template. This was achieved using a pseudo-ordered array of relatively large nanowires with a period just smaller than λ/2, which minimised losses. Nanowires in the traditional λ/10 regime produced significant scattering and led to diffraction, such that they were wholly unsuitable for an optical lensing application.
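For reference, the fluorescence-correlation-spectroscopy curves described above are commonly fitted with the standard 3D free-diffusion model (a reference form, up to a geometry-dependent prefactor, and not necessarily the exact model used in this work):

```latex
\[
  G(\tau) \;=\; \frac{1}{N}
  \Bigl(1 + \frac{\tau}{\tau_D}\Bigr)^{-1}
  \Bigl(1 + \frac{\tau}{\kappa^{2}\,\tau_D}\Bigr)^{-1/2},
\]
```

where N is the mean number of chromophores in the excitation volume, τ_D their residence (diffusion) time, and κ the axial-to-lateral aspect ratio of the volume; fitting G(τ) therefore yields the occupancy and residence time referred to above.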
|