301

Software Reliability Prediction – An Evaluation of a Novel Technique

Andersson, Björn, Persson, Marie January 2004 (has links)
Along with continuously increasing computerization, our expectations of software and hardware reliability have increased considerably. Software reliability has therefore become one of the most important software quality attributes. Software reliability modeling based on test data is used to estimate whether the current reliability level meets the requirements for the product; it also makes it possible to predict reliability. The costs of software development and testing, together with profit considerations, are among the main motivations for software reliability prediction. Current practice uses several different models for this purpose, and parameters must be set to tune each model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), has been developed to challenge the models used today. An experiment is set up to investigate whether TIE is useful in a software reliability prediction context, based on a comparison between the ordinary reliability prediction models and TIE.
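The TIE model itself is not specified in this abstract, so as a hedged illustration of the conventional workflow it challenges (tuning a parametric reliability model to cumulative failure data), the sketch below fits the classic Goel-Okumoto NHPP model to synthetic test data. The model choice, the data, and the grid search are illustrative assumptions, not the thesis's method.

```python
import numpy as np

# Goel-Okumoto NHPP: expected cumulative failures mu(t) = a * (1 - exp(-b*t)),
# where a is the total expected fault count and b the fault-detection rate.
def mu(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Synthetic cumulative failure counts from a known "true" parameter pair.
t = np.linspace(1.0, 100.0, 50)
y = mu(t, a=120.0, b=0.04)

# Tune the model to the test data: for each candidate b, the optimal a is a
# one-dimensional linear least-squares problem with a closed-form solution,
# so a coarse grid scan over b suffices for this illustration.
best = (np.inf, 0.0, 0.0)
for b in np.linspace(0.001, 0.2, 400):
    g = 1.0 - np.exp(-b * t)
    a = float(g @ y / (g @ g))
    sse = float(np.sum((y - a * g) ** 2))
    if sse < best[0]:
        best = (sse, a, b)

_, a_hat, b_hat = best
```

On noiseless data the scan recovers the generating parameters up to grid resolution; on real test data one would compare the fitted curve's extrapolation against later failure counts, which is the kind of prediction the experiment described above evaluates.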
302

ALFRED TARSKI: LOGICAL CONSEQUENCE, LOGICAL NOTIONS, AND LOGICAL FORMS

STEFANO DOMINGUES STIVAL 17 September 2004 (has links)
The subject of this dissertation is the problem of demarcation between the logical and extra-logical terms of formal languages, first formulated by Alfred Tarski in his 1936 paper On the Concept of Logical Consequence. After presenting and discussing the demarcation problem, and showing how it arises from the need for a materially adequate definition of the concept of logical consequence, we analyze the solution presented by Tarski in his posthumously published paper What Are Logical Notions? Some subsidiary issues, also important for the work as a whole, concern the conception of model and interpretation that emerges from the two papers mentioned, and how this conception differs from the standard conception in model theory. Our main goal is to show the place occupied by the concept of logical form in Tarski's work, and how his conception of it implies a broader view of the related concept of logical consequence, whose correct characterization makes it necessary to stratify logical forms into a hierarchy of types.
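Tarski's 1936 definition, which the demarcation problem presupposes, can be stated model-theoretically (a standard textbook formulation, not a quotation from the dissertation):

```latex
% Tarski's model-theoretic definition of logical consequence:
% \sigma follows logically from a set of sentences \Gamma iff every
% model of \Gamma is also a model of \sigma.
\Gamma \models \sigma
\iff
\forall \mathfrak{M}\,\bigl(\mathfrak{M} \models \Gamma
\;\Rightarrow\; \mathfrak{M} \models \sigma\bigr)
```

Which terms count as logical fixes which reinterpretations the models range over; that is exactly why the demarcation of logical from extra-logical terms matters for the material adequacy of this definition.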
303

Detection of the Vela pulsar and search for Lorentz invariance violation with the fifth H.E.S.S. telescope

Chrétien, Mathieu 02 October 2015 (has links)
The fifth telescope (CT5) of the H.E.S.S. array (High Energy Stereoscopic System) was inaugurated in 2012. H.E.S.S. is designed to observe the southern γ-ray sky, and CT5, whose energy threshold is about 30 GeV, allowed the detection of the Vela pulsar with 24 hours of observation time. Some quantum gravity (QG) scenarios predict a violation of Lorentz invariance (LIV), which would manifest as additional terms ∝ (E/E_QG)^n in the photon dispersion relation, where E is the photon energy, E_QG the characteristic scale at which QG processes are expected to occur, and n the order of the correction. This energy dependence can be tested by time-of-flight measurements between photons emitted from variable (active galactic nuclei), transient (gamma-ray bursts), or periodic (pulsars) astrophysical sources. This thesis presents the analysis of the CT5 data collected on the Vela pulsar. A maximum likelihood method already applied successfully to other source classes has been adapted to the Vela pulsar. No deviation from the standard photon dispersion relation is observed; limits are therefore placed on E_QG. The most restrictive limit is obtained for a superluminal linear correction to the dispersion relation: E_QG > 7.0×10^15 GeV.
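The LIV phenomenology being tested can be made explicit. These are the standard formulas of the time-of-flight literature (the numerical factor and the neglect of cosmological expansion are conventional simplifications, not taken from the thesis):

```latex
% Modified photon dispersion relation with a LIV correction of order n:
E^2 \simeq p^2 c^2
\left[ 1 \pm \left( \frac{E}{E_{\mathrm{QG}}} \right)^{n} \right]
% Induced arrival-time difference between two photons of energies E_1 < E_2
% emitted simultaneously by a source at distance d (expansion neglected):
\Delta t \simeq \pm\, \frac{n+1}{2}\,
\frac{E_2^{\,n} - E_1^{\,n}}{E_{\mathrm{QG}}^{\,n}}\, \frac{d}{c}
```

For a pulsar, the periodicity gives many repeated emission events, so the method searches for an energy-dependent shift of the pulse phase rather than a single delay.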
304

Random Matrix Theory with Applications in Statistics and Finance

Saad, Nadia Abdel Samie Basyouni Kotb January 2013 (has links)
This thesis investigates a technique, which we call the Scaling technique, for estimating the risk of the mean-variance (MV) portfolio optimization problem. It provides a better estimator of the risk of the MV optimal portfolio. We obtain this result for a general estimator of the covariance matrix of the returns, which covers the correlated sampling case as well as the independent sampling case and the exponentially weighted moving average case; these results appeared in the paper [CMcS]. Our result concerning the Scaling technique relies on the moments of the inverse of compound Wishart matrices, an open problem in the theory of random matrices. We in fact tackle a much more general setup, considering any random matrix whose distribution has an appropriate invariance property (orthogonal or unitary) under an appropriate action (by conjugation, or by a left-right action). Our approach is based on Weingarten calculus. As an interesting byproduct of our study, and as a preliminary to computing the moments of the inverse of a compound Wishart random matrix, we obtain explicit moment formulas for the pseudo-inverse of Ginibre random matrices; these results are given in the paper [CMS]. Using the moments of the inverse of compound Wishart matrices, we obtain asymptotically unbiased estimators of the risk and the weights of the MV portfolio. Finally, we present some numerical results, which are part of our future work.
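The need for a risk correction of this kind can be seen in a small simulation. This is an illustration of the underlying phenomenon, not the thesis's Scaling estimator: minimum-variance weights computed from a sample covariance matrix look less risky through that same noisy estimate than they actually are.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 50, 100                      # assets, return observations
true_cov = np.eye(p)                # true covariance (identity for simplicity)
R = rng.standard_normal((n, p))     # i.i.d. returns with covariance true_cov

S = np.cov(R, rowvar=False)         # sample covariance estimate
ones = np.ones(p)
w = np.linalg.solve(S, ones)
w /= ones @ w                       # minimum-variance weights from the estimate

in_sample_risk = w @ S @ w          # risk as seen through the noisy estimate
true_risk = w @ true_cov @ w        # actual risk carried by those weights
```

With p/n = 1/2 the in-sample risk sits well below the true risk of the chosen weights; correcting this optimism for general (including correlated and exponentially weighted) covariance estimators is precisely what the moments of inverse compound Wishart matrices are needed for.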
305

Investigating the hypothesized factor structure of the Noel-Levitz Student Satisfaction Inventory: A study of the student satisfaction construct.

Odom, Leslie R. 12 1900 (has links)
College student satisfaction is a concept that has become more prevalent in higher education research journals. Little attention has been given to the psychometric properties of previous instrumentation, and few studies have investigated the structure of current satisfaction instrumentation. This dissertation: (a) investigated the tenability of the theoretical dimensional structure of the Noel-Levitz Student Satisfaction Inventory™ (SSI), (b) investigated an alternative factor structure using exploratory factor analyses (EFA), and (c) used multiple-group CFA procedures to determine whether an alternative SSI factor structure would be invariant across three demographic variables: gender (men/women), race/ethnicity (Caucasian/Other), and undergraduate classification level (lower level/upper level). This study found little evidence for the multidimensional structure of the SSI. A single factor, termed General Satisfaction with College, was the lone construct that emerged from the iterative CFA and EFA procedures. A revised 20-item model was developed, and a series of multigroup CFAs were used to test measurement invariance across the three grouping variables. No measurement non-invariance was noted for the revised 20-item model: the invariance tests indicated equivalence across the comparison groups for (a) the number of factors, (b) the pattern of indicator-factor loadings, (c) the factor loadings, and (d) the item error variances. Because little attention has been given to the psychometric properties of satisfaction instrumentation, it is recommended that further research continue on the SSI and on any additional instrumentation developed to measure student satisfaction. It is possible that invariance issues may explain a portion of the inconsistent findings noted in the review of literature. Although measurement analyses are a time-consuming process, they are essential for understanding the psychometric properties of the scores obtained from a survey or any other assessment instrument.
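A minimal sketch of the unidimensionality logic behind the EFA step, on synthetic data rather than SSI responses (the single-factor data-generating model and the eigenvalue-greater-than-one heuristic are illustrative assumptions, not the dissertation's analysis): when one common factor drives all items, only the first eigenvalue of the item correlation matrix exceeds 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_items = 2000, 20

# One common factor with equal loadings plus independent item noise.
factor = rng.standard_normal(n_obs)
loadings = np.full(n_items, 0.7)
items = np.outer(factor, loadings) + 0.5 * rng.standard_normal((n_obs, n_items))

# Eigenvalues of the item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eig = np.sort(np.linalg.eigvalsh(corr))[::-1]
```

Here the first eigenvalue dominates and all remaining eigenvalues fall below 1, the pattern a scree or Kaiser-criterion inspection would read as a single General Satisfaction factor.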
306

Multiscale Scanning in Higher Dimensions: Limit theory, statistical consequences and an application in STED microscopy

König, Claudia Juliane 26 June 2018 (has links)
No description available.
307

Robust strategies for glucose control in type 1 diabetes

Revert Tomás, Ana 15 October 2015 (has links)
Type 1 diabetes mellitus is a chronic and incurable disease that affects millions of people around the world. Its main characteristic is the destruction (total or partial) of the beta cells of the pancreas. These cells are in charge of producing insulin, the main hormone involved in the control of blood glucose. Keeping blood glucose high for a long time has negative health effects, causing different kinds of complications; for that reason, patients with type 1 diabetes mellitus need to receive insulin exogenously. Since 1921, when insulin was first isolated for use in humans and the first glucose monitoring techniques were developed, many advances have been made in clinical treatment with insulin. Currently, two main research lines focused on improving the quality of life of diabetic patients are open. The first concentrates on stem-cell research to replace the damaged beta cells; the second has a more technological orientation, focusing on the development of new insulin analogs that emulate the endogenous pancreatic secretion with higher fidelity, new noninvasive continuous glucose monitoring systems, insulin pumps capable of administering different insulin profiles, and the use of decision-support tools and telemedicine. The most important challenge the scientific community has to overcome is the development of an artificial pancreas, that is, of algorithms that allow automatic control of blood glucose. The main difficulty preventing tight glucose control is the high variability found in glucose metabolism, which is especially important during meal compensation. This variability, together with the delay in subcutaneous insulin absorption and action, causes controller overcorrection that leads to late hypoglycemia (the most important acute complication of insulin treatment). The proposals in this work pay special attention to overcoming these difficulties. Interval models are used to represent the patient physiology and to take parametric uncertainty into account. This type of strategy has been used both in the open-loop proposal for insulin dosing and in the closed-loop algorithm. Moreover, the idea behind the design of this last proposal is to avoid controller overcorrection so as to minimize hypoglycemia, while adding robustness against glucose sensor failures and over/underestimation of meal carbohydrates. The algorithms proposed have been validated both in simulation and in clinical trials. / Revert Tomás, A. (2015). Robust strategies for glucose control in type 1 diabetes [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/56001 / TESIS
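The interval-model idea can be sketched on a toy one-compartment glucose model (the model, its parameter range, and the monotonicity shortcut are illustrative assumptions, not the thesis's equations): when a parameter is only known to lie in an interval, simulating the extreme values bounds every trajectory the real patient could follow.

```python
# Toy one-compartment model dG/dt = -p1*G + d, with the clearance parameter p1
# known only to lie in [p1_min, p1_max]. Because the right-hand side is
# monotone in p1 (for G > 0), the envelope of all possible trajectories is
# obtained by simulating the two extreme parameter values.
def simulate(p1, d=1.0, g0=180.0, dt=0.1, steps=600):
    g = g0
    for _ in range(steps):
        g += dt * (-p1 * g + d)   # forward Euler step
    return g

g_upper = simulate(p1=0.01)   # slowest clearance: upper bound on glucose
g_lower = simulate(p1=0.03)   # fastest clearance: lower bound on glucose
```

A dosing decision validated against the whole [g_lower, g_upper] band, rather than a single nominal trajectory, is robust to the parametric uncertainty; this shortcut relies on the toy model being monotone in p1, whereas the interval models in the thesis handle more general parametric uncertainty.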
308

Physical basis of the power-law spatial scaling structure of peak discharges

Ayalew, Tibebu Bekele 01 May 2015 (has links)
Key theoretical and empirical results from the past two decades have established that peak discharges exhibit a power-law, or scaling, relation with drainage area across multiple scales of time and space. This relationship takes the form Q(A) = αA^Θ, where Q is peak discharge, A is the drainage area, Θ is the flood scaling exponent, and α is the intercept. Motivated by seminal empirical studies showing that the flood scaling parameters α and Θ change from one rainfall-runoff event to another, this dissertation explores how certain rainfall and catchment physical properties control the flood scaling exponent and intercept at the rainfall-runoff event scale, using a combination of extensive numerical simulation experiments and analysis of observational data from the Iowa River basin, Iowa. Results show that Θ generally decreases with increasing rainfall intensity, runoff coefficient, and hillslope overland flow velocity, whereas it generally increases with increasing rainfall duration. Moreover, while the flood scaling intercept is primarily controlled by the excess rainfall intensity, it increases with increasing runoff coefficient and hillslope overland flow velocity. Results also show that the temporal intermittency structure of rainfall has a significant effect on the scaling structure of peak discharges. These results highlight the fact that the flood scaling parameters can be estimated from the aforementioned rainfall and catchment physical variables, which can be measured either directly or indirectly using in situ or remote sensing techniques. The dissertation also proposes and demonstrates a new flood forecasting framework based on the scaling theory of floods. The results mark a step forward in providing a physically meaningful framework for the regionalization of flood frequencies, and hence for solving the long-standing hydrologic problem of flood prediction in ungauged basins.
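The scaling relation above can be estimated from (A, Q) pairs by ordinary least squares in log-log space, as in this hedged sketch on synthetic data (the particular values of α and Θ are made up for illustration):

```python
import numpy as np

# Power-law flood scaling: Q(A) = alpha * A**theta, i.e.
# log Q = log alpha + theta * log A, a straight line in log-log space.
A = np.logspace(1, 4, 30)          # drainage areas spanning three decades
Q = 3.0 * A ** 0.55                # synthetic peak discharges (alpha=3, theta=0.55)

theta_hat, log_alpha = np.polyfit(np.log(A), np.log(Q), 1)
alpha_hat = np.exp(log_alpha)
```

The event-to-event variation studied in the dissertation corresponds to refitting this line for each rainfall-runoff event and relating the resulting (α, Θ) pairs to rainfall and catchment properties.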
309

Description of potential singularities in the Navier-Stokes equations for a viscous fluid in dimension three

Poulon, Eugénie 26 June 2015 (has links)
This thesis is concerned with the incompressible Navier-Stokes equations for a viscous fluid. In the first part, we study the case of a homogeneous fluid. Recall that the big question of global regularity in dimension 3 is still open: we do not know whether the solution associated with initial data that is smooth enough, but arbitrarily far from rest, will persist for all time (global regularity) or will instead cease to exist in finite time and blow up (singularity). One way to approach this question is to assume that such a blow-up occurs and to study the different possible scenarios. Chapter 1 recalls the structure of the Navier-Stokes equations and the results known to date. In Chapter 2, we are interested in the local-in-time existence of solutions in Sobolev spaces that are not invariant under the natural scaling of Navier-Stokes. Starting from initial data that generates a singularity, we prove the existence of an optimal constant bounding the lifespan of the solution from below. This constant, provided by the elementary fixed-point method, thus gives the correct order of magnitude for the maximal lifespan of the solution. In Chapter 3, we continue the investigation of the behaviour of such blow-up solutions in the light of the method of critical elements. In the second part of the thesis, we consider a model that is more realistic from the physical point of view: an incompressible fluid with variable density, modeled by the inhomogeneous incompressible Navier-Stokes equations. We study the global well-posedness of these equations for a fluid evolving on a three-dimensional torus, with initial data in critical spaces and without any smallness assumption on the density.
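For reference, the two systems discussed are the standard ones, stated here in their usual form rather than copied from the thesis: the homogeneous incompressible Navier-Stokes equations and their inhomogeneous (variable-density) counterpart.

```latex
% Homogeneous incompressible Navier-Stokes
% (velocity u, pressure p, viscosity \nu):
\partial_t u + (u \cdot \nabla) u - \nu \Delta u + \nabla p = 0,
\qquad \operatorname{div} u = 0
% Inhomogeneous version, with the density \rho transported by the flow:
\partial_t \rho + \operatorname{div}(\rho u) = 0,
\qquad
\partial_t (\rho u) + \operatorname{div}(\rho u \otimes u)
 - \nu \Delta u + \nabla p = 0,
\qquad \operatorname{div} u = 0
```

The scaling invariance mentioned in Chapter 2 is the map u(t,x) → λu(λ²t, λx), under which the homogeneous system is preserved; "critical" spaces are those whose norm is invariant under this rescaling.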
310

Green's function estimates for elliptic and parabolic operators: Applications to quantitative stochastic homogenization and invariance principles for degenerate random environments and interacting particle systems

Giunti, Arianna 19 April 2017 (has links)
This thesis is divided into two parts. In the first (Chapters 1 and 2), we deal with problems arising from the quantitative homogenization of the random elliptic operator in divergence form $-\nabla \cdot a \nabla$. In Chapter 1 we study existence and stochastic bounds for the Green function $G$ associated to $-\nabla \cdot a \nabla$ in the case of systems. Without assuming any regularity on the coefficient field $a = a(x)$, we prove that for every (measurable) uniformly elliptic tensor field $a$ and for almost every point $y \in \mathbb{R}^d$, there exists a unique Green's function centred at $y$ associated to the vectorial operator $-\nabla \cdot a \nabla$ in $\mathbb{R}^d$, $d > 2$. In addition, we prove that if we introduce a shift-invariant ensemble $\langle \cdot \rangle$ over the set of uniformly elliptic tensor fields, then $\nabla G$ and its mixed derivatives $\nabla \nabla G$ satisfy optimal pointwise $L^1$-bounds in probability. Chapter 2 deals with the homogenization of $-\nabla \cdot a \nabla$ to $-\nabla \cdot a_{\mathrm{hom}} \nabla$, in the sense that we study the large-scale behaviour of $a$-harmonic functions in exterior domains by comparing them with functions that are $a_{\mathrm{hom}}$-harmonic. More precisely, we use the first- and second-order correctors to compare an $a$-harmonic function $u$ to the two-scale expansion of a suitable $a_{\mathrm{hom}}$-harmonic function $u_h$. We show that there is a direct correspondence between the rate of the sublinear growth of the correctors and the smallness of the relative homogenization error $u - u_h$. The theory of stochastic homogenization of elliptic operators admits an equivalent probabilistic counterpart, which follows from the link between parabolic equations with elliptic operators in divergence form and random walks; this allows us to reformulate the problem of homogenization in terms of invariance principles for random walks. The second part of the thesis (Chapters 3 and 4) focuses on this interplay between probabilistic and analytic approaches and aims at exploiting it to study invariance principles for degenerate random conductance models and systems of interacting particles. In Chapter 3 we study a random conductance model in which the conductances are independent, stationary, and bounded from above but not uniformly away from $0$. We give a simple necessary and sufficient condition for the relaxation of the environment seen by the particle to be diffusive in the sense of every polynomial moment. As a consequence, we derive polynomial moment estimates on the corrector, which imply that the discrete elliptic operator homogenises or, equivalently, that the random conductance model satisfies a quenched invariance principle. In Chapter 4 we turn to a more complicated model, the symmetric exclusion process, and show a diffusive upper bound on the transition probability of a tagged particle in this process. The proof relies on optimal spectral gap estimates for the dynamics in finite volume, which are of independent interest. We also show off-diagonal estimates of Carne-Varopoulos type.
