301 |
Software Reliability Prediction – An Evaluation of a Novel Technique. Andersson, Björn; Persson, Marie. January 2004 (has links)
Along with continuously increasing computerization, our expectations of software and hardware reliability increase considerably. Software reliability has therefore become one of the most important software quality attributes. Software reliability modeling based on test data is performed to estimate whether the current reliability level meets the requirements for the product; it also makes it possible to predict reliability. The costs of software development and testing, together with profit considerations related to software reliability, are among the main motivations for software reliability prediction. Current practice relies on a number of different prediction models, whose parameters have to be set in order to tune each model to fit the test data. A slightly different prediction model, Time Invariance Estimation (TIE), is developed to challenge the models used today. An experiment is set up to investigate whether TIE is useful in a software reliability prediction context. The experiment is based on a comparison between ordinary reliability prediction models and TIE.
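To make the comparison concrete, here is a minimal sketch (not taken from the thesis) of fitting one widely used "ordinary" reliability growth model, the Goel-Okumoto model with mean-value function m(t) = a(1 - e^{-bt}), to cumulative failure counts; all data and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Goel-Okumoto mean-value function: expected cumulative failures up to test time t.
def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: weeks of testing and cumulative failures observed so far.
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([12, 21, 27, 32, 35, 37, 38, 39], dtype=float)

# Tune the model parameters to fit the test data, as the abstract describes.
(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(40.0, 0.3))

# Predict the residual failure content and the current failure intensity.
residual = a_hat - goel_okumoto(weeks[-1], a_hat, b_hat)
intensity = a_hat * b_hat * np.exp(-b_hat * weeks[-1])
print(f"a = {a_hat:.1f}, b = {b_hat:.2f}, "
      f"expected residual failures = {residual:.1f}, "
      f"failure intensity = {intensity:.2f} per week")
```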
|
302 |
[en] ALFRED TARSKI: LOGICAL CONSEQUENCE, LOGICAL NOTIONS, AND LOGICAL FORMS / [pt] ALFRED TARSKI: CONSEQÜÊNCIA LÓGICA, NOÇÕES LÓGICAS E FORMAS LÓGICAS. STEFANO DOMINGUES STIVAL. 17 September 2004 (has links)
[pt] O tema da presente dissertação é o problema da demarcação entre os termos lógicos e extralógicos no âmbito das ciências formais, anunciado primeiramente por Alfred Tarski em seu artigo de 1936, On the Concept of Logical Consequence. Depois de expor e discutir o problema em questão, mostrando seu surgimento a partir da necessidade de uma definição materialmente adequada do conceito de conseqüência lógica, analisamos a solução formulada por Tarski em um artigo publicado postumamente, intitulado What Are Logical Notions? Algumas discussões subsidiárias, igualmente importantes para o trabalho como um todo, dizem respeito à concepção dos conceitos de modelo e interpretação que se podem depreender dos artigos supracitados, e de como ela difere da assim chamada concepção standard em teoria de modelos. Nosso objetivo principal é mostrar o lugar ocupado pelo conceito de forma lógica na obra de Tarski, e de como sua concepção acerca deste conceito implica uma visão ampliada do conceito de conseqüência lógica, cuja caracterização correta torna necessária a estratificação das formas lógicas numa hierarquia de tipos. / [en] The subject of this work is the problem of demarcation between logical and extra-logical terms of formal languages, as formulated for the first time by Tarski in his 1936 paper On the Concept of Logical Consequence. After presenting and discussing the demarcation problem, pointing out how it arises from the need for a materially adequate definition of the concept of logical consequence, we analyze the solution presented by Tarski in his posthumously published paper, entitled What Are Logical Notions? Some subsidiary issues, which are also important for the work as a whole, concern the conception of model and interpretation that springs from the two papers mentioned, and how this conception differs from the standard conception in model theory. Our main goal is to show the place occupied by the concept of logical form in Tarski's work, and how his conception of this concept implies a broader view of the related concept of logical consequence, whose correct characterization makes it necessary to stratify logical forms into a hierarchy of types.
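For readers wanting the technical core of the solution discussed above, the criterion Tarski proposes in What Are Logical Notions? can be stated schematically as follows (a standard paraphrase, not a quotation from the dissertation):

```latex
% A notion N, located anywhere in the hierarchy of types built over a domain D
% (individuals, classes, relations, classes of relations, ...), is logical iff
% it is invariant under every permutation of the domain onto itself:
\[
  N \text{ is logical}
  \iff
  \forall \pi \, \bigl( \pi : D \xrightarrow{\;\sim\;} D
  \;\Longrightarrow\; \pi^{*}(N) = N \bigr),
\]
% where \pi^{*} denotes the canonical lifting of the permutation \pi
% to the type level at which N lives.
```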
|
303 |
Détection du pulsar de Vela et recherche de violation d'invariance de Lorentz avec le cinquième télescope de H.E.S.S. / Detection of the Vela pulsar and search for Lorentz invariance violation with the fifth H.E.S.S. telescope. Chrétien, Mathieu. 02 October 2015 (has links)
Le cinquième télescope (CT5) du réseau H.E.S.S. (High Energy Stereoscopic System) a été inauguré en 2012. H.E.S.S. est destiné à l’observation du ciel austral dans le domaine des rayons γ et CT5, dont le seuil est d’environ 30 GeV, a permis la détection du pulsar de Vela après 24 heures d’observations. Certains scénarios de gravitation quantique (QG) prédisent une violation d’invariance de Lorentz (LIV). Celle-ci se manifeste par l’ajout de termes ∝(E/E_QG)^n aux relations de dispersion du photon, où E est l’énergie du quantum de lumière, E_QG l’énergie caractéristique des processus de QG et n l’ordre de la correction. Cette dépendance en énergie peut être testée par des mesures de temps de vol entre photons reçus de sources astrophysiques variables (noyaux actifs de galaxies), transitoires (sursauts γ) ou encore périodiques (pulsars). Cette thèse présente l’analyse des données recueillies par CT5 sur le pulsar de Vela. Une méthode de maximum de vraisemblance ayant déjà montré sa robustesse sur d’autres types de sources a été adaptée au cas du pulsar de Vela. Aucune déviation des relations de dispersion standard n’est observée ; par conséquent, des limites sont placées sur E_QG. La plus contraignante est obtenue pour une correction linéaire superluminique aux relations de dispersion : E_QG > 7.0×10^15 GeV. / The fifth telescope (CT5) of the H.E.S.S. array (High Energy Stereoscopic System) was inaugurated in 2012. H.E.S.S. is designed to scrutinize the southern γ-ray sky, and CT5, whose energy threshold is about 30 GeV, allowed the detection of the Vela pulsar in 24 hours of observation time. Some quantum gravity (QG) scenarios predict a violation of Lorentz invariance (LIV). This would manifest itself as additional terms ∝(E/E_QG)^n in the photon dispersion relations, where E is the energy of the light quantum, E_QG the typical scale at which QG processes are expected to occur, and n the order of the correction. This energy dependence can be tested by time-of-flight measurements between photons emitted from variable (active galactic nuclei), transient (gamma-ray bursts) or periodic (pulsars) astrophysical sources. This thesis presents the analysis of the CT5 data collected on the Vela pulsar. A maximum likelihood method already successfully applied to other source classes has been adapted to the Vela pulsar. No deviation from the standard photon dispersion relations is observed; therefore limits have been placed on E_QG. The most restrictive limit is obtained for a superluminal linear correction to the dispersion relations: E_QG > 7.0×10^15 GeV.
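For reference, a schematic form of the parametrization described above (standard in such time-of-flight studies; not quoted from the thesis) and the delay it induces for a nearby, effectively redshift-free source such as a pulsar at distance d:

```latex
% Modified photon dispersion relation with a leading-order LIV correction of
% order n (n = 1 linear, n = 2 quadratic); the sign distinguishes the
% subluminal and superluminal cases:
\[
  E^{2} \simeq p^{2}c^{2}\left[\, 1 \pm \left(\frac{E}{E_{\mathrm{QG}}}\right)^{\! n} \right].
\]
% Two photons of energies E_1 < E_2 emitted simultaneously then arrive
% separated by approximately
\[
  \Delta t \;\simeq\; \pm\,\frac{n+1}{2}\,
  \frac{E_{2}^{\,n} - E_{1}^{\,n}}{E_{\mathrm{QG}}^{\,n}}\,\frac{d}{c},
\]
% which is the energy-dependent lag the maximum likelihood analysis searches
% for in the pulsed emission.
```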
|
304 |
Random Matrix Theory with Applications in Statistics and Finance. Saad, Nadia Abdel Samie Basyouni Kotb. January 2013 (has links)
This thesis investigates a technique, which we call the Scaling technique, to estimate the risk of the mean-variance (MV) portfolio optimization problem. It provides a better estimator of the risk of the MV optimal portfolio. We obtain this result for a general estimator of the covariance matrix of the returns, which covers the correlated sampling case as well as the independent sampling case and the exponentially weighted moving average case. This gave rise to the paper [CMcS].
Our result concerning the Scaling technique relies on the moments of the inverse of compound Wishart matrices, an open problem in the theory of random matrices. We actually tackle a much more general setup, where we consider any random matrix whose distribution has an appropriate invariance property (orthogonal or unitary) under an appropriate action (by conjugation, or by a left-right action). Our approach is based on Weingarten calculus. As an interesting byproduct of our study, and as a preliminary to computing the moments of the inverse of a compound Wishart random matrix, we obtain explicit moment formulas for the pseudo-inverse of Ginibre random matrices. These results are also given in the paper [CMS].
Using the moments of the inverse of compound Wishart matrices, we obtain asymptotically unbiased estimators of the risk and the weights of the MV portfolio. Finally, we have some numerical results which are part of our future work.
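As background, here is a minimal sketch (not the thesis's estimator) of the plug-in minimum-variance portfolio built from a sample covariance matrix; the naive in-sample risk of such a portfolio is known to be optimistic when the number of assets is not small relative to the sample size, which is the bias a scaling correction targets. All data and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 250, 10                      # hypothetical: 250 return observations, 10 assets
returns = rng.normal(0.0, 0.01, size=(T, p))

# Sample covariance of the returns (the plug-in estimator).
S = np.cov(returns, rowvar=False)

# Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1).
ones = np.ones(p)
S_inv_ones = np.linalg.solve(S, ones)
w = S_inv_ones / (ones @ S_inv_ones)

# Naive (in-sample) risk estimate of the optimized portfolio; a corrected
# estimator would rescale this value to account for estimation noise in S.
in_sample_variance = float(w @ S @ w)
print(w.round(3), in_sample_variance)
```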
|
305 |
Investigating the hypothesized factor structure of the Noel-Levitz Student Satisfaction Inventory: A study of the student satisfaction construct. Odom, Leslie R. 12 1900 (has links)
College student satisfaction is a concept that has become more prevalent in higher education research journals. Little attention has been given to the psychometric properties of previous instrumentation, and few studies have investigated the structure of current satisfaction instrumentation. This dissertation: (a) investigated the tenability of the theoretical dimensional structure of the Noel-Levitz Student Satisfaction Inventory (SSI), (b) investigated an alternative factor structure using exploratory factor analyses (EFA), and (c) used multiple-group CFA procedures to determine whether an alternative SSI factor structure would be invariant across three demographic variables: gender (men/women), race/ethnicity (Caucasian/Other), and undergraduate classification level (lower level/upper level). For this study, there was little evidence for the multidimensional structure of the SSI. A single factor, termed General Satisfaction with College, was the lone unidimensional construct that emerged from the iterative CFA and EFA procedures. A revised 20-item model was developed, and a series of multigroup CFAs were used to detect measurement invariance for three variables: student gender, race/ethnicity, and class level. No violations of measurement invariance were noted for the revised 20-item model: results of the invariance tests indicated equivalence across the comparison groups for (a) the number of factors, (b) the pattern of indicator-factor loadings, (c) the factor loadings, and (d) the item error variances. Because little attention has been given to the psychometric properties of satisfaction instrumentation, it is recommended that further research continue on the SSI and on any additional instrumentation developed to measure student satisfaction. It is possible that invariance issues may explain a portion of the inconsistent findings noted in the review of literature. Although measurement analyses are a time-consuming process, they are essential for understanding the psychometric properties of a set of scores obtained from a survey or any other form of assessment instrument.
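A schematic statement, in standard CFA notation rather than the dissertation's own, of the multiple-group measurement model and the nested invariance constraints such tests impose:

```latex
% Measurement model for group g (configural invariance: same pattern of loadings):
\[
  \mathbf{x}^{(g)} = \boldsymbol{\tau}^{(g)} + \boldsymbol{\Lambda}^{(g)}\boldsymbol{\xi}^{(g)} + \boldsymbol{\delta}^{(g)},
  \qquad
  \boldsymbol{\Sigma}^{(g)} = \boldsymbol{\Lambda}^{(g)}\boldsymbol{\Phi}^{(g)}\boldsymbol{\Lambda}^{(g)\prime} + \boldsymbol{\Theta}^{(g)} .
\]
% Increasingly restrictive hypotheses compared across groups g = 1, 2:
%   metric invariance:  \Lambda^{(1)} = \Lambda^{(2)}   (equal factor loadings)
%   scalar invariance:  \tau^{(1)}    = \tau^{(2)}      (equal item intercepts)
%   strict invariance:  \Theta^{(1)}  = \Theta^{(2)}    (equal item error variances)
```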
|
306 |
Multiscale Scanning in Higher Dimensions: Limit theory, statistical consequences and an application in STED microscopy. König, Claudia Juliane. 26 June 2018 (has links)
No description available.
|
307 |
Physical basis of the power-law spatial scaling structure of peak discharges. Ayalew, Tibebu Bekele. 01 May 2015 (has links)
Key theoretical and empirical results from the past two decades have established that peak discharges exhibit a power-law, or scaling, relation with drainage area across multiple scales of time and space. This relationship takes the form Q(A) = αA^Θ, where Q is peak discharge, A is the drainage area, Θ is the flood scaling exponent, and α is the intercept. Motivated by seminal empirical studies showing that the flood scaling parameters α and Θ change from one rainfall-runoff event to another, this dissertation explores how certain rainfall and catchment physical properties control the flood scaling exponent and intercept at the rainfall-runoff event scale, using a combination of extensive numerical simulation experiments and analysis of observational data from the Iowa River basin, Iowa. Results show that Θ generally decreases with increasing values of rainfall intensity, runoff coefficient, and hillslope overland flow velocity, whereas it generally increases with increasing rainfall duration. Moreover, while the flood scaling intercept is primarily controlled by the excess rainfall intensity, it increases with increasing runoff coefficient and hillslope overland flow velocity. Results also show that the temporal intermittency structure of rainfall has a significant effect on the scaling structure of peak discharges. These results highlight the fact that the flood scaling parameters can be estimated from the aforementioned rainfall and catchment physical variables, which can be measured either directly or indirectly using in situ or remote sensing techniques. The dissertation also proposes and demonstrates a new flood forecasting framework that is based on the scaling theory of floods. The results of the study mark a step toward a physically meaningful framework for the regionalization of flood frequencies and hence toward solving the long-standing hydrologic problem of flood prediction in ungauged basins.
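In practice, the event-scale parameters α and Θ are obtained by regressing log peak discharge on log drainage area; a minimal sketch with purely illustrative numbers:

```python
import numpy as np

# Hypothetical single-event data: drainage areas (km^2) of nested subcatchments
# and the peak discharges (m^3/s) observed at their outlets.
area = np.array([5.0, 22.0, 80.0, 310.0, 1200.0, 4600.0])
peak_q = np.array([3.1, 9.8, 26.0, 71.0, 190.0, 480.0])

# The power law Q(A) = alpha * A**theta is linear in log space:
# log Q = log(alpha) + theta * log(A), so a least-squares line yields both parameters.
theta, log_alpha = np.polyfit(np.log(area), np.log(peak_q), 1)
alpha = np.exp(log_alpha)
print(f"flood scaling exponent theta = {theta:.2f}, intercept alpha = {alpha:.2f}")
```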
|
308 |
Etude qualitative d'éventuelles singularités dans les équations de Navier-Stokes tridimensionnelles pour un fluide visqueux. / Description of potential singularities in Navier-Stokes equations for a viscous fluid in dimension three. Poulon, Eugénie. 26 June 2015 (has links)
Nous nous intéressons dans cette thèse aux équations de Navier-Stokes pour un fluide visqueux incompressible. Dans la première partie, nous étudions le cas d’un fluide homogène. Rappelons que la grande question de la régularité globale en dimension 3 est plus ouverte que jamais : on ne sait pas si la solution de l’équation correspondant à un état initial suffisamment régulier mais arbitrairement loin du repos va perdurer indéfiniment dans cet état (régularité globale) ou exploser en temps fini (singularité). Une façon d’aborder le problème est de supposer cette éventuelle rupture de régularité et d’envisager les différents scénarios possibles. Après un rapide survol de la structure propre aux équations de Navier-Stokes et des résultats connus à ce jour (chapitre 1), nous nous intéressons (chapitre 2) à l’existence locale (en temps) de solutions dans des espaces de Sobolev qui ne sont pas invariants d’échelle. Partant d’une donnée initiale qui produit une singularité, on prouve l’existence d’une constante optimale qui minore le temps de vie de la solution. Cette constante, donnée par la méthode rudimentaire du point fixe, fournit ainsi un bon ordre de grandeur sur le temps de vie maximal de la solution. Au chapitre 3, nous poursuivons les investigations sur le comportement de telles solutions explosives à la lumière de la méthode des éléments critiques. Dans la seconde partie de la thèse, nous nous intéressons à un modèle plus réaliste du point de vue de la physique, celui d’un fluide incompressible à densité variable. Ceci est modélisé par les équations de Navier-Stokes incompressibles et inhomogènes. Nous avons étudié le caractère globalement bien posé de ces équations dans la situation d’un fluide évoluant dans un tore de dimension 3, avec des données initiales appartenant à des espaces critiques et sans hypothèse de petitesse sur la densité. / This thesis is concerned with the incompressible Navier-Stokes equations for a viscous fluid. In the first part, we study the case of a homogeneous fluid. Let us recall that the big question of global regularity in dimension 3 is still open: we do not know whether the solution associated with an initial datum that is smooth enough but arbitrarily far from rest will persist for all time (global regularity) or, on the contrary, will cease to exist in finite time and blow up (singularity). The goal of this thesis is to study this possible breakdown of regularity. One way to deal with this question is to assume that such a phenomenon occurs and to study the different possible scenarios. Chapter 1 is devoted to a recollection of well-known results. In Chapter 2, we are interested in the local (in time) existence of solutions in Sobolev spaces which are not invariant under the natural scaling of Navier-Stokes. Starting with a datum generating a singularity, we prove that there exists an optimal lower bound on the lifespan of such a solution; the lower bound provided by the elementary fixed-point procedure thus gives the correct order of magnitude. We then continue to investigate the behaviour of regular solutions near blow-up, using the method of critical elements (Chapter 3). In the second part, we are concerned with a more realistic model from a physical point of view: the inhomogeneous incompressible Navier-Stokes system. We study the global well-posedness of this model for an inhomogeneous fluid evolving on a three-dimensional torus, with critical initial data and without any smallness assumption on the density.
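For orientation, the equations and the natural scaling referred to above, written in standard form (a sketch, not quoted from the thesis):

```latex
% Incompressible (homogeneous) Navier-Stokes equations for velocity u and pressure p:
\[
  \partial_t u + (u \cdot \nabla) u - \nu \Delta u + \nabla p = 0,
  \qquad \operatorname{div} u = 0 .
\]
% Natural scaling: if (u, p) solves the system, then so does
\[
  u_\lambda(t,x) = \lambda\, u(\lambda^{2} t, \lambda x), \qquad
  p_\lambda(t,x) = \lambda^{2} p(\lambda^{2} t, \lambda x), \qquad \lambda > 0 .
\]
% A space of initial data is called critical (scale-invariant) when this rescaling
% leaves its norm unchanged, as for \dot H^{1/2}(\mathbb{R}^3); the Sobolev spaces
% considered in Chapter 2 are not invariant under this scaling.
```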
|
309 |
Green's function estimates for elliptic and parabolic operators: Applications to quantitative stochastic homogenization and invariance principles for degenerate random environments and interacting particle systems. Giunti, Arianna. 19 April 2017 (has links)
This thesis is divided into two parts: In the first one (Chapters 1 and 2), we deal with problems arising from quantitative homogenization of the random elliptic operator in divergence form $-\nabla \cdot a \nabla$. In Chapter 1 we study existence and stochastic bounds for the Green function $G$ associated to $-\nabla \cdot a \nabla$ in the case of systems. Without assuming any regularity on the coefficient field $a = a(x)$, we prove that for every (measurable) uniformly elliptic tensor field $a$ and for almost every point $y \in \mathbb{R}^d$, there exists a unique Green's function centred in $y$ associated to the vectorial operator $-\nabla \cdot a\nabla$ in $\mathbb{R}^d$, $d > 2$. In addition, we prove that if we introduce a shift-invariant ensemble $\langle\cdot\rangle$ over the set of uniformly elliptic tensor fields, then $\nabla G$ and its mixed derivatives $\nabla \nabla G$ satisfy optimal pointwise $L^1$-bounds in probability.
Chapter 2 deals with the homogenization of $-\nabla \cdot a \nabla$ to $-\nabla \cdot a_h \nabla$, in the sense that we study the large-scale behaviour of $a$-harmonic functions in exterior domains by comparing them with functions which are $a_h$-harmonic. More precisely, we make use of the first and second-order correctors to compare an $a$-harmonic function $u$ to the two-scale expansion of a suitable $a_h$-harmonic function $u_h$. We show that there is a direct correspondence between the rate of the sublinear growth of the correctors and the smallness of the relative homogenization error $u - u_h$.
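For orientation, a standard schematic form (not quoted from the thesis) of the first-order two-scale expansion and corrector equation referred to above:

```latex
% First-order two-scale expansion: an a-harmonic function u is compared with
\[
  u(x) \;\approx\; u_h(x) + \sum_{i=1}^{d} \phi_i(x)\, \partial_i u_h(x),
\]
% where u_h is a_h-harmonic and the correctors \phi_i solve
\[
  -\nabla \cdot a \,\bigl(e_i + \nabla \phi_i\bigr) = 0 \qquad \text{in } \mathbb{R}^d .
\]
% The sublinear growth of the correctors at large scales is what quantifies
% the homogenization error u - u_h mentioned in the abstract.
```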
The theory of stochastic homogenization of elliptic operators admits an equivalent probabilistic counterpart, which follows from the link between parabolic equations with elliptic operators in divergence form and random walks. This allows us to reformulate the problem of homogenization in terms of an invariance principle for random walks. The second part of the thesis (Chapters 3 and 4) focusses on this interplay between probabilistic and analytic approaches and aims at exploiting it to study invariance principles in the case of degenerate random conductance models and systems of interacting particles.
In Chapter 3 we study a random conductance model where we assume that the conductances are independent, stationary and bounded from above but not uniformly away from $0$. We give a simple necessary and sufficient condition for the relaxation of the environment seen by the particle to be diffusive in the sense of every polynomial moment.
As a consequence, we derive polynomial moment estimates on the corrector which imply that the discrete elliptic operator homogenises or, equivalently, that the random conductance model satisfies a quenched invariance principle.
In Chapter 4 we turn to a more complicated model, namely the symmetric exclusion process. We show a diffusive upper bound on the transition probability of a tagged particle in this process. The proof relies on optimal spectral gap estimates for the dynamics in finite volume, which are of independent interest. We also show off-diagonal estimates of Carne-Varopoulos type.
|
310 |
A Hardware Architecture for Scale-space Extrema Detection. Ijaz, Hamza. January 2012 (has links)
Vision-based object recognition and localization have been studied widely in recent years. Often the initial step in such tasks is the detection of interest points in a grey-level image. The current state-of-the-art algorithms in this domain, like the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), suffer from low execution speeds on a GPU (graphics processing unit) based system. Generally the performance of these algorithms on a GPU falls below real-time due to their high computational complexity and data-intensive nature, and results in elevated power consumption. Since real-time performance is desirable in many vision-based applications, hardware-based feature detection is an emerging solution that exploits the inherent parallelism in such algorithms to achieve significant speed gains. The efficient utilization of resources remains a challenge that directly affects the cost of the hardware. This work proposes a novel hardware architecture for the scale-space extrema detection part of the SIFT algorithm. The implementation of the proposed architecture for a Xilinx Virtex-4 FPGA and its evaluation are also presented. The implementation is sufficiently generic and can be adapted efficiently to different design parameters according to the requirements of the application. The achieved system performance exceeds real-time requirements (30 frames per second) on a 640 x 480 image. Synthesis results show efficient resource utilization when compared with existing known implementations.
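To make the detected operation concrete, here is a minimal software sketch (not the hardware architecture itself) of scale-space extrema detection: build a difference-of-Gaussians stack for one octave and keep pixels that are extrema of their 3x3x3 (26-neighbour) neighbourhood. All parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(image, num_scales=5, sigma0=1.6, k=2 ** 0.5, threshold=0.03):
    """Return (scale, row, col) indices of scale-space extrema in a grayscale image in [0, 1]."""
    # Gaussian stack for one octave, then difference-of-Gaussians (DoG) layers.
    blurred = [gaussian_filter(image, sigma0 * k ** i) for i in range(num_scales)]
    dog = np.stack([blurred[i + 1] - blurred[i] for i in range(num_scales - 1)])

    # A pixel is a candidate if it equals the max (or min) of its 3x3x3
    # neighbourhood across space and scale, and its contrast passes the threshold.
    local_max = maximum_filter(dog, size=3) == dog
    local_min = minimum_filter(dog, size=3) == dog
    strong = np.abs(dog) > threshold
    candidates = (local_max | local_min) & strong

    # Exclude outermost DoG layers and image borders, which lack full neighbourhoods.
    candidates[[0, -1], :, :] = False
    candidates[:, [0, -1], :] = False
    candidates[:, :, [0, -1]] = False
    return np.argwhere(candidates)
```

On a normalized grayscale frame img, dog_extrema(img) would return candidate keypoint locations that a full SIFT pipeline would then refine and describe.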
|