351 |
Application de techniques de commande avancées dans le domaine automobile. Pita Gil, Guillermo. 28 March 2011.
The work carried out in this thesis focused on applying advanced control methods and techniques to current problems in the automotive industry. The topics addressed covered three fundamental axes, relying on techniques such as LTI and q-LPV H-infinity synthesis, dynamic feedback linearization, the retuning of PI-type controllers in particular, and the optimization of the weighting filters required for H-infinity synthesis:
* Trajectory control of an automobile. We proposed a control structure that follows an approach classically used in the aeronautics and space fields.
* Air-path control of a turbocharged gasoline engine. We proposed a novel q-LPV formulation of the engine model. This new control-oriented model allowed us to synthesize advanced parameter-varying controllers that adapt automatically to the operating point.
* Brake control of an electric vehicle. For this part, we set out the motivation for and interest of electric vehicles, then studied the gain in range potentially achievable through regenerative braking. Finally, we developed solutions to reduce the oscillations induced in the drivetrain by braking-torque demands on the electric machine.
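As a purely illustrative sketch of the gain-scheduling idea behind such a q-LPV design (not the thesis's actual H-infinity controllers; the gain tables and plant below are hypothetical):

```python
import numpy as np

# Minimal sketch of the gain-scheduling idea behind a q-LPV controller
# (not the thesis's actual design): PI gains are interpolated against a
# measured scheduling variable, here engine speed, so the controller
# adapts automatically to the operating point. The gain tables are
# hypothetical illustration values.
speed_grid = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])  # rpm
kp_grid = np.array([0.8, 0.6, 0.5, 0.45, 0.4])
ki_grid = np.array([2.0, 1.6, 1.3, 1.1, 1.0])

class ScheduledPI:
    """PI controller whose gains vary with the scheduling variable."""

    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0

    def update(self, setpoint, measurement, speed):
        kp = np.interp(speed, speed_grid, kp_grid)
        ki = np.interp(speed, speed_grid, ki_grid)
        error = setpoint - measurement
        self.integral += error * self.dt
        return kp * error + ki * self.integral

# Usage: track a boost-pressure setpoint while engine speed drifts.
ctrl = ScheduledPI(dt=0.01)
pressure, speed = 1.0, 1500.0
for _ in range(100):
    u = ctrl.update(setpoint=1.5, measurement=pressure, speed=speed)
    pressure += 0.01 * (u - 0.2 * (pressure - 1.0))  # crude first-order plant
    speed += 5.0
print("final boost pressure:", round(pressure, 3))
```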
|
352 |
Application des codes cycliques tordus. Yemen, Olfa. 19 January 2013.
The subject concerns a class of error-correcting codes known as skew cyclic codes, and its applications to quantum computing and to quasi-cyclic codes. Classical cyclic codes have the structure of ideals in a polynomial ring. In 2008, Ulmer introduced a generalization to so-called skew polynomial rings, a class of non-commutative rings introduced by Ore in 1933. In this thesis we explore the case of the field with four elements and of the product ring of two copies of the field with two elements.
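To make the twisted multiplication rule concrete, here is a small sketch of skew (Ore) polynomial arithmetic over the four-element field (illustrative background only, not the thesis's constructions):

```python
# Skew polynomial multiplication over F4 = {0, 1, w, w^2} with
# w^2 = w + 1, encoding elements as 0, 1, 2, 3 where 2 = w and 3 = w^2.
# The automorphism is the Frobenius a -> a^2, and the twisted rule is
# X * a = theta(a) * X, which makes the ring non-commutative.

# Multiplication table for F4 (addition on this encoding is XOR).
MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def theta(a):
    """Frobenius automorphism a -> a^2 on F4."""
    return MUL[a][a]

def skew_mul(f, g):
    """Multiply skew polynomials f, g (lists of F4 coefficients, f[i] is
    the coefficient of X^i), using X^i * a = theta^i(a) * X^i."""
    result = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            c = gj
            for _ in range(i):           # push gj through X^i
                c = theta(c)
            result[i + j] ^= MUL[fi][c]  # addition in F4 is XOR
    return result

# (X + w)(X + 1) and (X + 1)(X + w) differ, showing non-commutativity.
print(skew_mul([2, 1], [1, 1]))  # [2, 3, 1]: X^2 + w^2 X + w
print(skew_mul([1, 1], [2, 1]))  # [2, 2, 1]: X^2 + w X + w
```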
|
353 |
Inférence pour des processus affines basée sur des observations à temps discret. Lolo, Maryam. January 2009.
In this thesis, we study the empirical distribution of maximum likelihood estimators for affine processes, based on discrete-time observations. We first examine the case where the process is directly observable. We then consider what happens when only an affine transformation of the process is observable, a typical situation in financial applications. Two approaches are then considered: maximization of the exact likelihood, or maximization of a quasi-likelihood obtained from the Kalman filter. AUTHOR KEYWORDS: Maximum likelihood estimation, affine processes, discount bond, quasi-likelihood, Kalman filter.
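As a hedged sketch of the Kalman-filter quasi-likelihood idea mentioned above (a scalar linear-Gaussian approximation, not the thesis's exact affine model; parameter names are illustrative):

```python
import numpy as np

def kalman_quasi_loglik(y, a, c, q, r, x0=0.0, p0=1.0):
    """Gaussian quasi-log-likelihood of observations y under the
    state-space model x_t = a*x_{t-1} + N(0, q), y_t = c*x_t + N(0, r),
    accumulated from the Kalman filter's one-step prediction errors."""
    x, p, ll = x0, p0, 0.0
    for obs in y:
        # Predict.
        x_pred = a * x
        p_pred = a * a * p + q
        # Innovation and its variance.
        v = obs - c * x_pred
        s = c * c * p_pred + r
        ll += -0.5 * (np.log(2.0 * np.pi * s) + v * v / s)
        # Update.
        k_gain = p_pred * c / s
        x = x_pred + k_gain * v
        p = (1.0 - k_gain * c) * p_pred
    return ll

# Usage: simulate data, then compare the quasi-likelihood at the true
# parameters against a perturbed value (typically lower).
rng = np.random.default_rng(1)
a_true, c_true, q_true, r_true = 0.9, 1.0, 0.1, 0.2
x, ys = 0.0, []
for _ in range(400):
    x = a_true * x + rng.normal(scale=np.sqrt(q_true))
    ys.append(c_true * x + rng.normal(scale=np.sqrt(r_true)))
print(kalman_quasi_loglik(ys, 0.9, 1.0, 0.1, 0.2))
print(kalman_quasi_loglik(ys, 0.5, 1.0, 0.1, 0.2))
```

The quasi-likelihood would then be maximized over the model parameters, for instance with a numerical optimizer.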
|
354 |
The Neural Computations of Spatial Memory from Single Cells to Networks. Hedrick, Kathryn. 06 September 2012.
Studies of spatial memory provide valuable insight into more general mnemonic functions, for by observing the activity of cells such as place cells, one can follow a subject’s dynamic representation of a changing environment. I investigate how place cells resolve conflicting neuronal input signals by developing computational models that integrate synaptic inputs on two scales. First, I construct reduced models of morphologically accurate neurons that preserve neuronal structure and the spatial
specificity of inputs. Second, I use a parallel implementation to examine the dynamics among a network of interconnected place cells. Both models elucidate possible roles for the inputs and mechanisms involved in spatial memory.
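As a toy illustration of the network-scale side of such models (far simpler than the morphologically detailed neurons described above; all parameters are arbitrary):

```python
import numpy as np

# Toy rate model of a network of place cells: each cell receives a
# feedforward drive given by a Gaussian tuning curve around its
# preferred location, and recurrent weights couple cells with nearby
# place fields. Illustrative only.
n_cells = 50
centers = np.linspace(0.0, 1.0, n_cells)  # preferred locations on a track
sigma = 0.08                              # place-field width (arbitrary)

def place_input(position):
    """Feedforward drive: Gaussian tuning around each cell's field center."""
    return np.exp(-((position - centers) ** 2) / (2.0 * sigma ** 2))

# Recurrent weights: excitation between cells with overlapping fields.
W = np.exp(-((centers[:, None] - centers[None, :]) ** 2) / (2.0 * sigma ** 2))
W *= 0.05 / n_cells

# Simple rate dynamics: tau * dr/dt = -r + [input + W r]_+
rates = np.zeros(n_cells)
tau, dt = 0.05, 0.005
for step in range(400):
    position = step * dt * 0.5 % 1.0      # animal moving along the track
    drive = place_input(position) + W @ rates
    rates += (dt / tau) * (-rates + np.maximum(drive, 0.0))

print("most active cell center:", centers[np.argmax(rates)])
```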
|
355 |
Information Matrices in Estimating Function Approach: Tests for Model Misspecification and Model Selection. Zhou, Qian. January 2009.
Estimating functions have been widely used for parameter
estimation in various statistical problems. Regular estimating
functions produce parameter estimators which have desirable
properties, such as consistency and asymptotic normality. In
quasi-likelihood inference, an important example of estimating
functions, correct specification of the first two moments of the
underlying distribution leads to information unbiasedness, which
states that two forms of the information matrix are equal: the
negative sensitivity matrix (the negative expectation of the
first-order derivative of an estimating function) and the variability
matrix (the variance of an estimating function); in other words, the
analogue of the Fisher information is equivalent to the Godambe
information. Consequently, information unbiasedness indicates
that the model-based covariance matrix estimator and sandwich
covariance matrix estimator are equivalent. By comparing the
model-based and sandwich variance estimators, we propose information
ratio (IR) statistics for testing model misspecification of
variance/covariance structure under correctly specified mean
structure, in the context of linear regression models, generalized
linear regression models and generalized estimating equations.
Asymptotic properties of the IR statistics are discussed. In
addition, through intensive simulation studies, we show that the IR
statistics are powerful in various applications: testing for
heteroscedasticity in linear regression models, testing for
overdispersion in count data, and testing for a misspecified variance
function and/or a misspecified working correlation structure.
Moreover, the IR statistics appear more powerful than the classical
information matrix test proposed by White (1982).
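As an illustration of the two covariance estimators the IR statistics compare (a sketch, not the thesis's exact test statistic; the data and dimensions are made up):

```python
import numpy as np

# For a Poisson regression with log link, compare the model-based
# covariance estimator, built from the sensitivity matrix, with the
# sandwich estimator, which also uses the variability matrix. Under a
# correctly specified variance the two roughly agree; under
# overdispersion they drift apart.
rng = np.random.default_rng(0)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))  # swap in an overdispersed
                                        # generator to see divergence

# Fit by Fisher scoring for the Poisson log-likelihood.
beta = np.zeros(p)
for _ in range(25):
    mu = np.exp(X @ beta)
    score = X.T @ (y - mu)
    S = X.T @ (mu[:, None] * X)  # sensitivity matrix (Fisher information)
    beta += np.linalg.solve(S, score)

mu = np.exp(X @ beta)
S = X.T @ (mu[:, None] * X)               # negative sensitivity matrix
V = X.T @ (((y - mu) ** 2)[:, None] * X)  # variability matrix

cov_model = np.linalg.inv(S)              # model-based covariance
cov_sandwich = cov_model @ V @ cov_model  # sandwich covariance

# Crude scalar summary of the discrepancy: the eigenvalues of S^{-1} V
# should all be near 1 under information unbiasedness.
print(np.linalg.eigvals(np.linalg.solve(S, V)))
```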
In the literature, model selection criteria have been intensively
discussed, but almost all of them target choosing the optimal mean
structure. In this thesis, two model selection procedures are
proposed for selecting the optimal variance/covariance structure
among a collection of candidate structures. One is based on a
sequence of the IR tests for all the competing variance/covariance
structures. The other is based on an "information discrepancy
criterion" (IDC), which provides a measure of discrepancy
between the negative sensitivity matrix and the variability matrix.
In fact, this IDC characterizes the relative efficiency loss when
using a certain candidate variance/covariance structure, compared
with the true but unknown structure. Through simulation studies and
analyses of two data sets, it is shown that the two proposed model
selection methods both have a high rate of detecting the
true/optimal variance/covariance structure. In particular, since the
IDC magnifies the differences among the competing structures, it is
highly sensitive in detecting the most appropriate variance/covariance
structure.
|
356 |
A tailored skills training programme for professionals in primary health care to increase prescriptions of physical activity on prescription, FaR. Månsson, Ann. January 2011.
ABSTRACT Aim: The aim of this study was to evaluate the effects of a tailored behavioural skills intervention on the amount of FaR® prescribed, and to describe self-efficacy for prescribing FaR® over time among participants from primary health care units. Method: A quasi-experimental single-case design with multiple baselines across time and settings was used. Each baseline had an ABC design: baseline (A), intervention (B) and post-intervention (C). The intervention was introduced at two different PHCUs at different times. Seven participants were included. Primary outcome measurements were collected repeatedly for participants across settings. The method was based on behavioural medicine principles, and key concepts from Social Cognitive Theory (SCT) were used in the intervention. Result: The results seemed to demonstrate an effect on prescribing behaviour in terms of a slightly increased amount of prescribed FaR® during the intervention phase, although not for all participants. There was no or only short latency before the behaviour changed during the intervention. The adopted behaviour was not maintained in the post-intervention phase. Self-efficacy for prescribing FaR® varied; the variation in overall self-efficacy between baseline and post-intervention ranged from -10% to 81%. Conclusion: This study indicated that a tailored skills training programme might have the potential to change prescribing behaviour among professionals in primary health care. An intervention lasting eleven weeks seemed insufficient to maintain the achieved performance. No conclusion could be drawn about self-efficacy. Keywords: Quasi-experimental single-case design, physical activity on prescription FaR®, behavioural medicine, implementation, primary care.
|
358 |
Keeping Up With the Joneses: Electricity Consumption, Publicity and Social Network Influence in Milton, Ontario. Deline, Mary Elizabeth. January 2010.
Abstract
This study used an exploratory research focus to investigate whether making electricity consumption public and subject to social norms and networks resulted in consumption decreases for households in Milton, Ontario. In the first phase, Milton Hydro identified customers who fell within an average annual electricity consumption category, and these customers were invited by mail to participate. Due to lack of participant uptake, cold-calling, targeting of service and faith groups and commuters, and snowball sampling were employed to obtain a total of 17 participants. In the second phase, participants were grouped according to social network type (occupational, faith group, etc.) and exposed to approval or disapproval indicators within their group about their daily electricity consumption rates via an on-line 'energy pool'. There were five main groups: one of neighbours, one of members of a faith group, one of members of a company, one of strangers, and a control group. Group members saw other members' indicators, with the exception of the control group, whose indicators were delivered privately. All groups' electricity consumption was tracked through daily smart meter readings. Participants also had the option of commenting on each other's electricity use via an online 'comment box'. In the third phase, participants were asked to complete a questionnaire assessing: 1) the perceived efficacy of the intervention; 2) perceptions of electricity consumption; and 3) the influence of the group on these perceptions. This sequential methodology was chosen for its ability to "...explain significant (or non-significant) results, outlier results, or surprising results" (Creswell, 2006, p. 72).
The findings of this exploratory research seem to suggest the following:
1) that publicity and group type did not appear to affect electricity consumption under comparative consumption feedback in this study;
2) that participants used injunctive norms to comment on their electricity consumption but directed these comments solely at themselves; and
3) that the stronger the relationships in the group, the more likely participants were to engage with the website by checking it and commenting on it.
This study may be useful to those in the fields of: 1) electricity conservation, who wish to leverage feedback technologies; 2) social networks, who wish to better understand how tie strength interacts with social norms; and 3) social marketing, who wish to develop norm-based campaigns.
|
359 |
Waveguide Sources of Photon Pairs. Horn, Rolf. January 2011.
This thesis describes various methods for producing photon pairs from waveguides. It covers relevant topics such as waveguide coupling and phase matching, along with the measurement techniques used to infer photon pair production. A new proposal to solve the phase matching problem is described, along with two conceptual methods for generating entangled photon pairs. Photon pairs are also experimentally demonstrated from a third novel structure called a Bragg Reflection Waveguide (BRW).
The new proposal to solve the phase matching problem is called Directional Quasi-Phase Matching (DQPM). It is a technique that exploits the directional dependence of the nonlinear susceptibility ($\chi^{(2)}$) tensor. It is aimed at those materials that do not allow birefringent phase-matching or periodic poling. In particular, it focuses on waveguides in which the interplay between the propagation direction, electric field polarizations and the nonlinearity can change the strength and sign of the nonlinear interaction periodically to achieve quasi-phase-matching.
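For background, the standard quasi-phase-matching condition (textbook material, not the thesis's DQPM derivation) can be stated as follows: for pump, signal and idler wavevectors the phase mismatch is $\Delta k = k_p - k_s - k_i$, and a periodic modulation of the nonlinearity with period $\Lambda$ contributes a grating vector $2\pi/\Lambda$ that compensates it, so that $\Delta k - 2\pi/\Lambda = 0$, i.e. $\Lambda = 2\pi/\Delta k$. DQPM, as described above, obtains the periodic sign change from the propagation direction rather than from poling.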
One of the new conceptual methods for generating entangled photon pairs involves a new technique that sandwiches together two waveguides from two differently oriented but otherwise similar crystals. The idea stems from the design of a Michelson interferometer, which interferes the paths over which two distinct photon pair processes can occur, thereby creating entanglement in any pair of photons created in the interferometer. By forcing or sandwiching the two waveguides together, the physical space that exists in the standard Michelson-type interferometer is eliminated, and the interferometer is effectively squashed. The result is that the two distinct photon pair processes actually occupy the same physical path. This benefits the stability of the interferometer in addition to miniaturizing it. The technical challenges involved in sandwiching the two waveguides are briefly discussed.
The main result of this thesis is the observation of photon pairs from the BRW. By analyzing the time correlation between two single-photon detection events, spontaneous parametric down-conversion (SPDC) of a picosecond pulsed Ti:sapphire laser is demonstrated. The process is mediated by a ridge BRW. The results show evidence for type-0, type-I and type-II phase matching of pump light at 783 nm, 786 nm and 789 nm to down-converted light that is strongly degenerate at 1566 nm, 1572 nm and 1578 nm respectively. The inferred efficiency of the BRW was $9.8 \times 10^{-9}$ photon pairs per pump photon. This contrasts with the predicted type-0 efficiency of $2.65 \times 10^{-11}$. These data are presented for the first time for such waveguides, and represent a significant advance towards integrating sources of quantum information into the existing telecommunications infrastructure.
|
360 |
High-Resolution Numerical Simulations of Wind-Driven Gyres. Ko, William. January 2011.
The dynamics of the world's oceans occur at a vast range of length scales. Although there are theories that aid in understanding the dynamics at planetary scales and microscales, the motions in between are still not well understood. This work discusses a numerical model to study barotropic wind-driven gyre flow that is capable of resolving dynamics at the synoptic scale, O(1000 km), the mesoscale, O(100 km), and the submesoscale, O(10 km). The Quasi-Geostrophic (QG) model has been used predominantly to study ocean circulations, but it is limited in that it can only describe motions at synoptic scales and mesoscales. The Rotating Shallow Water (SW) model can describe dynamics over a wider range of horizontal length scales and better captures motions at the submesoscales. Numerical methods capable of high-resolution simulations are discussed for both the QG and SW models, and the numerical results are compared. To achieve high accuracy and resolve an optimal range of length scales, spectral methods are applied to solve the governing equations, and a third-order Adams-Bashforth method is used for the temporal discretization. Several simulations of both models are computed by varying the strength of dissipation. The simulations either tend to a laminar steady state or to a turbulent flow with dynamics occurring at a wide range of length and time scales. The laminar results show similar behaviours in both models; thus QG and SW tend to agree when describing slow, large-scale flows. The turbulent simulations begin to differ as QG breaks down when faster and smaller-scale motions occur. Essential differences in the underlying assumptions of the QG and SW models are highlighted using the results from the numerical simulations.
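As a toy illustration of the numerical approach described (Fourier spectral differentiation in space, third-order Adams-Bashforth in time), here is a minimal sketch for 1D linear advection; the grid size, wave speed and time step are arbitrary choices, not values from the thesis:

```python
import numpy as np

# Fourier spectral differentiation with third-order Adams-Bashforth
# (AB3) time stepping, applied to u_t + c u_x = 0 on a periodic domain.
N = 256                       # number of grid points
L = 2.0 * np.pi               # periodic domain length
c = 1.0                       # advection speed
dt = 1.0e-3                   # time step
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers

def rhs(u):
    """Spectral evaluation of -c * du/dx."""
    u_hat = np.fft.fft(u)
    return -c * np.real(np.fft.ifft(1j * k * u_hat))

u = np.exp(-10.0 * (x - np.pi) ** 2)  # smooth initial bump

# Bootstrap the multistep scheme with two forward Euler steps, then
# march with AB3: u^{n+1} = u^n + dt*(23 f^n - 16 f^{n-1} + 5 f^{n-2})/12.
f_nm2 = rhs(u)
u = u + dt * f_nm2
f_nm1 = rhs(u)
u = u + dt * f_nm1
f_n = rhs(u)
for _ in range(5000):
    u = u + dt * (23.0 * f_n - 16.0 * f_nm1 + 5.0 * f_nm2) / 12.0
    f_nm2, f_nm1, f_n = f_nm1, f_n, rhs(u)

print("max|u| after integration:", np.abs(u).max())
```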
|