531 |
On spectrum sensing, resource allocation, and medium access control in cognitive radio networks
Karaputugala Gamacharige, Madushan Thilina
Cognitive radio-based wireless networks have been proposed as a promising technology
to improve the utilization of the radio spectrum through opportunistic spectrum access. In
this context, cognitive radios opportunistically access spectrum licensed to
primary users when the primary users' transmissions are detected to be absent. For opportunistic
spectrum access, the cognitive radios should sense the radio environment and allocate
the spectrum and power based on the sensing results. To this end, in this thesis, I first develop
a novel cooperative spectrum sensing scheme for cognitive radio networks (CRNs) based
on machine learning techniques for pattern classification. In this regard, both
unsupervised and supervised learning-based classification techniques are implemented for
cooperative spectrum sensing. Second, I propose a novel joint channel and power allocation
scheme for downlink transmission in cellular CRNs. I formulate the downlink
resource allocation problem as a generalized spectral-footprint minimization problem. The
channel assignment problem for secondary users is solved by applying a modified Hungarian
algorithm, while the power allocation subproblem is solved using a Lagrangian
technique. Specifically, I propose a low-complexity modified Hungarian algorithm for subchannel
allocation which exploits the local information in the cost matrix. Finally, I propose
a novel dynamic common control channel-based medium access control (MAC) protocol
for CRNs. Specifically, unlike the traditional dedicated control channel-based MAC protocols,
the proposed MAC protocol eliminates the requirement of a dedicated channel for
control information exchange. / October 2015
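To illustrate the unsupervised branch of the cooperative sensing scheme described above, the sketch below clusters fused energy-detection reports with a small K-means routine and then labels a new report as channel-available or primary-user-present. It is a toy illustration written for this summary, not the thesis's actual algorithm; the number of cooperating radios, the energy statistics and the two-cluster formulation are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Energy vectors reported by 3 cooperating cognitive radios (assumed setup).
# Class H0: primary user absent (noise only); class H1: primary user present.
n0, n1, n_radios = 200, 200, 3
H0 = rng.normal(1.0, 0.2, size=(n0, n_radios))                                   # noise-only energies
H1 = rng.normal(1.0, 0.2, size=(n1, n_radios)) + rng.uniform(0.5, 1.5, (n1, 1))  # extra PU energy
X = np.vstack([H0, H1])

def kmeans(data, k=2, iters=50):
    """Plain Lloyd's algorithm; returns centroids."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None, :] - centroids[None], axis=2), axis=1)
        centroids = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

centroids = kmeans(X)
# The cluster with the larger total energy is taken to correspond to "primary user present".
busy = int(np.argmax(centroids.sum(axis=1)))

def channel_available(energy_report):
    """Classify a fused energy report by nearest centroid."""
    label = int(np.argmin(np.linalg.norm(centroids - energy_report, axis=1)))
    return label != busy

print(channel_available(np.array([1.0, 0.9, 1.1])))   # likely True  (idle channel)
print(channel_available(np.array([2.2, 2.0, 2.4])))   # likely False (primary user active)
```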
|
532 |
Les processus additifs markoviens et leurs applications en finance mathématique / Markov additive processes and their applications in mathematical finance
Momeya Ouabo, Romuald Hervé
Cette thèse porte sur les questions d'évaluation et de couverture des options
dans un modèle exponentiel-Lévy avec changements de régime. Un tel modèle est
construit sur un processus additif markovien un peu comme le modèle de Black-
Scholes est basé sur un mouvement Brownien. Du fait de l'existence de plusieurs
sources d'aléa, nous sommes en présence d'un marché incomplet et ce fait rend
inopérant les développements théoriques initiés par Black et Scholes et Merton
dans le cadre d'un marché complet.
Nous montrons dans cette thèse que l'utilisation de certains résultats de la théorie
des processus additifs markoviens permet d'apporter des solutions aux problèmes
d'évaluation et de couverture des options. Notamment, nous arrivons à caracté-
riser la mesure martingale qui minimise l'entropie relative à la mesure de probabilit
é historique ; aussi nous dérivons explicitement sous certaines conditions,
le portefeuille optimal qui permet à un agent de minimiser localement le risque
quadratique associé. Par ailleurs, dans une perspective plus pratique nous caractérisons
le prix d’une option européenne comme l’unique solution de viscosité
d’un système d’équations intégro-différentielles non-linéaires. Il s’agit là d’un premier
pas pour la construction des schémas numériques pour approcher ledit prix. / This thesis focuses on the pricing and hedging problems of financial derivatives in
a Markov-modulated exponential-Lévy model. Such a model is built on a Markov
additive process, much as the Black-Scholes model is built on Brownian motion.
Since there are several sources of randomness, we are dealing with an incomplete
market, and this renders inoperative the techniques initiated by Black, Scholes and
Merton in the context of a complete market.
We show that, by using some results from the theory of Markov additive processes, it
is possible to provide solutions to these problems. In particular, we characterize
the martingale measure which minimizes the relative entropy with respect
to the physical probability measure. Also, under some conditions, we derive explicitly
the optimal portfolio which allows an agent to minimize the associated local quadratic
risk. Furthermore, from a more practical perspective, we characterize the
price of a European-type option as the unique viscosity solution of a system of
nonlinear integro-differential equations. This is a first step towards the construction
of effective numerical schemes to approximate option prices.
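To make the setting concrete, a schematic version of the model and of the entropy-minimization criterion discussed above is written out below. The notation is ours and the formulation is only a sketch of the standard objects involved, not a transcription of the thesis.

```latex
% Regime-switching exponential-Lévy price model driven by a Markov additive process (X, J):
% J is a finite-state Markov chain (the regime) and, given the path of J, X evolves as a
% Lévy process whose characteristics depend on the current regime.
S_t = S_0 \, e^{X_t}, \qquad (X_t, J_t)_{t \ge 0} \ \text{a Markov additive process}.

% Minimal entropy martingale measure: among the equivalent martingale measures
% \mathcal{M} for the discounted price, pick the one closest to the historical measure P
% in relative entropy.
Q^{\ast} \in \arg\min_{Q \in \mathcal{M}} H(Q \mid P),
\qquad
H(Q \mid P) = \mathbb{E}_{Q}\!\left[\ln \frac{\mathrm{d}Q}{\mathrm{d}P}\right].
```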
|
533 |
Insite as Representation and Regulation: A Discursively-Informed Analysis of the Implementation and Implications of Canada’s First Safe Injection Site
Sanderson, Alicia, 21 July 2011
This study consisted of a qualitative analysis of articles from two Canadian newspapers related to North America’s only safe injection facility for drug users, Vancouver’s Insite, and examined the texts for latent themes derived from a review of the harm reduction and governmentality literature. The investigation asked: “In what ways are Insite and its clients represented in the media and what implications do those portrayals have in terms of Insite’s operation as a harm reduction practice as well as a governmental strategy designed to direct the conduct of drug users who visit the site?” The analysis revealed conflicting representations: some have positive potential in terms of Insite’s adherence to the fundamental principles of harm reduction, while others undermined those principles and suggested that the site may have traditional governmental functions, perhaps indicating less distance between the harm reduction and governmentality philosophies in the discourse surrounding the safe injection site (SIS) than expected.
|
534 |
Iterative and Adaptive PDE Solvers for Shared Memory Architectures / Iterativa och adaptiva PDE-lösare för parallelldatorer med gemensam minnesorganisation
Löf, Henrik, January 2006
Scientific computing is used frequently in an increasing number of disciplines to accelerate scientific discovery. Many such computing problems involve the numerical solution of partial differential equations (PDE). In this thesis we explore and develop methodology for high-performance implementations of PDE solvers for shared-memory multiprocessor architectures. We consider three realistic PDE settings: solution of the Maxwell equations in 3D using an unstructured grid and the method of conjugate gradients, solution of the Poisson equation in 3D using a geometric multigrid method, and solution of an advection equation in 2D using structured adaptive mesh refinement. We apply software optimization techniques to increase both parallel efficiency and the degree of data locality. In our evaluation we use several different shared-memory architectures ranging from symmetric multiprocessors and distributed shared-memory architectures to chip-multiprocessors. For distributed shared-memory systems we explore methods of data distribution to increase the amount of geographical locality. We evaluate automatic and transparent page migration based on runtime sampling, user-initiated page migration using a directive with an affinity-on-next-touch semantic, and algorithmic optimizations for page-placement policies. Our results show that page migration increases the amount of geographical locality and that the parallel overhead related to page migration can be amortized over the iterations needed to reach convergence. This is especially true for the affinity-on-next-touch methodology whereby page migration can be initiated at an early stage in the algorithms. We also develop and explore methodology for other forms of data locality and conclude that the effect on performance is significant and that this effect will increase for future shared-memory architectures. Our overall conclusion is that, if the involved locality issues are addressed, the shared-memory programming model provides an efficient and productive environment for solving many important PDE problems.
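Since the Maxwell case above is solved with the method of conjugate gradients, a minimal serial sketch of that kernel is given below for reference. It is written in NumPy purely to show the structure of the iteration; the thesis's contribution concerns how such kernels are parallelized and laid out in memory on shared-memory machines, which a sketch like this does not capture, and the problem sizes here are arbitrary.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive definite matrix A (textbook CG)."""
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small self-check on a random SPD system.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)      # make the matrix well conditioned
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # should be ~1e-8 or smaller
```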
|
535 |
Physical restraint use and falls in institutional care of old people: effects of a restraint minimization program
Pellfolk, Tony, January 2010
Diss. (summary), Umeå: Umeå universitet, 2010. / With 4 appended papers.
|
536 |
Nouvelles méthodes de calcul pour la prédiction des interactions protéine-protéine au niveau structural / Novel computational methods to predict protein-protein interactions on the structural level
Popov, Petr, 28 January 2015
Le docking moléculaire est une méthode permettant de prédire l'orientation d'une molécule donnée relativement à une autre lorsque celles-ci forment un complexe. Le premier algorithme de docking moléculaire a vu jour en 1990 afin de trouver de nouveaux candidats face à la protéase du VIH-1. Depuis, l'utilisation de protocoles de docking est devenue une pratique standard dans le domaine de la conception de nouveaux médicaments. Typiquement, un protocole de docking comporte plusieurs phases. Il requiert l'échantillonnage exhaustif du site d'interaction où les éléments impliqués sont considérées rigides. Des algorithmes de clustering sont utilisés afin de regrouper les candidats à l'appariement similaires. Des méthodes d'affinage sont appliquées pour prendre en compte la flexibilité au sein complexe moléculaire et afin d'éliminer de possibles artefacts de docking. Enfin, des algorithmes d'évaluation sont utilisés pour sélectionner les meilleurs candidats pour le docking. Cette thèse présente de nouveaux algorithmes de protocoles de docking qui facilitent la prédiction des structures de complexes protéinaires, une des cibles les plus importantes parmi les cibles visées par les méthodes de conception de médicaments. Une première contribution concerne l‘algorithme Docktrina qui permet de prédire les conformations de trimères protéinaires triangulaires. Celui-ci prend en entrée des prédictions de contacts paire-à-paire à partir d'hypothèse de corps rigides. Ensuite toutes les combinaisons possibles de paires de monomères sont évalués à l'aide d'un test de distance RMSD efficace. Cette méthode à la fois rapide et efficace améliore l'état de l'art sur les protéines trimères. Deuxièmement, nous présentons RigidRMSD une librairie C++ qui évalue en temps constant les distances RMSD entre conformations moléculaires correspondant à des transformations rigides. Cette librairie est en pratique utile lors du clustering de positions de docking, conduisant à des temps de calcul améliorés d'un facteur dix, comparé aux temps de calcul des algorithmes standards. Une troisième contribution concerne KSENIA, une fonction d'évaluation à base de connaissance pour l'étude des interactions protéine-protéine. Le problème de la reconstruction de fonction d'évaluation est alors formulé et résolu comme un problème d'optimisation convexe. Quatrièmement, CARBON, un nouvel algorithme pour l'affinage des candidats au docking basés sur des modèles corps-rigides est proposé. Le problème d'optimisation de corps-rigides est vu comme le calcul de trajectoires quasi-statiques de corps rigides influencés par la fonction énergie. CARBON fonctionne aussi bien avec un champ de force classique qu'avec une fonction d'évaluation à base de connaissance. CARBON est aussi utile pour l'affinage de complexes moléculaires qui comportent des clashes stériques modérés à importants. Finalement, une nouvelle méthode permet d'estimer les capacités de prédiction des fonctions d'évaluation. Celle-ci permet d‘évaluer de façon rigoureuse la performance de la fonction d'évaluation concernée sur des benchmarks de complexes moléculaires. La méthode manipule la distribution des scores attribués et non pas directement les scores de conformations particulières, ce qui la rend avantageuse au regard des critères standard basés sur le score le plus élevé. Les méthodes décrites au sein de la thèse sont testées et validées sur différents benchmarks protéines-protéines. 
Les algorithmes implémentés ont été utilisés avec succès pour la compétition CAPRI concernant la prédiction de complexes protéine-protéine. La méthodologie développée peut facilement être adaptée pour de la reconnaissance d’autres types d’interactions moléculaires impliquant par exemple des ligands, de l’ARN… Les implémentations en C++ des différents algorithmes présentés seront mises à disposition comme SAMSON Elements de la plateforme logicielle SAMSON sur http://www.samson-connect.net ou sur http://nano-d.inrialpes.fr/software. / Molecular docking is a method that predicts the orientation of one molecule with respect to another when they form a complex. The first computational method of molecular docking was applied to find new candidates against HIV-1 protease in 1990. Since then, the use of docking pipelines has become standard practice in drug discovery. Typically, a docking protocol comprises different phases. Exhaustive sampling of the binding site under a rigid-body approximation of the docking subunits is required. Clustering algorithms are used to group similar binding candidates. Refinement methods are applied to take into account the flexibility of the molecular complex and to eliminate possible docking artefacts. Finally, scoring algorithms are employed to select the best binding candidates. The current thesis presents novel algorithms for docking protocols that facilitate structure prediction of protein complexes, which belong to one of the most important target classes in structure-based drug design. First, DockTrina, a new algorithm to predict conformations of triangular protein trimers (i.e. trimers with pair-wise contacts between all three pairs of proteins), is presented. The method takes as input pair-wise contact predictions from a rigid-body docking program. It then scans and scores all possible combinations of pairs of monomers using a very fast root mean square deviation (RMSD) test. Being fast and efficient, DockTrina outperforms state-of-the-art computational methods dedicated to predicting the structure of protein oligomers on the collected benchmark of protein trimers. Second, RigidRMSD, a C++ library that computes in constant time the RMSDs between molecular poses corresponding to rigid-body transformations, is presented. The library is practically useful for clustering docking poses, resulting in a tenfold speedup compared to standard RMSD-based clustering algorithms. Third, KSENIA, a novel knowledge-based scoring function for protein-protein interactions, is developed. The problem of scoring function reconstruction is formulated and solved as a convex optimization problem. As a result, KSENIA is a smooth function and, thus, is suitable for the gradient-based refinement of molecular structures. Remarkably, it is shown that native interfaces of protein complexes provide sufficient information to reconstruct a well-discriminating scoring function. Fourth, CARBON, a new algorithm for the rigid-body refinement of docking candidates, is proposed. The rigid-body optimization problem is viewed as the calculation of quasi-static trajectories of rigid bodies influenced by the energy function. To circumvent the typical problem of incorrect step sizes for rotation and translation movements of molecular complexes, the concept of controlled advancement is introduced. CARBON works well in combination with both a classical force field and a knowledge-based scoring function.
CARBON is also suitable for the refinement of molecular complexes with moderate to large steric clashes between their subunits. Finally, a novel method to evaluate the prediction capability of scoring functions is introduced. It allows the performance of a scoring function of interest to be rigorously assessed on benchmarks of molecular complexes. The method operates on score distributions rather than on the scores of particular conformations, which makes it advantageous compared to standard hit-rate criteria. The methods described in the thesis are tested and validated on various protein-protein benchmarks. The implemented algorithms are successfully used in the CAPRI contest for structure prediction of protein-protein complexes. The developed methodology can be easily adapted to the recognition of other types of molecular interactions, involving ligands, polysaccharides, RNAs, etc. The C++ versions of the presented algorithms will be made available as SAMSON Elements for the SAMSON software platform at http://www.samson-connect.net or at http://nano-d.inrialpes.fr/software.
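The constant-time RMSD computation behind RigidRMSD can be illustrated as follows: for a fixed set of atomic coordinates, the RMSD between two rigid-body poses depends only on the two transformations and on two precomputed moments of the coordinates (their sum and their second-moment matrix), so it can be evaluated without touching the atoms again. The NumPy sketch below demonstrates this identity on a random toy molecule; it is our own illustration of the idea, not code from the library.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 3))          # atomic coordinates of an arbitrary toy molecule
N = len(X)

# Precomputed once per molecule: first moment and second-moment matrix.
m = X.sum(axis=0)                          # shape (3,)
S = X.T @ X                                # shape (3, 3)

def random_pose():
    """A random rigid-body transformation (rotation R, translation t)."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    R = Q * np.sign(np.linalg.det(Q))      # ensure a proper rotation
    return R, rng.standard_normal(3)

def rmsd_bruteforce(R1, t1, R2, t2):
    """O(N): transform all atoms and compare."""
    P1, P2 = X @ R1.T + t1, X @ R2.T + t2
    return np.sqrt(np.mean(np.sum((P1 - P2) ** 2, axis=1)))

def rmsd_constant_time(R1, t1, R2, t2):
    """O(1) in the number of atoms, using only the precomputed m and S."""
    A, b = R1 - R2, t1 - t2
    total = np.trace(A.T @ A @ S) + 2.0 * b @ (A @ m) + N * (b @ b)
    return np.sqrt(total / N)

R1, t1 = random_pose()
R2, t2 = random_pose()
print(rmsd_bruteforce(R1, t1, R2, t2), rmsd_constant_time(R1, t1, R2, t2))  # should agree
```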
|
537 |
Séquencement d’une ligne de montage multi-modèles : application à l’industrie du véhicule industriel / Mixed model assembly line sequencing: application in the truck industry
Aroui, Karim, 27 May 2015
Dans cette thèse, nous considérons le problème du séquencement sur une ligne de montage multi-modèles de véhicules industriels. Pour équilibrer au mieux la charge dynamique des opérateurs, la minimisation de la somme des retards à l’issue de chaque véhicule est proposée. Deux approches peuvent être utilisées pour optimiser le lissage de charge dans un problème de séquencement : l’utilisation directe des temps opératoires ou le respect de règles. La plupart des travaux appliqués à l’industrie automobile utilisent l’approche de respect de règles. Une originalité de ce travail est d’utiliser l’approche de la prise en compte directe des temps opératoires. L’étude de la littérature de ce problème a dévoilé deux lacunes dans les travaux précédents : l’essentiel des travaux modélisent un seul type d’opérateurs d’une part, et proposent des heuristiques ou des métaheuristiques pour résoudre ces problèmes, d’autre part. L’originalité de ce travail est de tester des méthodes exactes pour des instances industrielles et de modéliser le fonctionnement de trois différents types d’opérateurs spécifiques au cas industriel. Deux méthodes exactes sont développées : la programmation linéaire mixte et la programmation dynamique. Une étude expérimentale des facteurs de complexité sur des instances académiques des deux modèles est développée. Les modèles sont aussi testés sur des instances du cas d’étude. Par ailleurs, le problème est traité par deux méthodes approchées : une heuristique basée sur la programmation dynamique d’une part, et des métaheuristiques (algorithme génétique, recuit simulé et un couplage des deux) d’autre part. Les deux approches sont testées sur des instances académiques et des instances du cas d’étude. Ce travail a permis d’apporter une solution intéressante d’un point de vue industriel puisqu’il prend en compte les caractéristiques de la ligne de montage (opérateurs spécifiques) et améliore significativement la qualité du séquencement en un temps de calcul raisonnable. / In this thesis, the problem of sequencing mixed model assembly lines (MMAL) is considered. Our goal is to determine the sequence of products that minimizes the work overload. This problem is known as the mixed model assembly line sequencing problem with work overload minimization (MMSP-W). This work is based on an industrial case study of a truck assembly line. Two approaches can be used to minimize the work overload: the use of task operation times or the respect of sequencing rules. Most of the earlier works applied in the car industry use the latter approach. The originality of this work is to employ the task operation times for the generation of the product sequence in an MMAL. The literature review has highlighted two main gaps in previous works: most of the papers consider a single type of operator, and they propose heuristics or metaheuristics to solve the problem. The originality of this work is to test exact methods on industrial case instances and to model three different types of operators. Two exact methods are developed: mixed integer linear programming and dynamic programming. The models are tested on industrial case study instances. An experimental study is developed for both approaches in order to understand the complexity factors. Moreover, the problem is treated by two approximate methods: a heuristic based on dynamic programming, and metaheuristics (a genetic algorithm, simulated annealing and a hybrid method combining the two).
All approaches are tested on academic instances and on real data from the industrial case study.
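To give a flavour of the dynamic-programming approach, the sketch below solves a deliberately small single-station version of the work-overload sequencing problem by memoized recursion over (remaining demand, operator offset). The station data, cycle time and processing times are invented for illustration, and the real MMSP-W formulations in the thesis handle several stations and three operator types, so this is only a toy.

```python
from functools import lru_cache

# Toy instance (all numbers are invented): one station, integer data.
CYCLE = 4                                # time the line advances between two consecutive products
LENGTH = 6                               # station length, in time units, available to the operator
PROC = {"A": 3, "B": 5, "C": 6}          # processing time of each model at this station
DEMAND = (("A", 2), ("B", 2), ("C", 1))  # how many units of each model must be sequenced

MODELS = [m for m, _ in DEMAND]

@lru_cache(maxsize=None)
def best(remaining, z):
    """Minimum total work overload for the 'remaining' counts, operator at offset z."""
    if sum(remaining) == 0:
        return 0, ()
    best_cost, best_seq = float("inf"), ()
    for i, model in enumerate(MODELS):
        if remaining[i] == 0:
            continue
        finish = z + PROC[model]
        overload = max(0, finish - LENGTH)            # work that cannot be completed in-station
        z_next = max(0, min(finish, LENGTH) - CYCLE)  # operator offset for the next product
        rest = list(remaining)
        rest[i] -= 1
        cost, seq = best(tuple(rest), z_next)
        if overload + cost < best_cost:
            best_cost, best_seq = overload + cost, (model,) + seq
    return best_cost, best_seq

counts = tuple(c for _, c in DEMAND)
cost, sequence = best(counts, 0)
print(cost, sequence)   # total overload and one optimal sequence for this toy instance
```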
|
538 |
Contribution à la commande d’un moteur asynchrone destiné à la traction électrique / Contribution to induction motor control for electric traction
Mehazzem, Fateh, 06 December 2010
Le travail présenté dans cette thèse a pour objectif d’apporter une contribution aux méthodes de commande et d’observation des machines asynchrones destinées à la traction électrique. Dans ce contexte, plusieurs algorithmes ont été développés et implémentés. Après une présentation rapide de la commande vectorielle classique, de nouvelles approches de commande non linéaire sont proposées : il s’agit plus précisément de la commande backstepping classique et sa variante avec action intégrale. Une deuxième partie est consacrée à l’observation et à l’estimation des paramètres et des états de la machine, basée sur des structures MRAS-modes glissants d’une part et sur des structures de filtrage synchrone d’autre part. Une analyse détaillée du problème de fonctionnement à basse vitesse nous a conduit à proposer une solution originale dans le cadre d’une commande sans capteur mécanique. Le problème de la dégradation du couple en survitesse a été traité par un algorithme de défluxage basé sur la conception d’un contrôleur de tension. Enfin, nous avons proposé un algorithme d’optimisation afin de minimiser les pertes dans l’ensemble Onduleur-Machine. / The work presented in this thesis aims to contribute to the control and observation methods for induction machines used in electric traction. In this context, several algorithms have been developed and implemented. After a brief presentation of classical vector control, new non-linear control approaches are proposed: classical backstepping and its variant with integral action. A second part deals with the observation and estimation of the parameters and states of the machine, based on MRAS-sliding mode structures on the one hand and on synchronous filtering structures on the other. A detailed analysis of low-speed operation led us to propose an original solution for sensorless control. The torque degradation in the field-weakening zone was addressed by a flux-weakening algorithm based on the design of a voltage controller. Finally, we proposed an optimization algorithm to minimize the losses in the inverter-machine set.
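The flux-weakening idea mentioned above (a voltage controller that lowers the flux reference once the inverter voltage limit is reached) can be sketched generically as follows. This is a textbook-style illustration with invented gains, limits and a crude steady-state voltage model, not the controller designed in the thesis.

```python
def flux_weakening_step(psi_ref, v_stator, v_max, psi_nom, psi_min, k_i, dt):
    """One update of a voltage-regulation-based flux-weakening loop.

    Below base speed the voltage margin is positive and the flux reference
    saturates at its nominal value; above base speed the integrator lowers
    the flux reference just enough to keep the stator voltage at v_max.
    """
    margin = v_max - v_stator                    # voltage headroom
    psi_ref = psi_ref + k_i * margin * dt        # integral action on the margin
    return min(psi_nom, max(psi_min, psi_ref))   # keep the reference within bounds

# Minimal usage sketch (all numbers are placeholders): the stator voltage grows
# roughly with speed * flux here, so raising the speed forces the flux reference down.
psi_ref, psi_nom, psi_min = 1.0, 1.0, 0.2        # per-unit flux levels
v_max, k_i, dt = 1.0, 2.0, 1e-3
for speed in [0.5, 1.0, 1.5, 2.0]:               # per-unit speed ramp
    for _ in range(2000):                        # let the loop settle at each speed
        v_stator = speed * psi_ref               # crude steady-state voltage model
        psi_ref = flux_weakening_step(psi_ref, v_stator, v_max,
                                      psi_nom, psi_min, k_i, dt)
    print(f"speed={speed:.1f}  flux_ref={psi_ref:.2f}")
```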
|
539 |
Keller-Segel-type models and kinetic equations for interacting particles: long-time asymptotic analysis
Hoffmann, Franca Karoline Olga, January 2017
This thesis consists of three parts: the first and second parts focus on long-time asymptotics of macroscopic and kinetic models respectively, while in the third part we connect these regimes using different scaling approaches. (1) Keller–Segel-type aggregation-diffusion equations: We study a Keller–Segel-type model with non-linear power-law diffusion and non-local particle interaction: Does the system admit equilibria? If yes, are they unique? Which solutions converge to them? Can we determine an explicit rate of convergence? To answer these questions, we make use of the special gradient flow structure of the equation and its associated free energy functional, for which the overall convexity properties are not known. Special cases of this family of models have been investigated in previous works, and this part of the thesis represents a contribution towards a complete characterisation of the asymptotic behaviour of solutions. (2) Hypocoercivity techniques for a fibre lay-down model: We show existence and uniqueness of a stationary state for a kinetic Fokker-Planck equation modelling the fibre lay-down process in non-woven textile production. Further, we prove convergence to equilibrium with an explicit rate. This part of the thesis is an extension of previous work which considered the case of a stationary conveyor belt. Once the movement of the belt is added, the global equilibrium state is no longer known explicitly and a more general hypocoercivity estimate is needed. Although we focus here on a particular application, this approach can be used for any equation with a similar structure as long as it can be understood as a certain perturbation of a system for which the global Gibbs state is known. (3) Scaling approaches for collective animal behaviour models: We study the multi-scale aspects of self-organised biological aggregations using various scaling techniques. Not many previous studies investigate how the dynamics of the initial models are preserved via these scalings. Firstly, we consider two scaling approaches (parabolic and grazing collision limits) that can be used to reduce a class of non-local kinetic 1D and 2D models to simpler models existing in the literature. Secondly, we investigate how some of the kinetic spatio-temporal patterns are preserved via these scalings using asymptotic-preserving numerical methods.
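For readers unfamiliar with the first family of models, a representative aggregation-diffusion equation of Keller-Segel type and its free energy can be written as follows (our notation and sign conventions; the thesis treats a general class of diffusion exponents and interaction kernels):

```latex
% Aggregation-diffusion equation with power-law diffusion (exponent m > 1) and
% non-local interaction through a kernel W:
\partial_t \rho \;=\; \Delta \rho^{m} \;+\; \nabla \cdot \bigl( \rho \, \nabla (W \ast \rho) \bigr),
\qquad \rho(0,\cdot) = \rho_0 \ge 0 .

% Associated free energy; the equation is the gradient flow of F in the
% 2-Wasserstein metric, which is the structure exploited for the asymptotics:
F[\rho] \;=\; \frac{1}{m-1} \int \rho^{m} \,\mathrm{d}x
\;+\; \frac{1}{2} \iint W(x-y)\, \rho(x)\, \rho(y) \,\mathrm{d}x \,\mathrm{d}y,
\qquad
\partial_t \rho = \nabla \cdot \Bigl( \rho \, \nabla \frac{\delta F}{\delta \rho} \Bigr).
```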
|
540 |
Assessment of waste management practices in the informal business sector in Olievenhoutbosch Township, Pretoria
Dube, Innocent
The increase in global population and high urbanisation rates, characterised by high
resource consumption and waste generation levels, have led to challenges in waste
management around the world. Waste management remains one of the most critical
challenges faced by local governments in developing countries. Informal business
enterprises have come under the spotlight for their high waste production and poor waste
management practices. Many arguments have been put forward as to the real
environmental impacts caused by informal business enterprises due to their waste
practices.
This research aimed to assess the waste management practices in the informal
business sector in Olievenhoutbosch Township, Pretoria. Data collection was carried out
between March 2016 and September 2016. The research utilised both qualitative and
quantitative methods. The methodology employed techniques that included structured
questionnaires, structured interviews and field observations. Semi-structured face-to-face
interviews were carried out with key informants. These interviews provided information
on the frequency of waste collection, available waste management awareness and
challenges faced in delivering the service. The research also involved 230 field
observations to study the pattern and frequency of waste collection and waste behaviours
by informal business enterprises. Questionnaires were administered to 120 informal
business enterprises with a response rate of 84.17%. Data from questionnaires and field
observations indicated that the waste generated by informal business enterprises (plastic
bags, cardboard, packaging plastics, glass bottles and plastic bottles) was mainly
recyclable. The most preferred disposal methods were the use of refuse plastic bags
(31%), open-space dumping (20%) and burning (30%).
Analysis of the results showed that there was a lack of information on waste management,
which also influenced waste behaviours. Preferences for waste disposal methods were
influenced by many factors including lack of information, shortage of waste disposal
facilities and waste collection frequency by the local town council. The research found
that waste collection in various sections of the township was done once per week, which
has led to increased indiscriminate dumping and burning of waste. It was
recommended that waste management information be provided to informal business
enterprises, especially on waste separation and recycling. The municipality should
increase the frequency of waste collection or provide central-point waste facilities for business
operators. / Environmental Sciences / M. Sc. (Environmental Science)
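For the reader who wants the headline figures in absolute terms, the short calculation below converts the reported response rate and disposal-method shares into approximate counts. The "other/unreported" remainder is simply what is left after the three quoted percentages and is our inference, not a figure reported in the study.

```python
administered = 120
response_rate = 0.8417                      # reported as 84.17%
respondents = round(administered * response_rate)
print(respondents)                          # ~101 responding enterprises

shares = {"refuse plastic bags": 31, "open-space dumping": 20, "burning": 30}
other = 100 - sum(shares.values())          # ~19% other/unreported methods (our inference)
for method, pct in shares.items():
    print(f"{method}: ~{round(respondents * pct / 100)} enterprises ({pct}%)")
print(f"other/unreported: ~{round(respondents * other / 100)} enterprises ({other}%)")
```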
|