601

Study of Passive Optical Network (PON) System and Devices

Guo, Qingyi 04 1900 (has links)
Fiber-to-the-x (FTTX) has been widely investigated as a leading access technology to meet the ever-growing demand for bandwidth in the last mile. The passive optical network (PON) provides a cost-effective and durable solution. In this thesis, we investigate different aspects of the PON in search of cost-effective, high-performance designs for link systems and devices.

In Chapter 2, we propose a novel upstream link scheme for optical orthogonal frequency division multiplexing (OOFDM)-PON. Colorless laser diodes are used at the optical network units (ONUs), and the overlapped channel spectrum of orthogonal subcarrier multiplexing provides high spectral efficiency. At the optical line terminal (OLT), an optical switch and an all-optical fast Fourier transform (OFFT) are adopted for high-speed demultiplexing. The deterioration caused by laser perturbation is also investigated.

In Chapter 3, we design a novel polarization beam splitter (PBS), one of the most important components in polarization-controlled optical systems, e.g. the next-generation PON utilizing polarization multiplexing. Our PBS is built on a slab waveguide platform in which the light is vertically confined. Planar lenses collimate and refocus the light beam by converting its phase front. A wedge-shaped planar subwavelength grating induces form birefringence, so the transverse electric (TE) and transverse magnetic (TM) waves see different effective refractive indices and are steered in distinct directions. This design provides low insertion loss (< 0.9 dB) and low crosstalk (< -30 dB) over a 100 nm bandwidth in a compact size, and can be realized in different material systems for easy fabrication and/or monolithic integration with other optical components.

In Chapter 4, we study the mode partition noise (MPN) characteristics of the Fabry-Perot (FP) laser diode using time-domain simulation of noise-driven multi-mode laser rate equations. The FP laser is cheaper than the distributed feedback (DFB) laser diode widely used in PONs, but its MPN is the major limiting factor in an optical transmission system. We calculate the probability density functions of each longitudinal mode. We also investigate the k-factor, a simple yet important measure of MPN. The sources of the k-factor are studied by simulation, including the intrinsic source (the laser's Langevin noise) and the extrinsic source (the bit pattern). / Doctor of Philosophy (PhD)
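The k-factor characterization of mode partition noise mentioned in Chapter 4 can be sketched numerically. The following is only an illustrative Monte Carlo estimate, not the thesis's rate-equation model: the fractional power of one longitudinal mode fluctuates while total power stays constant, and k is recovered from Ogawa's relation var(a) = k² · ⟨a⟩(1 − ⟨a⟩). All parameter values are invented.

```python
import random
import statistics

def mode_fraction_samples(mean_frac=0.6, sigma=0.1, n=20000, seed=1):
    # Fluctuating fractional power a of one longitudinal mode, clipped to
    # [0, 1]; total power is held constant, so all noise is partition noise.
    rng = random.Random(seed)
    return [min(1.0, max(0.0, rng.gauss(mean_frac, sigma))) for _ in range(n)]

def k_factor(samples):
    # Ogawa's relation: var(a) = k^2 * <a> * (1 - <a>)
    mean_a = statistics.fmean(samples)
    var_a = statistics.pvariance(samples)
    return (var_a / (mean_a * (1.0 - mean_a))) ** 0.5

a = mode_fraction_samples()
k = k_factor(a)
print(round(k, 2))  # ~0.2 for these invented noise parameters
```

In the thesis the fluctuations come from the Langevin noise terms of the multi-mode rate equations and from the bit pattern; here a plain Gaussian stands in for both, which is enough to show how k is extracted from the sampled mode statistics.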
602

Family deceased estate division agreements from old Babylonian Larsa, Nippur and Sippar

Claassens, Susandra Jacoba 11 1900 (has links)
In most deceased estates, problems of co-ownership arise when more than one family member inherits the family estate assets. To escape the perils of co-ownership, the beneficiaries consensually agree to divide the inherited, communally shared asset or assets. This agreement can take place immediately after the death of the family estate owner, or some time later, regarding some or all of the said assets. On conclusion of the division agreement, the contractual party who receives the awarded assets enjoys sole ownership, and the other contractual parties by agreement retract their ownership. In a jurisprudential content analysis of forty-six recorded family deceased division agreements from Old Babylonian Larsa, Nippur and Sippar, essential elements are identified which form the framework and qualification requirements for a family deceased division agreement. Within this framework, the concepts, terms and elements of the agreement are categorised as natural and incidental elements, which reflect the specific law traditions and choices of the contractual parties and show the unique scribal traditions in the different Old Babylonian city-states of Larsa, Nippur and Sippar. The aim of the study is to shed a more focused light on the interpretation of recorded Old Babylonian division agreements and to show that the division agreement was a successful, timeless estate-administration mechanism and a tool to obviate the undesirable consequences of co-ownership of the bequeathed property. / Old Testament & Ancient Near Eastern Studies / D. Litt. et Phil. (Ancient Near Eastern Studies)
603

Patrons de distribution des crustacés planctoniques dans le fleuve Saint-Laurent

Cusson, Edith 04 1900 (has links)
La recherche porte sur les patrons de distribution longitudinale (amont-aval) et transversale (rive nord - rive sud) des communautés de crustacés planctoniques qui ont été analysés le long du fleuve Saint-Laurent entre le lac Saint-François et la zone de transition estuarienne, à deux hydropériodes en mai (crue) et en août (étiage). Les données zooplanctoniques et environnementales ont été récoltées à 52 stations réparties sur 16 transects transversaux en 2006. Au chapitre 1, nous présentons les principaux modèles écosystémiques en rivière, une synthèse des facteurs influençant le zooplancton en rivières et les objectifs et hypothèses de recherche. Au chapitre 2, nous décrivons la structure des communautés de zooplancton dans trois zones biogéographiques du fleuve et 6 habitats longitudinaux, ainsi que les relations entre la structure du zooplancton et la distribution spatiale des masses d’eau et les variables environnementales. Au chapitre 3, nous réalisons une partition de la variation des variables spatiales AEM (basées sur la distribution des masses d’eau) et des variables environnementales pour évaluer quelle part de la variation du zooplancton est expliquée par les processus hydrologiques (variables AEM) et les conditions locales (facteurs environnementaux). Le gradient salinité-conductivité relié à la discontinuité fleuve-estuaire a déterminé la distribution à grande échelle du zooplancton. Dans les zones fluviales, la distribution du zooplancton est davantage influencée par la distribution des masses d’eau que par les facteurs environnementaux locaux. La distribution des masses d’eau explique une plus grande partie de la variation dans la distribution du zooplancton en août qu’en mai. / The research aims to determine the distribution patterns of crustacean plankton along the longitudinal (west-east) and transversal (north shore - south shore) axes of the St. 
Lawrence River between Lake Saint-François and the estuarine transition zone, during two hydroperiods in May (high discharge) and August (low discharge). The zooplankton samples and the environmental data were collected at 52 stations distributed along 16 transversal transects in 2006. In chapter 1, we present the theoretical concepts of river ecosystem models, and a synthesis on the generative processes driving zooplankton spatial patterns in rivers. We also present our research objectives and hypotheses. In chapter 2, we describe spatial patterns of the zooplankton community structure in three biogeographic zones of the St. Lawrence and 6 longitudinal habitats, together with the relationships between zooplankton spatial structure and water masses spatial distribution and environmental characteristics. In chapter 3, we perform a variation partitioning procedure on spatial variables AEM (based on water masses spatial distribution) and environmental variables in order to assess how much of the zooplankton variation is explained by hydrological processes (AEM variables) and local conditions (environmental factors). The salinity-conductivity gradient related to the fluvial-estuary discontinuity determines the large-scale spatial patterns of the crustacean zooplankton. In the fluvial zones, the zooplankton distribution patterns are more influenced by the water masses spatial structure than by local environmental factors. The spatial distribution of the water masses explained more of the spatial structure of zooplankton communities in August than in May.
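The variation-partitioning step described in Chapter 3 can be illustrated in miniature. Assuming a single spatial predictor standing in for an AEM eigenfunction and a single environmental predictor, the classical fractions [a] (pure spatial), [b] (shared), [c] (pure environmental) and [d] (residual) follow from three R² values; all data below are invented.

```python
import statistics

def pearson(x, y):
    # Pearson correlation coefficient
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def varpart(y, x, w):
    # Partition the variation of y between one spatial predictor x
    # (e.g. an AEM eigenfunction) and one environmental predictor w.
    ryx, ryw, rxw = pearson(y, x), pearson(y, w), pearson(x, w)
    r2x, r2w = ryx ** 2, ryw ** 2
    # R^2 of the two-predictor least-squares regression of y on (x, w)
    r2xw = (r2x + r2w - 2 * ryx * ryw * rxw) / (1 - rxw ** 2)
    a = r2xw - r2w            # [a]: spatial alone
    c = r2xw - r2x            # [c]: environment alone
    b = r2xw - a - c          # [b]: shared
    return a, b, c, 1 - r2xw  # plus residual [d]

# Invented abundances: y responds to both a spatial gradient x and
# a (spatially structured) environmental variable w
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
w = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
errs = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1, 0.0, -0.2, 0.1, 0.0]
y = [xi + wi + e for xi, wi, e in zip(x, w, errs)]
a, b, c, d = varpart(y, x, w)
print(round(a, 3), round(b, 3), round(c, 3), round(d, 3))
```

Because x and w are strongly correlated here, most of the explained variation lands in the shared fraction [b], which is exactly the situation the thesis disentangles with AEM variables versus local environmental factors.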
604

Développement de modèles prédictifs de la toxicocinétique de substances organiques

Peyret, Thomas 02 1900 (has links)
Les modèles pharmacocinétiques à base physiologique (PBPK) permettent de simuler la dose interne de substances chimiques sur la base de paramètres spécifiques à l’espèce et à la substance. Les modèles de relation quantitative structure-propriété (QSPR) existants permettent d’estimer les paramètres spécifiques au produit (coefficients de partage (PC) et constantes de métabolisme), mais leur applicabilité est limitée par leur manque de considération de la variabilité des paramètres d’entrée ainsi que par leur domaine d’application restreint (c.-à-d., substances contenant CH3, CH2, CH, C, C=C, H, Cl, F, Br, cycle benzénique et H sur le cycle benzénique). L’objectif de cette étude est de développer de nouvelles connaissances et des outils afin d’élargir le domaine d’application des modèles QSPR-PBPK pour prédire la toxicocinétique de substances organiques inhalées chez l’humain. D’abord, un algorithme mécaniste unifié a été développé à partir de modèles existants pour prédire les PC de 142 médicaments et polluants environnementaux aux niveaux macro (tissu et sang) et micro (cellule et fluides biologiques) à partir de la composition du tissu et du sang et de propriétés physicochimiques. L’algorithme résultant a été appliqué pour prédire les PC tissu:sang, tissu:plasma et tissu:air du muscle (n = 174), du foie (n = 139) et du tissu adipeux (n = 141) du rat pour des médicaments acides, basiques et neutres ainsi que pour des cétones, esters d’acétate, éthers, alcools, hydrocarbures aliphatiques et aromatiques. Un modèle de relation quantitative propriété-propriété (QPPR) a été développé pour la clairance intrinsèque (CLint) in vivo (calculée comme le ratio du Vmax (μmol/h/kg poids de rat) sur le Km (μM)) de substrats du CYP2E1 (n = 26) en fonction du PC n-octanol:eau, du PC sang:eau et du potentiel d’ionisation. 
Les prédictions du QPPR, représentées par les limites inférieures et supérieures de l’intervalle de confiance à 95% à la moyenne, furent ensuite intégrées dans un modèle PBPK humain. Subséquemment, l’algorithme de PC et le QPPR pour la CLint furent intégrés avec des modèles QSPR pour les PC hémoglobine:eau et huile:air pour simuler la pharmacocinétique et la dosimétrie cellulaire d’inhalation de composés organiques volatils (COV) (benzène, 1,2-dichloroéthane, dichlorométhane, m-xylène, toluène, styrène, 1,1,1-trichloroéthane et 1,2,4-triméthylbenzène) avec un modèle PBPK chez le rat. Finalement, la variabilité des paramètres de composition des tissus et du sang de l’algorithme pour les PC tissu:air chez le rat et sang:air chez l’humain a été caractérisée par des simulations Monte Carlo par chaînes de Markov (MCMC). Les distributions résultantes ont été utilisées pour conduire des simulations Monte Carlo pour prédire des PC tissu:sang et sang:air. Les distributions de PC, avec celles des paramètres physiologiques et du contenu en cytochrome P450 CYP2E1, ont été incorporées dans un modèle PBPK pour caractériser la variabilité de la toxicocinétique sanguine de quatre COV (benzène, chloroforme, styrène et trichloroéthylène) par simulation Monte Carlo. Globalement, les approches quantitatives mises en œuvre pour les PC et la CLint dans cette étude ont permis l’utilisation de descripteurs moléculaires génériques plutôt que de fragments moléculaires spécifiques pour prédire la pharmacocinétique de substances organiques chez l’humain. La présente étude a, pour la première fois, caractérisé la variabilité des paramètres biologiques des algorithmes de PC pour étendre l’aptitude des modèles PBPK à prédire les distributions, pour la population, de doses internes de substances organiques avant de faire des tests chez l’animal ou l’humain. 
/ Physiologically-based pharmacokinetic (PBPK) models simulate the internal dose metrics of chemicals based on species-specific and chemical-specific parameters. The existing quantitative structure-property relationships (QSPRs) make it possible to estimate the chemical-specific parameters (partition coefficients (PCs) and metabolic constants), but their applicability is limited by their lack of consideration of variability in input parameters and their restricted application domain (i.e., substances containing CH3, CH2, CH, C, C=C, H, Cl, F, Br, benzene ring and H in benzene ring). The objective of this study was to develop new knowledge and tools to increase the applicability domain of QSPR-PBPK models for predicting the inhalation toxicokinetics of organic compounds in humans. First, a unified mechanistic algorithm was developed from existing models to predict macro (tissue and blood) and micro (cell and biological fluid) level PCs of 142 drugs and environmental pollutants on the basis of tissue and blood composition along with physicochemical properties. The resulting algorithm was applied to compute the tissue:blood, tissue:plasma and tissue:air PCs in rat muscle (n = 174), liver (n = 139) and adipose tissue (n = 141) for acidic, neutral, zwitterionic and basic drugs as well as ketones, acetate esters, alcohols, ethers, aliphatic and aromatic hydrocarbons. Then, a quantitative property-property relationship (QPPR) model was developed for the in vivo rat intrinsic clearance (CLint) (calculated as the ratio of the in vivo Vmax (μmol/h/kg bw rat) to the Km (μM)) of CYP2E1 substrates (n = 26) as a function of the n-octanol:water PC, the blood:water PC, and the ionization potential. The predictions of the QPPR, taken as the lower and upper bounds of the 95% confidence interval on the mean, were then integrated within a human PBPK model. 
Subsequently, the PC algorithm and the QPPR for CLint were integrated along with QSPR models for the hemoglobin:water and oil:air PCs to simulate the inhalation pharmacokinetics and cellular dosimetry of volatile organic compounds (VOCs) (benzene, 1,2-dichloroethane, dichloromethane, m-xylene, toluene, styrene, 1,1,1-trichloroethane and 1,2,4-trimethylbenzene) using a PBPK model for rats. Finally, the variability in the tissue and blood composition parameters of the PC algorithm for rat tissue:air and human blood:air PCs was characterized by performing Markov chain Monte Carlo (MCMC) simulations. The resulting distributions were used for conducting Monte Carlo simulations to predict tissue:blood and blood:air PCs for VOCs. The distributions of PCs, along with distributions of physiological parameters and CYP2E1 content, were then incorporated within a PBPK model to characterize the human variability of the blood toxicokinetics of four VOCs (benzene, chloroform, styrene and trichloroethylene) using Monte Carlo simulations. Overall, the quantitative approaches for PCs and CLint implemented in this study allow the use of generic molecular descriptors rather than specific molecular fragments to predict the pharmacokinetics of organic substances in humans. In this process, the current study has, for the first time, characterized the variability of the biological input parameters of the PC algorithms to expand the ability of PBPK models to predict the population distributions of the internal dose metrics of organic substances prior to testing in animals or humans.
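The idea of propagating parameter variability into blood kinetics by Monte Carlo can be sketched with a deliberately minimal steady-state gas-uptake model. This is one venous compartment only, not the thesis's full PBPK model, and the flow rates and parameter ranges below are invented for illustration.

```python
import random

def steady_state_cv(c_inh, pb, cl, qp=540.0, qc=340.0):
    # Continuous inhalation at concentration c_inh (mg/L); steady state of:
    #   gas exchange:  ca = (qp*c_inh + qc*cv) / (qc + qp/pb)
    #   mass balance:  qc*(ca - cv) = cl*cv   (all clearance from venous blood)
    # pb: blood:air partition coefficient, cl: clearance (L/h),
    # qp: alveolar ventilation (L/h), qc: cardiac output (L/h).
    return qp * c_inh / ((qc + cl) * (qc + qp / pb) / qc - qc)

rng = random.Random(7)
# Monte Carlo over invented uniform ranges for the blood:air PC and clearance
sims = sorted(steady_state_cv(1.0, rng.uniform(10, 20), rng.uniform(50, 150))
              for _ in range(500))
p05, p95 = sims[25], sims[475]  # empirical 5th and 95th percentiles
print(round(p05, 2), round(p95, 2))
```

The thesis samples the PC-algorithm inputs and CYP2E1 content from MCMC-derived distributions instead of flat uniforms, but the mechanics are the same: sample parameters, solve the model, and report percentiles of the resulting blood concentration.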
605

Programming Model and Protocols for Reconfigurable Distributed Systems

Arad, Cosmin January 2013 (has links)
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. 
We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics. / Kompics / CATS / REST
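The component idea at the heart of the abstract can be miniaturized in a few lines. The sketch below is a hypothetical Python analogue, not the Kompics API (Kompics itself is a Java/Scala framework, and none of these class or method names come from it): components hold no references into each other's state and interact only through events delivered over explicit connections.

```python
from collections import deque

class Component:
    # Toy message-passing component: a mailbox, typed event handlers,
    # and connections over which events are delivered.
    def __init__(self):
        self.inbox = deque()
        self.connections = []
        self.handlers = {}

    def connect(self, other):
        self.connections.append(other)

    def trigger(self, event):
        # Events go to connected components' mailboxes, never direct calls
        for c in self.connections:
            c.inbox.append(event)

    def subscribe(self, event_type, handler):
        self.handlers[event_type] = handler

    def run_once(self):
        # Drain the mailbox, dispatching each event to its handler
        while self.inbox:
            ev = self.inbox.popleft()
            h = self.handlers.get(type(ev))
            if h:
                h(ev)

class Ping: pass
class Pong: pass

pinger, ponger = Component(), Component()
pinger.connect(ponger)
ponger.connect(pinger)
log = []
ponger.subscribe(Ping, lambda e: (log.append("ping"), ponger.trigger(Pong())))
pinger.subscribe(Pong, lambda e: log.append("pong"))
pinger.trigger(Ping())
ponger.run_once()
pinger.run_once()
print(log)  # → ['ping', 'pong']
```

In real Kompics, ports are typed and bidirectional, handlers run under a concurrent scheduler, and the same component code runs in simulation or production; this sketch only captures the share-nothing, event-driven style that makes that possible.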
606

Relações estrutura-retenção de flavonóides por cromatografia a líquido em membranas imobilizadas artificialmente / Structure retention relationships of flavonoids by liquid chromatography using immobilized artificial membranes

Santoro, Adriana Leandra 24 August 2007 (has links)
Para um composto químico exercer seu efeito bioativo é necessário que ele atravesse várias barreiras biológicas até alcançar seu sítio de ação. Propriedades farmacocinéticas insatisfatórias (como absorção, distribuição, metabolismo e excreção) são reconhecidamente as principais causas na descontinuidade de pesquisas na busca por novos fármacos. Neste trabalho, modelos biofísicos foram utilizados para o estudo de absorção de uma série de flavonóides naturais com atividade tripanossomicida. O coeficiente cromatográfico de partição, kw, foi determinado através da cromatografia líquida de alta eficiência em fase reversa, RP-HPLC, utilizando-se de colunas cromatográficas empacotadas com constituintes básicos da membrana biológica (fosfatidilcolina e colesterol). Os resultados obtidos demonstraram que nas colunas compostas por fosfatidilcolina a retenção de flavonóides hidroxilados é determinada por interações secundárias, além da partição, e no caso da coluna de colesterol, a partição é o principal mecanismo que rege a retenção. Uma série de descritores físico-químicos foi gerada pelos campos moleculares de interações (MIFs) entre os flavonóides naturais e algumas sondas químicas virtuais, utilizando o programa GRID. Os descritores físico-químicos gerados foram correlacionados com os log kw por análise dos mínimos quadrados parciais (PLS), utilizando o programa VolSurf, com a finalidade de gerar um modelo quantitativo entre estrutura e propriedade (QSPR) para esta classe de compostos. O modelo produzido por este estudo, ao utilizar os dados de partição em colesterol, log kwCol, apresentou elevada consistência interna, com bom poder de correlação (R2 = 0,97) e predição (Q2 = 0,86) para a partição destas moléculas. / For a chemical compound to exert its bioactive effect, it must cross several biological barriers before reaching its site of action. 
Unfavorable pharmacokinetic properties (absorption, distribution, metabolism and excretion) are widely recognized as one of the main causes of discontinued research in the search for new drugs. In this work, biophysical models were used to study the absorption of a series of natural flavonoids with trypanocidal activity. The chromatographic retention indices (log kw) were determined on immobilized artificial membrane columns (IAM.PC.DD, IAM.PC.DD2, cholesteryl 10-undecenoate) by the extrapolation method. The results demonstrated that in the columns composed of phosphatidylcholine the retention of hydroxylated flavonoids is determined by secondary interactions in addition to partitioning, whereas in the cholesterol column partitioning is the main mechanism governing retention. A series of physico-chemical descriptors was generated from the molecular interaction fields (MIFs) between the flavonoids and a set of virtual chemical probes, using the GRID program. The descriptors were correlated with log kw by partial least squares (PLS) regression, using the VolSurf program, in order to generate a quantitative structure-retention relationship (QSRR) model for this class of compounds. The model produced in this study, using the cholesterol partition data, log kwCol, presented high internal consistency, with good correlation (R2 = 0.97) and prediction (Q2 = 0.86) power for the partition of these molecules.
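The fit-then-validate workflow behind the reported R² = 0.97 and Q² = 0.86 can be sketched with ordinary least squares standing in for PLS: R² measures the fit, and a leave-one-out Q² measures internal predictive power. The descriptor and retention values below are invented, not the thesis data.

```python
import statistics

def fit_line(xs, ys):
    # Ordinary least squares for y = b*x + a
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def r_squared(ys, preds):
    my = statistics.fmean(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def q2_loo(xs, ys):
    # Leave-one-out cross-validation: refit without point i, then predict it
    preds = []
    for i in range(len(xs)):
        b, a = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(b * xs[i] + a)
    return r_squared(ys, preds)

# Invented lipophilicity-like descriptor vs. invented log kw retention values
desc = [1.2, 1.8, 2.1, 2.9, 3.3, 3.8, 4.4, 5.0]
logkw = [0.4, 0.9, 1.1, 1.6, 1.9, 2.1, 2.6, 2.9]
b, a = fit_line(desc, logkw)
r2_fit = r_squared(logkw, [b * x + a for x in desc])
q2 = q2_loo(desc, logkw)
print(round(r2_fit, 2), round(q2, 2))
```

As in the thesis model, Q² comes out below R², since each left-out point is predicted by a model that never saw it; PLS generalizes this to many correlated MIF descriptors at once.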
607

Computational techniques in finite semigroup theory

Wilson, Wilf A. January 2019 (has links)
A semigroup is simply a set with an associative binary operation; computational semigroup theory is the branch of mathematics concerned with developing techniques for computing with semigroups, as well as investigating semigroups with the help of computers. This thesis explores both sides of computational semigroup theory, across several topics, especially in the finite case. The central focus of this thesis is computing and describing maximal subsemigroups of finite semigroups. A maximal subsemigroup of a semigroup is a proper subsemigroup that is contained in no other proper subsemigroup. We present novel and useful algorithms for computing the maximal subsemigroups of an arbitrary finite semigroup, building on the paper of Graham, Graham, and Rhodes from 1968. In certain cases, the algorithms reduce to computing maximal subgroups of finite groups, and analysing graphs that capture information about the regular J-classes of a semigroup. We use the framework underpinning these algorithms to describe the maximal subsemigroups of many families of finite transformation and diagram monoids. This reproduces and greatly extends a large amount of existing work in the literature, and allows us to easily see the common features between these maximal subsemigroups. This thesis is also concerned with direct products of semigroups, and with a special class of semigroups known as Rees 0-matrix semigroups. We extend known results concerning the generating sets of direct products of semigroups; in doing so, we propose techniques for computing relatively small generating sets for certain kinds of direct products. Additionally, we characterise several features of Rees 0-matrix semigroups in terms of their underlying semigroups and matrices, such as their Green's relations and generating sets, and whether they are inverse. In doing so, we suggest new methods for computing Rees 0-matrix semigroups.
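For intuition about the central object, maximal subsemigroups can be found by brute force on a tiny example. The sketch below enumerates all proper subsemigroups of a 4-element semigroup given by its multiplication table and keeps the inclusion-maximal ones; the thesis algorithms instead exploit maximal subgroups and Green's structure, which is what makes large semigroups feasible.

```python
from itertools import combinations

def is_subsemigroup(elts, table):
    # A subset is a subsemigroup iff it is closed under the operation
    s = set(elts)
    return all(table[a][b] in s for a in s for b in s)

def maximal_subsemigroups(n, table):
    # Brute force over all proper nonempty subsets: exponential in n,
    # so only usable for tiny semigroups.
    subs = [frozenset(c) for k in range(1, n)
            for c in combinations(range(n), k)
            if is_subsemigroup(c, table)]
    # Keep only the subsemigroups not properly contained in another one
    return [s for s in subs if not any(s < t for t in subs)]

# Example: ({0, 1, 2, 3}, *) with multiplication modulo 4
table = [[(a * b) % 4 for b in range(4)] for a in range(4)]
for m in sorted(maximal_subsemigroups(4, table), key=sorted):
    print(sorted(m))  # → [0, 1, 2] and [0, 1, 3]
```

Even here the structural flavour shows through: {0, 1, 3} is the units-and-zero part, while {0, 1, 2} contains the nilpotent element 2, and each is maximal because adding the missing element regenerates the whole semigroup.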
608

Design, Implementation and Analysis of a Description Model for Complex Archaeological Objects / Élaboration, mise en œuvre et analyse d’un modèle de description d’objets archéologiques complexes

Ozturk, Aybuke 09 July 2018 (has links)
La céramique est l'un des matériaux archéologiques les plus importants pour aider à la reconstruction des civilisations passées. Les informations à propos des objets céramiques complexes incluent des données textuelles, numériques et multimédias qui posent plusieurs défis de recherche abordés dans cette thèse. D'un point de vue technique, les bases de données de céramiques présentent différents formats de fichiers, protocoles d'accès et langages d'interrogation. Du point de vue des données, il existe une grande hétérogénéité et les experts ont différentes façons de représenter et de stocker les données. Il n'existe pas de contenu et de terminologie standard, surtout en ce qui concerne la description des céramiques. De plus, la navigation et l'observation des données sont difficiles. L'intégration des données est également complexe en raison de la présence de différentes dimensions provenant de bases de données distantes, qui décrivent les mêmes catégories d'objets de manières différentes. En conséquence, ce projet de thèse vise à apporter aux archéologues et aux archéomètres des outils qui leur permettent d'enrichir leurs connaissances en combinant différentes informations sur les céramiques. Nous divisons notre travail en deux parties complémentaires : (1) Modélisation de données archéologiques complexes, et (2) Partitionnement de données (clustering) archéologiques complexes. La première partie de cette thèse est consacrée à la conception d'un modèle de données archéologiques complexes pour le stockage des données céramiques. Cette base de données alimente également un entrepôt de données permettant des analyses en ligne (OLAP). La deuxième partie de la thèse est consacrée au clustering (catégorisation) des objets céramiques. Pour ce faire, nous proposons une approche floue, dans laquelle un objet céramique peut appartenir à plus d'un cluster (d'une catégorie). 
Ce type d'approche convient bien à la collaboration avec des experts, en ouvrant de nouvelles discussions basées sur les résultats du clustering. Nous contribuons au clustering flou (fuzzy clustering) au sein de trois sous-tâches : (i) une nouvelle méthode d'initialisation des clusters flous qui maintient linéaire la complexité de l'approche ; (ii) un indice de qualité innovant qui permet de trouver le nombre optimal de clusters ; et (iii) l'approche Multiple Clustering Analysis qui établit des liens intelligents entre les données visuelles, textuelles et numériques, ce qui permet de combiner tous les types d'informations sur les céramiques. Par ailleurs, les méthodes que nous proposons pourraient également être adaptées à d'autres domaines d'application tels que l'économie ou la médecine. / Ceramics are one of the most important archaeological materials to help in the reconstruction of past civilizations. Information about complex ceramic objects is composed of textual, numerical and multimedia data, which induce several research challenges addressed in this thesis. From a technical perspective, ceramic databases have different file formats, access protocols and query languages. From a data perspective, ceramic data are heterogeneous and experts have different ways of representing and storing data. There is no standardized content and terminology, especially in terms of description of ceramics. Moreover, data navigation and observation are difficult. Data integration is also difficult due to the presence of various dimensions from distant databases, which describe the same categories of objects in different ways. Therefore, the research project presented in this thesis aims to provide archaeologists and archaeological scientists with tools for enriching their knowledge by combining different information on ceramics. We divide our work into two complementary parts: (1) Modeling of Complex Archaeological Data and (2) Clustering Analysis of Complex Archaeological Data. 
The first part of this thesis is dedicated to the design of a complex archaeological database model for the storage of ceramic data. This database also feeds a data warehouse enabling online analytical processing (OLAP). The second part of the thesis is dedicated to an in-depth clustering (categorization) analysis of ceramic objects. To do this, we propose a fuzzy approach, where ceramic objects may belong to more than one cluster (category). Such a fuzzy approach is well suited for collaborating with experts, by opening new discussions based on clustering results. We contribute to fuzzy clustering in three sub-tasks: (i) a novel fuzzy clustering initialization method that keeps the fuzzy approach linear; (ii) an innovative quality index that allows finding the optimal number of clusters; and (iii) the Multiple Clustering Analysis approach that builds smart links between visual, textual and numerical data, which assists in combining all types of ceramic information. Moreover, the methods we propose could also be adapted to other application domains such as economics or medicine.
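The "objects may belong to more than one cluster" idea can be illustrated with a minimal fuzzy c-means sketch on invented 1-D data. This is the standard FCM update loop with a simple deterministic initialization; the thesis's own contributions (the linear-complexity initialization, the quality index, and the multiple-clustering links) are not reproduced here.

```python
def fuzzy_cmeans(points, m=2.0, iters=50):
    # Two-cluster fuzzy c-means on 1-D data: each point gets a membership
    # degree in every cluster (each row of u sums to 1), unlike hard k-means.
    centers = [min(points), max(points)]  # simple deterministic init for c=2
    u = []
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for p in points:
            d = [abs(p - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(2))
                      for j in range(2)])
        # Center update: membership-weighted mean with weights u_ij^m
        centers = [
            sum(u[i][j] ** m * points[i] for i in range(len(points))) /
            sum(u[i][j] ** m for i in range(len(points)))
            for j in range(2)
        ]
    return centers, u

pts = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
centers, u = fuzzy_cmeans(pts)
print(sorted(round(v, 2) for v in centers))
```

A point midway between the two centers would receive memberships near 0.5 in each cluster, which is exactly the kind of ambiguous ceramic object the fuzzy approach surfaces for discussion with experts.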
609

L'ÉNUMÉRATION DANS LE MESURAGE DES COLLECTIONS. UN DYSFONCTIONNEMENT DANS LA TRANSPOSITION DIDACTIQUE.

Briand, Joël 14 December 1993 (has links) (PDF)
Certain difficulties in counting activities can be traced to the difficulty of passing from a finite set of elements to the determination of a total order on that set. The ability to make this passage can be identified as a piece of knowledge that we call enumeration. The learning of enumeration does not appear in the curriculum, and even its cultural existence is barely recognized. The first part of our study shows, on the one hand, that controlling the counting and effective enumeration of the elements of a finite collection mainly requires from pupils a conception of enumeration, and, on the other hand, that this knowledge is necessary for the construction and understanding of arithmetic operations. The second part studies the consequences for teaching: teaching needs knowledge that it does not take responsibility for. Since the educational system cannot carry out a didactic transposition of enumeration, this knowledge falls entirely under the pupil's responsibility, and there can be no didactic negotiation about it; hence difficulties arise on the side of pupils as well as teachers. We study in particular the means the teaching institution gives itself to solve locally the problems posed by the forced absence of this transposition, and the new difficulties that this generates. To make a didactic transposition of enumeration possible, it is then necessary to produce a-didactic situations for enumeration and to reflect on the conditions of its transformation into an object of knowledge. We develop a didactic engineering design for learning the enumeration of sets. We then raise the question of the necessary choice, by the teaching system, of the boundary between informal knowledge (connaissances) and formal knowledge (savoirs). Finally, we show how our engineering design can be understood by teachers in training.
