171

Jeux de typage et analyse de lambda-grammaires non-contextuelles

Bourreau, Pierre 29 June 2012 (has links)
Abstract categorial grammars (or, equivalently, lambda-grammars) are a formalism based on the simply-typed lambda-calculus. They can be viewed as grammars generating such terms, and were introduced to model the syntax-semantics interface of natural language, bringing together two fundamental ideas: the distinction, expressed by Curry, between the tectogrammatical level (i.e. the deep structure of an utterance) and the phenogrammatical level (i.e. the surface representation of an utterance); and an algebraic modeling of the principle of compositionality to account for the semantics of sentences, due to Montague. One of the main advantages of this formalism is that parsing an abstract categorial grammar addresses both the problem of natural language parsing and that of natural language generation. Efficient parsing algorithms have been discovered for abstract categorial grammars of linear and almost linear lambda-terms, while the recognition problem is known to be decidable but non-elementary in its most general form. We study classes of terms for which parsing can still be solved in polynomial time. These results rest mainly on two typing theorems: the coherence theorem, which states that a given lambda-term is the unique inhabitant of a certain typing; and the subject expansion theorem, which states that two beta-equivalent terms inhabit the same typings. To carry out this study, we use an abstract representation of lambda-terms and typings as games; in particular, we rely heavily on this representation to prove the coherence theorem for new families of lambda-terms and typings. Thanks to these results, we show that it is possible to directly build recognizers, as Datalog programs, for abstract categorial grammars of almost affine lambda-terms.
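For background, the typings in question are judgments of the simply-typed lambda-calculus. The following is a minimal sketch, in Python, of such terms and of type-checking; it illustrates the setting only, and is not the game-based machinery the thesis actually uses:

```python
from dataclasses import dataclass

# Simple types: a base type or an arrow between two types.
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: object  # argument type
    cod: object  # result type

# Lambda-terms: variables, abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    ty: object   # annotated argument type
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def typecheck(term, env):
    """Return the type of `term` under environment `env`, or raise TypeError."""
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        body_ty = typecheck(term.body, {**env, term.var: term.ty})
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):
        fun_ty = typecheck(term.fun, env)
        arg_ty = typecheck(term.arg, env)
        if isinstance(fun_ty, Arrow) and fun_ty.dom == arg_ty:
            return fun_ty.cod
        raise TypeError("ill-typed application")
    raise TypeError("unknown term")

# Example: the identity on a base type o has the typing  |- \x:o. x : o -> o.
o = Base("o")
print(typecheck(Lam("x", o, Var("x")), {}))  # Arrow(dom=Base(name='o'), cod=Base(name='o')), i.e. o -> o
```

The coherence theorem then says that, for the classes of typings studied, a typing already pins down its inhabitant uniquely.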
172

Inequalities associated to Riesz potentials and non-doubling measures with applications

Bhandari, Mukta Bahadur January 1900 (has links)
Doctor of Philosophy / Department of Mathematics / Charles N. Moore / The main focus of this work is to study the classical Calderón-Zygmund theory and its recent developments. An attempt has been made to study some of its theory in more generality, in the context of a nonhomogeneous space equipped with a measure which is not necessarily doubling. We establish a Hedberg-type inequality associated to a non-doubling measure, which connects two famous theorems of Harmonic Analysis: the Hardy-Littlewood-Wiener maximal theorem and the Hardy-Sobolev integral theorem. Hedberg inequalities give pointwise estimates of Riesz potentials in terms of an appropriate maximal function. We also establish a good lambda inequality relating the distribution function of the Riesz potential and the fractional maximal function in $(\mathbb{R}^n, d\mu)$, where $\mu$ is a positive Radon measure which is not necessarily doubling. Finally, we derive potential inequalities as an application.
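For orientation, in the classical (doubling) setting Hedberg's pointwise estimate reads, in standard notation (this is background, not a result of the thesis):

$$I_\alpha f(x) = \int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{n-\alpha}}\,dy \le C\,\bigl(Mf(x)\bigr)^{1-\frac{\alpha p}{n}}\,\|f\|_{L^p}^{\frac{\alpha p}{n}}, \qquad f \ge 0,\ 0 < \alpha < \frac{n}{p},$$

where $M$ is the Hardy-Littlewood maximal operator; combined with the maximal theorem, this yields the Hardy-Littlewood-Sobolev bound for $I_\alpha$. A good lambda inequality has the schematic form $\mu(\{I_\alpha f > 2\lambda,\ M_\alpha f \le \varepsilon\lambda\}) \le C(\varepsilon)\,\mu(\{I_\alpha f > \lambda\})$, with $C(\varepsilon) \to 0$ as $\varepsilon \to 0$; integrating in $\lambda$ converts such estimates into norm inequalities between $I_\alpha f$ and the fractional maximal function $M_\alpha f$.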
173

Metodjämförelse mellan IMMAGE 800 och BN ProSpec för U-albumin, U-IgG, U-kappa och U-lambda

Al-Hadad, Mohamed January 2010 (has links)
The kidneys are an organ system with important functions, for example the excretion of most water-soluble substances. Since the kidneys have an enormous reserve capacity, more than three quarters of kidney function must be lost before symptoms of disease appear. By analyzing, among other proteins, albumin, immunoglobulin G, kappa and lambda in urine, it can be determined whether kidney function is working as it should. These proteins can be analyzed with the IMMAGE 800 instrument from Beckman Coulter and the BN ProSpec from DADE BEHRING. Both instruments use nephelometry, a method in which light scattering in a liquid or gas is measured.

The aim of the present study was to analyze urine samples on both the IMMAGE 800 and the BN ProSpec and then compare the results. In the course of the study, standard curves were calibrated, quality controls were run, and 37 samples were analyzed. The same samples were analyzed several times, both within the same day and over a number of subsequent days, in order to determine precision. The correlation coefficient was 0.999 for U-albumin, 0.998 for U-IgG, 0.947 for U-kappa and 0.883 for U-lambda. The BN ProSpec can thus be used for analysis of U-albumin, U-IgG, U-kappa and U-lambda, since it meets the EQUALIS quality goals.
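For reference, the correlation coefficients reported above are Pearson correlations between paired measurements from the two instruments. A minimal sketch in Python, with hypothetical paired values (not data from the study):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equally long series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired U-albumin results (mg/L) from the two instruments.
immage_800 = [12.1, 45.3, 88.0, 150.2, 23.4]
bn_prospec = [12.5, 44.8, 87.1, 151.0, 24.0]
print(round(pearson(immage_800, bn_prospec), 3))  # close to 1 for well-agreeing methods
```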
174

The safe lambda calculus

Blum, William January 2009 (has links)
We consider a syntactic restriction for higher-order grammars called safety that constrains occurrences of variables in the production rules according to their type-theoretic order. We transpose and generalize this restriction to the setting of the simply-typed lambda calculus, giving rise to what we call the safe lambda calculus. We analyze its expressivity and obtain a result in the same vein as Schwichtenberg's 1976 characterization of the simply-typed lambda calculus: the numeric functions representable in the safe lambda calculus are exactly the multivariate polynomials; thus the conditional is not definable. We also give a similar characterization for representable word functions. We then examine the complexity of deciding beta-eta equality of two safe simply-typed terms and show that this problem is PSPACE-hard. The safety restriction is then extended to other applied lambda calculi featuring recursion and references, such as PCF and Idealized Algol (IA for short). The next contribution concerns game semantics. We introduce a new concrete presentation of this semantics using the theory of traversals. It is shown that the revealed game denotation of a term can be computed by traversing some souped-up version of the term's abstract syntax tree using adequately defined traversal rules. Based on this presentation and via syntactic reasoning, we obtain a game-semantic interpretation of safety: the strategy denotations of safe lambda-terms satisfy a property called P-incremental justification, which says that the player's moves are always justified by the last pending opponent's move of greater order occurring in the player's view. Next we look at models of the safe lambda calculus. We show that these are precisely captured by Incremental Closed Categories. A game model is constructed and is shown to be fully abstract for safe IA. Further, it is effectively presentable: two terms are equivalent just if they have the same set of complete O-incrementally justified plays, where O-incremental justification is defined as the dual of P-incremental justification. Finally we study safety from the point of view of algorithmic game semantics. We observe that in the third-order fragment of IA, the addition of unsafe contexts is conservative for observational equivalence. This implies that all the upper complexity bounds known for the lower-order fragments of IA also hold for the safe fragment; we show that the lower bounds remain the same as well. At order 4, observational equivalence is known to be undecidable for IA. We conjecture that for the order-4 safe fragment of IA, the problem is reducible to the DPDA-equivalence problem and is thus decidable.
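The type-theoretic order that the safety restriction is built on is simple to state. A small sketch in Python (the full safety condition on terms and production rules is more involved than this):

```python
# A simple type is either the base type "o" or an arrow written as a pair (dom, cod).
# Order: order(o) = 0, order(A -> B) = max(order(A) + 1, order(B)).

def order(ty):
    """Type-theoretic order of a simple type."""
    if ty == "o":
        return 0
    dom, cod = ty
    return max(order(dom) + 1, order(cod))

print(order("o"))                # 0
print(order(("o", "o")))         # 1: o -> o
print(order((("o", "o"), "o")))  # 2: (o -> o) -> o
```

Roughly, safety then demands that no subterm contain free variables of order strictly smaller than the order of the subterm itself.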
175

Higher-order queries and applications

Vu, Quoc Huy January 2012 (has links)
Higher-order transformations are ubiquitous within data management. In relational databases, higher-order queries appear in numerous aspects including query rewriting and query specification. In XML databases, higher-order functions are natural due to the close connection of XML query languages with functional programming. The thesis investigates higher-order query languages that combine higher-order transformations with ordinary database query languages. We define higher-order query languages based on Relational Algebra, Monad Algebra, and XQuery. The thesis also studies basic problems for these query languages including evaluation, containment, and type inference. We show that even though evaluating these higher-order query languages is non-elementary, there are subclasses that are polynomially reducible to evaluation for ordinary query languages. Our theoretical analysis is complemented by an implementation of the languages, our Higher-Order Mapping Evaluation System (HOMES). The system integrates querying and query transformation in a single higher-order query language. It allows users to write queries that integrate and combine query transformations. The system is implemented on top of traditional database management systems. The evaluation algorithm is optimized by a combination of subquery caching techniques from relational and XML databases and sharing detection schemes from functional programming.
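To give a flavor of what a higher-order query is, here is a small Python sketch in which ordinary queries are functions from relations to relations and a higher-order query consumes queries as inputs; it is an illustration only, not the HOMES language:

```python
# A relation is a list of dicts (rows); an ordinary query maps relations to relations.

def select(pred):
    """Build a selection query from a row predicate."""
    return lambda rel: [row for row in rel if pred(row)]

def project(cols):
    """Build a projection query onto the given columns."""
    return lambda rel: [{c: row[c] for c in cols} for row in rel]

def compose(q1, q2):
    """A higher-order transformation: takes two queries, returns their composition."""
    return lambda rel: q2(q1(rel))

employees = [
    {"name": "Ada", "dept": "R&D", "salary": 120},
    {"name": "Ben", "dept": "Ops", "salary": 90},
]
q = compose(select(lambda r: r["salary"] > 100), project(["name"]))
print(q(employees))  # [{'name': 'Ada'}]
```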
176

Control of posture and stability of the double-joint (Shoulder/Elbow) arm

Mihaltchev, Pavel January 2003 (has links)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
177

Building a scalable distributed data platform using lambda architecture

Mehta, Dhananjay January 1900 (has links)
Master of Science / Department of Computer Science / William H. Hsu / Data is generated all the time over the Internet and by the systems, sensors and mobile devices around us; this is often referred to as 'big data'. Tapping this data is a challenge for organizations because of its nature: velocity, volume and variety. What makes handling this data a challenge? Traditional data platforms have been built around relational database management systems coupled with enterprise data warehouses, and this legacy infrastructure is either technically incapable of scaling to big data or financially infeasible. The question then arises: how does one build a system that handles the challenges of big data and caters to the needs of an organization? The answer is Lambda Architecture. Lambda Architecture (LA) is a generic term for a scalable and fault-tolerant data-processing architecture that ensures real-time processing with low latency. LA provides a general strategy for knitting together all the tools necessary to build a data pipeline for real-time processing of big data. LA comprises three layers: the Batch Layer, responsible for bulk data processing; the Speed Layer, responsible for real-time processing of data streams; and the Service Layer, responsible for serving queries from end users. This project draws an analogy between modern data platforms and traditional supply chain management to lay down principles for building a big data platform and shows how major challenges in building data platforms can be mitigated. It constructs an end-to-end data pipeline for the ingestion, organization, and processing of data and demonstrates how any organization can build a low-cost distributed data platform using Lambda Architecture.
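A toy sketch of the query-time merge that the three layers make possible: a precomputed batch view combined with speed-layer increments over recent events. This is illustrative only; real deployments build these layers on systems such as Hadoop, Spark or Storm:

```python
from collections import Counter

# Batch layer: periodically recomputes a view over the full master dataset.
def batch_view(master_events):
    return Counter(e["page"] for e in master_events)

# Speed layer: maintains an incremental view over events since the last batch run.
def speed_view(recent_events):
    return Counter(e["page"] for e in recent_events)

# Serving (service) layer: answers queries by merging the two views.
def page_views(page, batch, speed):
    return batch[page] + speed[page]

master = [{"page": "/home"}, {"page": "/home"}, {"page": "/docs"}]
recent = [{"page": "/home"}]
print(page_views("/home", batch_view(master), speed_view(recent)))  # 3
```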
178

Model Based Catalyst Control

Irman Svraka, Linus Österdahl Wetterhag January 2019 (has links)
A one-dimensional discretized model of a two-brick three-way catalyst (TWC) system was developed and implemented in MATLAB, Simulink and TargetLink in collaboration with Volvo Cars and Linköpings Universitet - ISY. The purpose of this thesis was to increase system understanding and create a model-based TWC control for further development at Volvo Cars. A total of 50 states were modelled, including emission concentrations (O2, CO, C3H6, C3H8, H2, NOx, CO2, H2O), temperature and oxygen buffer level (OBL). A model-based control structure was implemented in the form of five separate PID controllers, making it possible to control the OBL of each slice of each brick individually through simple reference handling. The control structure includes anti-windup, feedforward control and a feedback safety that resets the model when sensors indicate leakage. Specific equipment and software used included MATLAB, Simulink, TargetLink, a Volvo SULEV30 TWC and test rigs. An overall increase in system understanding was achieved in comparison with contemporary TWC modelling and control, as well as sufficient system performance with regard to estimated emissions, simulation duration and pedagogical value. The concluding thoughts of the thesis revolve around the complexity of the actual TWC modelling, of parameter estimation and of control. The model presented in this thesis has great potential for describing TWC systems, but demands considerable effort during parameter estimation. With the ECU performance available in contemporary vehicle production (year 2019), a complex model may be combined with a simple control strategy, whilst a simple model may be combined with a complex control strategy.
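For reference, a minimal discrete PID controller with output saturation and the kind of anti-windup (conditional integration) mentioned above; this is an illustrative sketch, not the controllers implemented in the thesis, and the gains are hypothetical:

```python
class PID:
    """Discrete PID controller with output saturation and integrator anti-windup."""

    def __init__(self, kp, ki, kd, dt, out_min, out_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement, feedforward=0.0):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error

        # Tentative integral update.
        integral = self.integral + error * self.dt
        u = feedforward + self.kp * error + self.ki * integral + self.kd * derivative

        # Anti-windup: commit the integral update only while the output is unsaturated.
        if self.out_min <= u <= self.out_max:
            self.integral = integral
        return min(max(u, self.out_min), self.out_max)

# Hypothetical use: drive an oxygen buffer level toward a 50 % reference.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01, out_min=-1.0, out_max=1.0)
u = pid.step(setpoint=0.50, measurement=0.42)
```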
179

Estudo sobre a vida útil de rolamentos fixos de uma carreira de esferas. / Study about rolling bearing life of deep groove ball bearings.

Campanha, Marcos Vilodres 19 December 2007 (has links)
The present work discusses the calculation of rolling bearing life, describing the evolution of the calculation procedure over the decades up to its current state of the art. The aim of the text is to demonstrate, simply and objectively, the divergences between the theoretical fatigue-life formulation and the actual life of rolling bearings, as regards contact fatigue. To this end, tests were carried out in a rig specially designed for bearing fatigue testing. Two test series were run, varying only the temperature (approximately 85°C and 110°C). The results obtained indicate that the real life of the bearings diverges greatly from the calculated life, especially in the higher-temperature regime. This disparity is attributed to the absence of a precise computation of the relationship between bearing life and the λ factor (a measure of the lubricant-film separation between the contact surfaces), as well as to the failure to include the load factor in the bearing-life formulation.
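The "calculated life" referred to above is, at its core, the classical basic rating life. As standard background (not a result of the thesis), the ISO-style formula reads:

$$L_{10} = \left(\frac{C}{P}\right)^{p}, \qquad p = 3 \text{ for ball bearings},$$

where $L_{10}$ is the life, in millions of revolutions, that 90% of a group of identical bearings is expected to reach or exceed, $C$ is the basic dynamic load rating and $P$ the equivalent dynamic load. Adjusted-life methods multiply $L_{10}$ by correction factors, one of which depends on the lubrication regime through the film parameter $\lambda = h/\sigma$, the ratio of lubricant-film thickness to the composite surface roughness of the contact; this is the λ factor whose influence the thesis examines.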
180

Vers un calcul des constructions pédagogique / Towards a pedagogical calculus of constructions

Demange, Vincent 07 December 2012 (has links)
Pedagogical formal systems have appeared recently for propositional calculi (up to higher order); they consist in systematically giving examples of the notions (hypotheses) introduced. Formally, this means that in order to use a set Delta of formulas as hypotheses, one must first give a substitution sigma such that all the instances of formulas sigma(Delta) can be proved. This necessity of giving examples was pointed out by Poincaré (1913) as a matter of common sense: a definition of an object by postulates is of interest only if such an object can be constructed. Applied to intuitionistic formal systems, this restriction is consistent with the idea of negationless mathematics advocated by Griss (1946) in the middle of the last century and presented as a more radical version of intuitionism. Through the Curry-Howard isomorphism (1980), the computational counterpart is the usefulness of programs defined in the corresponding functional systems: every function can be applied to a closed value. The first results, concerning propositional calculi up to the second order, were published recently by Colson and Michel (2007, 2008, 2009). In this thesis we present an attempt to unify and extend those results to the Calculus of Constructions (CC). First a formal and precise definition of pedagogical sub-systems of the Calculus of Constructions is introduced; then various such sub-systems are exhibited as examples.
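Schematically, the pedagogical requirement described above can be written as a side condition on contexts (a sketch of the idea, not the precise rules of the thesis):

$$A_1, \dots, A_n \text{ may be used as hypotheses} \iff \exists \sigma\ \text{such that}\ \vdash \sigma(A_1),\ \dots,\ \vdash \sigma(A_n).$$

Under the Curry-Howard reading, exhibiting sigma amounts to supplying a closed inhabitant, an example, for each hypothesis before it may be assumed.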
