About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
361

Incremental Verification of Timing Constraints for Real-Time Systems

Andrei, Ştefan, Chin, Wei Ngan, Rinard, Martin C. 01 1900 (has links)
Timing constraints for real-time systems are usually verified through the satisfiability of propositional formulae. In this paper, we propose an alternative where the verification of timing constraints can be done by counting the number of truth assignments instead of boolean satisfiability. This number can also tell us how “far away” a given specification is from satisfying its safety assertion. Furthermore, specifications and safety assertions are often modified in an incremental fashion, where problematic bugs are fixed one at a time. To support this development, we propose an incremental algorithm for counting satisfiability. Our proposed incremental algorithm is optimal, as no unnecessary nodes are created during each counting step. This works for the class of path RTL. To illustrate this application, we show how incremental satisfiability counting can be applied to a well-known rail-road crossing example, particularly when its specification is still being refined. / Singapore-MIT Alliance (SMA)
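The counting view of verification can be illustrated with a brute-force truth-assignment counter: the fraction of satisfying assignments gives a crude sense of how “far” a specification is from violating (or meeting) its assertion. The sketch below is only a naive illustration of counting satisfying assignments of a CNF formula, not the incremental algorithm or the path RTL encoding of the thesis; the example clauses are hypothetical.

```python
from itertools import product

def count_satisfying(clauses, n_vars):
    """Count truth assignments satisfying a CNF formula.

    clauses: list of clauses, each a list of non-zero ints
             (positive literal i means x_i, negative means not x_i).
    Brute force over all 2**n_vars assignments.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments satisfy it.
clauses = [[1, 2], [-1, 3]]
n = 3
sat = count_satisfying(clauses, n)
print(sat, "of", 2 ** n, "assignments satisfy the formula")
```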
362

Couplages thermo-hydro-mécaniques dans les sols et les roches tendres partiellement saturés

Collin, Frédéric 11 February 2003 (has links)
The general theme of this thesis is the behaviour of partially saturated soils and soft rocks. Partial saturation makes the behaviour more complex and increases the coupling between the various phenomena involved. We worked on two different applications which, in fact, share many similarities. The work was carried out mainly in the finite element code LAGAMINE.

The first field of study concerns the storage of high-level nuclear waste. For this waste, the concept of disposal in deep geological layers has been developed in order to protect human beings and their environment from the harmful effects of radioactivity. The idea is to build a system of galleries in which the vitrified waste will be placed; an engineered sealing barrier (generally blocks of compacted clay) will fill the rest of the gallery and complement the natural geological barrier. To design this complex system, its hydrogeological, thermal, mechanical, chemical and biological characteristics must be well known, and the coupled processes that will inevitably develop there must be understood. This is why Underground Research Laboratories (URLs) have been created in candidate geological layers, such as the SCK-CEN laboratory at Mol. Numerical models complement the experimental studies carried out in these laboratories and help in interpreting the measurements. Indeed, the behaviour of the sealing barrier is very complex, involving thermo-hydro-mechanical phenomena that take place during the heating (the waste always releases a certain amount of energy) and the hydration (by the host formation) of the engineered clay barrier. In this context, we developed a multiphase flow model with phase change; it allows the water and heat transfers occurring in the zone close to the gallery to be studied. The couplings are numerous: temperature variations influence the fluid properties, the fluids transport heat as they move (convective fluxes), and the partial saturation conditions (linked to suction) also modify the mechanical behaviour of the clay. Finally, in these very low-permeability media, accounting for water transfer in the vapour phase is essential. These developments were carried out within the European CATSIUS CLAY project, which allowed a comparison with other computer codes and the validation of our work.

The second field of study is the subsidence of North Sea oil reservoirs. Some reservoirs are located in chalk layers several thousand metres below sea level and are exploited from offshore installations. Oil production causes a depletion of the reservoir that is accompanied by compaction; this compaction propagates up to the sea floor and endangers the offshore platforms. The solution currently implemented is to inject water into the reservoir in order to re-pressurize it and thus reduce the compaction. Unfortunately, in these chalk formations this has caused additional settlement! This settlement is not entirely negative, however: the additional compaction allows a secondary recovery of oil that could not have been obtained otherwise. It is therefore very interesting to be able to control the settlement of the reservoir layers. Within the European PASACHALK projects, we developed an elastoplastic constitutive law based on the idea that the water sensitivity of a chalk initially saturated with oil is related to the effect of suction. Suction includes purely capillary effects but also others (osmotic effects, for example). We therefore built a multi-mechanism model with suction dependence, using the tools and concepts developed in unsaturated soil mechanics (notably for clays). One can thus see that the models for the sealing clay and those for the reservoir chalk have many similarities! This research was made easier by the fact that a chalk similar to that of the North Sea reservoirs outcrops in our country; it is quarried notably at Lixhe, in the Liège region. This chalk has the same characteristics and properties as the reservoir formations; the only difference is that there has never been any oil in its pores! The analysis of all the experiments carried out on this material allowed us to identify the characteristic features of chalk behaviour and thus to calibrate our law. Finally, injection tests on samples provide a means of validating our models. We then performed simulations at the reservoir scale, which confirmed that suction variation is indeed one explanation for some of the compaction observed in oil reservoirs.
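For readers unfamiliar with unsaturated-soil mechanics, the following relations sketch how suction typically enters such formulations. They are generic textbook forms stated here as an assumption, not the specific constitutive law implemented in LAGAMINE or developed in the PASACHALK projects.

```latex
% Generic unsaturated-soil relations (illustrative only; not the
% specific PASACHALK law, whose exact form is given in the thesis).
\begin{align}
  s &= p_g - p_w
      && \text{(matric suction: gas pressure minus water pressure)} \\
  \sigma'_{ij} &= \sigma_{ij}
      - \bigl[S_r\, p_w + (1 - S_r)\, p_g\bigr]\,\delta_{ij}
      && \text{(Bishop-type effective stress with } \chi = S_r\text{)}
\end{align}
% S_r is the degree of water saturation; as suction increases (drying),
% the effective stress and hence the mechanical response change, which is
% the coupling exploited in the suction-dependent chalk and clay models.
```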
363

Entanglement and Quantum Computation from a Geometric and Topological Perspective

Johansson, Markus January 2012 (has links)
In this thesis we investigate geometric and topological structures in the context of entanglement and quantum computation. A parallel transport condition is introduced in the context of Franson interferometry based on the maximization of two-particle coincidence intensity. The dependence on correlations is investigated and it is found that the holonomy group is in general non-Abelian, but Abelian for uncorrelated systems. It is found that this framework contains a parallel transport condition developed by Levay in the case of two-qubit systems undergoing local SU(2) evolutions. Global phase factors of topological origin, resulting from cyclic local SU(2) evolution, called topological phases, are investigated in the context of multi-qubit systems. These phases originate from the topological structure of the local SU(2)-orbits and are an attribute of most entangled multi-qubit systems. The relation between topological phases and SLOCC-invariant polynomials is discussed. A general method to find the values of the topological phases in an n-qubit system is described. A non-adiabatic generalization of holonomic quantum computation is developed in which high-speed universal quantum gates can be realized by using non-Abelian geometric phases. It is shown how a set of non-adiabatic holonomic one- and two-qubit gates can be implemented by utilizing transitions in a generic three-level Λ configuration. The robustness of the proposed scheme to different sources of error is investigated through numerical simulation. It is found that the gates can be made robust to a variety of errors if the operation time of the gate can be made sufficiently short. This scheme opens up the possibility of universal holonomic quantum computation on qubits characterized by short coherence times.
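As a point of reference, non-adiabatic holonomic gate proposals in a three-level Λ configuration are usually written in roughly the following form; the parametrization below is the commonly cited one and is stated as an assumption, not as the exact construction analysed in the thesis.

```latex
% Assumed standard Lambda-system form for a non-adiabatic holonomic
% one-qubit gate (illustrative sketch only).
\begin{align}
  H(t) &= \Omega(t)\bigl(\omega_0\,|e\rangle\langle 0|
          + \omega_1\,|e\rangle\langle 1|\bigr) + \mathrm{h.c.},
  \qquad |\omega_0|^2 + |\omega_1|^2 = 1, \\
  \int_0^{\tau}\Omega(t)\,\mathrm{d}t &= \pi
  \quad\Longrightarrow\quad
  U(C) = \vec{n}\cdot\vec{\sigma}
  \ \text{ on } \mathrm{span}\{|0\rangle, |1\rangle\},
\end{align}
% where the unit vector n is fixed by the time-independent ratio
% omega_1 / omega_0. The gate depends only on the path traced in state
% space, not on the pulse duration, which is what makes it both
% non-adiabatic and purely geometric.
```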
364

Sampling from the Hardcore Process

Dodds, William C 01 January 2013 (has links)
Partially Recursive Acceptance Rejection (PRAR) and bounding chains used in conjunction with coupling from the past (CFTP) are two perfect simulation protocols which can be used to sample from a variety of unnormalized target distributions. This paper first examines and then implements these two protocols to sample from the hardcore gas process. We empirically determine the subset of the hardcore process's parameters for which these two algorithms run in polynomial time. Comparing the efficiency of these two algorithms, we find that PRAR runs much faster for small values of the hardcore process's parameter whereas the bounding chain approach is vastly superior for large values of the process's parameter.
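For orientation, a naive acceptance-rejection sampler for the hardcore model is sketched below. It is not the PRAR or bounding-chain CFTP protocol examined in the thesis; it assumes a square-grid interaction graph with fugacity lam and is only practical for small grids or small parameter values.

```python
import random

def sample_hardcore(n, lam, seed=0):
    """Exact acceptance-rejection sample from the hardcore model on an
    n x n grid with fugacity lam: pi(x) proportional to lam**(#occupied),
    restricted to configurations with no two adjacent occupied sites.

    Proposal: occupy each site independently with probability
    lam / (1 + lam), so the proposal weight is proportional to
    lam**(#occupied); accept only if the result is an independent set.
    Exact, but the acceptance rate collapses as n or lam grows.
    """
    rng = random.Random(seed)
    p = lam / (1.0 + lam)
    while True:
        x = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        clash = any(
            x[i][j] and x[i2][j2]
            for i in range(n) for j in range(n)
            for i2, j2 in ((i + 1, j), (i, j + 1))
            if i2 < n and j2 < n
        )
        if not clash:
            return x

grid = sample_hardcore(5, 0.5)
print(sum(map(sum, grid)), "occupied sites out of 25")
```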
365

Development of a Symbolic Computer Algebra Toolbox for 2D Fourier Transforms in Polar Coordinates

Dovlo, Edem 29 September 2011 (has links)
The Fourier transform is one of the most useful tools in science and engineering and can be expanded to multi-dimensions and curvilinear coordinates. Multidimensional Fourier transforms are widely used in image processing, tomographic reconstructions and in fact any application that requires a multidimensional convolution. By examining a function in the frequency domain, additional information and insights may be obtained. In this thesis, the development of a symbolic computer algebra toolbox to compute two dimensional Fourier transforms in polar coordinates is discussed. Among the many operations implemented in this toolbox are different types of convolutions and procedures that allow for managing the toolbox effectively. The implementation of the two dimensional Fourier transform in polar coordinates within the toolbox is shown to be a combination of two significantly simpler transforms. The toolbox is also tested throughout the thesis to verify its capabilities.
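In the usual convention, the decomposition referred to above writes the 2D transform as an angular Fourier series followed by an order-n Hankel transform of each coefficient. The rough numerical sketch below illustrates that decomposition; it is not the symbolic toolbox itself, and the 2π(−i)^n prefactor depends on the Fourier convention assumed.

```python
import numpy as np
from scipy.special import jv

def polar_ft_coefficient(f_n, n, rho, r_max=20.0, num=4000):
    """Order-n radial part of the 2D Fourier transform in polar coordinates.

    Assumes f(r, theta) = sum_n f_n(r) exp(i n theta) and the convention
    F_n(rho) = 2*pi*(-1j)**n * integral_0^inf f_n(r) J_n(rho r) r dr,
    i.e. an order-n Hankel transform of each angular coefficient.
    """
    r = np.linspace(0.0, r_max, num)
    integrand = f_n(r) * jv(n, rho * r) * r
    return 2.0 * np.pi * (-1j) ** n * np.trapz(integrand, r)

# Example: the circularly symmetric Gaussian exp(-r^2/2) has only the n = 0
# term, and its 2D transform is 2*pi*exp(-rho^2/2) in this convention.
rho = 1.5
approx = polar_ft_coefficient(lambda r: np.exp(-r**2 / 2), 0, rho)
print(approx.real, 2 * np.pi * np.exp(-rho**2 / 2))
```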
366

Algorithms and architectures for decimal transcendental function computation

Chen, Dongdong 27 January 2011
Nowadays, there are many commercial demands for decimal floating-point (DFP) arithmetic operations in areas such as financial analysis, tax calculation, currency conversion, Internet-based applications, and e-commerce. This trend gives rise to further development of DFP arithmetic units which can perform accurate computations with exact decimal operands. Due to the significance of DFP arithmetic, the IEEE 754-2008 standard for floating-point arithmetic includes it in its specifications. The basic decimal arithmetic units, such as the decimal adder, subtracter, multiplier, divider or square-root unit, as main parts of a decimal microprocessor, are attracting more and more researchers' attention. Recently, decimal-encoded formats and DFP arithmetic units have been implemented in IBM's System z900, POWER6, and z10 microprocessors.

Increasing chip densities and transistor counts provide more room for designers to add more essential functions for application domains into upcoming microprocessors. Decimal transcendental functions, such as the DFP logarithm, antilogarithm, exponential, reciprocal and trigonometric functions, which are useful arithmetic operations in many areas of science and engineering, have been specified as recommended arithmetic in the IEEE 754-2008 standard. Thus, virtually all computing systems that are compliant with the IEEE 754-2008 standard could include a DFP mathematical library providing transcendental function computation. Building on the development of basic decimal arithmetic units, more complex DFP transcendental arithmetic will be the next building block in microprocessors.

In this dissertation, we researched and developed several new decimal algorithms and architectures for DFP transcendental function computation. These designs are based on several different methods: 1) decimal transcendental function computation based on a table-based first-order polynomial approximation method; 2) DFP logarithmic and antilogarithmic converters based on the decimal digit-recurrence algorithm with selection by rounding; 3) a decimal reciprocal unit using an efficient table look-up based on Newton-Raphson iterations; and 4) a first radix-100 division unit based on the non-restoring algorithm with a pre-scaling method. Most of the decimal algorithms and architectures for DFP transcendental function computation developed in this dissertation are the first attempt to analyze and implement DFP transcendental arithmetic in order to achieve faithful results for DFP operands, as specified in IEEE 754-2008.

To help researchers evaluate the hardware performance of DFP transcendental arithmetic units, the proposed architectures based on the different methods are modeled, verified and synthesized using FPGAs or with CMOS standard cell libraries in ASIC. Some of the implementation results are compared with those of binary radix-16 logarithmic and exponential converters, a recently developed high-performance decimal CORDIC-based architecture, and Intel's DFP transcendental function computation software library. The comparison results show that the proposed architectures achieve a significant speed-up over the above designs in terms of latency. The algorithms and architectures developed in this dissertation provide a useful starting point for future hardware-oriented DFP transcendental function computation research.
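As a software-level illustration of the Newton-Raphson reciprocal iteration mentioned above (the dissertation's contribution is the hardware unit, not this sketch), the decimal iteration can be tried with Python's decimal module; the seeding and precision choices here are assumptions made only for the sketch.

```python
from decimal import Decimal, getcontext

def decimal_reciprocal(d, digits=34, iterations=6):
    """Approximate 1/d in decimal arithmetic with Newton-Raphson:
    x_{k+1} = x_k * (2 - d * x_k), which roughly doubles the number of
    correct digits per step. In hardware a small lookup table would seed
    x_0; here a crude seed from a binary float suffices.
    """
    getcontext().prec = digits + 4          # a few guard digits
    d = Decimal(d)
    x = Decimal(1.0 / float(d))             # initial approximation
    for _ in range(iterations):
        x = x * (Decimal(2) - d * x)
    return +x                               # round to working precision

print(decimal_reciprocal("7"))              # ~0.142857142857...
```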
368

Generalized Probabilistic Bowling Distributions

Hohn, Jennifer Lynn 01 May 2009 (has links)
Have you ever wondered if you are better than the average bowler? If so, there are a variety of ways to compute the average score of a bowling game, including methods that account for a bowler's skill level. In this thesis, we discuss several different ways to generate bowling scores randomly. For each distribution, we give results for the expected value and standard deviation of each frame's score, the expected value of the game's final score, and the correlation coefficient between the scores of the first and second roll of a single frame. Furthermore, we generalize the results for each distribution to a game with an arbitrary number of frames and pins. Additionally, we generalize the count of possible games when bowling any number of frames on any number of pins. Then, we derive the frequency distribution of each frame's score and the arithmetic mean for such generalized games. Finally, to summarize the variety of distributions, we present tables that display the results obtained from each distribution used to model a particular bowler's score. We evaluate the special case of bowling 10 frames on 10 pins, which represents a standard bowling game.
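As a toy illustration of the single-frame quantities mentioned above, assume the first roll is uniform on 0-10 pins and, if it is not a strike, the second roll is uniform on the pins left standing; the sketch enumerates that model exactly, ignoring strike and spare bonuses. This uniform model is an assumption for illustration, not one of the distributions developed in the thesis.

```python
# Toy model: first roll uniform on 0..10; if not a strike, second roll
# uniform on the pins left standing. Exact enumeration of single-frame
# statistics, with no strike/spare bonuses.
outcomes = []   # (probability, first_roll, second_roll)
for first in range(11):
    if first == 10:
        outcomes.append((1 / 11, 10, 0))      # strike: no second roll
    else:
        for second in range(11 - first):
            outcomes.append((1 / 11 * 1 / (11 - first), first, second))

mean_total = sum(p * (a + b) for p, a, b in outcomes)
mean_a = sum(p * a for p, a, b in outcomes)
mean_b = sum(p * b for p, a, b in outcomes)
var_a = sum(p * (a - mean_a) ** 2 for p, a, b in outcomes)
var_b = sum(p * (b - mean_b) ** 2 for p, a, b in outcomes)
cov = sum(p * (a - mean_a) * (b - mean_b) for p, a, b in outcomes)

print("expected frame pinfall:", round(mean_total, 4))
print("correlation(first, second):", round(cov / (var_a * var_b) ** 0.5, 4))
```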
370

Probabilistic Darwin Machines: A new approach to develop Evolutionary Object Detection Systems

Baró i Solé, Xavier 03 April 2009 (has links)
Ever since computers were invented, we have wondered whether they might perform some of the everyday tasks of people. One of the most studied and still least understood problems is the capacity to learn from our experiences and to generalize the knowledge we acquire. One of these tasks, performed unconsciously by people and attracting growing interest from scientific fields since the beginning, is what is known as pattern recognition. Building models of the world that surrounds us helps us recognize objects in our environment, predict situations, and identify behaviours. All this information allows us to adapt to and interact with our environment; indeed, the capacity of individuals to adapt to their environment has been related to the number of patterns they are able to identify. When we speak about pattern recognition in the field of Computer Vision, we refer to the ability to identify objects using the information contained in one or more images. Despite the progress of recent years, and the fact that we are now able to obtain "useful" results in real environments, we are still very far from having a system with the same capacity of abstraction and robustness as the human visual system.

In this thesis, the face detector of Viola & Jones is studied as the paradigmatic and most widespread approach to the object detection problem. First, we analyse how objects are described using comparisons of illumination values in adjacent zones of the images, and how this information is later organized to create more complex structures. As a result of this study, two weak points are identified in this family of methods: the first concerns the description of the objects, and the second is a limitation of the learning algorithm, which hampers the use of better descriptors. Describing objects using Haar-like features limits the extracted information to connected regions of the object. If we want to compare distant zones, large contiguous regions must be used, so the obtained values depend more on the average illumination of the object than on the regions we actually want to compare. In order to use this type of non-local information, we introduce the Dissociated Dipoles into the object detection scheme.

The problem with this type of descriptor is that the huge cardinality of the feature set makes the use of Adaboost as the learning algorithm unfeasible: during learning, an exhaustive search is made over the space of hypotheses, and since it is enormous, the time required for learning becomes prohibitive. Although we studied this phenomenon on the Viola & Jones approach, it is a general problem for most approaches, where the learning method limits the descriptors that can be used, and therefore the quality of the object description. To remove this limitation, we introduce evolutionary methods into the Adaboost algorithm and study the effects of this modification on the learning ability. Our experiments show that not only does it remain able to learn, but its convergence speed is not significantly affected.

This new Adaboost with evolutionary strategies opens the door to the use of feature sets with arbitrary cardinality, which allows us to investigate new ways of describing our objects, such as the use of Dissociated Dipoles. We first compare the learning ability of this evolutionary Adaboost using Haar-like features and Dissociated Dipoles, and conclude that both types of descriptors have a very similar representation power, although depending on the problem to which they are applied, one adapts somewhat better than the other. With the aim of obtaining a descriptor that shares the strong points of both Haar-like features and Dissociated Dipoles, we propose a new type of feature, the Weighted Dissociated Dipoles, which combines the robustness of the structure detectors present in Haar-like features with the Dissociated Dipoles' ability to use non-local information. In the experiments we carried out, this new feature set obtains better results than Haar-like features and Dissociated Dipoles in every problem on which they are compared.

To validate the reliability of the different methods and compare them, we use a set of public databases covering face detection, text detection, pedestrian detection, and car detection. In addition, the methods are tested on a traffic sign detection problem, using a larger database containing both road and urban scenes.
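A rough sketch of the feature types discussed above is given below, using an integral image for constant-time box sums. The box coordinates and weights are arbitrary illustrations, and the weighted-dipole form shown is an assumed simplification rather than the thesis's exact parametrization.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0..y, 0..x] (inclusive)."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle, in O(1) using the integral image."""
    y0, x0, y1, x1 = top, left, top + height, left + width
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

# Illustrative feature responses on a 24x24 patch (coordinates arbitrary).
img = np.random.default_rng(0).random((24, 24))
ii = integral_image(img)

# Two-rectangle Haar-like feature: adjacent boxes, difference of sums.
haar = box_sum(ii, 4, 4, 8, 6) - box_sum(ii, 4, 10, 8, 6)

# Dissociated dipole: the two boxes need not be adjacent (non-local).
dipole = box_sum(ii, 2, 2, 4, 4) - box_sum(ii, 18, 18, 4, 4)

# Weighted dissociated dipole: each pole carries its own weight.
weighted = 0.7 * box_sum(ii, 2, 2, 4, 4) - 1.3 * box_sum(ii, 18, 18, 4, 4)

print(haar, dipole, weighted)
```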
