141

An algorithm for multi-objective assignment problem.

January 2005 (has links)
Tse Hok Man. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. / Includes bibliographical references (leaves 68-69). / Abstracts in English and Chinese.

Table of contents:
- Abstract (p.i)
- Acknowledgement (p.iii)
- Chapter 1: Introduction (p.1)
- Chapter 2: Background Study (p.4)
  - 2.1 Channel Assignment in Multicarrier CDMA Systems (p.4)
    - 2.1.1 Channel Throughput (p.5)
    - 2.1.2 Greedy Approach to Channel Assignment (p.6)
  - 2.2 Generalised Assignment Problem (p.7)
    - 2.2.1 Branch and Bound Approach for GAP (p.8)
    - 2.2.2 Genetic Algorithm for GAP (p.10)
  - 2.3 Negative Cycle Detection (p.11)
    - 2.3.1 Labeling Method (p.11)
    - 2.3.2 Bellman-Ford-Moore Algorithm (p.13)
    - 2.3.3 Amortized Search (p.14)
- Chapter 3: Multi-objective Assignment Problem (p.15)
  - 3.1 Multi-objective Assignment Problem (p.16)
  - 3.2 NP-Hardness (p.18)
  - 3.3 Transformation of the Multi-objective Assignment Problem (p.19)
  - 3.4 Algorithm (p.23)
  - 3.5 Example (p.25)
  - 3.6 A Special Case: Linear Objective Function (p.32)
  - 3.7 Performance on the Assignment Problem (p.33)
- Chapter 4: Goal Programming Model for Channel Assignment Problem (p.35)
  - 4.1 Motivation (p.35)
  - 4.2 System Model (p.36)
  - 4.3 Goal Programming Model for Channel Assignment Problem (p.38)
  - 4.4 Simulation (p.39)
    - 4.4.1 Throughput Optimization (p.40)
    - 4.4.2 Best-First-Assign Algorithm (p.41)
    - 4.4.3 Channel Swapping Algorithm (p.41)
    - 4.4.4 Lower Bound (p.43)
    - 4.4.5 Result (p.43)
  - 4.5 Future Works (p.50)
- Chapter 5: Extended Application on the General Problem (p.51)
  - 5.1 Latency Minimization (p.52)
  - 5.2 Generalised Assignment Problem (p.53)
  - 5.3 Quadratic Assignment Problem (p.60)
- Chapter 6: Conclusion (p.65)
- Bibliography (p.67)
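The background chapter lists the Bellman-Ford-Moore algorithm (Section 2.3.2) among the negative-cycle-detection methods the thesis builds on. For orientation only, here is a minimal Python sketch of that textbook subroutine, not the thesis's own algorithm; the function name and the edge-list representation are assumptions of the example.

```python
def find_negative_cycle(n, edges):
    """Bellman-Ford-Moore over nodes 0..n-1 with edges as (u, v, weight).
    Starting every distance at 0 simulates a virtual source connected to
    all nodes, so any negative cycle in the graph is detectable."""
    dist = [0.0] * n
    pred = [-1] * n
    last_relaxed = -1
    for _ in range(n):
        last_relaxed = -1
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                last_relaxed = v
    if last_relaxed == -1:
        return None                      # no negative cycle exists
    x = last_relaxed
    for _ in range(n):                   # walk back until x lies on the cycle
        x = pred[x]
    cycle, v = [x], pred[x]
    while v != x:
        cycle.append(v)
        v = pred[v]
    return cycle[::-1]
```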
142

Detection and analysis of the impact of code smells in mobile applications

Hecht, Geoffrey 30 November 2016 (has links)
Mobile applications are becoming complex software systems that must be developed quickly and evolve continuously to fit new user requirements and execution contexts. However, addressing these constraints may result in poor low-level design choices, known as code smells. The presence of code smells within software systems may incidentally degrade their quality and performance, and hinder their maintenance and evolution. It is therefore important not only to know these smells but also to detect and correct them. While code smells are well known in object-oriented applications, their study in mobile applications is still in its infancy, and tools to detect and correct them are missing or immature. We therefore present a classification of 17 code smells that may appear in Android applications, as well as a tool to detect and correct code smells on Android. We apply and validate our approach on a large set of applications (over 3,000) in two studies evaluating the presence and evolution of the number of code smells in popular applications. In addition, we present two approaches to assess the impact of correcting code smells on performance and energy consumption. These approaches allowed us to observe that correcting code smells is beneficial in most cases.
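For a flavor of what a detection rule can look like, the toy Python sketch below flags the classic object-oriented "Long Method" smell by counting the lines of each method body. It is a hypothetical illustration under stated assumptions, not the tool described in this thesis: real detectors work on an AST or on bytecode rather than on regular expressions, and the threshold is arbitrary.

```python
import re

# Crude method-header matcher; it will also match some class declarations,
# which is acceptable for a toy illustration.
METHOD_RE = re.compile(r'\b(?:public|private|protected)\b[^;{]*\{')
LONG_METHOD_THRESHOLD = 40  # assumed threshold, chosen for illustration

def find_long_methods(java_source: str) -> list[tuple[int, int]]:
    """Return (start_line, length) pairs for suspiciously long methods."""
    lines = java_source.splitlines()
    smells = []
    for i, line in enumerate(lines):
        if METHOD_RE.search(line):
            depth, length = 0, 0
            for body_line in lines[i:]:          # count until braces balance
                depth += body_line.count('{') - body_line.count('}')
                length += 1
                if depth == 0:
                    break
            if length > LONG_METHOD_THRESHOLD:
                smells.append((i + 1, length))
    return smells
```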
143

Contracts and fundamental rights: a Franco-Québécois perspective

Torres-Ceyte, Jérémie 31 March 2016 (has links)
The meeting of contract law with fundamental rights is at the center of numerous contemporary legal debates, notably with regard to the place of religion in society, the commodification of the human body, and respect for human dignity. This encounter has prompted reflection from many jurists, and the aim of this study is to contribute modestly to it through a comparison of French and Québec law. The requirement that contracts respect fundamental rights is advancing in both systems. First, because fundamental-rights instruments play an ever larger role in our laws, their authority in contractual matters is becoming inescapable. Moreover, the authority of fundamental rights does not exhaust their effects in this field: they radiate into contracts, because from rereading to rewriting, French and Québec contract law are increasingly permeated by the requirement to respect fundamental rights. In both France and Québec, however, this progression is met by the need to allow fundamental rights their social expression. Power over fundamental rights is thus being asserted within contracts; from medical contracts to employment contracts, it has become unavoidable for their exercise. Yet the danger inherent in such power over fundamental rights calls for reflection on the limits that can be assigned to it, both in view of respect for the dignity of the person and in view of its legitimacy.
144

Parallel concatenation of regular LDGM codes

Chai, Huiqiong. January 2007 (has links)
Thesis (M.S.)--University of Delaware, 2006. / Principal faculty advisor: Javier Garcia-Frias, Dept. of Electrical and Computer Engineering. Includes bibliographical references.
145

Probabilistic Proof-carrying Code

Sharkey, Michael Ian 17 April 2012 (has links)
Proof-carrying code is an application of software verification techniques to the problem of ensuring the safety of mobile code. However, previous proof-carrying code systems have assumed that mobile code will faithfully execute the instructions of the program. Realistic implementations of computing systems are susceptible to probabilistic behaviours that can alter the execution of a program in ways that can result in corruption or security breaches. We investigate the use of a probabilistic bytecode language to model deterministic programs that are executed on probabilistic computing systems. To model probabilistic safety properties, a probabilistic logic is adapted to our bytecode instruction language, and soundness is proven. A sketch of a completeness proof of the logic is also shown.
146

Code optimizations for narrow bitwidth architectures

Bhagat, Indu 23 February 2012 (has links)
This thesis is motivated by the inherent computational inefficiency of current processors: although many contemporary applications have narrow bitwidth requirements (integer, network, and multimedia applications), the hardware ends up exercising the full datapath, using more resources than necessary and consuming more energy. The thesis takes a HW/SW collaborative approach to tackle this problem in a holistic manner. The hardware is redesigned by restraining the datapath to merely 16-bit datawidth (integer datapath only) to provide an extremely simple, low-cost, low-complexity execution core which is best at executing the most common case efficiently. This redesign, referred to as the Narrow Bitwidth Architecture, is unique in that although the datapath is squeezed to 16 bits, it continues to offer the higher memory addressability of contemporary wider-datapath architectures. Its interface to the outside (software) world is termed the Narrow ISA. The software is responsible for efficiently mapping the current stack of 64-bit applications onto the 16-bit hardware. However, this HW/SW approach introduces a non-negligible penalty in both dynamic code size and performance, even with a reasonably smart code translator that maps the 64-bit applications onto the 16-bit processor.

The goal of this thesis is to design a software layer that harnesses the power of compiler optimizations to assuage this performance penalty of the Narrow ISA. More specifically, the thesis focuses on compiler optimizations targeting the problem of how to compile a 64-bit program to a 16-bit datapath machine from the perspective of Minimum Required Computations (MRC). Given a program, the notion of MRC aims to infer how much computation is really required to generate the same (correct) output as the original program. Approaching perfect MRC is an intrinsically ambitious goal, as it requires oracle predictions of program behavior. Towards this end, the thesis proposes three heuristic-based optimizations to closely infer the MRC. The perspective of MRC unfolds into a definition of productiveness: if a computation does not alter the storage location, it is non-productive and hence need not be performed. In this research, the definition of productiveness has been applied at different granularities of the data flow as well as the control flow of programs.

Three profile-based code optimization techniques are proposed: 1. Global Productiveness Propagation (GPP), which applies the concept of productiveness at the granularity of a function. 2. Local Productiveness Pruning (LPP), which applies the same concept at the much finer granularity of a single instruction. 3. Minimal Branch Computation (MBC), a profile-based code-reordering optimization technique which applies the principles of MRC to conditional branches. The primary aim of all these techniques is to reduce the dynamic code footprint of the Narrow ISA. The first two optimizations (GPP and LPP) speculatively prune the non-productive (useless) computations using profiles. Further, these two techniques perform a backward traversal of the optimization regions to embed checks into the non-speculative slices, making them self-sufficient to detect mis-speculation dynamically. The MBC optimization is a use case of the broader concept of a lazy computation model. The idea behind MBC is to reorder the backslices containing narrow computations such that the minimal computations necessary to generate the same (correct) output are performed in the most frequent case; the rest of the computations are performed only when necessary.

With the proposed optimizations, it can be concluded that there do exist ways to smartly compile a 64-bit application to a 16-bit ISA such that the overheads are considerably reduced.
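The definition of productiveness has a compact operational reading: a store that rewrites a location with the value it already holds does no useful work and can be skipped. The hedged Python sketch below prunes such non-productive stores from a recorded trace; it is an after-the-fact illustration with invented names, not the GPP/LPP implementation, which must prune speculatively from profiles and embed runtime checks to catch mis-speculation.

```python
def prune_non_productive(stores):
    """Keep only the stores that actually change memory contents.
    `stores` is a trace of (address, value) pairs, an assumed format."""
    memory = {}
    productive = []
    for addr, value in stores:
        if memory.get(addr) != value:      # a silent store is non-productive
            memory[addr] = value
            productive.append((addr, value))
    return productive

trace = [(0x10, 5), (0x10, 5), (0x20, 7), (0x10, 5), (0x10, 6)]
print(prune_non_productive(trace))         # [(16, 5), (32, 7), (16, 6)]
```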
147

Revealing code : what can language teach software?

Hodges, Steve 13 April 2004 (has links)
In the last twenty years, computer code has emerged from obscure beginnings to occupy a rather prominent place in our culture. We can see evidence of code's cultural presence in our everyday conversation, in the way we interact with computers and networks, and in many current advertisements. Code also occupies an important place in the study of new media; some in that field have gone so far as to call code "the language of our time." My thesis aims to comprehend the dimensions of this important relationship. I adopt an interdisciplinary approach to this comparison between language and code, using theory and examples from structuralist linguistics, information theory, computer programming, and literature. A group of experimental French authors called the Oulipo provides many excellent examples for our comparison of language and code. The science of cryptography provides another conceptual bridge between the two areas. The comparison will lead to an examination of some current efforts to engage in a criticism of software and will suggest additional future challenges.
148

Circuit Design of Maximum a Posteriori Algorithm for Turbo Code Decoder

Kao, Chih-wei 30 July 2010 (has links)
none
149

Code design for erasure channels with limited or noisy feedback

Nagasubramanian, Karthik 15 May 2009 (has links)
The availability of feedback in communication channels can significantly increase the reliability of transmission while decreasing encoding and decoding complexity. Most applications, such as cellular telephony, satellite communications and the internet, involve two-way transmission, so it is important to devise coding schemes which exploit the advantages of feedback. Most results on code designs that make use of feedback concentrate on noiseless and instantaneous feedback, but in real systems the feedback is usually noisy and is available at the transmitter only after some delay. Hence, it is important to characterize the gains obtained in this case over those of one-way channels. We consider binary erasure channels to keep the problem tractable. For erasure channels with noisy feedback, we have designed and analyzed a concatenated coding scheme which achieves a lower probability of error than any forward error correcting code of the same rate; even noisy feedback is thus shown to be useful in increasing the reliability of the channel. We have also designed and analyzed a coding scheme using Low Density Parity Check (LDPC) codes with a selective retransmission strategy, which utilizes the limited (but noiseless), delayed feedback to achieve low frame error rates even with small blocklengths, at rates close to capacity. Furthermore, our scheme provides a way to trade off feedback bandwidth for reliability. The complexity of this scheme is lower than that of a forward error correcting code (FEC) of the same blocklength and comparable performance. We have shown that our scheme performs better than the Automatic Repeat Request (ARQ) protocol, which uses 1 bit of feedback to signal retransmissions. For fair comparison, we have also accounted for the rate loss due to the bits which are fed back in addition to the retransmitted bits. Thus, for two-way communications with complexity and delay constraints, it is better to utilize the availability of feedback than to use FEC alone.
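To make the benefit of feedback concrete, the toy Python simulation below, a sketch under assumed parameters rather than anything from this thesis, retransmits each erased bit of a binary erasure channel on a per-bit NACK. With perfect feedback, plain retransmission already achieves the BEC capacity 1 - p, using about n/(1 - p) channel uses for n bits; the regime studied here is the harder one where feedback is noisy, limited, and delayed.

```python
import random

def send_with_arq(bits, erasure_prob, max_rounds=50, seed=0):
    """Deliver bits over a binary erasure channel, retransmitting every
    erased position until it gets through (idealized noiseless feedback)."""
    rng = random.Random(seed)
    received = [None] * len(bits)
    channel_uses = 0
    for _ in range(max_rounds):
        pending = [i for i, b in enumerate(received) if b is None]
        if not pending:
            break
        for i in pending:
            channel_uses += 1
            if rng.random() > erasure_prob:   # this use survives the channel
                received[i] = bits[i]
    return received, channel_uses

bits = [1, 0, 1, 1, 0, 1, 0, 0] * 4
recovered, uses = send_with_arq(bits, erasure_prob=0.3)
assert recovered == bits
print(f"{len(bits)} bits delivered in {uses} channel uses")  # ~ n / (1 - p)
```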
150

A Dynamic Bandwidth Borrowing Algorithm for QoS Support in OVSF-CDMA System

Wu, Peng-Long 26 August 2003 (has links)
Orthogonal variable spreading factor (OVSF) codes are used in the WCDMA system to provide variable service data rates. However, most research focuses on decreasing the number of code reassignments without considering how to manage bandwidth using the properties of OVSF codes. In this research, we propose a dynamic bandwidth borrowing algorithm for quality-of-service (QoS) support in the OVSF-CDMA system. When a new call arrives, or a currently serviced call requests a higher data rate, and the system cannot provide the required bandwidth, the borrowing algorithm is activated to borrow bandwidth from currently serviced calls. A dynamic bandwidth reservation algorithm is also proposed to avoid forced terminations caused by sudden increases in the bandwidth requirements of currently serviced calls. Simulation results show that throughput increases while the QoS of currently serviced calls is maintained.
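The difficulty the borrowing algorithm addresses comes from the structure of the OVSF code tree: codes on the same root-to-leaf path are not mutually orthogonal, so assigning one code blocks all of its ancestors and descendants. The minimal Python sketch below models only that blocking rule (the heap-indexed tree and class name are assumptions of the example), not the proposed borrowing algorithm; borrowing enters precisely when `can_assign` fails for every code of the requested rate.

```python
class OVSFTree:
    """OVSF code tree as a heap-indexed binary tree: node 1 is the root
    (spreading factor 1); the children of node k are 2k and 2k+1."""

    def __init__(self, depth: int):
        self.size = 2 ** (depth + 1)   # one past the largest valid index
        self.assigned = set()

    def _ancestors(self, idx):
        idx //= 2
        while idx >= 1:
            yield idx
            idx //= 2

    def _descendants(self, idx):
        frontier = [2 * idx, 2 * idx + 1]
        while frontier:
            node = frontier.pop()
            if node < self.size:
                yield node
                frontier += [2 * node, 2 * node + 1]

    def can_assign(self, idx) -> bool:
        """A code is assignable iff neither it, an ancestor, nor a
        descendant is already in use (the orthogonality constraint)."""
        if idx in self.assigned:
            return False
        return not any(a in self.assigned for a in self._ancestors(idx)) \
           and not any(d in self.assigned for d in self._descendants(idx))

    def assign(self, idx) -> bool:
        if self.can_assign(idx):
            self.assigned.add(idx)
            return True
        return False

tree = OVSFTree(depth=3)         # spreading factors up to 8
assert tree.assign(4)            # take an SF-4 code
assert not tree.can_assign(2)    # its ancestor is now blocked...
assert not tree.can_assign(9)    # ...and so are its descendants
assert tree.can_assign(5)        # the sibling subtree remains free
```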
