421

A Branch And Bound Algorithm For Resource Leveling Problem

Mutlu, Mustafa Cagdas 01 August 2010 (has links) (PDF)
The Resource Leveling Problem (RLP) aims to minimize undesired fluctuations in resource distribution curves, which cause several practical problems. Many studies conclude that commercial project management software packages cannot deal effectively with the RLP. In this study, a branch-and-bound algorithm is presented for solving the RLP for single- and multi-resource, small-size networks. The algorithm adopts a depth-first strategy and stores the start times of non-critical activities in the nodes of the search tree. Optimal resource distributions for four different resource leveling metrics can be obtained via the developed procedure. To prune more of the search tree and thereby reduce computation time, several lower-bound calculation methods are employed. Experimental results on 20 problems showed that the suggested algorithm can successfully locate optimal solutions for networks with up to 20 activities. The algorithm presented in this study contributes to the literature in two respects. First, the new lower-bound improvement method (the maximum allowable daily resources method) introduced in this study reduces the computation time required to reach the optimal solution of the RLP. Second, optimal solutions of several small-sized problems have been obtained by the algorithm for some traditional and recently suggested leveling metrics. Among these metrics, Resource Idle Day (RID) has been utilized in an exact method for the first time. All these solutions may form a basis for performance evaluation of heuristic and metaheuristic procedures for the RLP. Limitations of the developed branch-and-bound procedure are discussed and possible further improvements are suggested.
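For readers unfamiliar with the approach, here is a minimal Python sketch of the branching scheme the abstract describes: a depth-first enumeration of start times of non-critical activities, pruned with a simple lower bound. The network data, the sum-of-squares leveling metric, and the Cauchy-Schwarz-style bound are illustrative stand-ins, not the thesis's instances or its maximum-allowable-daily-resources bound; precedence is encoded implicitly through the earliest/latest start windows.

```python
# Toy network: each activity has (duration, daily_resource, earliest_start, latest_start).
# Critical activities have earliest_start == latest_start; the branch-and-bound
# enumerates start times of the non-critical ones only.
ACTIVITIES = {
    "A": (3, 4, 0, 0),   # critical
    "B": (2, 3, 0, 2),   # non-critical: 2 days of float
    "C": (4, 2, 3, 3),   # critical
    "D": (1, 5, 3, 6),   # non-critical: 3 days of float
}
HORIZON = 7  # project duration in days

def profile(starts):
    """Daily resource usage for a dict of start times."""
    usage = [0] * HORIZON
    for name, (dur, res, _, _) in ACTIVITIES.items():
        for day in range(starts[name], starts[name] + dur):
            usage[day] += res
    return usage

def sum_of_squares(usage):
    return sum(u * u for u in usage)

def solve():
    names = list(ACTIVITIES)
    best = [float("inf"), None]

    def branch(i, starts):
        if i == len(names):  # leaf: every activity scheduled
            metric = sum_of_squares(profile(starts))
            if metric < best[0]:
                best[:] = [metric, dict(starts)]
            return
        name = names[i]
        _, _, es, ls = ACTIVITIES[name]
        for s in range(es, ls + 1):          # depth-first over feasible starts
            starts[name] = s
            # Lower bound: by Cauchy-Schwarz, a perfectly level spread of the
            # resource-days committed so far cannot be beaten; prune otherwise.
            committed = sum(ACTIVITIES[n][0] * ACTIVITIES[n][1] for n in names[: i + 1])
            if committed * committed / HORIZON < best[0]:
                branch(i + 1, starts)
            del starts[name]

    branch(0, {})
    return best

print(solve())
```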
422

Development and advanced characterization of novel chemically amplified resists for next generation lithography

Lee, Cheng-Tsung 19 September 2008 (has links)
The microelectronics industry has made remarkable progress with the development of integrated circuit (IC) technology, which depends on advances in micro-fabrication and integration techniques. On one hand, next-generation lithography (NGL) technologies that utilize extreme ultraviolet (EUV) and state-of-the-art 193 nm immersion and double-patterning lithography have emerged as promising candidates to meet the resolution requirements of the microelectronics industry roadmap. On the other hand, the development and advanced characterization of novel resist materials with the required critical imaging properties, such as high resolution, high sensitivity, and low line edge roughness (LER), is also indispensable. In a conventional multi-component chemically amplified resist (CAR) system, the inherent incompatibility between the small-molecule photoacid generator (PAG) and the bulky polymer resin can lead to PAG phase separation, PAG aggregation, non-uniform PAG and acid distribution, as well as uncontrolled acid migration in the resist film during the post-exposure bake (PEB) process. These problems ultimately create a three-way tradeoff among the desired lithographic characteristics of resolution, sensitivity, and LER. Novel resist materials which can relieve this constraint are essential and have become one of the most challenging issues for the implementation of NGL technologies. This thesis work focuses on the development and characterization of novel resist materials for NGL technologies. In the first part of the thesis work, advanced characterization techniques for studying resist fundamental properties and lithographic performance are developed and demonstrated. These techniques provide efficient and precise evaluations of PAG acid generation, acid diffusivity, and the intrinsic resolution and LER of resist materials. The applicability of these techniques to the study of resist structure-function relationships is also evaluated and discussed. In the second part of the thesis work, the advanced characterization and development of a novel resist system, the polymer-bound-PAG resist, are reported. The advantages of direct incorporation of the PAG functionality into the resist polymer main chain are investigated and illustrated through both experimental and modeling studies. The structure-function relationships between the fundamental properties of polymer-bound-PAG resists and their lithographic performance are also investigated. Recommendations for substantial future work on characterizing and improving resist lithographic performance are discussed at the end of this thesis.
423

Turing machine algorithms and studies in quasi-randomness

Kalyanasundaram, Subrahmanyam 09 November 2011 (has links)
Randomness is an invaluable resource in theoretical computer science. However, pure random bits are hard to obtain. Quasi-randomness is a tool that has been widely used in eliminating or reducing the randomness of randomized algorithms. In this thesis, we study some aspects of quasi-randomness in graphs. Specifically, we provide an algorithm and a lower bound for two different kinds of regularity lemmas. Our algorithm for FK-regularity is derived using a spectral characterization of quasi-randomness. We use a similar spectral connection to answer an open question about quasi-random tournaments. We then provide a "Wowzer"-type lower bound (for the number of parts required) for the strong regularity lemma. Finally, we study the derandomization of complexity classes using Turing machine simulations. 1. Connections between quasi-randomness and graph spectra. Quasi-random (or pseudo-random) objects are deterministic objects that behave almost like truly random objects. These objects have been widely studied in various settings (graphs, hypergraphs, directed graphs, set systems, etc.). In many cases, quasi-randomness is very closely related to the spectral properties of the combinatorial object under study. In this thesis, we discover spectral characterizations of quasi-randomness in two different cases and use them to solve open problems. A Deterministic Algorithm for Frieze-Kannan Regularity: The Frieze-Kannan regularity lemma asserts that any given graph of large enough size can be partitioned into a number of parts such that, across parts, the graph is quasi-random. It was unknown whether a partition satisfying the conditions of the Frieze-Kannan regularity lemma could be produced in deterministic sub-cubic time. In this thesis, we answer this question by designing an O(n^ω) time algorithm for constructing such a partition, where ω is the exponent of fast matrix multiplication. Even Cycles and Quasi-Random Tournaments: Chung and Graham provided several equivalent characterizations of quasi-randomness in tournaments. One of them concerns the number of "even" cycles, where even is defined in the following sense: a cycle is said to be even if, when walking along it, an even number of edges point in the wrong direction. Chung and Graham showed that if close to half of the 4-cycles in a tournament T are even, then T is quasi-random. They asked whether the same statement is true if, instead of 4-cycles, we consider k-cycles for an even integer k. We resolve this open question by showing that for every fixed even integer k ≥ 4, if close to half of the k-cycles in a tournament T are even, then T must be quasi-random. 2. A Wowzer-type lower bound for the strong regularity lemma. The regularity lemma of Szemerédi asserts that one can partition every graph into a bounded number of quasi-random bipartite graphs. Alon, Fischer, Krivelevich and Szegedy obtained a variant of the regularity lemma that allows arbitrary control on this measure of quasi-randomness. However, their proof only guaranteed a partition whose number of parts is given by the Wowzer function, the iterated version of the Tower function. We show here that a bound of this type is unavoidable by constructing a graph H with the property that, even if one wants only very mild control on the quasi-randomness of a regular partition, any such partition of H must have a number of parts given by a Wowzer-type function.
3. How fast can we deterministically simulate nondeterminism? We study an approach towards derandomizing complexity classes using Turing machine simulations. We look at the problem of deterministically counting the exact number of accepting computation paths of a given nondeterministic Turing machine. We provide a deterministic algorithm which runs in time roughly O(√S), where S is the size of the configuration graph. The best previously known methods required time linear in S. Our result implies a simulation of probabilistic time classes like PP, BPP and BQP in the same running time. This is an improvement over the currently best known simulation by van Melkebeek and Santhanam.
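To make the even-cycle definition concrete, the following Python sketch (illustrative, not from the thesis) counts the fraction of even 4-cycles in a random tournament; for a quasi-random tournament this fraction is close to 1/2. Each 4-cycle is treated as a vertex sequence up to rotation; for even k, traversal direction does not affect the parity.

```python
import random
from itertools import combinations, permutations

def random_tournament(n, seed=0):
    """adj[u][v] = True iff the edge between u and v points u -> v."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < 0.5:
            adj[u][v] = True
        else:
            adj[v][u] = True
    return adj

def even_cycle_fraction(adj, k=4):
    """Fraction of k-cycles traversed with an even number of backward edges."""
    even = total = 0
    for verts in combinations(range(len(adj)), k):
        for perm in permutations(verts):
            if perm[0] != min(perm):   # fix the rotation of each sequence
                continue
            backward = sum(not adj[perm[i]][perm[(i + 1) % k]] for i in range(k))
            total += 1
            even += (backward % 2 == 0)
    return even / total

adj = random_tournament(12)
print(f"even 4-cycle fraction: {even_cycle_fraction(adj):.3f}")  # close to 0.5
```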
424

Asymmetric Catalysis: Ligand Design and Conformational Studies

Hallman, Kristina January 2001 (has links)
This thesis deals with the design of ligands for efficient asymmetric catalysis and studies of the conformation of the ligands in the catalytically active complexes. All ligands developed contain chiral oxazoline heterocycles.

The conformations of hydroxy- and methoxy-substituted pyridinooxazolines and bis(oxazolines) during Pd-catalysed allylic alkylations were investigated using crystallography, 2D-NMR techniques and DFT calculations. A stabilising OH-Pd interaction was discovered which might explain the difference in reactivity between the hydroxy- and methoxy-containing ligands. The conformational change in the ligands due to this interaction may explain the different selectivities observed in the catalytic reaction.

Polymer-bound pyridinooxazolines and bis(oxazolines) were synthesised and employed in Pd-catalysed allylic alkylations with results similar to those of monomeric analogues; enantioselectivities up to 95% were obtained. One polymer-bound ligand could be re-used several times after removal of Pd(0). The polymer-bound bis(oxazoline) was also used in Zn-catalysed Diels-Alder reactions, but the heterogenised catalyst gave lower selectivities than a monomeric analogue.

A series of chiral dendron-containing pyridinooxazolines and bis(oxazolines) were synthesised and evaluated in Pd-catalysed allylic alkylations. The dendrons did not seem to have any influence on the selectivity and little influence on the yield when introduced in the pyridinooxazoline ligands. In the bis(oxazoline) series, lower-generation dendrimers had a positive effect on the selectivity, but the selectivity and the activity decreased with increasing generation.

Crown ether-containing ligands were investigated in palladium-catalysed alkylations. No evidence of a possible interaction between the metal in the crown ether and the nucleophile was discovered.

A new type of catalyst, an oxazoline-containing palladacycle, was found to be very active in oxidations of secondary alcohols to the corresponding aldehydes or ketones. The reactions were performed with air as the re-oxidant; therefore, this is an environmentally friendly oxidation method.

Keywords: asymmetric catalysis, chiral ligand, oxazolines, conformational study, allylic substitution, polymer-bound ligands, dendritic ligands, crown ether, oxidations, palladacycle.
425

Dynamic scheduling in the agri-food industries

Tangour, Fatma 12 July 2007 (has links) (PDF)
Our work concerns the solution of optimization problems in production shop scheduling, and more particularly those related to dynamic scheduling in the agri-food industries.

The constraints and criteria considered are specific to this type of industry, which has certain particular features owing to the nature of the products handled and manufactured, whose shelf lives are rather short. The constraints also cover compliance with the use-by dates of the primary components involved in the operations, of the semi-finished products and of the finished products. The criteria retained are likewise tied to these particular features: the cost of perished products, the cost of the distribution discount, and the completion date of the schedule, the makespan. One exact method and two approximate methods were selected and successfully implemented for single-machine problems.

The exact method, branch and bound, is applied to minimize the total cost function. Genetic algorithms, equipped with a new encoding and hybridized with a Pareto-optimal approach, are proposed to search for the optimal solution and to help the decision-maker reach a decision. Ant colony optimization, the second approximate method, is a stochastic process which, despite the difficulty of tuning the corresponding algorithm's parameters, allowed us to construct solutions by adding components to partial solutions.
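As a toy illustration of the cost structure described above (the instance data, field names, and cost model are illustrative assumptions, not the thesis's), the following Python sketch evaluates the perished-product cost and makespan of single-machine job sequences; exhaustive search stands in for the exact branch and bound at this size.

```python
from itertools import permutations

# Each job has a processing time and an expiry date for its perishable
# component; finishing after the expiry incurs a perished-product cost.
JOBS = {
    "J1": {"proc": 3, "expiry": 5, "perish_cost": 10},
    "J2": {"proc": 2, "expiry": 4, "perish_cost": 7},
    "J3": {"proc": 4, "expiry": 9, "perish_cost": 12},
}

def evaluate(order):
    """Return (total perished cost, makespan) for a job sequence."""
    t = cost = 0
    for j in order:
        t += JOBS[j]["proc"]
        if t > JOBS[j]["expiry"]:   # component expired before completion
            cost += JOBS[j]["perish_cost"]
    return cost, t

# On a single machine every sequence has the same makespan, so the
# perished-product cost is what discriminates between sequences here.
best = min(permutations(JOBS), key=evaluate)
print(best, evaluate(best))
```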
426

Performance and Implementation Aspects of Nonlinear Filtering

Hendeby, Gustaf January 2008 (has links)
In many situations it is important to extract as much and as good information as possible from the available measurements. Extracting information about, for example, the position and velocity of an aircraft is called filtering. In this case the position and velocity are examples of states of the aircraft, which in turn is a system. A typical example of problems of this type is found in surveillance systems, but the same need is becoming increasingly common in ordinary consumer products such as mobile phones (which report where the phone is), navigation aids in cars, and the placement of experience-enhancing graphics in films and TV programs. A standard tool for extracting the needed information is nonlinear filtering. The methods are especially common in positioning, navigation, and target tracking applications. This thesis examines in depth several questions related to nonlinear filtering: * How does one evaluate how well a filter or a detector performs? * What distinguishes different methods, and what does that mean for their properties? * How should the computers used to extract the information be programmed?

The measure most often used to describe how well a filter performs is the RMSE (root mean square error), which is in principle a measure of how far from the true state the obtained estimate can be expected to lie on average. One advantage of using the RMSE as a measure is that it is bounded from below by the Cramér-Rao lower bound (CRLB). The thesis presents methods for determining the effect different noise distributions have on the CRLB. Noise refers to the disturbances and errors that always occur when one measures or tries to describe a behavior, and a noise distribution is a statistical description of how the noise behaves. The study of the CRLB leads to an analysis of intrinsic accuracy (IA), the inherent accuracy of the noise. For linear systems, straightforward results are obtained that can be used to determine whether the goals that have been set can be achieved or not. The same method can also be used to indicate whether nonlinear methods such as the particle filter can be expected to give better results than linear methods such as the Kalman filter. Corresponding IA-based methods can also be used to evaluate detection algorithms, which are used to discover faults or changes in a system.

Using the RMSE to evaluate filtering algorithms captures one aspect of the filtering result, but many other properties can be of interest. Simulations in the thesis show that even if two filtering methods give the same RMSE performance, the state distributions they produce can differ greatly depending on the noise affecting the studied system. These differences can be significant in some cases. As an alternative to the RMSE, the Kullback divergence, a statistical measure of how much two distributions differ, is therefore used here; it clearly exposes the shortcomings of relying solely on RMSE analyses.

Two filtering algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF) and the method known as the unscented Kalman filter (UKF). The analysis of the RBPF leads to a new way of presenting the algorithm that makes it easier to use in a computer program. The new presentation can also give a better understanding of how the algorithm works. In the study of the UKF, the focus is on the underlying so-called unscented transform, which is used to describe what happens to a noise distribution when it is transformed, for example by a measurement. The results consist of a number of simulation studies illustrating the behavior of the different methods, together with a comparison between the UT and the first- and second-order Gauss approximation formulas.

This thesis also describes a parallel implementation of a particle filter and an object-oriented framework for filtering in the C++ programming language. The particle filter has been implemented on a graphics card, an example of inexpensive hardware found in most modern computers, mostly used for computer games and therefore rarely used to its full potential. A parallel particle filter, i.e., a program that runs several parts of the particle filter simultaneously, opens up new applications where speed and good performance are important. The object-oriented filtering framework achieves the flexibility and performance needed for large-scale Monte Carlo simulations through modern software design. The framework can also make it easier to go from a prototype of a signal processing system to a finished product. / Nonlinear filtering is an important standard tool for information and sensor fusion applications, e.g., localization, navigation, and tracking. It is an essential component in surveillance systems and of increasing importance for standard consumer products, such as cellular phones with localization, car navigation systems, and augmented reality. This thesis addresses several issues related to nonlinear filtering, including performance analysis of filtering and detection, algorithm analysis, and various implementation details. The most commonly used measure of filtering performance is the root mean square error (RMSE), which is bounded from below by the Cramér-Rao lower bound (CRLB). This thesis presents a methodology to determine the effect different noise distributions have on the CRLB. This leads up to an analysis of the intrinsic accuracy (IA), the informativeness of a noise distribution. For linear systems the resulting expressions are direct and can be used to determine whether a problem is feasible or not, and to indicate the efficacy of nonlinear methods such as the particle filter (PF). A similar analysis is used for change detection performance analysis, which once again shows the importance of IA. A problem with the RMSE evaluation is that it captures only one aspect of the resulting estimate, while the distribution of the estimates can differ substantially. To address this, the Kullback divergence has been evaluated, demonstrating the shortcomings of pure RMSE evaluation. Two estimation algorithms have been analyzed in more detail: the Rao-Blackwellized particle filter (RBPF), by some authors referred to as the marginalized particle filter (MPF), and the unscented Kalman filter (UKF). The RBPF analysis leads to a new way of presenting the algorithm, thereby making it easier to implement. In addition, the presentation can give new intuition for the RBPF as a stochastic Kalman filter bank. In the analysis of the UKF the focus is on the unscented transform (UT). The results include several simulation studies and a comparison with the Gauss approximation of the first and second order in the limit case.
This thesis presents an implementation of a parallelized PF and outlines an object-oriented framework for filtering. The PF has been implemented on a graphics processing unit (GPU), i.e., a graphics card. The GPU is an inexpensive parallel computational resource available in most modern computers and is rarely used to its full potential. Being able to implement the PF in parallel makes possible new applications where speed and good performance are important. The object-oriented filtering framework provides the flexibility and performance needed for large-scale Monte Carlo simulations using modern software design methodology. It can also be used to help turn a prototype efficiently into a finished product.
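For readers unfamiliar with the method, a minimal bootstrap particle filter is sketched below, reporting the RMSE that the thesis argues captures only one aspect of performance. The model (the univariate nonlinear growth model common in the PF literature), noise levels, and multinomial resampling are illustrative assumptions, not the thesis's benchmark setup.

```python
import numpy as np

# Model: x[k+1] = 0.5*x[k] + 25*x[k]/(1+x[k]^2) + w,  y[k] = x[k]^2/20 + e
rng = np.random.default_rng(0)
T, N = 100, 1000        # time steps, particles
Q, R = 10.0, 1.0        # process and measurement noise variances

def f(x):
    return 0.5 * x + 25 * x / (1 + x**2)

# Simulate ground truth and measurements.
x = np.zeros(T)
for k in range(1, T):
    x[k] = f(x[k - 1]) + rng.normal(0, np.sqrt(Q))
y = x**2 / 20 + rng.normal(0, np.sqrt(R), T)

# Bootstrap PF: propagate, weight by measurement likelihood, resample.
particles = rng.normal(0, 2, N)
est = np.zeros(T)
for k in range(T):
    if k > 0:
        particles = f(particles) + rng.normal(0, np.sqrt(Q), N)
    w = np.exp(-0.5 * (y[k] - particles**2 / 20) ** 2 / R)
    w /= w.sum()
    est[k] = np.dot(w, particles)                 # weighted mean estimate
    particles = rng.choice(particles, N, p=w)     # multinomial resampling

print("RMSE:", np.sqrt(np.mean((est - x) ** 2)))
```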
427

On stability, transition and turbulence in three-dimensional boundary-layer flows

Hosseini, Seyed Mohammd January 2015 (has links)
A lot has changed since that day on December 17, 1903 when the Wright brothers made the first powered manned flight. Even though the concepts behind flying are unaltered, the appearance of state-of-the-art modern aircraft has undergone a massive evolution. This is mainly owed to our deeper understanding of how to harness and optimize the interaction between fluid flows and aircraft bodies. Flow passing over wings and different junctions on an aircraft exhibits numerous local features, for instance acceleration or deceleration, laminar or turbulent state, and interacting boundary layers. In our study we aim to characterize some of these flow features and their physical roles. Primarily, the stability characteristics of flow over a wing subject to a negative pressure gradient are studied; this is a common condition for flows over swept wings. Part of the current numerical study conforms to existing experimental studies in which a passive control mechanism has been tested to delay laminar-turbulent transition. The same flow type has also been considered to study the receptivity of three-dimensional boundary layers to freestream turbulence. The work entails investigation of the effects of low-level freestream turbulence on crossflow instability, as well as of its interaction with micron-sized surface roughness elements. Another common three-dimensional flow feature arises as a result of streamlines passing through a junction, the so-called corner flow. For instance, this flow can form in the junction between the wing and the fuselage of a plane. A series of direct numerical simulations using the linearized Navier-Stokes equations have been performed to determine the optimal initial perturbation. Optimal refers to perturbations which can gain the maximum energy from the flow over a period of time; in other words, this method seeks the worst-case scenario in terms of perturbation growth. Here, a power-iteration technique has been applied to the Navier-Stokes equations and their adjoint to determine the optimal initial perturbation. Recent advances in supercomputers have enabled advanced computational methods to contribute increasingly to the design of aircraft, in particular for turbulent flows with regions of separation. In this work we investigate the turbulent flow over an infinite wing at a moderate chord Reynolds number of Re = 400,000 using a well-resolved direct numerical simulation. A conventional NACA4412 profile has been chosen for this work. The turbulent flow is characterized using statistical analysis and by following time-history data in regions with interesting flow features. In the later part of this work, direct numerical simulation has been chosen as a tool to investigate mainly the effect of freestream turbulence on the laminar-turbulent transition mechanism of the flow around a turbine blade.
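The adjoint-based optimal-perturbation computation has a compact matrix analogue: if the linearized dynamics propagate an initial state q(0) to q(T) = A q(0), the optimal initial perturbation is the leading right singular vector of A, which power iteration on the forward-adjoint loop finds without ever forming the operator explicitly. A minimal sketch with a random stand-in matrix (the matrix and sizes are illustrative assumptions, not the flow operator used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for the propagator

q = rng.standard_normal(n)
q /= np.linalg.norm(q)
for _ in range(200):
    v = A @ q            # forward (linearized) solve
    q = A.conj().T @ v   # adjoint solve
    q /= np.linalg.norm(q)

# Maximum energy amplification equals the largest singular value of A.
growth = np.linalg.norm(A @ q)
print(growth, np.linalg.svd(A, compute_uv=False)[0])  # should agree
```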
428

Achievable rates for Gaussian Channels with multiple relays

Coso Sánchez, Aitor del 12 September 2008 (has links)
Multiple-input multiple-output (MIMO) channels have been extensively proposed as a means to overcome the random channel impairments of frequency-flat wireless communications. Based upon placing multiple antennas at both the transmitter and receiver sides of the communication, their virtues are twofold. On the one hand, they allow the transmitter: i) to concentrate the transmitted power onto a desired eigen-direction, or ii) to code across antennas to overcome unknown channel fading. On the other hand, they permit the receiver to sample the signal in the space domain. This operation, followed by the coherent combination of samples, increases the signal-to-noise ratio at the input of the detector. In sum, MIMO processing is able to provide large capacity (and reliability) gains in rich-scattering scenarios. Nevertheless, equipping wireless handsets with multiple antennas is not always possible or worthwhile, mainly due to size and cost constraints, respectively. For these cases, the most appropriate manner to exploit multi-antenna processing is by means of relaying, which consists of a set of wireless relay nodes assisting the communication between a set of single-antenna sources and a set of single-antenna destinations. With the aid of relays, indeed, MIMO channels can be mimicked in a distributed way. However, the exact channel capacity of single-antenna communications with relays (and how this scheme performs with respect to the equivalent MIMO channel) is a long-standing open problem, to which this thesis is devoted.
In particular, the present dissertation aims at studying the capacity of Gaussian channels when assisted by multiple parallel relays. Two relays are said to be parallel if there is no direct link between them, while both have direct links from the source and towards the destination. We focus on three well-known channels: the point-to-point channel, the multi-access channel and the broadcast channel, and study their performance improvement with relays. Throughout the dissertation, the following assumptions are made: i) full-duplex operation at the relays, ii) transmit and receive channel state information available at all network nodes, and iii) time-invariant, memoryless fading.
Firstly, we analyze the multiple-parallel relay channel, where a single source communicates with a single destination in the presence of N parallel relays. The capacity of the channel is lower-bounded by means of the rates achievable with different relaying protocols, i.e., decode-and-forward, partial decode-and-forward, compress-and-forward and linear relaying. Likewise, a capacity upper bound, derived using the max-flow-min-cut theorem, is provided for comparison. Finally, for the number of relays growing to infinity, the scaling laws of all achievable rates are presented, as well as that of the upper bound.
Next, the dissertation focuses on the multi-access channel (MAC) with multiple parallel relays, which consists of multiple users simultaneously communicating with a single destination in the presence of N parallel relay nodes. We bound the capacity region of the channel using, again, the max-flow-min-cut theorem and find achievable rate regions by means of decode-and-forward, linear relaying and compress-and-forward. In addition, we analyze the asymptotic performance of the obtained achievable sum-rates as the number of users grows without bound. Such a study allows us to grasp the impact of multi-user diversity on access networks with relays.
Finally, the dissertation considers the broadcast channel (BC) with multiple parallel relays, in which a single source communicates with multiple receivers in the presence of N parallel relays. For this channel, we derive achievable rate regions considering: i) dirty-paper encoding at the source, and ii) decode-and-forward, linear relaying and compress-and-forward, respectively, at the relays. Moreover, for linear relaying, we prove that MAC-BC duality holds; that is, the achievable rate region of the BC is equal to that of the MAC with a sum-power constraint. Using this result, the computation of the channel's weighted sum-rate with linear relaying is notably simplified. Likewise, convex resource allocation algorithms can be derived.
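To give a feel for how such achievable rates are computed, here is an illustrative Python sketch of a simplified decode-and-forward bound for a source aided by parallel relays with independent codebooks: the rate is limited by the weakest source-relay decoding link and by what the destination can collect from the source and relay transmissions. The expression and channel gains are textbook-style assumptions, not the exact rates derived in the dissertation.

```python
import math

def C(snr):
    """Gaussian channel capacity in bits per channel use."""
    return 0.5 * math.log2(1 + snr)

def df_rate_parallel(g_sd, g_sr, g_rd, P_s, P_r):
    """Simplified decode-and-forward rate with parallel relays and
    independent codebooks (unit noise power at every receiver)."""
    decoding = min(C(g * P_s) for g in g_sr)                 # every relay decodes
    combining = C(g_sd * P_s + sum(g * P_r for g in g_rd))   # destination combines
    return min(decoding, combining)

# Two parallel relays with illustrative channel power gains.
print(df_rate_parallel(g_sd=0.2, g_sr=[1.0, 0.8], g_rd=[0.9, 0.7], P_s=10, P_r=5))
```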
429

Multi-period optimization of pavement management systems

Yoo, Jaewook 30 September 2004 (has links)
The purpose of this research is to develop a model and solution methodology for selecting and scheduling timely and cost-effective maintenance, rehabilitation, and reconstruction (M&R) activities for each pavement section in a highway network and allocating funding levels through a finite multi-period horizon, within the constraints imposed by budget availability in each period, frequency availability of activities, and specified minimum pavement quality requirements. M&R is defined as a chronological sequence of reconstruction, rehabilitation, and major/minor maintenance, including a "do nothing" activity. A procedure is developed for selecting an M&R activity for each pavement section in each period of a specified extended planning horizon. Each activity in the sequence consumes a known amount of capital and generates a known amount of effectiveness, measured in pavement quality. The effectiveness of an activity is the expected value of the overall gains in pavement quality rating due to the activity performed on the highway network over the analysis period. It is assumed that the unused portion of the budget for one period can be carried over to subsequent periods. Dynamic programming (DP) and branch-and-bound (B-and-B) approaches are combined to produce a hybrid algorithm for solving the problem under consideration. The algorithm is essentially a DP approach in the sense that the problem is divided into smaller subproblems corresponding to each single-period problem. However, the idea of fathoming partial solutions that could not lead to an optimal solution is incorporated within the DP framework, using the B-and-B approach to reduce storage and computational requirements. The imbedded-state approach is used to reduce a multi-dimensional DP to a one-dimensional DP. For bounding at each stage, the problem is relaxed in a Lagrangean fashion so that it separates into longest-path network subproblems. The values of the Lagrangean multipliers are found by a subgradient optimization method, while the Ford-Bellman network algorithm is employed at each iteration of the subgradient procedure to solve the longest-path network problem and to obtain improved lower and upper bounds. If the gap between the lower and upper bounds is sufficiently small, the best known solutions may be accepted as sufficiently close to optimal and the algorithm terminated before the final stage.
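The longest-path subproblem solved at each subgradient iteration can be sketched compactly. Below is a Bellman-Ford-style longest-path routine on a toy network; the nodes and edge rewards standing in for the Lagrangean-relaxed M&R effectiveness terms are illustrative assumptions, not the thesis's instance.

```python
# Toy acyclic network: edge weights play the role of relaxed per-period rewards.
NODES = ["s", "a", "b", "t"]
EDGES = [("s", "a", 4.0), ("s", "b", 2.0), ("a", "b", 3.0),
         ("a", "t", 1.0), ("b", "t", 5.0)]

def longest_path(nodes, edges, src):
    """Bellman-Ford relaxation adapted to maximization (valid here because
    the network is acyclic, so no positive cycles exist)."""
    dist = {v: float("-inf") for v in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):        # at most |V|-1 relaxation rounds
        for u, v, w in edges:
            if dist[u] + w > dist[v]:      # maximize instead of minimize
                dist[v] = dist[u] + w
    return dist

print(longest_path(NODES, EDGES, "s")["t"])   # 12.0 via s-a-b-t
```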
430

Stability of bacterial DNA in relation to microbial detection in teeth

Brundin, Malin January 2013 (has links)
The fate of DNA from dead cells is an important issue when interpreting results from root canal infections analysed by the PCR technique. DNA from dead bacterial cells is known to be detectable a long time after cell death, and its stability depends on many different factors. This work investigated factors found in the root canal that could affect the recovery of microbial DNA. In an ex vivo experiment, DNA from non-viable gram-positive Enterococcus faecalis was inoculated into instrumented root canals and the recovery of DNA was assessed by PCR over a two-year period. DNA was still recoverable two years after cell death in 21/25 teeth. The fate of DNA from the gram-negative bacterium Fusobacterium nucleatum and the gram-positive Peptostreptococcus anaerobius was assessed in vitro. DNA from dead F. nucleatum and P. anaerobius could be detected by PCR six months after cell death, even though it was clear that the DNA had been released from the cells due to loss of cell-wall integrity during the experimental period. The decomposition rate of extracellular DNA was compared to that of cell-bound DNA, and it was evident that DNA still located inside the bacterium was much less prone to decay than extracellular DNA. Free (extracellular) DNA in its naked form is very prone to decay, whereas binding to minerals is known to protect DNA from degradation. The fate of extracellular DNA was therefore assessed after binding to ceramic hydroxyapatite and dentine. The data showed that free DNA bound to these materials was protected from spontaneous decay and from enzymatic decomposition by nucleases. The main conclusions of this thesis are: i) DNA from dead bacteria can be detected by PCR years after cell death, ex vivo and in vitro. ii) Cell-bound DNA is less prone to decomposition than extracellular DNA. iii) DNA is released from the bacterium some time after cell death. iv) Extracellular DNA bound to hydroxyapatite or dentine is protected from spontaneous decomposition and enzymatic degradation.
