
The Design, Implementation, and Refinement of Wait-Free Algorithms and Containers

Feldman, Steven 01 January 2015 (has links)
My research has been on the development of concurrent algorithms for shared memory systems that provide guarantees of progress. Research into such algorithms is important to developers implementing applications on mission-critical and time-sensitive systems. These guarantees of progress provide safety properties and freedom from many hazards, such as deadlock, livelock, and thread starvation. In addition to the safety concerns, the fine-grained synchronization used in implementing these algorithms promises to provide scalable performance in massively parallel systems. My research has resulted in the development of wait-free versions of the stack, hash map, ring buffer, and vector, as well as a multi-word compare-and-swap algorithm. Through this experience, I have learned and developed new techniques and methodologies for implementing non-blocking and wait-free algorithms. I have worked with and refined existing techniques to improve their practicality and applicability. In the creation of the aforementioned algorithms, I have developed an association model for use with descriptor-based operations. This model, originally developed for the multi-word compare-and-swap algorithm, has been applied to the design of the vector and ring buffer algorithms. To unify these algorithms and techniques, I have released Tervel, a wait-free library of common algorithms and containers. This library includes a framework that simplifies and improves the design of non-blocking algorithms. I have reimplemented several algorithms using this framework, and the resulting implementations exhibit less code duplication and fewer perceivable states. When reimplementing algorithms, I have adapted their Application Programming Interface (API) specifications to remove the ambiguity and non-deterministic behavior found when using a sequential API in a concurrent environment. 
To improve the performance of my algorithm implementations, I extended OVIS's Lightweight Distributed Metric Service (LDMS)'s data collection and transport system to support performance monitoring using perf_event and PAPI libraries. These libraries have provided me with deeper insights into the behavior of my algorithms, and I was able to use these insights to improve the design and performance of my algorithms.
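The descriptor-based multi-word compare-and-swap idea mentioned above can be sketched in miniature. The toy Python below is an illustration only: `AtomicRef`, `MCASDescriptor`, and `mcas` are hypothetical names (not Tervel's API), atomicity is simulated with a lock, and there is no helping, so unlike the thesis's algorithms this sketch is not wait-free — a stalled thread holding a descriptor would block others.

```python
import threading

class AtomicRef:
    """Toy atomic cell with compare-and-swap, simulated with a lock."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        with self._lock:
            return self._value

    def cas(self, expected, new):
        """Atomically set the cell to `new` iff it currently holds `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

class MCASDescriptor:
    """Announces a multi-word CAS: each entry is (cell, expected, new).
    In a real helping scheme, threads that encounter the descriptor
    would complete the operation on the caller's behalf."""
    def __init__(self, entries):
        self.entries = entries

def mcas(desc):
    # Phase 1: install the descriptor into every cell whose value
    # still matches the expected value; roll back on any mismatch.
    acquired = []
    for cell, expected, _new in desc.entries:
        if not cell.cas(expected, desc):
            for c, exp, _ in acquired:
                c.cas(desc, exp)        # undo partial installation
            return False
        acquired.append((cell, expected, _new))
    # Phase 2: replace each descriptor with the new value.
    for cell, _expected, new in desc.entries:
        cell.cas(desc, new)
    return True

a, b = AtomicRef(1), AtomicRef(2)
ok = mcas(MCASDescriptor([(a, 1, 10), (b, 2, 20)]))
```

A failed `mcas` leaves every cell with its original value, which is the atomicity property the association model is designed to preserve.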

Combining Blocked and Interleaved Presentation During Passive Study and Its Effect on Inductive Learning

Wright, Emily Gail 24 May 2017 (has links)
No description available.

DNA-Enhanced Efficiency and Luminance of Organic Light Emitting Diodes

Spaeth, Hans D. 16 October 2012 (has links)
No description available.

Numerical simulation of blocking by the resonance of topographically forced waves

Dionne, Pierre, 1962- January 1986 (has links)
No description available.

Designing Order Picking Systems for Distribution Centers

Parikh, Pratik J. 06 October 2006 (has links)
This research addresses decisions involved in the design of an order picking system in a distribution center. A distribution center (DC) in a logistics system is responsible for obtaining materials from different suppliers and assembling (or sorting) them to fulfill a number of different customer orders. Order picking, which is a key activity in a DC, refers to the operation through which items are retrieved from storage locations to fulfill customer orders. Several decisions are involved when designing an order picking system (OPS). Some of these decisions include the identification of the picking-area layout, configuration of the storage system, and determination of the storage policy, picking method, picking strategy, material handling system, pick-assist technology, etc. For a given set of these parameters, the best design depends on the objective function (e.g., maximizing throughput, minimizing cost, etc.) being optimized. The overall goal of this research is to develop a set of analytical models for OPS design. The idea is to help an OPS designer identify the best-performing alternatives out of a large number of possible alternatives. Such models will complement experience-based or simulation-based approaches, with the goal of improving the efficiency and efficacy of the design process. In this dissertation we focus on the following two key OPS design issues: configuration of the storage system and selection between batch and zone order picking strategies. Several factors that affect these decisions are identified in this dissertation; a common factor amongst these is picker blocking. We first develop models to estimate picker blocking (Contribution 1) and use the picker blocking estimates in addressing the two OPS design issues, presented as Contributions 2 and 3. In Contribution 1 we develop analytical models using discrete-time Markov chains to estimate pick-face blocking in wide-aisle OPSs. 
Pick-face blocking refers to the blocking experienced by a picker at a pick-face when another picker is already picking at that pick-face. We observe that for the case when pickers may pick only one item at a pick-face, similar to in-the-aisle blocking, pick-face blocking first increases with an increase in pick-density and then decreases. Moreover, pick-face blocking increases with an increase in the number of pickers and pick to walk time ratio, while it decreases with an increase in the number of pick-faces. For the case when pickers may pick multiple items at a pick-face, pick-face blocking increases monotonically with an increase in the pick-density. These blocking estimates are used in addressing the two OPS design issues, which are presented as Contributions 2 and 3. In Contribution 2 we address the issue of configuring the storage system for order picking. A storage system, typically comprised of racks, is used to store pallet-loads of various stock keeping units (SKU) --- a SKU is a unique identifier of products or items that are stored in a DC. The design question we address is related to identifying the optimal height (i.e., number of storage levels), and thus length, of a one-pallet-deep storage system. We develop a cost-based optimization model in which the number of storage levels is the decision variable and satisfying system throughput is the constraint. The objective of the model is to minimize the system cost, which is comprised of the cost of labor and space. To estimate the cost of labor we first develop a travel-time model for a person-aboard storage/retrieval (S/R) machine performing Tchebyshev travel as it travels in the aisle. Then, using this travel-time model we estimate the throughput of each picker, which helps us estimate the number of pickers required to satisfy the system throughput for a given number of storage levels. An estimation of the cost of space is also modeled to complete the total cost model. 
Results from an experimental study suggest that a low (in height) and long (in length) storage system tends to be optimal for situations where there is a relatively low number of storage locations and a relatively high throughput requirement; this is in contrast with common industry perception of the higher the better. The primary reason for this contrast is because the industry does not consider picker blocking and vertical travel of the S/R machine. On the other hand, results from the same optimization model suggest that a manual OPS should, in almost all situations, employ a high (in height) and short (in length) storage system; a result that is consistent with industry practice. This consistency is expected as picker blocking and vertical travel, ignored in industry, are not a factor in a manual OPS. In Contribution 3 we address the issue of selecting between batch and zone picking strategies. A picking strategy defines the manner in which the pickers navigate the picking aisles of a storage area to pick the required items. Our aim is to help the designer in identifying the least expensive picking strategy to be employed that meets the system throughput requirements. Consequently, we develop a cost model to estimate the system cost of a picking system that employs either a batch or a zone picking strategy. System cost includes the cost of pickers, equipment, imbalance, sorting system, and packers. Although all elements are modeled, we highlight the development of models to estimate the cost of imbalance and sorting system. Imbalance cost refers to the cost of fulfilling the left-over items (in customer orders) due to workload-imbalance amongst pickers. To estimate the imbalance cost we develop order batching models, the solving of which helps in identifying the number of items unfulfilled. We also develop a comprehensive cost model to estimate the cost of an automated sorting system. 
To demonstrate the use of our models we present an illustrative example that compares a sort-while-pick batch picking system with a simultaneous zone picking system. To summarize, the overall goal of our research is to develop a set of analytical models to help the designer in designing order picking systems in a distribution center. In this research we focused on two key design issues and addressed them through analytical approaches. Our future research will focus on addressing other design issues and incorporating them in a decision support system. / Ph. D.
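The pick-face-blocking behavior described in Contribution 1 can be illustrated with a toy simulation rather than the dissertation's Markov-chain models. In this hypothetical sketch (`simulate_blocking` and all its parameters are invented names), two pickers circulate on a ring of pick-faces, at most one picker may occupy a face, and picks are single-item; the simulation reproduces the qualitative trend that blocking decreases as the number of pick-faces grows.

```python
import random

def simulate_blocking(num_faces, pick_density, pick_time,
                      walk_time=1, steps=200_000, seed=42):
    """Fraction of time pickers spend blocked in a toy two-picker,
    circular-aisle model: each picker walks one pick-face per
    walk_time ticks and, with probability pick_density, spends
    pick_time ticks picking there. A picker is blocked when the
    face it wants to enter is occupied by the other picker."""
    rng = random.Random(seed)
    pos = [0, num_faces // 2]   # the two pickers start half a loop apart
    busy = [0, 0]               # remaining ticks at the current face
    blocked = 0
    for _ in range(steps):
        for i in (0, 1):
            if busy[i] > 0:
                busy[i] -= 1
                continue
            nxt = (pos[i] + 1) % num_faces
            if nxt == pos[1 - i]:        # pick-face blocking
                blocked += 1
                continue
            pos[i] = nxt
            busy[i] = pick_time if rng.random() < pick_density else walk_time - 1
    return blocked / (2 * steps)

# Blocking should be worse on a short line of pick-faces than a long one.
few = simulate_blocking(num_faces=10, pick_density=0.5, pick_time=5)
many = simulate_blocking(num_faces=100, pick_density=0.5, pick_time=5)
```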

Large Scale Homogeneous Turbulence and Interactions with a Flat-Plate Cascade

Larssen, Jon Vegard 07 April 2005 (has links)
The turbulent flow through a marine propulsor was experimentally modeled using a large cascade configuration with six 33 cm chord flat plates spanning the entire height of the test section in the Virginia Tech Stability Wind Tunnel. Three-component hot-wire velocity measurements were obtained ahead of, throughout, and behind both an unstaggered and a 35° staggered cascade configuration with blade spacing and onset turbulence integral scales on the order of the chord. This provided a much-needed dataset at far larger Taylor Reynolds number than previous related studies and allowed a thorough investigation of the blade-blocking effects of the cascade on the incident turbulent field. In order to generate the large-scale turbulence needed for this study, a mechanically rotating "active" grid design was adopted and placed in the contraction of the wind tunnel at a streamwise location sufficient to cancel out the relatively large inherent low-frequency anisotropy associated with this type of grid. The resulting turbulent flow is one of the largest Reynolds number (Re_λ ≈ 1000) homogeneous near-isotropic turbulent flows ever created in a wind tunnel, and provided the opportunity to investigate Reynolds number effects on turbulence parameters, especially relating to inertial range dynamics. Key findings include 1) that the extent of local isotropy is solely determined by the turbulence generator and the size of the wind tunnel that houses it; and 2) that the turbulence generator operating conditions affect the shape of the equilibrium range at fixed Taylor Reynolds number. The latter finding suggests that grid turbulence is not necessarily self-similar at a given Reynolds number independent of how it was generated. The experimental blade-blocking data were compared to linear cascade theory and showed good qualitative agreement, especially for wavenumbers above the region of influence of the wind tunnel and turbulence generator effects. 
As predicted, the turbulence is permanently modified by the presence of the cascade after which it remains invariant for a significant downstream distance outside the thin viscous regions. The obtained results support the claim that Rapid Distortion Theory (RDT) is capable of providing reasonable estimates of the flow behind the cascade even though the experimental conditions lie far outside the predicted region of validity. / Ph. D.

Improvement of interconnection networks for clusters: direct-indirect hybrid topology and HoL-blocking reduction routing

Peñaranda Cebrián, Roberto 03 March 2018 (has links)
Thesis by compendium / Nowadays, clusters of computers are used to solve computation-intensive problems. These clusters take advantage of a large number of computing nodes to provide a high degree of parallelization. Interconnection networks are used to connect all these computing nodes. The interconnection network should be able to efficiently handle the traffic generated by this large number of nodes. Interconnection networks have different design parameters that define the behavior of the network. Two of them are the topology and the routing algorithm. The topology of an interconnection network defines how the different network elements are connected, while the routing algorithm determines the path that a packet must take from the source to the destination node. The most commonly used topologies typically follow a regular structure and can be classified into direct and indirect topologies, depending on how the different network elements are interconnected. On the other hand, routing algorithms can also be classified into two categories: deterministic and adaptive algorithms. To evaluate interconnection networks, metrics such as latency and throughput are often used. Throughput refers to the traffic that the network is capable of accepting per time unit. Latency is the time that a packet requires to reach its destination. This time can be divided into two parts. The first part is the time taken by the packet to reach its destination in the absence of network traffic. The second part is due to network congestion created by existing traffic. One of the effects of congestion is so-called Head-of-Line blocking, where the packet at the head of a queue blocks, preventing the remaining queued packets from advancing even though they could advance if they were at the head of the queue. Nowadays, there are other important factors to consider when interconnection networks are designed, such as cost and fault tolerance. 
On the one hand, high performance is desirable, but without a disproportionate increase in cost. On the other hand, increasing the size of the network implies an increase in the number of network components, so the probability of a failure occurring is higher. For this reason, having some fault-tolerance mechanism is vital in current interconnection networks of large machines. In a nutshell, a good performance-cost ratio is required in the network, together with a high level of fault tolerance. This thesis focuses on two main objectives. The first objective is to combine the advantages of direct and indirect topologies to create a new family of topologies with the best of both worlds. The main goal is the design of a new family of topologies capable of interconnecting a large number of nodes while achieving very good performance at a low hardware cost. The proposed family of topologies, which will be referred to as k-ary n-direct s-indirect, has an n-dimensional structure where the k nodes of a given dimension are interconnected by a small indirect topology of s stages. We will also focus on designing a deterministic and an adaptive routing algorithm for the proposed family of topologies. Finally, we will analyze fault tolerance in the proposed family of topologies. For this, existing fault-tolerance mechanisms for similar topologies will be studied and a mechanism able to exploit the features of this new family will be designed. The second objective is to develop routing algorithms specially designed to reduce the pernicious effect of Head-of-Line blocking, which may grow sharply in systems with a high number of computing nodes. To avoid this effect, routing algorithms able to efficiently classify packets into the different available virtual channels are designed, thus preventing a hot node (Hot-Spot) from saturating the network and affecting the remaining network traffic. 
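To make the topology concrete, here is a minimal sketch of deterministic dimension-order routing over a k-ary n-dimensional structure, inferred only from the abstract's description: `dor_route` is a hypothetical name, and each dimension's s-stage indirect subnetwork is abstracted as a single hop, so the actual routing algorithms developed in the thesis are certainly more detailed.

```python
def dor_route(src, dst, k, n):
    """Deterministic dimension-order route between two nodes of a
    k-ary n-dimensional structure (k**n compute nodes in total).
    Each hop corrects one coordinate, modeling one traversal of
    that dimension's s-stage indirect subnetwork. Returns the
    list of coordinate tuples visited, source included."""
    assert len(src) == len(dst) == n
    assert all(0 <= c < k for c in src + dst)
    cur = list(src)
    path = [tuple(cur)]
    for d in range(n):
        if cur[d] != dst[d]:
            cur[d] = dst[d]      # cross dimension d's indirect subnetwork
            path.append(tuple(cur))
    return path

route = dor_route((0, 2, 1), (3, 2, 0), k=4, n=3)
```

Because each dimension is corrected at most once, a route never exceeds n subnetwork traversals, which is what keeps the hybrid topology's diameter low.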
/ Peñaranda Cebrián, R. (2017). Improvement of interconnection networks for clusters: direct-indirect hybrid topology and HoL-blocking reduction routing [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/79550 / Compendio

The effects of some typical and atypical neuroleptics on gene regulation : implications for the treatment of schizophrenia

Chlan-Fourney, Jennifer 01 January 2000 (has links)
The mechanisms by which antipsychotics (neuroleptics) produce their therapeutic effects in schizophrenia are largely unknown. Although neuroleptic efficacy is attributed to central dopamine D2 and/or serotonin 5-HT2 receptor antagonism, clinical improvements in schizophrenia are not seen until two or three weeks after daily neuroleptic administration. The mechanisms underlying the neuroleptic response must therefore occur downstream from initial receptor blockade and be a consequence of chronic neurotransmitter receptor blockade. The goal of the present study was to use neuroleptics with varied dopaminergic vs. serotonergic receptor-blocking profiles to elucidate some of these intracellular post-receptor mechanisms. Since the final steps of both dopamine and serotonin synthesis require the enzyme aromatic L-amino acid decarboxylase (AADC), the effects of neuroleptics on AADC gene (mRNA) expression were examined in PC12 cells and compared to their effects on the synthetic enzyme tyrosine hydroxylase (TH) and 'c-fos' (an immediate early gene [IEG]) mRNA. The neuroleptics examined did not significantly regulate AADC mRNA in PC12 cells, and only haloperidol upregulated TH and 'c-fos' mRNA. Later studies in rats showed that acute neuroleptic administration increased 'c-fos' mRNA, whereas the immunoreactivity of a related IEG (delta FosB) was increased upon chronic treatment. These studies and a subsequent dose-response study demonstrated that upregulation of both 'c-fos' mRNA and delta FosB immunoreactivity was most prominent in dopaminergic projection areas including the striatum and nucleus accumbens. Because it has been suggested that neuroleptic treatment might prevent neurodegeneration in schizophrenia, the effects of neuroleptics on the mRNA expression of neuroprotective target genes of delta FosB were examined both 'in vivo' and 'in vitro'. 
These genes included brain-derived neurotrophic factor (BDNF), the neuroprotective enzyme superoxide dismutase (SOD), and the low-affinity nerve growth factor receptor (p75). While dopamine D2 blockade unfavorably regulated BDNF and p75 mRNA, 5-HT2 blockade either had no effect on or favorably regulated BDNF, SOD, and p75 mRNA. Thus, although little about the contribution of serotonergic blockade to the neuroleptic response was determined, dopaminergic blockade regulated IEGs and several of their target genes. Future studies will be needed to understand the role of 5-HT2 receptor blockade in the neuroleptic response.

Optimalizace výpočtu v multigridu / Performance Engineering of Stencils Optimization in Geometric Multigrid

Janalík, Radim January 2015 (has links)
In this thesis we present a blocking method for improving cache locality in stencil computations, and two tools, Pluto and PATUS, that use this method to generate optimized code. We perform various measurements and investigate the speedup of the computation under different optimizations. Finally, we implement the multigrid smoothing step with different optimizations and examine how these optimizations affect multigrid performance.
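The loop-blocking (tiling) idea for stencils can be sketched as follows. This is an illustrative Python/NumPy version of a blocked Jacobi sweep, the kind of smoothing step used in multigrid; it is not the code generated by Pluto or PATUS, and `jacobi_step_blocked` is a hypothetical name. Tiling does not change the arithmetic, only the traversal order, so the result must match an unblocked sweep.

```python
import numpy as np

def jacobi_step_blocked(grid, block=64):
    """One Jacobi smoothing sweep over the interior of a 2D grid,
    processed in block-by-block tiles so each tile's working set
    stays cache-resident. Boundary values are left unchanged."""
    n, m = grid.shape
    out = grid.copy()
    for ii in range(1, n - 1, block):
        for jj in range(1, m - 1, block):
            i_end = min(ii + block, n - 1)
            j_end = min(jj + block, m - 1)
            for i in range(ii, i_end):
                for j in range(jj, j_end):
                    # 5-point stencil: average of the four neighbors
                    out[i, j] = 0.25 * (grid[i - 1, j] + grid[i + 1, j]
                                        + grid[i, j - 1] + grid[i, j + 1])
    return out
```

Tools like Pluto and PATUS automate exactly this transformation (and pick tile sizes) on the original untiled loop nest.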

Job Sequencing & WIP level determination in a cyclic CONWIP Flowshop with Blocking

Palekar, Nipun Pushpasheel 14 September 2000 (has links)
A CONWIP (Constant Work-In-Progress) system is basically a hybrid system with a PUSH-PULL interface at the first machine in the line. This research addresses the most general case of a cyclic CONWIP system by incorporating two additional constraints over earlier studies, namely stochastic processing times and limited intermediate storage. One of the main issues in the design of a CONWIP system is the WIP level 'M' to be maintained. This research proposes an iterative procedure to determine this optimal level. The second main issue is the optimization of the line by determining an appropriate job sequence. This research assumes a 'permutational' scheduling policy and proposes an iterative approach to find the best sequence. The approach utilizes a controlled enumerative approach called the Fast Insertion Heuristic (FIH), coupled with a method to appraise the quality of every enumeration at each iteration. This is done by using a modified version of Floyd's algorithm to determine the cycle time (or flow time) of a partial/full solution. The performance measures considered are the flow time and the interdeparture time (inverse of throughput). Finally, both methods suggested for the two subproblems are tested through computer implementations to demonstrate their effectiveness. / Master of Science
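The blocking constraint underlying such cycle-time computations can be sketched with the classic departure-time recurrence for a permutation flowshop with no intermediate buffers (a textbook formulation, e.g. Pinedo's), not the modified Floyd's algorithm used in the thesis; `blocking_flowshop_departures` is a hypothetical name and processing times here are deterministic for simplicity.

```python
def blocking_flowshop_departures(p):
    """Departure times in a permutation flowshop with no intermediate
    buffers (blocking). p[j][i] is the processing time of the j-th job
    in the sequence on machine i. D[j][i] is the time job j leaves
    machine i: a job that has finished processing cannot leave until
    the next machine has been vacated by the previous job."""
    n, m = len(p), len(p[0])
    D = [[0.0] * m for _ in range(n)]
    for j in range(n):
        for i in range(m):
            if i > 0:
                start = D[j][i - 1]
            else:
                start = D[j - 1][0] if j > 0 else 0.0
            done = start + p[j][i]
            if j > 0 and i < m - 1:
                # blocked until job j-1 frees machine i+1
                done = max(done, D[j - 1][i + 1])
            D[j][i] = done
    return D

# Two jobs, two machines: job 2 finishes processing on machine 1 at
# t=4 but is blocked there until job 1 leaves machine 2 at t=6.
D = blocking_flowshop_departures([[1, 5], [3, 1]])
```

Evaluating this recurrence for each candidate sequence is what a heuristic like FIH needs in order to compare partial and full solutions by flow time.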
