31

Stability of Diluted Neuromuscular Blocking Agents Utilized in Perioperative Hypersensitivity Evaluation

Brown, Stacy D., Archibald, Timothy, Mosier, Greg, Campbell, Bethany, Dinsmore, Kristen 01 November 2018 (has links)
Purpose: Neuromuscular blocking agents are a common cause of hypersensitivity reactions during surgery. An allergy evaluation, including skin testing of these drugs prior to future surgeries, may help prevent life-threatening reactions. Drug concentrations utilized for skin testing vary by country and institution. The purpose of this study was to investigate the storage stability of clinically relevant dilutions of neuromuscular blocking agents (NMBAs), namely succinylcholine, atracurium, cisatracurium, rocuronium, pancuronium, and vecuronium, for skin prick/intradermal testing. Methods: Concentrations of NMBAs were monitored by liquid chromatography-tandem mass spectrometry (LC-MS/MS) for a period of 14 days. Dilutions of NMBAs were prepared in saline by factors of 10x, 100x, 1,000x, 10,000x, and 100,000x, as the sensitivity of the assay allowed. Diluted drug products were stored in a laboratory refrigerator until sampling. On sampling days, aliquots of each dilution were removed for analysis and compared to a freshly prepared set of reference dilutions. Results: Acceptable potency of the stored preparations was defined as 90-110% of the initial drug concentration (versus a reference). All drugs were stable for at least 48 hours in the 10x dilution and for 24 hours in the 100x dilution. At higher dilution factors, the detectable amount of drug in the stored dilutions deteriorated rapidly, indicating that such preparations should be used immediately. Conclusion: With increasing dilution factors, the stability of these drugs in saline decreases, increasing the deviation between samples and references. The most stable dilutions for each of the drugs tested were 10x and 100x.
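The 90-110% acceptance window from the Results section can be stated in a few lines of code. This is an illustrative sketch only: the function names and the sample readings are invented, not data from the study.

```python
def percent_potency(stored_conc, reference_conc):
    """Stored-sample concentration as a percentage of a freshly
    prepared reference dilution measured on the same day."""
    return 100.0 * stored_conc / reference_conc

def within_acceptance(stored_conc, reference_conc, low=90.0, high=110.0):
    """Apply the 90-110% potency acceptance window described above."""
    return low <= percent_potency(stored_conc, reference_conc) <= high

# Hypothetical LC-MS/MS readings (arbitrary concentration units):
print(within_acceptance(0.95, 1.00))  # 95% of reference -> True
print(within_acceptance(0.70, 1.00))  # 70% of reference -> False
```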
32

Blocking vs. Non-blocking Communication under MPI on a Master-Worker Problem

Fachat, André, Hoffmann, Karl Heinz 30 October 1998 (has links) (PDF)
In this report we describe the conversion of a simple Master-Worker parallel program from global blocking communications to non-blocking communications. The program is MPI-based and has been run on different computer architectures. By moving the communication to the background, the processors can use the former waiting time for computation. However, we find that in the MPICH implementation used on a cluster of workstations, the computing time increases by as much as the communication time decreases. In addition, using non-global communication instead of global communication slows the algorithm down on computers with optimized global communication routines, such as the Cray T3D.
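The overlap the report describes can be illustrated schematically. The sketch below simulates a message transfer with a background thread and timed sleeps; it stands in for the MPI_Isend/MPI_Irecv/MPI_Wait pattern and is not real MPI code.

```python
import threading
import time

def transfer(duration):
    # Simulated message transfer (stand-in for MPI_Isend/MPI_Irecv progress)
    time.sleep(duration)

def compute(duration):
    # Simulated local computation on the worker
    time.sleep(duration)

# Blocking style: communication must complete before computation starts
start = time.perf_counter()
transfer(0.1)
compute(0.1)
blocking_time = time.perf_counter() - start        # roughly 0.2 s

# Non-blocking style: communication proceeds in the background
start = time.perf_counter()
t = threading.Thread(target=transfer, args=(0.1,))
t.start()            # analogous to posting a non-blocking send/receive
compute(0.1)         # former waiting time is now used for computation
t.join()             # analogous to MPI_Wait
nonblocking_time = time.perf_counter() - start     # roughly 0.1 s
```

As the report notes, the real saving depends on whether the MPI implementation can actually progress communication in the background; in the authors' MPICH setup the computation slowed down by about as much as the communication sped up.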
33

CACHE OPTIMIZATION AND PERFORMANCE EVALUATION OF A STRUCTURED CFD CODE - GHOST

Palki, Anand B. 01 January 2006 (has links)
This research focuses on evaluating and enhancing the performance of an in-house, structured, 2D CFD code - GHOST - on modern commodity clusters. The basic philosophy of this work is to optimize the cache performance of the code by splitting the grid into smaller blocks and carrying out the required calculations on these smaller blocks. This in turn leads to enhanced code performance on commodity clusters. Accordingly, this work presents a discussion, along with a detailed description, of two techniques for data access optimization: external and internal blocking. These techniques have been tested on steady, unsteady, laminar, and turbulent test cases, and the results are presented. The critical hardware parameters that influenced the code performance were identified. A detailed study investigating the effect of these parameters on the code performance was conducted and the results are presented. The modified version of the code was also ported to current state-of-the-art architectures with successful results.
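The blocking idea, traversing the grid in cache-sized tiles rather than in full rows, can be sketched as a loop-tiled Jacobi sweep. This is a generic illustration of loop tiling under assumed names; GHOST's actual kernels and block sizes are not given in the abstract.

```python
def jacobi_sweep(grid, bsize=None):
    """One Jacobi smoothing sweep over the interior of a square 2D grid.
    With bsize set, the interior is traversed in bsize x bsize blocks so
    each block's data can stay resident in cache (loop tiling); the
    numerical result is identical either way."""
    n = len(grid)
    new = [row[:] for row in grid]          # boundary values are kept
    if bsize is None:
        ranges = [(1, n - 1, 1, n - 1)]     # untiled: one big sweep
    else:
        ranges = [(i, min(i + bsize, n - 1), j, min(j + bsize, n - 1))
                  for i in range(1, n - 1, bsize)
                  for j in range(1, n - 1, bsize)]
    for i0, i1, j0, j1 in ranges:
        for i in range(i0, i1):
            for j in range(j0, j1):
                new[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j]
                                    + grid[i][j-1] + grid[i][j+1])
    return new

grid = [[float(i * 8 + j) for j in range(8)] for i in range(8)]
# Tiled and untiled traversals visit each interior cell exactly once:
assert jacobi_sweep(grid, bsize=3) == jacobi_sweep(grid)
```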
34

Chemical events at the myoneural junction

Kirschner, Leonard Burton, January 1951 (has links)
Thesis (Ph. D.)--University of Wisconsin, 1951. / Typescript (carbon copy). eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves [82]-86).
35

NON-BLOCKING ROLL-FORWARD RECOVERY APPROACH FOR DISTRIBUTED SYSTEMS

Surapu Reddy, Padmakar Reddy 01 December 2009 (has links)
In this work, a new roll-forward checkpointing scheme is proposed using basic checkpoints. The direct-dependency concept used in communication-induced checkpointing schemes has been applied to basic checkpoints to design a simple algorithm for finding a consistent global checkpoint. Both blocking (i.e., the application processes are suspended during the execution of the algorithm) and non-blocking approaches have been presented. The use of forced checkpoints ensures a small re-execution time after recovery from a failure. The proposed approaches enjoy the main advantages of both the synchronous and the asynchronous approaches, i.e., simple recovery and a simple way to create checkpoints. Moreover, in the proposed blocking approach, the direct-dependency concept is implemented without piggybacking any extra information on application messages. A very simple scheme for avoiding the creation of useless checkpoints has also been proposed.
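For background only, the direct-dependency idea can be sketched with the classical index-based communication-induced rule, which piggybacks a checkpoint index on each message. Note this is not the thesis's algorithm: its blocking variant explicitly avoids such piggybacking.

```python
class Process:
    """Sketch of a classical index-based communication-induced rule:
    each process numbers its checkpoints, and a receiver takes a forced
    checkpoint before delivering a message from a later interval, so the
    set of checkpoints with the same index stays globally consistent."""

    def __init__(self, pid):
        self.pid = pid
        self.index = 0      # current checkpoint sequence number
        self.forced = 0     # count of forced checkpoints taken

    def basic_checkpoint(self):
        self.index += 1     # periodic, independently taken checkpoint

    def send(self):
        return self.index   # piggyback the sender's checkpoint index

    def receive(self, piggybacked_index):
        # Direct-dependency rule: the delivery would make the receiver's
        # current interval depend on a later interval of the sender, so
        # checkpoint first to keep same-index checkpoints consistent.
        if piggybacked_index > self.index:
            self.index = piggybacked_index   # forced checkpoint
            self.forced += 1

p, q = Process(0), Process(1)
p.basic_checkpoint()      # p enters checkpoint interval 1
q.receive(p.send())       # q is forced to checkpoint before delivery
```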
36

Migration from blocking to non-blocking web frameworks

Bilski, Mateusz January 2014 (has links)
The problem of the performance and scalability of web applications challenges most software companies. It is difficult to maintain the performance of a web application while the number of users continuously increases. The common solution to this problem is scalability. A web application can handle incoming and outgoing requests using blocking or non-blocking Input/Output operations. The way a single server handles requests affects its ability to scale and depends on the web framework used to build the application. This is especially important for Resource Oriented Architecture (ROA) based applications, which consist of distributed Representational State Transfer (REST) web services. This research was inspired by a real problem stated by a software company that was considering migration to a non-blocking web framework but did not know the possible profits. The objective of the research was to evaluate the influence of the web framework's type on the performance of ROA-based applications and to provide guidelines for assessing the profits of migrating from blocking to non-blocking JVM web frameworks. First, an internet ranking was used to obtain a list of the most popular web frameworks. Then, these web frameworks were used to conduct two experiments investigating the influence of the web framework's type on the performance of ROA-based applications. Next, consultations with software architects were arranged in order to find a method for approximating the performance of the overall application. Finally, guidelines were prepared based on the consultations and the results of the experiments. Three blocking and three non-blocking highly ranked, JVM-based web frameworks were selected. The first experiment showed that non-blocking web frameworks can provide performance up to 2.5 times higher than blocking web frameworks in ROA-based applications.
The experiment performed on an existing application showed an average 27% performance improvement after the migration. The elaborated guidelines successfully convinced the company that provided the application for testing to conduct the migration in its production environment. The experimental results proved that migration from blocking to non-blocking web frameworks increases the performance of a web application. The prepared guidelines can help software architects decide whether migration is worthwhile. However, the guidelines are context-dependent, and further investigation is needed to make them more general.
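The gap between the two request-handling styles can be illustrated with a toy simulation in Python's asyncio. The frameworks benchmarked in the thesis are JVM-based; this sketch only shows why overlapping I/O waits raises throughput for I/O-bound REST calls.

```python
import asyncio
import time

async def handle_request(delay):
    # Simulated I/O-bound request (e.g., a downstream REST call)
    await asyncio.sleep(delay)
    return "ok"

async def blocking_style(delays):
    # One request at a time: the worker sits idle during each I/O wait
    return [await handle_request(d) for d in delays]

async def non_blocking_style(delays):
    # All requests in flight at once: the I/O waits overlap
    return await asyncio.gather(*(handle_request(d) for d in delays))

delays = [0.05] * 10

start = time.perf_counter()
results_blocking = asyncio.run(blocking_style(delays))
t_blocking = time.perf_counter() - start       # roughly 10 * 0.05 s

start = time.perf_counter()
results_non_blocking = asyncio.run(non_blocking_style(delays))
t_non_blocking = time.perf_counter() - start   # roughly 0.05 s
```

In a real blocking framework the same effect is usually masked by large thread pools, which is why the measured gain on the existing application (27% on average) was far smaller than the up-to-2.5x seen in the isolated experiment.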
37

Internet blocking law and governance in the United Kingdom : an examination of the Cleanfeed system

McIntyre, Thomas Jeremiah January 2014 (has links)
This thesis examines the legal and governance issues presented by internet blocking (“filtering”) systems through the use of the United Kingdom’s Cleanfeed system as a national case study. The Cleanfeed system – which aims to block access to child abuse images – has been influential both domestically and internationally but has been the subject of relatively little sustained scrutiny in the literature. Using a mixed doctrinal and empirical methodology, this work discusses the evolution of Cleanfeed and considers the way in which government pressure has led to a private body without any express legislative basis (the Internet Watch Foundation) being given the power to control what UK internet users can view. The thesis argues that the Cleanfeed system sits at the intersection of three distinct trends – the use of architectural regulation, regulation through intermediaries and self-regulation – which individually and collectively present significant risks for freedom of expression and good governance online. It goes on to identify and examine the fundamental rights norms and governance standards which should apply to internet blocking and tests the system against them, arguing in particular that Cleanfeed fails to meet the requirements developed by the European Court of Human Rights under Articles 6 and 10 ECHR. It considers the extent to which Cleanfeed might be made amenable to these principles through the use of judicial review or actions under the Human Rights Act 1998 and concludes that the diffuse structure of the system and the limited availability of horizontal effect against private bodies will leave significant aspects beyond the effective reach of the courts. This work also assesses claims that the Cleanfeed system is a proof of concept which should be extended so as to block other material considered objectionable (such as websites which “glorify terrorism”).
It argues that the peculiar features of the system mean that it represents a best case scenario and does not support blocking of other types of content which are significantly more problematic. The thesis concludes by considering proposals for reform of the Cleanfeed system and the extent to which greater public law oversight might undermine the desirable features associated with self-regulation.
38

Behavioural and brain mechanisms of predictive fear learning in the rat

Cole, Sindy, Psychology, Faculty of Science, UNSW January 2009 (has links)
The experiments reported in this thesis studied the contributions of opioid and NMDA receptors to predictive fear learning, as measured by freezing in the rat. The first series of experiments (Chapter 2) used a within-subject one-trial blocking design to study whether opioid receptors mediate a direct action of predictive error on Pavlovian association formation. Systemic administration of the opioid receptor antagonist naloxone or intra-vlPAG administration of the selective μ-opioid receptor antagonist CTAP prior to Stage II training prevented one-trial blocking. These results show for the first time that opioid receptors mediate the direct actions of predictive error on Pavlovian association formation. The second series of experiments (Chapter 3) then studied temporal-difference prediction errors during Pavlovian fear conditioning. In Stage I, rats received CSA → shock pairings. In Stage II, they received CSA/CSB → shock pairings that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB → CSA, was followed by shock. The change in intra-trial durations supported fear learning to CSB but reduced fear of CSA, revealing the selective operation of temporal-difference prediction errors. This bi-directional change in responding was prevented by systemic NMDA receptor antagonism prior to Stage III training. In contrast, opioid receptor antagonism differentially affected the learning taking place during Stage III, enhancing learning to CSB while impairing the loss of fear to CSA. The final series of experiments (Chapter 4) then examined potential neuroanatomical loci for the systemic effects reported in Chapter 3. It was observed that intra-BLA infusion of ifenprodil, an antagonist of NMDA receptors containing the NR2B subunit, prevented all learning during Stage III, whereas intra-vlPAG infusion of the μ-opioid receptor antagonist CTAP facilitated learning to CSB but impaired learning to CSA.
These results are consistent with the suggestion that opioid receptors in the vlPAG provide an important contribution to learning. Importantly, this contribution of the vlPAG is over and above its role in producing the freezing conditioned response. Furthermore, the findings of this thesis identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG μ-opioid receptors in predictive fear learning.
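Blocking designs like the one described above are commonly formalized with the Rescorla-Wagner error-correction rule, in which learning is driven by the prediction error. The sketch below is the generic textbook model, not the thesis's analysis, and the parameter values are arbitrary.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Update associative strengths V trial by trial: learning to each
    cue present on a trial is proportional to the prediction error
    (lam minus the summed strength of all present cues)."""
    V = {}
    for cues in trials:
        error = lam - sum(V.get(c, 0.0) for c in cues)  # prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Stage I: CSA alone is paired with shock until it predicts it well.
# Stage II: a single CSA/CSB compound trial (one-trial blocking design).
stage1 = [("CSA",)] * 20
stage2 = [("CSA", "CSB")] * 1
V = rescorla_wagner(stage1 + stage2)

# Control: one CSB-alone trial, with no pretrained CSA to absorb the error.
V_control = rescorla_wagner([("CSB",)])
# CSA already predicts the shock, so the compound-trial error is near
# zero and almost nothing is learned about CSB -- the blocking effect.
```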
39

Image Compression Using Bidirectional DCT to Remove Blocking Artifacts

Faridi, Imran Zafar 12 May 2005 (has links)
The Discrete Cosine Transform (DCT) is a widely used transform in many areas of the current information age. It is used in signal compression applications such as voice recognition and shape recognition, and also in FBI fingerprint storage. DCT is the standard compression system used in the JPEG format. DCT quality deteriorates at low bit-rate compression. The deterioration is due to the blocking artifacts inherent in block DCT. One of the successful attempts to reduce these blocking artifacts was the conversion of Block-DCT into Line-DCT. In this thesis we explore the Line-DCT and introduce a new form of Line-DCT called Bidirectional-DCT, which retains the properties of Line-DCT while improving computational efficiency. The results obtained in this thesis show a significant reduction in processing time for both one-dimensional and two-dimensional DCT in comparison with the traditional Block-DCT. The quality analysis also shows that the least mean square error is considerably lower than that of the traditional Block-DCT, a consequence of removing the blocking artifacts. Finally, unlike the traditional Block-DCT, the Bidirectional-DCT enables compression at very low bit rates with very low blocking artifacts.
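The transform at the heart of this work, the 1-D DCT-II and its inverse, can be written directly from the orthonormal definitions. The coefficient-truncation step at the end is a toy stand-in for JPEG-style quantization, not the thesis's Bidirectional-DCT.

```python
import math

def dct(x):
    """Orthonormal DCT-II of a sequence (the transform behind JPEG)."""
    N = len(x)
    out = []
    for k in range(N):
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                               for n in range(N)))
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II (i.e., the DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        total = 0.0
        for k in range(N):
            scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
            total += scale * X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
        out.append(total)
    return out

signal = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]   # one 8-sample block
coeffs = dct(signal)
# Low-bit-rate sketch: keep only the first few coefficients, zero the rest
compressed = coeffs[:3] + [0.0] * (len(coeffs) - 3)
approx = idct(compressed)
```

Discarding high-frequency coefficients independently in each block is exactly what makes neighboring blocks disagree at their shared boundary, producing the blocking artifacts the thesis targets.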
40

Path Planning for Autonomous Heavy Duty Vehicles using Nonlinear Model Predictive Control / Ruttplanering för tunga autonoma fordon med olinjär modellbaserad prediktionsreglering

Norén, Christoffer January 2013 (has links)
In the future, autonomous vehicles are expected to navigate independently and manage complex traffic situations. This thesis is one of two initiated with the aim of researching which methods could be used within the field of autonomous vehicles. The purpose of this thesis was to investigate how Model Predictive Control could be used in the field of autonomous vehicles. The tasks of generating a safe and economic path, re-planning to avoid collisions with moving obstacles, and operating the vehicle have been studied. The algorithm created is set up as a hierarchical framework consisting of a high-level and a low-level planner. The objective of the high-level planner is to generate the global route, while the objectives of the low-level planner are to operate the vehicle and to re-plan to avoid collisions. Optimal Control problems have been formulated in the high-level planner for path planning. Different objectives of the planning have been investigated, e.g. the minimization of the traveled length between the start and the end point. The static obstacles' forbidden areas have been approximated with circles. A Quadratic Programming framework has been set up in the low-level planner to operate the vehicle to follow the pre-computed high-level path and to locally re-plan the route to avoid collisions with moving obstacles. Four different strategies of collision avoidance have been implemented and investigated in a simulation environment.
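The receding-horizon principle, optimizing over a short prediction window but applying only the first control before re-planning, can be sketched for a toy 1-D double-integrator vehicle. Brute-force enumeration of action sequences stands in for the thesis's Optimal Control and Quadratic Programming solvers, and all numbers are illustrative.

```python
import itertools

def mpc_step(pos, vel, target, horizon=6, dt=0.5, actions=(-1.0, 0.0, 1.0)):
    """Enumerate short acceleration sequences over the prediction horizon
    and return the first acceleration of the cheapest one. The cost
    tracks the target position and lightly penalizes control effort."""
    best_cost, best_first = float("inf"), 0.0
    for seq in itertools.product(actions, repeat=horizon):
        p, v, cost = pos, vel, 0.0
        for a in seq:
            v += a * dt                 # double-integrator vehicle model
            p += v * dt
            cost += (p - target) ** 2 + 0.01 * a ** 2
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Receding-horizon loop: apply only the first planned action, then
# re-plan from the new state -- this re-planning at every step is what
# lets such a controller react to moving obstacles in principle.
pos, vel, target, dt = 0.0, 0.0, 10.0, 0.5
for _ in range(40):
    a = mpc_step(pos, vel, target, dt=dt)
    vel += a * dt
    pos += vel * dt
```

A real low-level planner would replace the enumeration with a QP over a linearized vehicle model and add obstacle constraints, but the plan/apply-first-action/re-plan structure is the same.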
