381

Air entrainment relationship with water discharge of vortex drop structures

Pump, Cody N. 01 May 2011 (has links)
Vortex drop shafts are used to transport water or wastewater from over-stressed existing sewer systems to underground tunnels. During the plunge a large amount of air is entrained into the water and released downstream of the drop shaft into the tunnel. This air is unwanted and becomes costly to treat and return to the surface. Determining the amount of air that will be entrained is a difficult task; a common method is to build a scale model, measure the air discharge, and scale it back to the prototype. This study investigated a possible relationship between the geometry of the drop structure, the water discharge, and the amount of air entrained. The results show that air entrainment is still not entirely understood; however, we are close to a solution. Using the air core diameter, the drop shaft length, and the terminal velocity of the water, a likely exponential relationship has been developed.
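As a minimal sketch of fitting the kind of exponential air-entrainment relationship described above, the snippet below fits Q_air/Q_water = a·exp(b·x) by least squares. The functional form, the dimensionless predictor x, and all data values are assumptions for illustration, not the thesis's correlation.

```python
import numpy as np
from scipy.optimize import curve_fit

def air_ratio(x, a, b):
    """Assumed exponential model: Q_air/Q_water = a * exp(b * x)."""
    return a * np.exp(b * x)

# Hypothetical dimensionless predictor built from the air-core diameter,
# drop-shaft length and terminal velocity (an assumed grouping), with
# hypothetical measured air-to-water discharge ratios.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
beta = np.array([0.10, 0.16, 0.27, 0.42, 0.70, 1.15])

(a, b), _ = curve_fit(air_ratio, x, beta, p0=(0.1, 1.0))
print(f"fitted model: Q_air/Q_water = {a:.3f} * exp({b:.3f} * x)")
```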
382

Role of rainfall variability in the statistical structure of peak flows

Mandapaka Venkata, Pradeep 01 December 2009 (has links)
This thesis examines the role of rainfall variability and uncertainty in the spatial scaling structure of peak flows, using the Whitewater River basin in Kansas and the Iowa River basin in Iowa as illustrations. We illustrate why considering individual hydrographs at the outlet of a basin can lead to misleading interpretations of the effects of rainfall variability. The variability of rainfall is characterized in terms of storm intensity, duration, advection velocity, zero-rain intermittency, variance, and spatial correlation structure. We begin with the simple scenario of a basin receiving spatially uniform rainfall of varying intensities, durations, and advection velocities. We then use realistic space-time rainfall fields obtained from a popular rainfall model that can reproduce the desired storm variability and spatial structure. We employ a recent formulation of flow velocity for a network of channels and calculate peak flow scaling exponents, which are then compared to the scaling exponent of the channel network width function maxima. The study then investigates the role of hillslope characteristics in the peak flow scaling structure. The basin response at smaller scales is driven by rainfall intensities (and spatial variability), while the larger-scale response is dominated by rainfall volume as the river network aggregates the variability at smaller scales. The results obtained from the simulation scenarios can be used to make rigorous interpretations of the peak flow scaling structure obtained from the space-time rainfall model and from actual radar-rainfall events measured by the NEXRAD weather radar network. An ensemble of probable rainfall fields conditioned on a given radar-rainfall field is then generated using a radar-rainfall error model and a probable rainfall generator. The statistical structure of the ensemble fields is compared with that of the given radar-rainfall field to quantify the impact of radar-rainfall errors on (1) the spatial characterization of the rainfall events and (2) the scaling structure of the peak flows. The effect of radar-rainfall errors is to introduce spurious correlations in the radar-rainfall fields, particularly at smaller scales. However, preliminary results indicate that radar-rainfall errors do not significantly affect the peak flow scaling exponents.
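As a sketch of the peak-flow scaling analysis described above, the snippet below estimates a scaling exponent by log-log regression of peak flows against drainage area, assuming the power-law form Q_peak ∝ A^θ commonly used in this literature. The drainage areas and peak flows are hypothetical, not the thesis's simulation results.

```python
import numpy as np

area = np.array([1, 5, 20, 80, 300, 1100, 4000.0])               # drainage area, km^2
q_peak = np.array([2.1, 7.5, 22.0, 61.0, 170.0, 430.0, 1150.0])  # peak flow, m^3/s

# theta is the slope of log(Q_peak) against log(A)
theta, log_c = np.polyfit(np.log(area), np.log(q_peak), 1)
print(f"scaling exponent theta = {theta:.3f}, prefactor c = {np.exp(log_c):.3f}")
```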
383

Investigation Of Oxide Thickness Dependence Of Fowler-Nordheim Parameter B

Bharadwaj, Shashank 25 March 2004 (has links)
During recent years the thickness of the gate oxide has been reduced considerably. The progressive miniaturization of devices has caused several phenomena to emerge, such as quasi-breakdown, direct tunneling, and stress-induced leakage currents. Such phenomena significantly modify the performance of scaled-down MOSFETs. As part of this research, an effort has been made to study the performance and characteristics of the thin gate oxide of MOSFETs and the tunnel oxide of floating-gate (FG) MOS devices. The exponential dependence of the tunnel current on the oxide electric field causes critical problems in process control, so very good process control is required; this can be achieved by determining the exact value of the Fowler-Nordheim (F-N) tunneling parameter. This work is therefore also an effort to find an accurate value for the parameter B and its dependence on oxide thickness as devices are scaled down to a level where the direct tunneling mechanism gains prominence. A fully automated low-current measurement workstation capable of resolving currents as low as 10^-15 A was set up as part of this research. C-V and I-V curves were analyzed to extract and investigate the oxide thickness dependence of the F-N parameter B. For oxide thicknesses in the range of 10-13 nm, B ranged between 260 and 267, so it can be said that B is not sensitive to changes in oxide thickness in this range. However, for thicknesses around 7 nm a wide variety of values was obtained (B ranged from 260 to 454). This can be attributed to the enhancement of the leakage current due to direct tunneling. Hence, to maintain tight control over VT for an NVM, new algorithms need to be developed for even better process control at oxide thicknesses of around 7 nm and below.
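The abstract describes extracting the F-N parameter B from I-V curves. Below is a minimal sketch assuming the standard Fowler-Nordheim form J = A_FN·E²·exp(−B/E), so that B is the negative slope of ln(J/E²) against 1/E; the oxide thickness, gate area, and I-V samples are hypothetical and only illustrate the extraction procedure.

```python
import numpy as np

t_ox = 10e-7                    # oxide thickness in cm (10 nm), assumed
area = 1e-4                     # gate area in cm^2, assumed
v = np.array([8.0, 8.5, 9.0, 9.5, 10.0])             # gate voltage, V (hypothetical)
i = np.array([2e-12, 2e-11, 1.5e-10, 9e-10, 4e-9])   # measured current, A (hypothetical)

e_field = v / t_ox              # oxide field, V/cm
j = i / area                    # current density, A/cm^2

# F-N plot: ln(J/E^2) versus 1/E is linear with slope -B
slope, intercept = np.polyfit(1.0 / e_field, np.log(j / e_field**2), 1)
b_mv_per_cm = -slope / 1e6      # express B in MV/cm
print(f"F-N parameter B ~ {b_mv_per_cm:.0f} MV/cm")
```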
384

Effect of the photosensitizer butyl toluidine blue in antimicrobial photodynamic therapy for the treatment of experimental periodontitis in rats

Nuernberg, Marta Aparecida Alberton. January 2019 (has links)
Advisor: Leticia Helena Theodoro / Co-advisor: Valdir Gouveia Garcia / Committee: Leonardo Perez Faverani / Committee: Edilson Ervolino / Committee: Cassius Carvalho Torres Pereira / Committee: Mark Wainwright / Abstract: The present study evaluated for the first time in vivo the effects of three concentrations of butyl toluidine blue (BuTB) as a photosensitizing agent in antimicrobial photodynamic therapy (aPDT), as an adjuvant therapy to scaling and root planing (SRP), for the treatment of experimental periodontitis (EP) in rats. EP was induced by placing a cotton thread around the lower left first molar. The animals were then randomly distributed, using a computer-generated table, into seven groups of 15 animals each, according to the following treatments: SRP (n = 15), SRP followed by local irrigation with physiological saline solution; BuTB-0.1 (n = 15), SRP followed by local application of 0.1 mg/mL BuTB; aPDT-0.1 (n = 15), SRP followed by local application of 0.1 mg/mL BuTB and irradiation with an InGaAlP diode laser (DL) (660 nm, 40 mW, 60 s, 2.4 J); BuTB-0.5 (n = 15), SRP followed by local application of 0.5 mg/mL BuTB; aPDT-0.5 (n = 15), SRP followed by local application of 0.5 mg/mL BuTB and DL irradiation; BuTB-2.0 (n = 15), SRP followed by local application of 2 mg/mL BuTB; aPDT-2.0 (n = 15), SRP followed by local application of 2 mg/mL BuTB and DL irradiation. Five animals from each group were euthanized at 7, 15 and 30 days post-treatment. The furcation area of the first lower molar was submitted to histological, histometric and immunohistochemical analyses to identify TGF-β1, OCN an... (Complete abstract: click electronic access below) / Doctorate
385

Performance and Power Optimization of GPU Architectures for General-purpose Computing

Wang, Yue 18 June 2014 (has links)
Power-performance efficiency has become a central challenge in heterogeneous processing platforms, since power constraints have to be met without hindering high performance. In this dissertation, a framework for optimizing the power and performance of GPUs in the context of general-purpose computing on GPUs (GPGPU) is proposed. To reduce the leakage power of caches in GPUs, we dynamically switch the L1 and L2 caches into low-power modes during periods of inactivity. The L1 cache can be put into a low-leakage (sleep) state when a processing unit is stalled because no threads are ready to be scheduled, and the L2 cache can be put into a sleep state during idle periods when there are no memory requests. The sleep mode is state-retentive, which obviates the need to flush the caches after they are woken up, thereby avoiding any performance degradation. Experimental results indicate that this technique reduces leakage power by 52% on average. Further, to improve performance, we redistribute the GPGPU workload across the computing units of the GPU during application execution. The fundamental idea is to monitor the workload on each multi-processing unit and redistribute it by having a portion of its unfinished threads executed on a neighboring multi-processing unit. Experimental results show that this technique improves the performance of the GPGPU workload by 15.7%. Finally, to improve both the performance and the dynamic power of GPUs, we propose two dynamic frequency scaling (DFS) techniques implemented on CPU host threads. The first is motivated by the significance of pipeline stalls during GPGPU execution: it applies a feedback control algorithm, Proportional-Integral-Derivative (PID), to regulate the frequency of the parallel processors and memory channels based on the occupancy of the memory buffering queues. The second aims to maximize the average throughput of all parallel processors under dynamic power constraints; we formalize this target as a linear programming problem and solve it at runtime. According to the simulation results, the first technique achieves more than 22% power savings with a 4% improvement in performance, and the second saves 11% power consumption with a 9% performance improvement. The contributions of this dissertation represent a significant advance toward improving performance and reducing energy consumption in GPGPU.
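As a sketch of the first DFS technique described above, the snippet below implements a simple PID controller that adjusts core frequency from the occupancy of a memory buffering queue. The gains, setpoint, and frequency range are assumptions for illustration, not values from the dissertation.

```python
class PidDfsController:
    """PID controller mapping memory-queue occupancy to a core frequency (GHz)."""

    def __init__(self, kp=0.4, ki=0.05, kd=0.02,
                 target_occupancy=0.5, f_min=0.5, f_max=1.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_occupancy          # desired queue occupancy in [0, 1]
        self.f_min, self.f_max = f_min, f_max   # frequency bounds, GHz (assumed)
        self.integral = 0.0
        self.prev_error = 0.0
        self.freq = f_max

    def update(self, occupancy):
        """Return a new frequency given the current memory-queue occupancy."""
        error = self.target - occupancy         # queue filling up -> negative error
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # A full queue (memory-bound phase) pushes frequency down;
        # an emptying queue pushes it back up toward f_max.
        self.freq += self.kp * error + self.ki * self.integral + self.kd * derivative
        self.freq = max(self.f_min, min(self.f_max, self.freq))
        return self.freq

# Example: occupancy samples from a memory-bound phase followed by recovery.
ctrl = PidDfsController()
for occ in [0.9, 0.95, 0.8, 0.6, 0.4, 0.2]:
    print(f"occupancy {occ:.2f} -> {ctrl.update(occ):.2f} GHz")
```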
386

Experimental verification of the simplified scaling laws for bubbling fluidized beds at large scales

Sanderson, Philip John, 1974- January 2002 (has links)
Abstract not available
387

Debridement Of Subgingival Periodontally Involved Root Surfaces With A Micro-Applicator Brush: A Macroscopic And Scanning Electron Microscope Study

Carey, Helen January 1998 (has links)
Master of Science in Dentistry / This work was digitised and made available on open access by the University of Sydney, Faculty of Dentistry, and Sydney eScholarship. It may only be used for the purposes of research and study. Where possible, the Faculty will try to notify the author of this work. If you have any inquiries or issues regarding this work being made available, please contact the Sydney eScholarship Repository Coordinator - ses@library.usyd.edu.au
388

A reformulation of Coombs' Theory of Unidimensional Unfolding by representing attitudes as intervals

Johnson, Timothy Kevin January 2004 (has links)
An examination of the logical relationships between attitude statements suggests that attitudes can be ordered according to favourability, and can also stand in relationships of implication to one another. The traditional representation of attitudes, as points on a single dimension, is inadequate for representing both these relations, but representing attitudes as intervals on a single dimension can incorporate both favourability and implication. An interval can be parameterised using its two endpoints or, alternatively, by its midpoint and latitude. Using this latter representation, the midpoint can be understood as the 'favourability' of the attitude, while the latitude can be understood as its 'generality'. It is argued that the generality of an attitude statement is akin to its latitude of acceptance, since a greater semantic range increases the likelihood of agreement. When Coombs' Theory of Unidimensional Unfolding is reformulated using the interval representation, the key question is how to measure the distance between two intervals on the dimension. There are innumerable ways to answer this question, but the present study restricts attention to eighteen possible 'distance' measures. These measures are based on nine basic distances between intervals on a dimension, as well as two families of models, the Minkowski r-metric and the Generalised Hyperbolic Cosine Model (GHCM). Not all of these measures are distances in the strict sense, as some of them fail to satisfy all the metric axioms. To distinguish between these eighteen 'distance' measures, two empirical tests, the triangle inequality test and the aligned stimuli test, were developed and tested using two sets of attitude statements. The subject matter of the sets of statements differed but the underlying structure was the same. It is argued that this structure can be known a priori using the logical relationships between the statements' predicates, and empirical tests confirm the underlying structure and the unidimensionality of the statements used in this study. Consequently, predictions of preference could be ascertained from each model and either confirmed or falsified by subjects' judgements. The results indicated that the triangle inequality failed in both stimulus sets. This suggests that the judgement space is not metric, contradicting a common assumption of attitude measurement. This result also falsified eleven of the eighteen 'distance' measures because they predicted the satisfaction of the triangle inequality. The aligned stimuli test used stimuli that were aligned at the endpoint nearest to the ideal interval. The results indicated that subjects preferred the narrower of the two stimuli, contrary to the predictions of six of the measures. Since these six measures all passed the triangle inequality test, only one measure, the GHCM (item), satisfied both tests. However, the GHCM (item) only passes the aligned stimuli tests with additional constraints on its operational function. If it incorporates a strictly log-convex function, such as cosh, the GHCM (item) makes predictions that are satisfied in both tests. This is also evidence that the latitude of acceptance is an item rather than a subject or combined parameter.
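To make the interval representation concrete, the sketch below encodes an attitude as a midpoint (favourability) and latitude (generality) and checks the triangle inequality for one candidate distance between intervals. The particular distance used is an illustrative choice, not one of the eighteen measures examined in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    midpoint: float   # favourability
    latitude: float   # generality (half-width of the interval)

def distance(a: Interval, b: Interval) -> float:
    # Illustrative "distance": difference of midpoints plus difference of latitudes.
    return abs(a.midpoint - b.midpoint) + abs(a.latitude - b.latitude)

def satisfies_triangle(a: Interval, b: Interval, c: Interval) -> bool:
    return distance(a, c) <= distance(a, b) + distance(b, c) + 1e-12

x, y, z = Interval(-1.0, 0.5), Interval(0.0, 2.0), Interval(1.5, 0.3)
print(satisfies_triangle(x, y, z))   # True for this metric-like measure
```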
389

Efficient task scheduling with controlled complexity for real-time systems

Muhammad, F. 09 April 2009 (has links) (PDF)
The performance of scheduling algorithms has a direct impact on the performance of the complete system. Real-time scheduling algorithms have optimal theoretical schedulability bounds, but this optimality is often achieved at the price of a high number of scheduling events to handle (task preemptions and migrations) and of significant algorithmic complexity. Our view is that by exploiting task parameters more effectively, these algorithms can be made more efficient at a controlled cost, with the goal of improving the Quality of Service (QoS) of applications. We first propose uniprocessor scheduling algorithms that increase the quality of service of hybrid applications: under overload, the execution of soft-constraint tasks is maximized while the deadlines of hard-constraint tasks are guaranteed. The scheduling cost of these algorithms (number of preemptions) is also reduced through better exploitation of the implicit and explicit parameters of the tasks. This reduction benefits not only system performance but also energy consumption. We also propose a technique combined with DVFS (dynamic voltage and frequency scaling) to minimize the number of operating-point changes, since a frequency change implies processor idle time and energy consumption. Multiprocessor scheduling algorithms based on the fluid scheduling model (notion of fairness) reach optimal schedulability bounds; however, this fairness is only guaranteed under assumptions that are unrealistic in practice, because of the very high numbers of preemptions and task migrations they induce. In this thesis an algorithm (ASEDZL) is proposed that is not based on the fluid scheduling model. It not only reduces preemptions and task migrations but also relaxes the assumptions imposed by that scheduling model. Finally, we propose using ASEDZL in a hierarchical scheduling approach, which yields better results than classical techniques.
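As context for the preemption counts discussed above, the sketch below simulates classical preemptive EDF for periodic tasks and counts preemptions. It is a textbook baseline for illustration only, not the ASEDZL algorithm proposed in the thesis, and the task set is hypothetical.

```python
def edf_preemptions(tasks, horizon):
    """tasks: list of (period, wcet) with implicit deadlines. Returns preemption count."""
    remaining = {i: 0.0 for i in range(len(tasks))}   # remaining execution per task
    deadline = {i: None for i in range(len(tasks))}
    running, preemptions = None, 0
    for t in range(horizon):
        for i, (period, wcet) in enumerate(tasks):
            if t % period == 0:                       # new job release
                remaining[i] = wcet
                deadline[i] = t + period
        ready = [i for i in remaining if remaining[i] > 0]
        if ready:
            chosen = min(ready, key=lambda i: deadline[i])  # earliest deadline first
            if running is not None and running != chosen and remaining[running] > 0:
                preemptions += 1                      # unfinished task displaced
            running = chosen
            remaining[chosen] -= 1
        else:
            running = None
    return preemptions

# Hypothetical task set (period, wcet), utilization ~0.93, over a 70-tick horizon.
print(edf_preemptions([(5, 2), (7, 3), (10, 1)], horizon=70))
```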
390

On Perceived Exertion and its Measurement

Borg, Elisabet January 2007 (has links)
The general aim of the thesis is to answer questions on general and differential aspects of perceived exertion and on the measurement of its intensity variation. Overall perceived exertion is commonly treated as a unidimensional construct; this thesis also explores its multidimensional character. Four empirical studies are summarized (Studies I-IV). Psychophysical power functions of perceived exertion obtained with the new improved Borg CR100 (centiMax) scale were found to be consistent with results obtained with absolute magnitude estimation, and with the classical Borg CR10 and RPE scales. Women gave significantly higher perceived exertion scale values than men for the same levels of workload on a bicycle ergometer, which agrees with the fact that they were physically less strong than the men. With regard to the measurement of “absolute” levels of intensity, RPE- and CR-scale values were validated by physiological measurements of heart rate and blood lactate. Predicted values of maximal individual performance obtained from psychophysical functions agreed well with actual maximal performance on the bicycle ergometer. This confirms the validity of the RPE and CR scales for measuring perceptual intensity and their value for interindividual comparisons. To study the multidimensional character of perceived exertion, 18 symptoms were measured with a CR scale, both in a questionnaire and in bicycle ergometer work tests. Five factors were extracted for the questionnaire: (1) Muscles and joints; (2) Perceived exertion; (3) Annoyance/lack of motivation; (4) Head/stomach symptoms; and (5) Cardiopulmonary symptoms. Four factors were extracted for the bicycle max test: (1) Physical distress; (2) Central perceived exertion; (3) Annoyance/lack of motivation; and (4) Local perceived exertion. The questionnaire is suggested for clinical use, to let patients express a variety of symptoms. The thesis also resulted in improvements of the Borg CR100 scale, and an extended use of the scale is recommended.
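As a sketch of the psychophysical power functions mentioned above, the snippet below fits R = c·S^n by log-log regression to ratings against ergometer workload. The workload and CR100 (centiMax) rating values, and hence the resulting exponent, are hypothetical and only illustrate the estimation step.

```python
import numpy as np

workload = np.array([50, 100, 150, 200, 250.0])     # bicycle ergometer, W (hypothetical)
rating = np.array([8.0, 19.0, 33.0, 50.0, 71.0])    # CR100 (centiMax) ratings (hypothetical)

# Stevens-type power function R = c * S**n: n is the slope on log-log axes
n, log_c = np.polyfit(np.log(workload), np.log(rating), 1)
print(f"exponent n = {n:.2f}, multiplier c = {np.exp(log_c):.3f}")
# An exponent above 1 indicates positively accelerating growth of perceived
# exertion with workload.
```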
