About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Fluid Mud Formation in the Petitcodiac River, New Brunswick, Canada

Heath, Kristy Marie January 2009 (has links)
Thesis advisor: Gail C. Kineke / Experiments were conducted in the Petitcodiac River in New Brunswick, Canada during June and August 2006 to study high concentrations of suspended sediment in a turbulent system. This study evaluates the conditions necessary for fluid mud formation by investigating 1) the suppression of turbulence at gradient Richardson numbers greater than 0.25; 2) a threshold condition for the amount of sediment a flow can maintain in turbulent suspension; and 3) the influence of flocculation on vertical suspended-sediment transport. Direct measurements of salinity, temperature, current velocity, and suspended-sediment concentration were collected during accelerating and decelerating flows and when fluid mud formed. In June, current velocities were typically above 1 m s⁻¹ and suspended-sediment concentrations were generally less than 10 g l⁻¹. In August, current velocities were typically less than 1.5 m s⁻¹, suspended-sediment concentrations were greater than 10 g l⁻¹, and a high-concentration bottom layer formed rapidly during decelerating flood currents. Gradient Richardson numbers for concentrations greater than 10 g l⁻¹ were generally greater than 0.25, suggesting strong density gradients have the ability to suppress turbulence. Results from the Petitcodiac suggest a carrying-capacity threshold might exist, but one based on a critical gradient Richardson number between 1.0 and 2.0 rather than the previously accepted value of 0.25. Differences in the evolution of disaggregated grain-size distributions for settling suspensions suggest flocculation plays an important role in fluid mud formation by enhancing the settling of fine sediments. / Thesis (MS) — Boston College, 2009. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Geology and Geophysics.
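The gradient Richardson number criterion this abstract relies on (turbulence suppression above Ri ≈ 0.25) can be sketched numerically. The profile values below are invented for illustration; only the standard definition Ri = -(g/ρ)(∂ρ/∂z)/(∂u/∂z)² and the 0.25 threshold come from the literature:

```python
def gradient_richardson(rho, u, z, g=9.81):
    """Gradient Richardson number Ri = -(g/rho) * (d rho/dz) / (du/dz)^2
    for a two-point profile (z positive upward); Ri > 0.25 indicates
    stratification strong enough to suppress shear turbulence."""
    dz = z[1] - z[0]
    drho_dz = (rho[1] - rho[0]) / dz      # density decreasing upward -> negative
    du_dz = (u[1] - u[0]) / dz            # velocity shear
    rho_mean = 0.5 * (rho[0] + rho[1])
    return -(g / rho_mean) * drho_dz / du_dz ** 2

# Hypothetical profile: sediment-laden (denser) water below, sheared flow above.
ri = gradient_richardson(rho=[1080.0, 1020.0], u=[0.2, 0.8], z=[0.0, 1.0])
turbulence_suppressed = ri > 0.25
```

With density decreasing upward the numerator is positive, so a strongly stratified, weakly sheared suspension yields Ri well above the 0.25 threshold.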
252

Hon presenterar sig ofta som en diva : En kvalitativ studie om vilka roller kvinnor respektive män tilldelas i talkshowen Skavlan / She often presents herself as a diva : A qualitative study about which roles women and men are assigned to in the talk show Skavlan

Johansson, Katja January 2018 (has links)
The aim of this study was to examine how women and men are presented in public-service entertainment journalism, specifically how they are presented in the Swedish-Norwegian television talk show Skavlan and which roles they are assigned there. The main question of the study was "What roles are assigned to women and men in Skavlan?" By roles, I mean roles connected to the power relations between women and men described in gender theory. To answer the main question and fulfil the purpose of the study, I analyzed four interviews from the latest season of Skavlan in which both men and women are interviewed, using critical discourse analysis following Fairclough. The results show that women and men are assigned different kinds of roles in Skavlan and that these roles are gender-stereotyped. Men are assigned roles such as the authoritarian and the knowledgeable, while women are assigned roles such as the responsible and the high-performing. Men are also presented as superior to women, and in the majority of the interviews men use verbal suppression techniques against each other and against women. At times women can also be perceived as using verbal suppression techniques, although it is more evident that men do so.
253

Molecular studies on hepatitis B virus-induced hepatocellular carcinoma by EST sequencing and suppression subtractive hybridization.

January 2000 (has links)
Yu Chi Hung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 124-139). / Abstracts in English and Chinese. / Acknowledgement --- p.i / Table of Contents --- p.ii / Abbreviations --- p.iv / Abstract --- p.v / 論文摘要 --- p.vi / Chapter 1 --- Introduction --- p.1 / 1.1 --- General introduction / 1.2 --- HBV and its potential oncogenic properties / 1.3 --- Aim of the present study / 1.4 --- Expressed sequence tag (EST) analysis: an approach to reveal gene expression patterns in a specific tissue / 1.5 --- cDNA subtraction / Chapter 2 --- Materials and Methods --- p.17 / 2.1 --- Plating out the adult human normal liver cDNA library / 2.2 --- PCR amplification of cloned human normal liver cDNA inserts / 2.3 --- Cycle sequencing of cloned human normal liver cDNA inserts / 2.4 --- mRNA preparation from the HCC tissue and its surrounding normal counterpart / 2.5 --- PCR-Select cDNA subtraction / 2.6 --- Construction of the HCC subtracted cDNA library by the T/A cloning method / 2.7 --- PCR amplification of cloned subtracted cDNA / 2.8 --- Cycle sequencing of cloned subtracted cDNA / 2.9 --- Sequence analysis / 2.10 --- Differential hybridization of HCC subtracted clones / Chapter 3 --- Results --- p.46 / 3.1 --- The sequencing results of adult human normal liver cDNA clones / 3.2 --- Categorization of ESTs sequenced from the adult normal liver / 3.3 --- Adaptor ligation efficiency analysis / 3.4 --- Primary and secondary PCR amplification / 3.5 --- PCR analysis of subtraction efficiency / 3.6 --- The sequencing results of subtracted HCC cDNA clones / 3.7 --- Categorization of ESTs sequenced from the subtracted HCC cDNA library / 3.8 --- Differential hybridization of subtracted cDNA clones / Chapter 4 --- Discussion --- p.90 / 4.1 --- Characterization of the ESTs generated from the human normal liver cDNA library / 4.2 --- EST analysis of subtracted HCC cDNA clones / 4.3 --- Candidate genes differentially expressed in HCC / Appendix A --- The coordinates of dot blots (in numerical order according to clone numbers) / Appendix B --- The coordinates of dot blots (in alphabetical order according to putative identity) / References --- p.124
254

Extending Complex Event Processing for Advanced Applications

Wang, Di 30 April 2013 (has links)
Recently, numerous emerging applications, ranging from on-line financial transactions, RFID-based supply-chain management, and traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the need to process event data streams in real time, Complex Event Processing (CEP) technology has been developed, with a focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications that require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services. First, existing CEP technology is limited in its capability to react to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active-rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries in high-volume stream execution. We therefore introduce a novel stream-oriented transactional model along with a family of stream-transaction scheduling algorithms that ensure the correctness of concurrent stream execution. We then demonstrate the proposed technology by applying it to a real-world health-care system and evaluate the stream-transaction scheduling algorithms extensively using a real-world workload. Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns while ensuring that nonsensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation. We then design a suite of real-time solutions that eliminate private pattern matches while maximizing overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern-match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet efficient enough to offer near-real-time system responsiveness. Third, we observe that in many real-world object-monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics such as tracking, alerting, and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event. Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. On this model, we show how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events. More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health-care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
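The forward-backward smoothing step mentioned in the third part of the abstract is a standard hidden-Markov-model algorithm. The toy sketch below is an illustrative assumption, far simpler than the dissertation's time-varying graphical model: the transition, emission, and prior numbers are all invented, and each "state" stands for one candidate object identification:

```python
def forward_backward(obs, T, E, pi):
    """Forward-backward smoothing: posterior P(state_t | all observations)
    for a discrete HMM. T[i][j] = P(next=j | cur=i), E[i][o] = P(obs=o | state=i),
    pi = prior over initial states."""
    n = len(pi)
    # Forward pass, normalized per step to avoid underflow.
    a = [pi[i] * E[i][obs[0]] for i in range(n)]
    s = sum(a)
    alpha = [[x / s for x in a]]
    for o in obs[1:]:
        a = [sum(alpha[-1][i] * T[i][j] for i in range(n)) * E[j][o] for j in range(n)]
        s = sum(a)
        alpha.append([x / s for x in a])
    # Backward pass (scaling per step; posteriors are renormalized below).
    beta = [[1.0] * n]
    for o in reversed(obs[1:]):
        b = [sum(T[i][j] * E[j][o] * beta[0][j] for j in range(n)) for i in range(n)]
        s = sum(b)
        beta.insert(0, [x / s for x in b])
    # Combine forward and backward messages into smoothed marginals.
    post = []
    for fa, fb in zip(alpha, beta):
        p = [x * y for x, y in zip(fa, fb)]
        s = sum(p)
        post.append([x / s for x in p])
    return post

# Two monitored objects, two observation symbols; object 0 mostly emits symbol 0.
T = [[0.9, 0.1], [0.1, 0.9]]
E = [[0.8, 0.2], [0.2, 0.8]]
posteriors = forward_backward([0, 0, 1, 0], T, E, pi=[0.5, 0.5])
```

Each entry of `posteriors` is a probability distribution over object identities for one non-ID-ed event, which is the kind of probabilistic identification a downstream CEP engine could consume.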
255

Development and Validation of a Modified Clean Agent Draining Model for Total Flooding Fire Suppression Systems

Hetrick, Todd M 21 January 2009 (has links)
This project analyzes the validity of theoretical models used to predict the duration (hold time) for which a halon-replacement suppression agent will remain within a protected enclosure. Two current models and one new formulation are investigated: the sharp descending interface model (as applied in NFPA 2001, Annex C), the wide descending interface model (implemented in ISO 14520.1, Annex E), and the thick descending interface model (introduced herein). The thick interface model introduces the characteristic thickness as an additional input parameter. Experimental data from 34 full-scale tests designed to characterize the discharge and draining dynamics of seven clean extinguishing agents (CEA) are used to assess model validity. For purposes of model validation, the characteristic thickness is regressed from the experimental data, although further work may be required to establish the independence of this parameter from other system design and environmental variables. Results show that the validity of the wide and sharp interface models is highly sensitive to the threshold of agent concentration decay being modeled, whereas the thick interface prediction method demonstrates greater robustness at any modeled threshold. When the hold time is defined as a 15% decay in agent concentration, experimentally obtained hold times are roughly 10% shorter than sharp interface predictions, 60% longer than wide interface predictions, and 30% longer than the thick interface model predicts.
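The hold-time endpoint used above (a 15% decay in agent concentration) can be extracted from a measured concentration trace by simple interpolation. This generic post-processing sketch is an illustrative assumption; it does not implement any of the three descending-interface models, and the trace values are invented:

```python
def hold_time(times, conc, decay_fraction=0.15):
    """First time at which the agent concentration falls below
    (1 - decay_fraction) of its initial value, linearly interpolated
    between samples; returns None if the threshold is never crossed."""
    threshold = conc[0] * (1.0 - decay_fraction)
    for i in range(1, len(conc)):
        if conc[i] < threshold <= conc[i - 1]:
            # Linear interpolation within the bracketing interval.
            frac = (conc[i - 1] - threshold) / (conc[i - 1] - conc[i])
            return times[i - 1] + frac * (times[i] - times[i - 1])
    return None

# Hypothetical concentration trace (% v/v) sampled every 2 minutes.
t_hold = hold_time([0, 2, 4, 6, 8, 10], [8.0, 7.8, 7.4, 6.9, 6.3, 5.6])
```

A model's predicted hold time at the same threshold can then be compared directly against this experimentally derived value.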
256

Magnitude do reforço como uma variável determinante da supressão condicionada da resposta humana de clicar / Reinforcer magnitude as a determining variable of conditioned suppression of the human clicking response

Silva, Ana Paula de Oliveira 11 November 2010 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The present study investigated whether reinforcement magnitude is a relevant variable in determining conditioned suppression of the human clicking response produced by the presentation of a negative punishing stimulus. Thirty participants were recruited and given the task of assembling 24-piece puzzles on a computer. The response chain observed was clicking, dragging, and fitting puzzle pieces. Each side of a piece placed in the correct position was worth one point and, at the end of the experiment, the accumulated points were exchanged for cash. Participants were divided into two groups (A and B) according to the maximum amount of cash received (R$10.00 or R$20.00, respectively). Subsequently, given the results obtained with groups A and B, a new group of 6 participants was created; they received R$10.00 and were exposed to ten presentations of the programmed conditioned aversive stimuli, lasting 15 seconds each (groups A and B were exposed to only 3 presentations of this type, lasting 1 minute each). Each participant, regardless of group, performed the Task Test and, if successful, started the Baseline. A free-operant procedure and an FR1 reinforcement schedule were used in both the Baseline and the Experimental Phase. The consequences for hits and errors were also the same. The Experimental Phase began once the response rate and the rate of obtained reinforcers had stabilized. For participants of groups A and B, a VT (variable-time) schedule came into force in that phase, superimposed on the ongoing FR1 schedule: at three moments of the game the computer screen turned green for 60 seconds and, at the end of that period, a fixed number of points (four points, equivalent to around 10.5% of the total points obtainable on a puzzle) was withdrawn from the points accumulated so far. The loss was signaled by the computer with a soft sound (similar to coins falling on the floor). For Group C the conditions were identical except for the number of exposures to the conditioned stimuli. None of the participants clearly showed a performance that could be considered response suppression. Performances were quite varied and not consistent. The curves of the Experimental Phase show variations similar to those observed in the Baseline and are not consistently and exclusively contingent on (and contiguous with) the presentation of the green screen. / O presente trabalho pretendeu investigar se a magnitude do reforço é uma variável relevante na determinação da supressão condicionada da resposta de clicar, produzida pela apresentação de um estímulo punidor negativo, em humanos. Trinta participantes foram recrutados e tinham como tarefa montar quebra-cabeças de 24 peças em um computador. A cadeia de respostas observada foi clicar, arrastar e acoplar peças dos quebra-cabeças. Cada lado da peça colocado na posição correta valia um ponto e, ao final do experimento, os pontos acumulados eram trocados por dinheiro. Os participantes foram distribuídos em 2 grupos (A e B) em função da quantidade máxima de dinheiro recebida (R$ 10,00 ou R$ 20,00, respectivamente). Posteriormente, diante dos resultados obtidos com os grupos A e B, um novo grupo foi criado com 6 participantes que receberam R$ 10,00 e foram expostos a dez apresentações dos estímulos aversivos condicionados programados com duração de 15 segundos cada (enquanto os grupos A e B foram expostos a apenas 3 apresentações desse tipo, com duração de 1 minuto cada). Cada participante, independentemente do grupo a que pertencia, realizava o Teste de Tarefa e, obtendo êxito, iniciava a Linha de Base. Foram utilizados o procedimento de operante livre e o esquema de reforçamento FR1 tanto na Linha de Base quanto na Fase Experimental. As conseqüências para acertos e erros também eram as mesmas. A Fase Experimental era iniciada assim que ocorresse a estabilização da taxa de respostas e da taxa de reforços obtidos. Para os participantes do Grupo A e do Grupo B, um esquema VT ou tempo variável entrava em vigor nessa fase, sobreposto ao esquema FR1 vigente: em três momentos do jogo a tela do computador ficava verde por 60 segundos e ao final desse período, um número fixo de pontos (quatro pontos, equivalente a em torno de 10,5% do total de pontos possíveis de serem obtidos em um quebra-cabeça) era retirado do montante de pontos obtidos até o momento. A perda de pontos era sinalizada pelo computador através de um som ameno (semelhante ao de moedas caindo no chão). Para o Grupo C as condições eram idênticas, exceto o número de exposições aos estímulos condicionados. Nenhum dos participantes apresentou claramente um desempenho que poderia ser considerado supressão de resposta. Os desempenhos foram bastante variados, e não consistentes. As curvas da Fase Experimental apresentam variações semelhantes às observadas na Linha de Base e não são consistente e exclusivamente contingentes (e contíguas) à apresentação da tela verde.
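Whether responding was suppressed is conventionally quantified in the conditioned-suppression literature with a suppression ratio, B/(A+B), where A and B are response counts in equal windows before and during the signaled stimulus. The abstract does not report this statistic; the counts below are hypothetical, purely to illustrate the measure:

```python
def suppression_ratio(during, before):
    """Suppression ratio B / (A + B): A = responses in a window before the
    signaled stimulus, B = responses during it. 0.5 indicates no
    suppression; values approaching 0.0 indicate complete suppression."""
    return during / (before + during)

# Hypothetical click counts in equal-length windows around the
# green-screen (signaled point-loss) period.
no_suppression = suppression_ratio(during=40, before=40)     # 0.5
strong_suppression = suppression_ratio(during=5, before=45)  # 0.1
```

A study like the one above would look for ratios well below 0.5 during the green-screen periods as evidence of conditioned suppression.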
257

Differential gene expression during neonatal myocardial development revealed by suppression subtractive hybridization & expressed sequence tag sequencing. / CUHK electronic theses & dissertations collection

January 2000 (has links)
Stephen Siu-chung Chim. / "June 2000." / Thesis (Ph.D.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (p. 152-166). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
258

Neural mapping of binocular and amblyopic suppression

Chima, Akash S. January 2015 (has links)
Inter-ocular suppression occurs when very different images are presented to each eye. Diplopia ensues if the different images are superimposed and perceived. The brain removes this unfavourable viewing experience by suppressing one eye's input to enable clear single vision. Inter-ocular suppression during visual development occurs in response to sufficiently disparate images caused by strabismus (misalignment of the visual axes) or anisometropia (uncorrected difference in refractive error), and if persistent it may result in amblyopia: reduced visual sensitivity, usually in one eye, across a range of visual functions, that cannot be corrected by refraction. Furthermore, binocular vision is reduced or absent. The depth and extent of suppression are measured across the central visual field in healthy participants with monocularly blurred vision, healthy participants with monocularly reduced luminance using neutral density (ND) filters, and participants with naturally disrupted binocular vision and/or amblyopia. Suppression of spatial stimuli defined by luminance (L) and luminance-modulated noise (LM) was compared to that measured for stimuli defined by contrast-modulated noise (CM), for which there is no change in mean luminance. For all stimuli, suppression depth increased with increased imbalance of binocular input. Suppression was of similar depth across the visual field with imposed blur, and localised central suppression was found with ND filters. Microstrabismics showed central suppression, while strabismic amblyopes showed hemifield suppression in addition to central suppression. For all participants, suppression was measured to be deeper for CM spatial stimuli than for LM spatial stimuli. This is suggested to result from CM stimuli engaging more binocular processing mechanisms than LM stimuli, thereby being more sensitive to disruptions of binocularity such as those produced in the participants of the present study. CM stimuli are therefore more sensitive for detecting suppression, which is associated with amblyopia.
259

Estudo numérico de uma asa com controle ativo de flutter por realimentação da pressão medida num ponto / Numerical study of a wing with active flutter control by feedback of the pressure measured at one point

Tiago Francisco Gomes da Costa 06 July 2007 (has links)
Neste trabalho é desenvolvido um sistema de controle ativo para supressão de flutter de uma asa utilizando-se sensores de pressão em pontos estratégicos de sua superfície. O flutter é um fenômeno aeroelástico que caracteriza um acoplamento instável entre estrutura flexível e escoamento aerodinâmico não estacionário. Quando a modificação da estrutura ou da aerodinâmica da asa não é viável, o uso de sistemas de controle passa a ser uma boa opção. Para o desenvolvimento do sistema de controle proposto, é primeiramente desenvolvido um modelo numérico de asa flexível. Com esse modelo numérico e a pressão na superfície da asa medida em certos pontos e realimentada ao sistema controlador, são determinadas correções no ângulo de uma superfície de controle no bordo de fuga. A tentativa de se utilizar um sistema de controle bem simples, com o uso de um único sensor de pressão, mostra a viabilidade de se implementar um sistema deste tipo em aeronaves reais. Esse sistema pode tornar-se uma alternativa aos desenvolvidos até então com o uso de acelerômetros, além de ser útil em sistemas onde se procura prever o estol e observar o comportamento da distribuição de pressão sobre a asa em vôo. / In this work, an active control system for wing flutter suppression using pressure sensors at strategic points on the wing surface is developed. Flutter is an aeroelastic phenomenon characterized by an unstable coupling between a flexible structure and a non-stationary aerodynamic flow. When changing the wing structure or its aerodynamics is not viable, the use of automatic control systems becomes a good option. For the development of the proposed control system, a numerical model of a finite flexible wing is first built. With this model, the pressure over the wing surface is read at certain points and fed back to the control system, which determines corrections to the angle of a control surface on the trailing edge. The attempt to use a very simple control system, with a single pressure sensor, shows the viability of implementing this kind of system in real aircraft. Such a system may become an alternative to those developed so far using accelerometers, and it could also be useful in systems intended to predict stall and to observe the behavior of the pressure distribution over the wing in flight.
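The feedback idea described above can be caricatured with a one-degree-of-freedom oscillator whose net damping is negative, a minimal stand-in for flutter. Everything here is an illustrative assumption, not the thesis's aeroservoelastic model: the gains and coefficients are invented, and the velocity signal stands in for the sensed surface pressure:

```python
def simulate(feedback_gain, steps=2000, dt=0.005):
    """Semi-implicit Euler integration of a 1-DOF aeroelastic toy model:
    x'' + (c + g)*x' + k*x = 0, where c < 0 represents net negative
    aerodynamic damping (the flutter condition) and g is the damping
    added by feeding the sensed signal back to the control surface."""
    k, c = 25.0, -0.4    # stiffness and (negative) net damping
    x, v = 0.01, 0.0     # small initial disturbance
    peak = 0.0
    for _ in range(steps):
        a = -k * x - (c + feedback_gain) * v
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return abs(x), peak

open_final, open_peak = simulate(feedback_gain=0.0)      # unstable: oscillation grows
closed_final, closed_peak = simulate(feedback_gain=2.0)  # feedback restores damping
```

Without feedback the disturbance grows exponentially; with the feedback term the total damping becomes positive and the oscillation dies out, which is the essence of active flutter suppression.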
260

Filtros de Kalman no tempo e freqüência discretos combinados com subtração espectral / Discrete-time and discrete-frequency Kalman filters combined with spectral subtraction

Leandro Aureliano da Silva 20 July 2007 (has links)
Este trabalho tem a finalidade de apresentar e comparar técnicas de redução de ruído utilizando como critérios de avaliação a mínima distorção espectral e a redução de ruído, na reconstrução dos sinais de voz degradados por ruído. Para tanto, utilizou-se os filtros de Kalman de tempo discreto e de freqüência discreta em conjunto com a técnica de subtração espectral de potência. Os sinais utilizados foram contaminados por ruídos branco e colorido, e a avaliação do desempenho dos algoritmos foi realizada tendo-se como parâmetros a relação sinal/ruído segmentada (SNRseg) e a distância de Itakura-Saito (d(a,b)). Após o processamento, verificou-se que a técnica, proposta neste trabalho, de filtragem de Kalman no tempo em conjunto com a subtração espectral de potência, apresentou resultados um pouco melhores em relação à filtragem de Kalman na freqüência em conjunto com a subtração espectral de potência. / This work presents and compares noise-reduction techniques, using minimal spectral distortion and the amount of noise reduction as evaluation criteria for the reconstruction of speech signals degraded by noise. To that end, discrete-time and discrete-frequency Kalman filters were used together with the power spectral subtraction technique. The test signals were corrupted by white and colored noise, and the performance of the algorithms was evaluated using the segmental signal-to-noise ratio (SNRseg) and the Itakura-Saito distance (d(a,b)). After processing, time-domain Kalman filtering combined with power spectral subtraction showed slightly better results than frequency-domain Kalman filtering combined with power spectral subtraction.
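Power spectral subtraction, the technique the abstract combines with Kalman filtering, can be sketched in a few lines. This is the textbook power-subtraction form with half-wave rectification, using non-overlapping rectangular frames for brevity (a real system would use overlapping windows); it is not the author's exact processing chain, and the Kalman filtering stage is omitted:

```python
import numpy as np

def power_spectral_subtraction(noisy, noise_sample, frame_len=256):
    """Frame-wise power spectral subtraction: estimate the noise power
    spectrum from a noise-only sample, subtract it from each frame's
    power spectrum, floor at zero, and resynthesize with the noisy phase."""
    usable = len(noise_sample) // frame_len * frame_len
    noise_frames = noise_sample[:usable].reshape(-1, frame_len)
    noise_psd = np.mean(np.abs(np.fft.rfft(noise_frames, axis=1)) ** 2, axis=0)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        spec = np.fft.rfft(noisy[start:start + frame_len])
        power = np.maximum(np.abs(spec) ** 2 - noise_psd, 0.0)  # half-wave rectify
        out[start:start + frame_len] = np.fft.irfft(
            np.sqrt(power) * np.exp(1j * np.angle(spec)), n=frame_len)
    return out

# Toy demonstration: a sinusoid buried in white noise, with the noise
# statistics estimated from a separate noise-only recording.
rng = np.random.default_rng(0)
t = np.arange(4096)
clean = np.sin(2 * np.pi * 16 * t / 256)
noisy = clean + 0.5 * rng.standard_normal(t.size)
enhanced = power_spectral_subtraction(noisy, 0.5 * rng.standard_normal(t.size))
```

Metrics such as SNRseg or the Itakura-Saito distance, as in the abstract, would then be computed between `clean`, `noisy`, and `enhanced` to quantify the improvement.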
