241

Multisensor Segmentation-based Noise Suppression for Intelligibility Improvement in MELP Coders

Demiroglu, Cenk 18 January 2006 (has links)
This thesis investigates the use of an auxiliary sensor, the GEMS device, for improving the quality of noisy speech and designing noise preprocessors for MELP speech coders. The use of auxiliary sensors for noise-robust ASR applications is also investigated to develop speech enhancement algorithms that use acoustic-phonetic properties of the speech signal. A Bayesian risk minimization framework is developed that can incorporate the acoustic-phonetic properties of speech sounds and knowledge of human auditory perception into the speech enhancement framework. Two noise suppression systems are presented using the ideas developed in the mathematical framework. In the first system, an aharmonic comb filter is proposed for voiced speech in which low-energy frequencies are severely suppressed while high-energy frequencies are suppressed only mildly. The proposed system outperformed an MMSE estimator in subjective listening tests and DRT intelligibility tests for MELP-coded noisy speech. The effect of aharmonic comb filtering on the linear predictive coding (LPC) parameters is analyzed using a missing-data approach. Suppressing the low-energy frequencies without any modification of the high-energy frequencies is shown to improve the LPC spectrum under the Itakura-Saito distance measure. The second system combines the aharmonic comb filter with the acoustic-phonetic properties of speech to improve the intelligibility of MELP-coded noisy speech. The noisy speech signal is segmented into broad-level sound classes using a multi-sensor automatic segmentation/classification tool, and each sound class is enhanced differently based on its acoustic-phonetic properties. The proposed system is shown to outperform both the MELPe noise preprocessor and the aharmonic comb filter in intelligibility tests when used in cascade with the MELP coder. Since the second noise suppression system uses an automatic segmentation/classification algorithm, exploiting the GEMS signal in an automatic segmentation/classification task is also addressed using an ASR approach. Current ASR engines can segment and classify speech utterances in a single pass; however, they are sensitive to ambient noise. Features extracted from the GEMS signal can be fused with the noisy MFCC features to improve the noise robustness of the ASR system. In the first phase, a voicing feature is extracted from the clean speech signal and fused with the MFCC features. The actual GEMS signal could not be used in this phase because of insufficient sensor data to train the ASR system. Tests are done using the Aurora2 noisy-digits database. The speech-based voicing feature is found to be effective at around 10 dB, but below 10 dB its effectiveness drops rapidly with decreasing SNR because of the severe distortions in the speech-based features at these SNRs. Hence, a novel system is proposed that treats the MFCC features in a speech frame as missing data if the global SNR is below 10 dB and the speech frame is unvoiced. If the global SNR is above 10 dB or the speech frame is voiced, both the MFCC features and the voicing feature are used. The proposed system is shown to outperform some of the popular noise-robust techniques at all SNRs. In the second phase, a new isolated-monosyllable database is prepared that contains both speech and GEMS data. ASR experiments conducted on clean speech showed that the GEMS-based feature, when fused with the MFCC features, decreases performance.
The reason for this unexpected result is found to be partly related to some of the GEMS data being severely noisy. Non-acoustic sensor noise exists in all GEMS data, but severe noise occurs rarely. A missing-data technique is proposed to alleviate the effects of severely noisy sensor data: the GEMS-based feature is treated as missing data when it is detected to be severely noisy. With the missing-data technique applied, the combined features are shown to outperform the MFCC features for clean speech.
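The missing-data decision rule described in the abstract can be stated compactly. The following is a minimal illustrative sketch, not the author's code: it assumes NumPy arrays of MFCC and voicing features and a pre-computed global SNR estimate. The 10 dB threshold follows the abstract; the function name, array shapes, and mask representation are assumptions.

import numpy as np

def select_features(mfcc, voicing, global_snr_db, snr_threshold_db=10.0):
    # mfcc: (n_frames, n_coeffs) array of MFCC features from the noisy speech
    # voicing: (n_frames,) array, 1.0 for voiced frames, 0.0 for unvoiced
    # global_snr_db: estimated global SNR of the utterance in dB
    n_frames, n_coeffs = mfcc.shape
    # Concatenate the two feature streams per frame.
    fused = np.concatenate([mfcc, voicing[:, None]], axis=1)
    # Reliability mask: True where a feature dimension is treated as present.
    mask = np.ones_like(fused, dtype=bool)
    if global_snr_db < snr_threshold_db:
        # Low global SNR: MFCCs of unvoiced frames are treated as missing data.
        unvoiced = voicing < 0.5
        mask[unvoiced, :n_coeffs] = False
    return fused, mask

A missing-data-aware recognizer would then marginalize over, or impute, the masked feature dimensions rather than trusting them directly.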
242

Aggregation-Induced Emission Enhancement of organic and polymeric fluorophores containing 2,4,6-Triphenylpyridine moiety

Li, Yi-wei 13 July 2012 (has links)
In this study, organic and polymeric fluorophores based on the 2,4,6-triphenylpyridine (TPP) structure were found to show the novel aggregation-induced emission enhancement (AIEE) property. For the small-molecule fluorophores, TPP and cyano-functionalized TPP (TPP-CN) were prepared via the facile Chichibabin reaction, and their AIEE properties were characterized by the emission behavior in solvent/nonsolvent pairs. Solid samples of TPP and TPP-CN exhibit enhanced emission with increasing annealing time under solvent vapors; thus, both compounds show crystallization-induced emission enhancement (CIEE) behavior. For the polymeric system, the copolymer PTPPF, containing alternating TPP and 9,9-dioctylfluorene units, was prepared through Suzuki coupling, and a study of its fluorescence behavior also reveals the AIEE feature. The rigid polymer possesses high emission intensity in its dilute-solution state, and this intensity remains almost unchanged with increasing nonsolvent fraction. The solid sample was found to have a high quantum yield (ΦF) of 70%.
243

A Comparative Evaluation Of Super-Resolution

Erbay, Fulya 01 May 2011 (has links) (PDF)
In this thesis, high-definition color images are obtained by using super-resolution algorithms. Resolution enhancement of RGB, HSV, and YIQ color-domain images is presented. Three solution methods are presented to improve the resolution of HSV color-domain images; these methods are intended to avoid color artifacts in the super-resolved image and to decrease the computational complexity of HSV-domain applications. PSNR values are measured and compared with the results of the other two color-domain experiments. In the RGB color space, the super-resolution algorithms are applied to the three color channels (R, G, B) separately, and PSNR values are measured. In the YIQ color domain, only the Y channel is processed with the super-resolution algorithms, because the Y channel is the luminance component of the image and the most important channel for improving image resolution in the YIQ color domain. Similarly, the third solution method suggested for the HSV color domain applies the super-resolution algorithm only to the value channel, since the value channel carries the brightness data of the image. The results are compared with the YIQ color-domain experiments. During the experiments, four different super-resolution algorithms are used: Direct Addition, MAP, POCS, and IBP. Although these methods are widely used for the reconstruction of monochrome images, here they are used for resolution enhancement of color images, and their color super-resolution performances are tested.
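A common way to quantify the channel-wise comparisons described above is the PSNR measure. The following is an illustrative sketch, not code from the thesis, assuming 8-bit images stored as NumPy arrays; the function name and the example variable names (hr_rgb, sr_rgb) are placeholders.

import numpy as np

def psnr(reference, estimate, peak=255.0):
    # Peak signal-to-noise ratio between a reference channel and its
    # super-resolved estimate, assuming 8-bit intensities (peak = 255).
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: per-channel evaluation in RGB versus a single-channel scheme (Y or V);
# hr_rgb is the ground-truth high-resolution image, sr_rgb the reconstruction.
# psnr_rgb = [psnr(hr_rgb[..., c], sr_rgb[..., c]) for c in range(3)]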
244

Preserving dynamic reconfiguration consistency in aspect oriented middleware

Surajbali, Bholanathsingh, Grace, Paul, Coulson, Geoff January 2010 (has links)
Aspect-oriented middleware is a promising technology for the realisation of dynamic reconfiguration in heterogeneous distributed systems. However, like other dynamic reconfiguration approaches, AO-middleware-based reconfiguration requires that the consistency of the system be maintained across reconfigurations. AO-middleware-based reconfiguration is an ongoing research topic, and several consistency approaches have been proposed. However, most of these tend to be targeted at specific contexts, whereas for distributed systems it is crucial to cover a wide range of operating conditions. In this paper we propose an approach that offers distributed, dynamic reconfiguration in a consistent manner and features a flexible, framework-based consistency management scheme to cover a wide range of operating conditions. We evaluate the approach by investigating its configurability and transparency, and we also quantify the performance overheads of the associated consistency mechanisms.
245

CRISPR AND THE TREATMENT/ENHANCEMENT DISTINCTION : On Vagueness, Borderline Cases and Germline Genome Editing / CRISPR OCH DISTINKTIONEN MELLAN BEHANDLING/FÖRBÄTTRING : Om vaghet, borderline fall och ärftlig genredigering

Svensson, Ellen January 2021 (has links)
In this thesis, I argue that the treatment/enhancement distinction that is central to the ethical debate concerning germline genome editing and CRISPR is too vague to be ethically and normatively guiding. The problem of vagueness is twofold, being both a semantic and an epistemic issue. This vagueness creates borderline cases, that is, cases that cannot be properly classified as either treatment or enhancement; I call this the Borderline Cases Argument. These borderline cases enable a slippery slope towards eugenic practices, radical enhancement, and dangerous applications of CRISPR. The distinction therefore fails to be action-guiding: it cannot reliably distinguish treatment from enhancement, and it fails to track what is, and what is not, genuinely morally problematic about germline genome editing; I call this the Argument of Missing the Point. In using the treatment/enhancement distinction, we therefore risk losing control over how CRISPR is used and for what purposes.
246

Design of a Digitally Enhanced, Low Power, High Gain, High Linearity CMOS Mixer and CppSim Evaluation

Saidev, Sriram 28 September 2016 (has links)
No description available.
247

Late enhancement CT (LIE-CT) zur Beurteilung der Myokardvitalität: eine Metaanalyse

Fischer, Robin 03 June 2024 (has links)
In der vorliegenden Arbeit wurden bisher publizierte Daten über LIE-CT des Myokards in Form einer Metaanalyse systematisch ausgewertet. Das primäre Ziel der Metaanalyse war, bisherige Erkenntnisse über die Treffsicherheit der LIE-CT in Bezug auf infarzierte Myokardabschnitte quantitativ zusammenzufassen. Nach Entfernung der Duplikate ergab die Suche in den vier Datenbanken insgesamt 914 unterschiedliche Treffer. Letztlich wurden 12 Studien eingeschlossen (336 Patienten; 268 männlich, 68 weiblich; das mittlere Alter der Patienten variierte zwischen 55 und 67,7 Jahren). Die AUC (area under the curve) der SROC-Kurve (summary receiver operating characteristic curve) als Maß für die Treffsicherheit der LIE-CT für infarzierte Segmente beträgt 0,95 (95 % CI: 0,93 - 0,97). Die gepoolte Sensitivität der LIE-CT beträgt 0,81 (95 % CI: 0,75 - 0,87), die gepoolte Spezifität 0,97 (95 % CI: 0,95 - 0,98). Fünf der eingeschlossenen 12 Studien (mit insgesamt 126 Patienten) haben zusätzlich den transmuralen Charakter der Myokardnarben ausgewertet. In Bezug auf transmural infarzierte Myokardsegmente beträgt die AUC 0,94 (95 % CI: 0,91 - 0,95). Die gepoolte Sensitivität der LIE-CT für transmurale Infarkte beträgt 0,81 (95 % CI: 0,74 - 0,86) und die gepoolte Spezifität 0,96 (95 % CI: 0,93 - 0,98). Das diagnostische Odds Ratio der gepoolten Daten war 146 (95 % CI: 65 - 331). I2 beträgt 97,95 % (95 % CI: 94 - 99 %), vereinbar mit einer hohen Heterogenität. Die Ergebnisse des Deeks-Tests (als Maß für den Publikationsbias) waren nicht statistisch signifikant (p = 0,08 beziehungsweise p = 0,84). Zu den Limitationen der Arbeit gehören die Verwendung anderer bildgebender Verfahren als Referenzstandard, die Heterogenität der eingeschlossenen Studien, der Einschluss sowohl von Patienten mit subakuten als auch mit alten Myokardinfarkten und die geringe Anzahl von Studien, die Angaben zum transmuralen Charakter der Infarkte machen. Insgesamt sprechen die Ergebnisse dafür, dass die LIE-CT eine relativ hohe Sensitivität und eine sehr hohe Spezifität für Myokardinfarkte hat. Die Sensitivität ist niedriger als die der Referenzmethode LGE-MRT, während die hohe Spezifität darauf hindeutet, dass beide Methoden zumindest in dieser Hinsicht ähnlich gut sind. Anhand der vorliegenden Daten kann die LIE-CT nicht als Ersatz für die LGE-MRT empfohlen werden, wenn die Myokardvitalität die primäre diagnostische Fragestellung ist. Dagegen könnte die LIE-CT eine sinnvolle Erweiterung des CT-Protokolls bei Patienten sein, bei denen im Rahmen der CT-Koronarangiographie hochgradige Stenosen oder Verschlüsse gefunden werden.
/ We performed a meta-analysis of published data on late iodine enhancement computed tomography (LIE-CT) in patients with myocardial infarction. The aim of the meta-analysis was to evaluate the accuracy of LIE-CT in detecting infarcted myocardial segments. After removal of duplicates, the initial search of the four databases identified 914 potentially matching articles. After further selection based on article titles, abstracts and then full texts, 12 studies were finally included in the meta-analysis (336 patients; 268 male, 68 female; median patient age between 55 and 67.7 years). The AUC (area under the curve) of the SROC curve (summary receiver operating characteristic curve) as a measure of the accuracy of LIE-CT for infarcted segments is 0.95 (95 % CI: 0.93 - 0.97). The pooled sensitivity of LIE-CT was 0.81 (95 % CI: 0.75 - 0.87) and the pooled specificity was 0.97 (95 % CI: 0.95 - 0.98). Five of the 12 included studies have also evaluated the performance of LIE-CT for differentiating between transmural and non-transmural infarctions. In this case the AUC was 0.94 (95 % CI: 0.91 - 0.95). The pooled sensitivity of LIE-CT for transmural infarctions was 0.81 (95 % CI: 0.74 - 0.86) and the pooled specificity 0.96 (95 % CI: 0.93 - 0.98). The diagnostic odds ratio of the pooled data was 146 (95 % CI: 65 - 331). I2 was 97.95 % (95 % CI: 94 - 99 %), consistent with high heterogeneity. The results of the Deeks test, which served as a measure of publication bias, were not statistically significant (p = 0.08 and p = 0.84, respectively). Among the limitations of the current study are the use of other imaging modalities as a reference standard, the heterogeneity of the included studies, the inclusion of both subacute and chronic infarct cases, and the low number of studies that evaluated the transmurality of the infarctions. The results of this meta-analysis confirm that LIE-CT has a relatively high sensitivity and a very high specificity for myocardial infarctions. The sensitivity is lower than that of the reference standard, LGE-MRI, while the high specificity suggests that both methods are comparable at least in this regard. Based on the data summarized in this meta-analysis, LIE-CT cannot be recommended as a replacement for LGE-MRI when evaluating myocardial viability is the main purpose of the imaging examination. LIE-CT could, however, be a helpful addition to cardiac CT exam protocols when a high-grade stenosis or occlusion is found on coronary CT angiography.
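For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity, specificity, and the diagnostic odds ratio (DOR) are computed from a single study's 2x2 table of segment counts. It is illustrative only and uses made-up counts; the pooled estimates in the meta-analysis come from a bivariate random-effects model, so this simple per-study formula will not exactly reproduce the pooled DOR of 146.

def diagnostic_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity and diagnostic odds ratio from one study's
    # 2x2 table of infarcted-segment counts (illustrative per-study formula).
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Made-up counts chosen to match the pooled sensitivity/specificity:
# 81 TP, 3 FP, 19 FN, 97 TN -> sensitivity 0.81, specificity 0.97, DOR ~137.8
print(diagnostic_metrics(81, 3, 19, 97))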
248

Frost nucleation and growth on hydrophilic, hydrophobic, and biphilic surfaces

Van Dyke, Alexander Scott January 1900 (has links)
Master of Science / Department of Mechanical and Nuclear Engineering / Amy R. Betz / The purpose of this research was to test whether biphilic surfaces mitigate frost and ice formation. Frost, which forms when humid air comes into contact with a surface that is below both the dew point and the freezing temperature of water, hinders engineering systems such as aircraft, refrigeration systems, and wind turbines. Most previous research has investigated increasingly superhydrophobic materials to delay frost formation; however, the performance of these materials depends on fluctuating operating conditions and surface roughness. Therefore, the hypothesis for this research was that a biphilic surface would slow the frost formation process and create a less dense frost layer, and that water vapor would preferentially condense on the hydrophilic areas, thus controlling where nucleation initially occurs. Preferential nucleation can control the size, shape, and location of frost nucleation sites. To fabricate biphilic surfaces, a hydrophobic material was coated on a silicon wafer, and a pattern of the hydrophobic material was removed using photolithography to reveal hydrophilic silicon oxide. Circles were patterned at various pitches and diameters. The heat sink consisted of two parts: a solid bottom half and a finned upper half. Half of the heat sink was placed inside a polyethylene base for insulation. Tests were conducted in quiescent air at room temperature (22 °C) and two relative humidities, 30% and 60%. Substrate temperatures were held constant throughout all tests. All tests showed that biphilic surfaces suppress the freezing temperature more effectively than plain hydrophilic or hydrophobic surfaces; however, no difference in maximum freezing temperature was observed between pattern orientations or sizes. The biphilic patterns did, however, affect other aspects such as the time to freezing and the volume of water on the surface. These effects arise from the patterns altering the nucleation and coalescence behavior of condensation.
249

A case study of the physics enhancement project for two year colleges, its effects and outcomes on the teaching of undergraduate physics at two year colleges

Leif, Todd Robert January 1900 (has links)
Doctor of Philosophy / Curriculum and Instruction Programs / Nobel S. Rebello / This dissertation reports on a naturalistic evaluation study of a series of NSF grant projects collectively known as PEPTYC, the Physics Enhancement Project for Two Year College Physics Instructors. The project encompassed seven cycles of professional development during the 1990s, delivered via May Institutes held at Texas A&M University, with follow-up meetings held at American Association of Physics Teachers - Texas Section meetings. The research was conducted post hoc. It evaluated the characteristics of effective professional development under an evaluation framework designed by D. L. Kirkpatrick (1959) and adapted by the researcher to address issues pertinent to the professional development of faculty. This framework was adapted to be viewed through an educator's eye in an effort to ascertain the long-term effects of the program and to determine how the program affected the participants' attitudes, pedagogical knowledge, and instructional practices. The PEPTYC program philosophy was based on the premise, supported by research, that professional development programs addressing specific teaching practices are more successful than generic programs. Furthermore, professional development is more effective in helping teachers use alternative approaches when teachers are engaged in active learning experiences rather than passively listening to lectures or presentations. The naturalistic study was based on surveys and semi-structured interviews with 14 individuals who participated in PEPTYC workshops, as well as with presenters of the PEPTYC program. The interviews were analyzed to describe how the PEPTYC project influenced the participants long after they had completed their training. This project can inform the development of similar evaluation studies of other professional development programs.
250

A comparison of needle-free and needle injection methods and solutions for enhancement of beef Longissimus lumborum muscles

Crow, Brett Alan January 1900 (has links)
Master of Science / Department of Animal Sciences and Industry / Michael E. Dikeman / Objectives were to determine the effects of needle-free (NF) versus needle (N) injection methods and/or solutions for enhancement of beef longissimus lumborum muscle (LM) on color, instrumental tenderness, sensory attributes, pump yields, and cooking losses. In Experiment 1, LM (n=15) at 9 d postmortem were halved before random assignment to N or NF injection enhancement with a solution containing 2.2% salt, 4.4% sodium tripolyphosphate (STPP), and 1.5% K lactate. Different steaks from each loin half were either placed on a 5-d color display, frozen for later sensory analysis, or aged until d 13 postmortem for LM slice shear force measurements. Pump yields tended (P=0.08) to be higher for NF injection. Needle-injected steaks were darker (P<0.05) on day 1 but not thereafter. Discoloration was not different (P>0.05) between treatments. The NF treatment had greater (P<0.05) instrumental tenderness and intensity of off-flavors but less (P<0.05) cooking loss and beef flavor. In Experiment 2, LM (n=28) at 5 d postmortem were halved before random assignment to one of four treatments: 1) N or 2) NF injection with a solution containing 2.2% salt, 4.4% STPP, 15% K lactate, and 0.58% rosemary; or 3) N or 4) NF injection with a solution containing 2.4% Ca lactate and 0.58% rosemary. Steaks from each loin half were either frozen for later sensory analysis or aged until d 14 postmortem for LM slice shear force measurements. Loins enhanced with the phosphate solution using the NF injector had the highest (P<0.05) pump yields, with no differences (P>0.05) among the other treatment combinations. Instrumental tenderness was not different (P>0.05) between N and NF treatments but was higher with the phosphate solution than with the Ca lactate solution. The NF treatment had lower (P<0.05) cooking losses when the phosphate solution was used, and the phosphate solution resulted in less (P<0.05) cooking loss than the Ca lactate solution. More (P<0.05) off-flavors and abnormal texture resulted from NF injection. The phosphate solution resulted in greater (P<0.05) myofibrillar and overall tenderness, juiciness, off-flavors, and abnormal texture, with less (P<0.05) connective tissue, than the Ca lactate solution. Enhancing beef LM with a phosphate solution and NF injection might improve yields, tenderness, and juiciness while harming texture and flavor.
