221

Χαρακτηρισμός ασύρματου καναλιού για εφαρμογές στον κυψελοειδή σχεδιασμό των επίγειων δικτύων κινητών επικοινωνιών / Wireless channel characterization for applications in the cellular design of terrestrial mobile communication networks

Αθανασακοπούλου, Θεοδώρα 16 May 2014 (has links)
The aim of this thesis is the cellular design of a wireless mobile telephony network in a specific geographic area based on the GSM standard, together with a study of the characteristics of the wireless communication channel. The first chapter presents the basic idea of cellular design, the steps followed to carry it out, and the fundamental concepts of cellular systems. The second chapter describes how the wireless channel is modelled in view of the impairments introduced during signal propagation, and analyses the statistical models used to describe the channel depending on the conditions under which they apply. The third chapter describes the GSM standard, in particular its architecture and the functions it performs. The fourth chapter presents the cellular design of our network in the geographical area of the municipality of Patras and an estimation of the received power according to the propagation models used.
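The abstract does not state which propagation model was used for the received-power estimate, so the following sketch is only an illustration: it assumes the classical Okumura-Hata urban model at GSM-900 and made-up link-budget numbers (transmit power, antenna gains, antenna heights).

```python
import math

def hata_urban_loss_db(f_mhz, d_km, h_base_m=30.0, h_mobile_m=1.5):
    """Okumura-Hata median path loss (dB) for a small/medium urban cell.

    Valid roughly for 150-1500 MHz, 1-20 km, base antenna 30-200 m.
    """
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, path_loss_db):
    # Link budget: received power = transmit power + antenna gains - path loss
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - path_loss_db

# Hypothetical example: GSM-900 carrier, 43 dBm (20 W) base station, cell edge at 3 km
loss = hata_urban_loss_db(f_mhz=900.0, d_km=3.0)
print(f"Path loss: {loss:.1f} dB")
print(f"Received power: {received_power_dbm(43.0, 12.0, 0.0, loss):.1f} dBm")
```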
222

Μοντελοποίηση δορυφορικού καναλιού / Satellite channel modeling

Ζαχαρίας, Ηλίας 14 May 2007 (has links)
The objective of this thesis is the study and modelling of a satellite channel. The satellite channel, like any communication channel, is a non-linear system characterised by numerous unpredictable factors that corrupt the transmitted information. The weather conditions prevailing in a region, for example, affect the channel by causing random power fluctuations, which makes its behaviour difficult to predict. For this reason, a dynamic model was developed that simulates the behaviour of such a channel by producing the possible states in which it may be found. The phenomena studied are rain attenuation, gaseous absorption by water vapour and oxygen, and tropospheric scintillation. The model was built using meteorological data obtained from the Hellenic National Meteorological Service. In addition, the travelling-wave tube power amplifier (TWTA), which is found both on the satellite and at the central earth station, was also modelled.
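The record does not say how the TWTA was modelled; a common memoryless choice is the Saleh AM/AM-AM/PM model, sketched below with the example coefficients usually quoted from Saleh's paper. Both the model choice and the coefficient values are assumptions for illustration, not results from this thesis.

```python
import numpy as np

# Saleh AM/AM and AM/PM model of a travelling-wave tube amplifier (TWTA).
# The coefficients below are the example values often quoted from Saleh (1981);
# they are illustrative and not taken from the thesis.
ALPHA_A, BETA_A = 2.1587, 1.1517   # amplitude (AM/AM) coefficients
ALPHA_P, BETA_P = 4.0033, 9.1040   # phase (AM/PM) coefficients

def twta(x):
    """Apply the memoryless Saleh nonlinearity to a complex baseband signal."""
    r = np.abs(x)
    theta = np.angle(x)
    gain = ALPHA_A * r / (1.0 + BETA_A * r**2)             # output amplitude
    phase_shift = ALPHA_P * r**2 / (1.0 + BETA_P * r**2)   # AM/PM conversion (rad)
    return gain * np.exp(1j * (theta + phase_shift))

# Example: sweep the input amplitude to see compression near saturation
r_in = np.linspace(0.0, 2.0, 9)
for r, y in zip(r_in, twta(r_in + 0j)):
    print(f"in={r:.2f}  out={abs(y):.3f}  phase={np.angle(y):.3f} rad")
```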
223

Toxicity of Engineered Nanoparticles to Anaerobic Wastewater Treatment Processes

Gonzalez-Estrella, Jorge Gonzalez January 2014 (has links)
Nanotechnology is a growing market. Engineered nanoparticles (NPs), materials with at least one dimension between 1 and 100 nm, are produced on a large scale. NPs are widely used in industrial processes and consumer products and are most likely discharged into wastewater treatment plants after use. Activated sludge is one of the most widely applied biological wastewater treatment processes for the degradation of organic matter in sewage, and it produces an excess of sludge that is commonly treated and stabilized by anaerobic digestion. Recent studies have found that NPs accumulate in activated sludge; thus, there is a potential for NP concentrations to become magnified as concentrated waste sludge is fed into the anaerobic digestion process. For this reason, it is important to study the possible toxic effects of NPs on the microorganisms involved in anaerobic digestion and, where necessary, approaches to overcome that toxicity. The present work evaluates the toxic effect of NPs on anaerobic wastewater treatment processes and also presents approaches for toxicity attenuation. The first objective of this dissertation (Chapter III) was to evaluate the toxicity of high concentrations (1,500 mg L⁻¹) of Ag⁰, Al₂O₃, CeO₂, Cu⁰, CuO, Fe⁰, Fe₂O₃, Mn₂O₃, SiO₂, TiO₂, and ZnO NPs to acetoclastic and hydrogenotrophic methanogens, and the effect of a dispersant on NP toxicity to methanogens. The findings indicated that only Cu⁰ and ZnO NPs caused severe toxicity to hydrogenotrophic methanogens, and Cu⁰, CuO, and ZnO NPs to acetoclastic methanogens. The dispersant did not affect NP toxicity. The concentrations of Cu⁰ and ZnO causing 50% inhibition (IC₅₀) of hydrogenotrophic methanogens were 68 and 250 mg L⁻¹, respectively, whereas the IC₅₀ values for acetoclastic methanogens were 62, 68, and 179 mg L⁻¹ for Cu⁰, ZnO, and CuO NPs, respectively. These findings indicate that acetoclastic methanogens are more sensitive to NP toxicity than hydrogenotrophic methanogens and that Cu⁰ and ZnO NPs are highly toxic to both. Additionally, the toxicity of any given metal was highly correlated with its final dissolved concentration in the assay, irrespective of whether it was initially added as an NP or a chloride salt, indicating that corrosion and dissolution of metals from the NPs may have been responsible for the toxicity. The second objective of this dissertation (Chapter IV) was to evaluate Cu⁰ NP toxicity to anaerobic microorganisms of wastewater treatment processes. Cu⁰ is known to be toxic to methanogens; nonetheless, little is known about its toxic effects on microorganisms of upper trophic levels of anaerobic digestion or on other anaerobic processes used for nitrogen removal. This objective evaluated Cu⁰ NP toxicity to glucose fermentation, syntrophic propionate oxidation, methanogenesis, denitrification and anaerobic ammonium oxidation (anammox). Chapter IV showed that anammox and glucose fermentation were the least and most inhibited processes, with inhibition constant (Kᵢ) values of 0.324 and 0.004 mM of added Cu⁰ NPs, respectively. The Kᵢ values obtained from the residual soluble concentration in parallel experiments using CuCl₂ indicated that, for each of the microorganisms tested, Cu⁰ NP toxicity is most likely caused by the release of soluble ions. Taken as a whole, the results demonstrate that Cu⁰ NPs are toxic to a variety of anaerobic microorganisms of wastewater treatment processes.
The third objective of this dissertation (Chapter V) was to study the role of biogenic sulfide in attenuating Cu⁰ and ZnO NP toxicity to acetoclastic methanogens. Previous literature and the research presented in this dissertation indicated that the release of soluble ions from Cu⁰ and ZnO NPs causes toxicity to methanogens. In the past, sulfide has been applied to precipitate heavy metals as inert, non-soluble sulfides and thereby attenuate the toxicity of Cu and Zn salts. Building on this principle, Chapter V evaluated the toxicity of Cu⁰ and ZnO NPs under sulfate-containing (0.4 mM) and sulfate-free conditions. The results show that Cu⁰ and ZnO were 7 and 14 times less toxic, respectively, in sulfate-containing than in sulfate-free assays, as indicated by the differences in Kᵢ values. The Kᵢ values obtained from the residual metal concentrations of the sulfate-free and sulfate-containing assays were very similar, indicating that the toxicity is well correlated with the release of soluble ions. Overall, this study demonstrated that biogenic sulfide is an effective attenuator of Cu⁰ and ZnO NP toxicity to acetoclastic methanogens. Finally, the last objective (Chapter VI) of this dissertation was to evaluate the effect of iron sulfide (FeS) on the attenuation of Cu⁰ and ZnO toxicity to acetoclastic methanogens. FeS is formed by the reaction of iron(II) with sulfide, a reaction that is common in anaerobic sediments where the reduction of iron(III) to iron(II) and of sulfate to sulfide occurs. FeS plays a key role in controlling the soluble concentrations of heavy metals and thus their toxic effects in aquatic sediments. This study evaluated the application of FeS as an approach to attenuate the toxicity of Cu⁰ and ZnO NPs and their salt analogs to acetoclastic methanogens. Two particle sizes, coarse FeS (FeS-c, 500-1200 µm) and fine FeS (FeS-f, 25-75 µm), were synthesized and used in this study. The results showed that 2.5 times less FeS-f than FeS-c was required to recover methanogenic activity to the same extent after exposure to highly inhibitory concentrations of CuCl₂ and ZnCl₂ (0.2 mM). The results also showed that molar ratios of FeS-f/Cu⁰, FeS-f/ZnO, FeS-f/ZnCl₂, and FeS-f/CuCl₂ of 3, 3, 6, and 12, respectively, were necessary to provide a high recovery of methanogenic activity (>75%). The excess of FeS needed to overcome the toxicity indicates that not all of the sulfide in FeS was readily available to attenuate the toxicity. Overall, Chapter VI demonstrated that FeS is an effective attenuator of the toxicity of Cu⁰ and ZnO NPs and their salt analogs to methanogens, although molar excesses of FeS were required.
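As a side note on the IC₅₀ and Kᵢ figures quoted above, a minimal sketch of how an IC₅₀ can be estimated from batch inhibition data is given below. The log-logistic (Hill-type) dose-response form and the concentration/activity values are assumptions for illustration; the dissertation's actual fitting procedure is not described in this abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    """Fraction of uninhibited activity remaining at a given concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical assay data: NP concentration (mg/L) vs. normalized methanogenic activity
conc = np.array([5.0, 20.0, 50.0, 100.0, 250.0, 500.0])
activity = np.array([0.97, 0.85, 0.60, 0.38, 0.15, 0.05])

# Fit IC50 and Hill slope; p0 gives rough starting guesses
(ic50_fit, hill_fit), _ = curve_fit(dose_response, conc, activity, p0=[60.0, 1.0])
print(f"Estimated IC50 = {ic50_fit:.1f} mg/L, Hill slope = {hill_fit:.2f}")
```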
224

The Use of Nanoparticles on Nanometer Patterns for Protein Identification

Powell, Tremaine Bennett January 2008 (has links)
This dissertation describes the development of a new method for increasing the resolution of current protein microarray technology, down to the single-molecule detection level. Using a technique called size-dependent self-assembly, different proteins can be bound to fluorescent nanostructures of different sizes and then located on a patterned silicon substrate at the pattern whose feature size is closest to the bead diameter. The protein nanoarray was used to detect antibody-antigen binding, specifically anti-mouse IgG binding to mouse IgG. The protein nanoarray is designed with the goal of analyzing rare proteins; however, common proteins, such as IgG, are used in the initial testing of the array functionality. Mouse IgG, representing rare proteins, is conjugated to fluorescent beads and the beads are immobilized on a patterned silicon surface. Anti-mouse IgG then binds to the mouse IgG on the immobilized beads, and the binding of the antibody (anti-mouse IgG) to the antigen (mouse IgG) is determined by fluorescent signal attenuation. The first objective was to bind charged nanoparticles, conjugated with proteins, to an oppositely charged silicon substrate. Binding of negatively charged gold nanoparticles (AuNP), conjugated with mouse IgG, to a positively charged silicon surface was successful. The second objective was to demonstrate the method of size-dependent self-assembly at the nanometer scale (<100 nm). Fluorescent carboxylated beads of different sizes and AuNP, conjugated with proteins, were serially added to a patterned polymethyl methacrylate (PMMA)-coated silicon surface. Size-dependent self-assembly was successfully demonstrated, down to the nanometer scale. The final objective was to obtain a signal from antibody-antigen binding within the protein array. Conjugated fluorescent beads were bound to e-beam patterns and signal attenuation was measured when the antibodies bound to the conjugated beads. Size-dependent self-assembly is a valuable new method that can be used for the detection and quantification of proteins.
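As an illustration of the read-out described above (binding quantified by fluorescent signal attenuation), the sketch below computes a background-subtracted fractional intensity drop per bead. The intensity values and the background level are hypothetical and not taken from the dissertation.

```python
import numpy as np

def signal_attenuation(before, after, background=0.0):
    """Fractional drop in fluorescence after antibody binding (background-subtracted)."""
    before = np.asarray(before, dtype=float) - background
    after = np.asarray(after, dtype=float) - background
    return 1.0 - after / before

# Hypothetical per-bead intensities (arbitrary units) before and after anti-mouse IgG exposure
before = [1520.0, 1480.0, 1610.0, 1555.0]
after = [1130.0, 1095.0, 1220.0, 1160.0]

att = signal_attenuation(before, after, background=50.0)
print(f"Mean attenuation: {att.mean() * 100:.1f}%  (per bead: {np.round(att, 3)})")
```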
225

Curvelet processing and imaging: adaptive ground roll removal

Yarham, Carson, Trad, Daniel, Herrmann, Felix J. January 2004 (has links)
In this paper we present examples of ground roll attenuation for synthetic and real data gathers using the Contourlet and Curvelet transforms. These non-separable wavelet transforms are localized in both the (x,t)- and (k,f)-domains and allow for adaptive separation of signal and ground roll. Both linear and non-linear filtering are discussed, exploiting the unique property of these bases that allows simultaneous localization in both domains. Even though the linear filtering techniques are encouraging, the true added value of these basis-function techniques becomes apparent when we use these decompositions to adaptively subtract modeled ground roll from the data using a non-linear thresholding procedure. We show real and synthetic examples, and the results suggest that these directionally selective basis functions provide a useful tool for the removal of coherent noise such as ground roll.
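The paper performs the adaptive subtraction in the curvelet/contourlet domain; the sketch below uses a separable 2-D wavelet transform (PyWavelets) purely as a stand-in to illustrate the threshold-and-subtract idea, so the transform, the threshold rule and the toy gather are all assumptions.

```python
import numpy as np
import pywt

def adaptive_subtract(data, groundroll_model, wavelet="db4", level=3, k=1.5):
    """Subtract ground roll in a transform domain: zero the data coefficients
    wherever the modeled ground roll has large coefficients."""
    d_coeffs = pywt.wavedec2(data, wavelet, level=level)
    m_coeffs = pywt.wavedec2(groundroll_model, wavelet, level=level)

    out = [d_coeffs[0]]  # keep the approximation band untouched
    for d_band, m_band in zip(d_coeffs[1:], m_coeffs[1:]):
        new_band = []
        for d, m in zip(d_band, m_band):  # horizontal, vertical, diagonal details
            thresh = k * np.std(m)
            mask = np.abs(m) < thresh     # keep only where the model is weak
            new_band.append(d * mask)
        out.append(tuple(new_band))
    return pywt.waverec2(out, wavelet)

# Toy gather: weak random "reflections" plus a strong low-frequency coherent pattern
rng = np.random.default_rng(0)
signal = 0.1 * rng.standard_normal((128, 128))
groundroll = np.tile(np.sin(np.linspace(0, 20 * np.pi, 128)), (128, 1))
data = signal + groundroll
filtered = adaptive_subtract(data, groundroll_model=groundroll)
print(f"RMS before: {data.std():.3f}, after: {filtered.std():.3f}")
```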
226

GAS HYDRATE ANOMALIES IN SEISMIC VELOCITIES, AMPLITUDES AND ATTENUATION: WHAT DO THEY IMPLY?

Chand, Shyam 07 1900 (has links)
Gas hydrates are found worldwide and many studies have been carried out to develop efficient methods to identify and quantify them using geophysical and other anomalies. In this study, various seismic anomalies related to gas hydrates and the underlying gas are analysed and correlated with rock physics properties. Observations of velocities in sediments containing gas hydrates show that the rigidity, and hence the velocity, of the sediments increases with increasing hydrate saturation. This velocity increase can be explained in terms of gradual cementation of the sediment matrix. In terms of seismic attenuation, gas hydrate bearing sediments depart from the common behaviour of sedimentary rocks, in which high rigidity is accompanied by low attenuation: on the contrary, they show increased attenuation of higher frequencies as hydrate saturation increases. This unusual behaviour can be explained by differential fluid flow between the sediment and hydrate matrices. It is also observed that the presence of large amounts of gas hydrate can increase seismic amplitudes, a signature similar to that of small amounts of gas. Misinterpretation of these enhanced amplitudes could therefore lead to underestimation of the gas present, with implications both for shallow drilling hazards and for the resource potential of the region. The increase in seismic reflection amplitude results from the formation of gas hydrates in selective intervals, which causes strong positive and negative impedance contrasts across formations with and without gas hydrates.
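The abstract attributes the velocity increase to cementation of the sediment frame. As a much simpler first-order illustration of velocity rising with hydrate saturation, the sketch below evaluates a three-phase time-average (Wyllie-type) relation with assumed end-member velocities; it is not the rock-physics model used in this study.

```python
def time_average_velocity(porosity, hydrate_saturation,
                          v_water=1500.0, v_hydrate=3650.0, v_matrix=5500.0):
    """Three-phase time-average (Wyllie-type) P-wave velocity in m/s.

    Slowness is the volume-weighted sum of the slownesses of pore water,
    hydrate (filling part of the pore space) and the mineral matrix.
    End-member velocities are assumed illustrative values.
    """
    slowness = (porosity * (1.0 - hydrate_saturation) / v_water
                + porosity * hydrate_saturation / v_hydrate
                + (1.0 - porosity) / v_matrix)
    return 1.0 / slowness

for sh in (0.0, 0.2, 0.4, 0.6):
    print(f"Sh = {sh:.1f}: Vp ≈ {time_average_velocity(0.5, sh):.0f} m/s")
```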
227

Improving attenuation corrections obtained using singles-mode transmission data in small-animal PET

Vandervoort, Eric 05 1900 (has links)
The images in positron emission tomography (PET) represent three-dimensional dynamic distributions of biologically interesting molecules labelled with positron-emitting radionuclides (radiotracers). Spatial localisation of the radiotracers is achieved by detecting in coincidence two collinear photons which are emitted when the positron annihilates with an ordinary electron. In order to obtain quantitatively accurate images in PET, it is necessary to correct for the effects of photon attenuation within the subject being imaged. These corrections can be obtained using singles-mode photon transmission scanning. Although suitable for small-animal PET, these scans are subject to high amounts of contamination from scattered photons. Currently, no accurate correction exists to account for scatter in these data. The primary purpose of this work was to implement and validate an analytical scatter correction for PET transmission scanning. In order to isolate the effects of scatter, we developed a simulation tool which was validated using experimental transmission data. We then presented an analytical scatter correction for singles-mode transmission data in PET. We compared our scatter correction data with the previously validated simulation data for uniform and non-uniform phantoms and for two different transmission source radionuclides. Our scatter calculation correctly predicted the contribution from scattered photons to the simulated data for all phantoms and both transmission sources. We then applied our scatter correction as part of an iterative reconstruction algorithm for simulated and experimental PET transmission data for uniform and non-uniform phantoms. We also tested our reconstruction and scatter correction procedure using transmission data for several animal studies (mice, rats and primates). For all studies considered, we found that the average reconstructed linear attenuation coefficients for water or soft-tissue regions of interest agreed with expected values to within 4%. Using a 2.2 GHz processor, the scatter correction required between 6 and 27 minutes of CPU time (without any code optimisation) depending on the phantom size and source used. This extra calculation time does not seem unreasonable considering that, without scatter corrections, errors in the reconstructed attenuation coefficients were between 18 and 45% depending on the phantom size and transmission source used.
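For context on the reconstructed linear attenuation coefficients mentioned above, the sketch below shows the basic attenuation correction idea: the correction factor for a line of response is the exponential of the line integral of µ through the µ-map. The toy µ-map, geometry and sampling step are assumptions, not the reconstruction or scatter-correction method of the thesis.

```python
import numpy as np

def attenuation_correction_factor(mu_map, start, end, step_mm=1.0, voxel_mm=1.0):
    """ACF = exp( integral of mu along the line of response ), mu in mm^-1.

    mu_map is a 2-D attenuation image; start/end are (row, col) points in voxels.
    """
    start, end = np.asarray(start, float), np.asarray(end, float)
    length_mm = np.linalg.norm(end - start) * voxel_mm
    n = max(int(length_mm / step_mm), 1)
    # Sample mu at evenly spaced points along the line (nearest-neighbour lookup)
    points = start + np.outer(np.linspace(0.0, 1.0, n), end - start)
    idx = np.clip(np.round(points).astype(int), 0, np.array(mu_map.shape) - 1)
    line_integral = mu_map[idx[:, 0], idx[:, 1]].sum() * (length_mm / n)
    return float(np.exp(line_integral))

# Toy water cylinder: mu_water ~ 0.0096 mm^-1 at 511 keV inside a disc of radius 20 mm
size = 64
yy, xx = np.mgrid[0:size, 0:size]
mu_map = np.where((yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2, 0.0096, 0.0)

acf = attenuation_correction_factor(mu_map, start=(32, 2), end=(32, 61))
print(f"ACF ≈ {acf:.2f}")  # corrected counts = measured counts * ACF
```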
228

Akies lęšiuko ultragarsinių, biocheminių ir mechaninių savybių įvertinimas / Evaluation of ultrasonic, biochemical and mechanical properties of eye lens

Raitelaitienė, Ramunė 13 January 2006 (has links)
The aging lens undergoes changes in the amount of water-soluble lens proteins and their redistribution from low to high molecular weight. This results in the development of high molecular weight aggregates, which change the transparency of the lens, increase light scattering and contribute to the hardening of the lens. All these disturbances cause changes in ultrasound attenuation. The hardness of the cataractous lens is one of the major factors determining the suitability of a patient for phacoemulsification. Age and nuclear colour have been shown to be good clinical markers of lens hardness, but a more quantitative and objective examination method is needed. In this work a new method for assessing eye lens hardness in vitro was developed, the hardness of dog and human eye lenses was measured experimentally, and the ultrasound attenuation coefficient, the amount of water-soluble lens proteins and their distribution among fractions of different molecular weight were investigated. A strong correlation between lens hardness and ultrasound attenuation was found. The results make it possible to evaluate lens hardness pre-operatively and non-invasively and help surgeons when selecting patients for the phacoemulsification method of cataract extraction. The investigation confirmed that the opacification of the lens is due to changes in the amount of water-soluble lens proteins and the presence of high molecular weight compounds, which disturb the light... [to full text]
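As an illustration of the ultrasound attenuation coefficient referred to above, the sketch below estimates attenuation by comparing the spectrum of a reference pulse with that of a pulse transmitted through the sample (a substitution-style measurement). The signals, sample thickness and analysis band are hypothetical, not the protocol of this work.

```python
import numpy as np

def attenuation_coefficient_db_per_cm(ref_signal, sample_signal, thickness_cm, fs_hz, f_lo, f_hi):
    """Substitution-method estimate: alpha(f) = (20/d) * log10(|A_ref| / |A_sample|),
    averaged over the analysis band [f_lo, f_hi]."""
    freqs = np.fft.rfftfreq(len(ref_signal), d=1.0 / fs_hz)
    a_ref = np.abs(np.fft.rfft(ref_signal))
    a_sam = np.abs(np.fft.rfft(sample_signal))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    alpha_f = (20.0 / thickness_cm) * np.log10(a_ref[band] / a_sam[band])
    return alpha_f.mean()

# Hypothetical 10 MHz pulses sampled at 100 MHz; the "through-lens" pulse is simply
# a scaled copy of the reference pulse for illustration
fs = 100e6
t = np.arange(0, 2e-6, 1.0 / fs)
ref = np.sin(2 * np.pi * 10e6 * t) * np.exp(-((t - 1e-6) ** 2) / (2 * (0.1e-6) ** 2))
through = 0.5 * ref  # ~6 dB weaker after passing through the sample

alpha = attenuation_coefficient_db_per_cm(ref, through, thickness_cm=0.45,
                                          fs_hz=fs, f_lo=5e6, f_hi=15e6)
print(f"Mean attenuation ≈ {alpha:.1f} dB/cm")
```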
229

Medical Image Processing Techniques for the Objective Quantification of Pathology in Magnetic Resonance Images of the Brain

Khademi, April 16 August 2013 (has links)
This thesis focuses on automatic detection of white matter lesions (WML) in Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Images (MRI) of the brain. There is growing interest within the medical community in WML, since the total WML volume per patient (lesion load) has been shown to be related to future stroke as well as carotid disease. Manual segmentation of WML is time-consuming, laborious, observer-dependent and error-prone. Automatic WML segmentation algorithms can be used instead, since they allow the lesion load to be computed in a quantitative, efficient, reproducible and reliable manner. FLAIR MRI are affected by at least two types of degradation, additive noise and the partial volume averaging (PVA) artifact, which affect the accuracy of automated algorithms. Model-based methods that rely on Gaussian distributions have been used extensively to handle these two distortions, but are not applicable to FLAIR with WML: the distribution of noise in multicoil FLAIR MRI is non-Gaussian, and the presence of WML modifies tissue distributions in a manner that is difficult to model. To this end, the current thesis presents a novel way to model PVA artifacts in the presence of noise. The method is a generalized and adaptive approach that was applied to a variety of MRI weightings (with and without pathology) for robust PVA quantification and tissue segmentation. No a priori assumptions are needed regarding class distributions, and no training samples or initialization parameters are required. Segmentation experiments were completed using simulated and real FLAIR MRI. Simulated images were generated with noise and PVA distortions using realistic brain and pathology models. Real images were obtained from Sunnybrook Health Sciences Centre, and WML ground truth was generated through a manual segmentation experiment. The average Dice similarity coefficient (DSC) was found to be 0.99 and 0.83 for simulated and real images, respectively. A lesion load study was performed that examined interhemispheric WML volume for each patient. To show the generalized nature of the approach, the proposed technique was also employed on pathology-free T1 and T2 MRI. Validation studies show that the proposed framework classifies PVA robustly and segments tissue classes with good results.
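The DSC values reported above compare the automatic segmentation with the manual ground truth; a minimal sketch of the computation, on made-up binary lesion masks, is shown below.

```python
import numpy as np

def dice_coefficient(seg, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    seg = seg.astype(bool)
    truth = truth.astype(bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, truth).sum() / denom

# Hypothetical 2-D lesion masks (1 = lesion voxel)
auto = np.zeros((8, 8), dtype=int)
manual = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1        # 16 voxels
manual[3:7, 2:6] = 1      # 16 voxels, 12 of them overlapping
print(f"DSC = {dice_coefficient(auto, manual):.3f}")  # 2*12 / (16+16) = 0.75
```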
