
Renewable Energy and the Smart Grid: Architecture Modelling, Communication Technologies and Electric Vehicles Integration

Wang, Qi January 2015 (has links)
Renewable energy is considered an effective solution for relieving the energy crisis and reducing greenhouse gas emissions. It is also recognized as an important energy resource for power supply in the next-generation power grid, the smart grid. For a long time, the intermittent and unstable character of renewable generation has been the main obstacle to combining renewable energy with the smart grid. Utilities' limited remote-control capability leads to low-efficiency power scheduling in the distribution area and complicates the grid connection of locally generated renewable power. Furthermore, with the rapid growth in the number of electric vehicles and the wide deployment of fast charging stations in urban and rural areas, unpredictable charging demand will become another challenge to the power grid within a few years. In this thesis we propose solutions to the challenges enumerated above. Starting from the architecture of the terminal consumer's residence, we introduce a local renewable energy system into the residential environment; such a system can typically cover part of the consumer's power demand, and sometimes more. On this basis, we establish the architecture of a local smart grid community following the structure of the smart grid's distribution network; it comprises terminal power consumers, a secondary power substation, communication links and a sub data management center. Communication links serve as the data transmission channels in our scheme. We also design a local power scheduling algorithm and an optimal path selection algorithm to meet power scheduling requirements and to support stable expansion of the power supply area.
Acknowledging that the information flow of the smart grid needs appropriate communication technologies as its communication standards, we survey the available technologies together with the communication requirements and performance metrics of smart grid networks. We also propose a power saving mechanism for smart devices in the advanced metering infrastructure, based on a two-state-switch scheduling algorithm and an improved 802.11ah-based data transmission model. A renewable energy system can be employed not only in residential environments but also in public ones, such as fast charging stations and public parking campuses. Given the current battery capacity of electric vehicles (EVs), fast charging stations are demanded not only by EV drivers but also by the related enterprises. We propose an upgraded fast charging station with a locally deployed renewable energy system on a public parking campus. Based on a queueing model, we derive a stochastic control model for the fast charging station. A new status called "Service Jumped" is introduced to express, in real time, the service state of the station with and without support from the local renewable energy.
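The thesis's stochastic control model is its own; as a hedged illustration of the kind of capacity question a fast charging station poses, the sketch below evaluates a textbook M/M/c queue (the charger counts and arrival/service rates are invented examples, and the "Service Jumped" status is not modeled):

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability that an arriving EV must wait (Erlang-C formula).
    c: number of fast chargers; lam: EV arrival rate; mu: service rate per charger."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / c                       # charger utilisation
    if rho >= 1:
        return 1.0                    # overloaded: every arrival waits
    s = sum(a ** k / factorial(k) for k in range(c))
    top = a ** c / (factorial(c) * (1 - rho))
    return top / (s + top)

def mean_wait(c, lam, mu):
    """Mean time an EV spends queueing before a charger frees up."""
    return erlang_c(c, lam, mu) / (c * mu - lam)
```

With 4 chargers, 3 arrivals per hour and a 1-hour mean charge, roughly half of the arriving vehicles would have to queue; local renewable generation could then be scheduled against exactly this kind of load estimate.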

Towards Uncovering the True Use of Unlabeled Data in Machine Learning

Sansone, Emanuele January 2018 (has links)
Knowing how to exploit unlabeled data is a fundamental problem in machine learning. This dissertation provides contributions in different contexts, including semi-supervised learning, positive unlabeled learning and representation learning. In particular, we ask (i) whether it is possible to learn a classifier in the context of limited data, (ii) whether it is possible to scale existing models for positive unlabeled learning, and (iii) whether it is possible to train a deep generative model by solving a single minimization problem.
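For point (ii), the positive unlabeled setting can be illustrated with the classical Elkan-Noto score correction, a standard PU baseline rather than necessarily the dissertation's own method (the scores below are invented): a classifier trained to separate labeled positives from unlabeled examples produces scores that are rescaled by an estimate of c = p(labeled | positive).

```python
def estimate_c(scores_on_holdout_positives):
    """Elkan-Noto estimator of c = p(s=1 | y=1): the mean score that a
    'labeled vs. unlabeled' classifier assigns to held-out labeled positives."""
    return sum(scores_on_holdout_positives) / len(scores_on_holdout_positives)

def correct_scores(scores_on_unlabeled, c):
    """Turn 'labeled vs. unlabeled' scores into estimated probabilities of
    being a true positive, clipped to 1.0."""
    return [min(s / c, 1.0) for s in scores_on_unlabeled]
```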

Semantic Image Interpretation - Integration of Numerical Data and Logical Knowledge for Cognitive Vision

Donadello, Ivan January 2018 (has links)
Semantic Image Interpretation (SII) is the process of generating a structured description of the content of an input image. This description is encoded as a labelled directed graph, where nodes correspond to objects in the image and edges to semantic relations between objects. Such a detailed structure allows more accurate searching and retrieval of images. In this thesis, we propose two well-founded methods for SII. Both methods exploit background knowledge about the domain of the images, in the form of logical constraints of a knowledge base. The first method formalizes SII as the extraction of a partial model of a knowledge base. Partial models are built with a clustering-and-reasoning algorithm that considers both low-level and semantic features of images. The second method uses the Logic Tensor Networks framework to build the labelled directed graph of an image. This framework is able to learn from data in the presence of the logical constraints of the knowledge base. The graph construction is therefore performed by predicting the labels of the nodes and the relations according to the logical constraints and the features of the objects in the image. These methods improve the state of the art by introducing two well-founded methodologies that integrate low-level and semantic features of images with logical knowledge. Other methods either do not deal with low-level features or use only statistical knowledge coming from training sets or corpora. Moreover, the second method outperforms the state of the art on the standard task of visual relationship detection.
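As a toy illustration of the second method's output (object names, relation scores and the threshold are invented; the actual Logic Tensor Networks framework learns these scores under the logical constraints), relation predictions can be thresholded into the edges of the labelled directed graph, and a constraint can be checked in fuzzy Łukasiewicz semantics:

```python
def build_graph(objects, rel_scores, threshold=0.5):
    """Keep a directed, labelled edge (subj, relation, obj) whenever its
    predicted score in [0, 1] clears the threshold."""
    nodes = list(objects)
    edges = [triple for triple, s in rel_scores.items() if s >= threshold]
    return nodes, edges

def lukasiewicz_implies(a, b):
    """Fuzzy truth value of the constraint a -> b under Lukasiewicz semantics."""
    return min(1.0, 1.0 - a + b)
```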

THz Radiation Detection Based on CMOS Technology

Khatib, Moustafa January 2019 (has links)
The Terahertz (THz) band of the electromagnetic spectrum, also referred to as sub-millimeter waves, covers the frequency range from 300 GHz to 10 THz. Radiation in this range has several unique characteristics, such as its non-ionizing nature: since the associated power is low, it is considered a safe technology for many applications. THz waves can penetrate several materials, such as plastics, paper, and wood. Moreover, they provide higher resolution than conventional mmWave technologies thanks to their shorter wavelengths. The most promising applications of THz technology are medical imaging, security/surveillance imaging, quality control, non-destructive materials testing and spectroscopy. The potential advantages in these fields motivate the development of room-temperature THz detectors. In terms of low cost, high volume, and high integration capabilities, standard CMOS technology is considered an excellent platform for fully integrated THz imaging systems. In this Ph.D. thesis, we report on the design and development of field effect transistor (FET) THz direct detectors operating at low THz frequencies (e.g. 300 GHz), as well as at higher THz frequencies (e.g. 800 GHz – 1 THz). In addition, we investigated the implementation issues that limit the power coupling efficiency with the integrated antenna, as well as the antenna-detector impedance-matching condition. The implemented antenna-coupled FET detector structures aim to improve the detection behavior in terms of responsivity and noise equivalent power (NEP) for CMOS-based imaging applications.
Since the THz signals detected with this approach are extremely weak and of limited bandwidth, the next section of this work presents a pixel-level readout chain containing a cascade of a pre-amplification and noise reduction stage, based on a parametric chopper amplifier, and a direct analog-to-digital conversion by means of an incremental Sigma-Delta converter. The readout circuit is designed to perform a lock-in operation with modulated sources. The in-pixel readout chain provides simultaneous signal integration and noise filtering for multi-pixel FET detector arrays, thus achieving a sensitivity similar to that of an external lock-in amplifier. Next, based on the experimental THz characterization and measurement results of a single pixel (antenna-coupled FET detector + readout circuit), the design and implementation of a multispectral imager containing a 10 x 10 THz focal plane array (FPA) as well as 50 x 50 visible (3T-APS) pixels is presented. The readout circuit for the visible pixels is realized as a column-level correlated double sampler. All of the designed chips have been implemented and fabricated in 0.15-μm standard CMOS technology. The physical implementation, fabrication and electrical testing preparation are discussed.
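As a rough behavioral sketch of the incremental Sigma-Delta conversion step only (not the fabricated circuit, and omitting the chopper amplifier and all circuit-level non-idealities), a first-order modulator digitizing a DC detector voltage can be modeled as an integrator plus a 1-bit feedback loop whose bit average converges to the input:

```python
def incremental_sd(x, n_cycles):
    """First-order incremental Sigma-Delta conversion of a DC input x in (-1, 1).
    The bitstream average approaches x with error on the order of 1/n_cycles."""
    v = 0.0          # integrator state, reset at the start of each conversion
    acc = 0          # decimating counter (simple sinc-1 filter)
    for _ in range(n_cycles):
        bit = 1 if v >= 0 else -1     # 1-bit quantizer decision
        acc += bit
        v += x - bit                  # integrate input minus DAC feedback
    return acc / n_cycles
```

The longer the conversion runs, the finer the resolution, which is why such converters suit slow, modulated imaging signals.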

Novel data-driven analysis methods for real-time fMRI and simultaneous EEG-fMRI neuroimaging

Soldati, Nicola January 2012 (has links)
Real-time neuroscience can be described as the use of neuroimaging techniques to extract and evaluate brain activations during their ongoing development. The possibility of tracking these activations opens the door to new research modalities as well as practical applications in both clinical practice and everyday life. Moreover, the combination of different neuroimaging techniques, i.e. multimodality, may reduce several limitations present in each single technique. Due to the intrinsic difficulties of real-time experiments, advanced signal processing algorithms are needed to fully exploit their potential. In particular, since brain activations are free to evolve in an unpredictable way, data-driven algorithms have the potential to be more suitable than model-driven ones. For example, in neurofeedback experiments brain activation tends to change its properties due to training or task effects, which underlines the need for adaptive algorithms. Blind Source Separation (BSS) methods, and in particular Independent Component Analysis (ICA) algorithms, are naturally suited to such conditions. Nonetheless, their applicability in this framework needs further investigation. The goals of the present thesis are: i) to develop a working real-time set-up for performing experiments; ii) to investigate different state-of-the-art ICA algorithms with the aim of identifying the most suitable ones (along with their optimal parameters) to be adopted in a real-time MRI environment; iii) to investigate novel ICA-based methods for performing real-time MRI neuroimaging; iv) to investigate novel methods to perform data fusion between EEG and fMRI data acquired simultaneously. The core of this thesis is organized around four "experiments", each one addressing one of these specific aims. The main results can be summarized as follows. Experiment 1: a data analysis software was implemented along with the hardware acquisition set-up for performing real-time fMRI.
The set-up was developed with the aim of providing a framework in which the novel methods proposed for real-time fMRI could be tested and run. Experiment 2: to select the most suitable ICA algorithm to be implemented in the system, we investigated theoretically and compared empirically the performance of 14 different ICA algorithms, systematically sampling different growing window lengths and model orders as well as a priori conditions (none, spatial or temporal). Performance was evaluated by computing the spatial and temporal correlation to a target component of brain activation, as well as computation time. Four algorithms were identified as best performing without prior information (constrained ICA, fastICA, jade-opac and evd), with their corresponding parameter choices. Both spatial and temporal priors were found to almost double the similarity to the target at no computation cost for the constrained ICA method. Experiment 3: the results and the suggested parameter choices from Experiment 2 were implemented to monitor ongoing activity in a sliding-window approach, to investigate different ways in which ICA-derived a priori information could be used to monitor a target independent component: i) back-projection of constant spatial information derived from a functional localizer, and dynamic use of ii) temporal, iii) spatial, or iv) both spatial and temporal ICA-constrained data. The methods were evaluated based on spatial and/or temporal correlation with the monitored target IC component, computation time and intrinsic stochastic variability of the algorithms. The results show that the back-projection method offers the highest performance both in terms of time-course reconstruction and speed. This method is very fast and effective as long as the monitored IC has a strong and well-defined behavior, since it relies on an accurate description of the spatial behavior. The dynamic methods offer comparable performance at the cost of higher computation time.
In particular, the spatio-temporal method performs comparably to back-projection in terms of computation time, while offering more variable performance in the reconstruction of spatial maps and time courses. Experiment 4: finally, a Higher Order Partial Least Squares based method combined with ICA is proposed and investigated to integrate EEG-fMRI data acquired simultaneously. This method proved promising, although more experiments are needed.
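The back-projection strategy of Experiment 3 reduces, at its core, to a least-squares amplitude estimate of a fixed spatial map in each incoming volume. A minimal pure-Python sketch under that reading (real data are full preprocessed 3D volumes, and the thesis pipeline involves much more than this single step):

```python
def backproject(spatial_map, volume):
    """Least-squares amplitude of a fixed spatial map (e.g. from a functional
    localizer) in one new fMRI volume; called once per TR, this yields the
    monitored component's time course."""
    num = sum(w * v for w, v in zip(spatial_map, volume))
    den = sum(w * w for w in spatial_map)
    return num / den
```

Its speed comes from being a single dot product per volume, which is consistent with the result that back-projection was the fastest monitoring method.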

Test-retest Reliability of Intrinsic Human Brain Default-Mode fMRI Connectivity: Slice Acquisition and Physiological Noise Correction Effects

Marchitelli, Rocco January 2016 (has links)
This thesis aims at evaluating, in two separate studies, strategies for physiological noise and head motion correction in resting-state brain FC-fMRI. In particular, as a general marker of noise correction performance we use the test-retest reproducibility of the DMN. The guiding hypothesis is that methods that improve reproducibility should reflect more efficient corrections and thus be preferable in longitudinal studies. The physiological denoising study evaluated longitudinal changes in a 3T harmonized multisite fMRI study of healthy elderly participants from the PharmaCog Consortium (Jovicich et al., 2016). Retrospective physiological noise correction (rPNC) methods were implemented to investigate their influence on several DMN reliability measures within and between 13 MRI sites. Each site involved five different healthy elderly participants, who were scanned twice at least a week apart. fMRI data analysis was performed once without rPNC and then with WM/CSF regression, with physiological estimation by temporal ICA (PESTICA) (Beall & Lowe, 2007) and with FMRIB's ICA-based Xnoiseifier (FSL-FIX) (Griffanti et al., 2014; Salimi-Khorshidi et al., 2014). These methods differ in their data-based computational approach to identifying physiological noise fluctuations and need to be applied at different stages of data preprocessing. As a working hypothesis, physiological denoising was in general expected to improve DMN reliability. The head motion study evaluated longitudinal changes in DMN connectivity from a 4T single-site study of 24 healthy young volunteers who were scanned twice within a week. Within each scanning session, RS-fMRI scans were acquired once using interleaved and then sequential slice-order acquisition methods. Furthermore, brain volumes were corrected for motion using once rigid-body volumetric and then slice-wise methods.
The effects of these choices were then evaluated by computing multiple DMN reliability measures and investigating single regions within the DMN to assess the existence of inter-regional effects associated with head motion. In this case, we expected to find slice-order acquisition effects in reliability estimates under standard volumetric motion correction and no slice-order acquisition effect under 2D slice-based motion correction. Both studies used ICA to characterize the DMN, via group-ICA and dual regression procedures (Beckmann et al., 2009). This methodology has proved successful at defining consistent DMN connectivity metrics in longitudinal and clinical RS-fMRI studies (Zuo & Xing, 2014). Automatic DMN selection procedures and other quality assurance analyses were carried out to supervise ICA performance. Both studies considered several test-retest (TRT) reliability estimates (Vilagut, 2014) for DMN connectivity measurements: the absolute percent error between sessions, intraclass correlation coefficients (ICC) between sessions and across sites, and the Jaccard index to evaluate the degree of voxel-wise spatial activation pattern overlap between sessions.
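Two of the reliability estimates listed above are simple enough to sketch directly (the binarization of the maps and the exact normalization of the percent error are assumptions here, and may be defined differently in the studies):

```python
def jaccard(mask_a, mask_b):
    """Jaccard overlap between two binarized DMN spatial maps (0/1 lists):
    intersection over union of the suprathreshold voxels."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

def abs_percent_error(test, retest):
    """Absolute percent error of a connectivity metric between two sessions,
    normalized by the session mean."""
    return 100.0 * abs(test - retest) / ((test + retest) / 2.0)
```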

Innovative methodologies for the synthesis of large array antennas for communications and space applications.

Caramanica, Federico January 2011 (has links)
Modern communication and space systems, such as satellite communication devices, radars, SAR and radio astronomy interferometers, are realized with large antenna arrays, since such radiating systems are able to generate radiation patterns with high directivity and resolution. In this framework, conventional arrays with uniform inter-element spacing may be unsatisfactory in terms of cost and size. An interesting alternative is to reduce the number of array elements, obtaining the so-called "thinned arrays". Large isophoric thinned arrays have been exploited because of their advantages in weight, consumption, hardware complexity, and cost over their filled counterparts. Unfortunately, thinning large arrays reduces the control of the peak sidelobe level (PSL) and does not automatically give optimal spatial-frequency coverage for correlators. First of all, the state-of-the-art methodologies used to overcome such limitations (e.g., random and algorithmic approaches, dynamic programming, and stochastic optimization algorithms such as genetic algorithms, simulated annealing or particle swarm optimizers) are analyzed and described in the introduction. Subsequently, innovative guidelines for the synthesis of large radiating systems are proposed and discussed in order to point out their advantages and limitations. In particular, the following specific issues are addressed in this work: 1. A new class of analytical rectangular thinned arrays with low peak sidelobe level (PSL). The proposed synthesis technique exploits binary sequences derived from McFarland difference sets to design thinned layouts on a lattice of P(P+2) positions for any prime P. The pattern features of the resulting massively-thinned arrangements, characterized by only P(P+1) active elements, are discussed, and the results of an extensive numerical analysis are presented to assess the advantages and limitations of McFarland-based arrays. 2.
A set of techniques based on the exploitation of low-correlation Almost Difference Set (ADS) sequences to design correlator arrays for radio astronomy applications. In particular, three approaches with different objectives and performances are discussed. ADS-based analytical designs, GA-optimized arrangements, and PSO-optimized arrays are presented and applied to the synthesis of open-ended "Y" and "Cross" array configurations, to maximize the u-v coverage or to minimize the peak sidelobe level (PSL). Representative numerical results are illustrated to point out the features and performances of the proposed approaches, and to assess their effectiveness in comparison with state-of-the-art design methodologies. The presented analysis indicates that the proposed approaches outperform existing PSO-based correlator arrays in terms of PSL control (e.g., >1.0 dB reduction) and tracking u-v coverage (e.g., up to 2% enhancement), while also improving the convergence speed of the synthesis process. 3. A genetic algorithm (GA)-enhanced almost difference set (ADS)-based methodology to design thinned planar arrays with low peak sidelobe levels (PSLs). The method overcomes the limitations of the standard ADS approach in terms of flexibility and performance. The numerical validation, carried out in the far field and for narrow-band signals, points out that with affordable computational effort it is possible to design planar array arrangements that outperform standard ADS-based designs as well as standard GA design approaches.
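The thesis works with planar McFarland/ADS layouts; as a hedged one-dimensional illustration of the PSL figure of merit itself, the sketch below evaluates the array factor of a binary on/off linear layout on a regular grid and reports its peak sidelobe level (the half-wavelength spacing, the angular sampling and the mainlobe-exclusion rule are simplifying assumptions, not the thesis's setup):

```python
import cmath
import math

def peak_sidelobe_db(layout, d=0.5, n_angles=2000):
    """Peak sidelobe level (dB, relative to the broadside mainlobe) of a
    thinned linear array. layout: 0/1 on-off sequence; d: spacing in wavelengths."""
    active = [n for n, on in enumerate(layout) if on]
    main = len(active)                       # array factor value at broadside
    u_null = 1.0 / (d * len(layout))         # rough first null of the full aperture
    psl = 0.0
    for k in range(n_angles + 1):
        u = -1.0 + 2.0 * k / n_angles        # u = sin(theta)
        if abs(u) <= u_null:
            continue                         # skip the mainlobe region
        af = abs(sum(cmath.exp(2j * math.pi * d * n * u) for n in active))
        psl = max(psl, af)
    return 20 * math.log10(psl / main)
```

A fully populated 8-element half-wavelength array evaluates to roughly -13 dB, the familiar uniform-array sidelobe level; thinning trades active elements against control of exactly this figure.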
