Remote Sensing-based Channel Modeling and Deployment Planning for Low-power Wireless Networks
Demetri, Silvia. January 2018
The deployment of low-power wireless networks is notoriously effort-demanding, as costly in-field campaigns are required to assess the connectivity properties of the target location and understand where to place the wireless nodes. The characteristics of the environment, both static (e.g., obstacles obstructing the link line of sight) and dynamic (e.g., changes in weather conditions), cause variability in the communication performance, thus affecting the quality and reliability of network operation. This translates into difficulties in effectively deploying, planning and managing these networks in real-world scenarios, especially outdoors. Despite the large literature on node placement, existing approaches make over-simplifying assumptions that neglect the complexity of the radio environment.
Airborne and satellite Remote Sensing (RS) systems acquire data and images over wide areas, thus enabling one to derive information about these areas at large scale. In this dissertation, we propose to leverage RS systems and related data processing techniques to i) automatically derive the static characteristics of the deployment environment that affect low power wireless communication; ii) model the relation between such characteristics and the communication quality; and iii) exploit this knowledge to support the deployment planning. We focus on two main scenarios: a) the deployment of Wireless Sensor Networks (WSNs) in forests; and b) the communication performance of Internet of Things (IoT) networks based on Long Range (LoRa) wireless technology in the presence of mixed environments.
As a first major contribution, we propose a novel WSN node placement approach (LaPS) that integrates remote sensing data acquired by airborne Light Detection and Ranging (LiDAR) instruments, a specialized path loss model and evolutionary computation to identify (near-)optimal node positions in forests, automatically and prior to the actual deployment. When low-power WSNs operating at 2.4 GHz are deployed in forests, the presence of trees greatly affects communication. We define a processing architecture that automatically derives local forest attributes (e.g., tree density) from LiDAR data acquired over the target forest. This information is incorporated into a specialized path loss model, validated in deployments in a real forest, which enables fine-grained, per-link estimates of the radio signal attenuation induced by trees. Combining the forest attributes derived from LiDAR data with the specialized path loss model and a genetic algorithm, LaPS provides node placement solutions of higher quality than approaches based on a regular placement or on a standard path loss model, while satisfying the spatial and network requirements provided by the user. In addition, LaPS enables the user to explore in advance the impact of changes in requirements on the resulting topologies, thus reducing the in-field deployment effort.
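As a rough illustration of how a density-aware path loss model can drive an evolutionary placement search, the sketch below couples a log-distance model with an additive vegetation term to a minimal genetic loop. All constants (reference loss, exponent, vegetation coefficient, link budget) and the scalar tree density are illustrative stand-ins, not the parameters of LaPS or its validated model.

```python
import math
import random

def path_loss_db(d_m, tree_density, pl0=40.0, n=2.0, k_veg=0.05):
    """Log-distance path loss at 2.4 GHz plus an additive vegetation term
    scaled by a LiDAR-derived tree density (all constants illustrative)."""
    d_m = max(d_m, 1.0)
    return pl0 + 10.0 * n * math.log10(d_m) + k_veg * tree_density * d_m

def fitness(nodes, tree_density, tx_dbm=0.0, sens_dbm=-95.0):
    """Count links whose budget closes: a crude stand-in for the real
    LaPS objectives (coverage, connectivity, user constraints)."""
    return sum(
        tx_dbm - path_loss_db(math.dist(a, b), tree_density) >= sens_dbm
        for i, a in enumerate(nodes) for b in nodes[i + 1:]
    )

def evolve(n_nodes=8, side=200.0, tree_density=5.0, pop_size=30, gens=100):
    """Minimal genetic loop: keep the fitter half, mutate coordinates."""
    rnd = random.Random(1)

    def random_layout():
        return [(rnd.uniform(0, side), rnd.uniform(0, side)) for _ in range(n_nodes)]

    pop = [random_layout() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, tree_density), reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[(x + rnd.gauss(0, 5.0), y + rnd.gauss(0, 5.0)) for x, y in p]
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, tree_density))

best = evolve()
print("links closed in best layout:", fitness(best, 5.0))
```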
Moreover, to explore a low-power wireless technology with starkly different trade-offs, we consider a LoRa-based IoT network operating in i) a free-space-like communication environment, i.e., the LoRa signal is transmitted from a high-altitude weather balloon, traverses an obstacle-free space and is received by gateways on the ground; and ii) a mixed environment containing built-up areas, farming fields and groups of trees, with both LoRa transmitters and receiving gateways close to the ground. These scenarios show a huge gap in communication range, revealing to what extent the presence of objects affects the coverage that LoRa gateways can provide. To characterize the mixed environment we exploit detailed land cover maps (i.e., with a 10x10 m spatial grain) derived by automatically classifying multispectral remote sensing satellite images. The land cover information is jointly analyzed with LoRa connectivity traces, enabling us to observe a correlation between the land cover types involved in LoRa links and the trend of the signal attenuation with distance. This analysis opens interesting research avenues aimed at defining LoRa connectivity models that quantitatively account for the type of environment involved in the communication by leveraging RS data.
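The correlation analysis suggests fitting a per-cover attenuation trend. A minimal sketch of such a fit is shown below: the path loss exponent n and intercept are estimated by least squares on log-distance; the traces are invented for illustration, not the measured LoRa data.

```python
import numpy as np

def fit_path_loss(d_m, rssi_dbm, tx_dbm=14.0):
    """Fit PL(d) = PL0 + 10 n log10(d / 1 m) to measured RSSI, returning
    the estimated exponent n and intercept PL0."""
    pl = tx_dbm - np.asarray(rssi_dbm, float)     # observed path loss in dB
    x = 10.0 * np.log10(np.asarray(d_m, float))   # regressor: 10 log10(d)
    n, pl0 = np.polyfit(x, pl, 1)
    return n, pl0

# Hypothetical traces grouped by dominant land cover along each link.
traces = {
    "open field": ([200, 500, 1200, 3000], [-78, -88, -97, -107]),
    "built-up":   ([200, 500, 1200, 3000], [-85, -99, -112, -124]),
}
for cover, (d, rssi) in traces.items():
    n, pl0 = fit_path_loss(d, rssi)
    print(f"{cover}: n = {n:.2f}, PL0 = {pl0:.1f} dB")
```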
An Innovative Learning-by-Example Methodological Strategy for Advanced Reflectarray Antenna Design
Tenuti, Lorenza. January 2018
Reflectarray antennas are reflector structures that combine characteristics of both reflector and array antennas. They exhibit electrically large apertures in order to generate significant gain, like conventional metallic reflector antennas. At the same time, they are populated by several radiating elements that can be controlled individually, like conventional phased array antennas. They are usually flat and can be folded and deployed, permitting significant savings in volume. For these reasons they have been considered for several years for satellite applications. Initially constituted by truncated metallic waveguides and mainly considered for radar applications, they are now mostly built on a dielectric substrate, backed by a metallic plane (ground plane), on which microstrip elements of variable shape/size/orientation are printed. These elements are illuminated by the primary feed. The wave reflected from each element has a phase that can be controlled by the geometry of the element itself. By a suitable design of the elements that make up the reflectarray, it is therefore possible to compose the phase front of the reflected waves in the desired (steering) direction, and to ensure that the resulting overall radiation pattern exhibits a secondary lobe profile that meets the design specifications. Reflectarrays may be used to synthesize pencil or shaped beams. The synthesis methods commonly used to achieve this goal are based on three steps: (a) calculation of the near-field "phase distribution" that the wave reflected by the reflectarray must exhibit to obtain the desired far-field behaviour; (b) discretization of such a distribution into cells of size comparable to that of the elements of interest (i.e., the patches); (c) calculation of the geometry of each elementary cell that will provide the desired reflection coefficient. The first step (a) is a phase-only approach and already provides fast preliminary indications of the achievable performance. Accurate results require the implementation of steps (b) and (c) as well; it is thus of fundamental importance to have techniques capable of efficiently and accurately calculating the reflection coefficient associated with a given geometry of the element, in order to efficiently solve step (c). This coefficient is mathematically represented by a 2x2 complex matrix, which accounts for the relationships between the co-polar and cross-polar components of the incident (due to the feed) and reflected fields. This matrix naturally depends on the geometry of the element, the direction of incidence of the wave (azimuth and elevation) and the operating frequency of the system. The computation of the reflection coefficient is usually performed using electromagnetic full-wave (FW) simulators; the computation is however time consuming, and generating the database of unit-cell scattering responses often becomes unfeasible. In this work, an innovative strategy based on an advanced statistical learning method is introduced to efficiently and accurately predict the electromagnetic response of complex-shaped reflectarray elements. The computation of the scattering coefficients of periodic arrangements, characterized by an arbitrary number of degrees of freedom, is first recast as a vectorial regression problem, then solved with a learning-by-example strategy exploiting the Ordinary Kriging paradigm.
A set of representative numerical experiments dealing with different element geometries is presented to assess the accuracy, the computational efficiency, and the flexibility of the proposed technique, also in comparison with state-of-the-art machine learning methods.
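To give a concrete flavor of the learning-by-example step, the following sketch applies Ordinary Kriging to predict one real-valued entry of the reflection response from a handful of precomputed samples. In the actual problem each complex entry of the 2x2 matrix would be regressed (vectorial regression) and the training set would come from full-wave simulations; the geometry parameters, correlation length and values below are invented for illustration.

```python
import numpy as np

def ok_predict(X, y, x_new, corr_len=0.2):
    """Ordinary Kriging prediction of one scattering-response entry from a
    small (geometry -> response) database, using a Gaussian correlation
    model. Solves the standard OK system with a Lagrange multiplier that
    enforces weights summing to one."""
    def corr(a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2.0 * corr_len ** 2))

    n = len(X)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = corr(X[i], X[j])
    b = np.ones(n + 1)
    for i in range(n):
        b[i] = corr(X[i], x_new)
    weights = np.linalg.solve(A, b)[:n]
    return weights @ y

# Hypothetical database: normalized unit-cell geometry (patch length, slot
# width) -> real part of the co-polar reflection coefficient.
X = np.array([[0.2, 0.1], [0.4, 0.3], [0.6, 0.2], [0.8, 0.5]])
y = np.array([0.95, 0.60, -0.20, -0.85])
print(ok_predict(X, y, np.array([0.5, 0.25])))
```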
Economics of Privacy: Users’ Attitudes and Economic Impact of Information Privacy Protection
Frik, Alisa. January 2017
This doctoral thesis consists of three essays within the field of the economics of information privacy, examined through the lens of behavioral and experimental economics. The rapid development and expansion of Internet, mobile and network technologies in the last decades has provided multitudinous opportunities and benefits to both business and society, offering customized services and personalized offers at relatively low price and high speed. However, such innovations and progress have also created complex and hazardous issues. One of the main problems is related to the management of extensive flows of information containing terabytes of personal data. The collection, storage, analysis, and sharing of this information imply risks and trigger users' concerns that range from nearly harmless to significantly pernicious, including tracking of online behavior and location, intrusive or unsolicited marketing, price discrimination, surveillance, hacking attacks, fraud, and identity theft. Some users ignore these issues or at least take no action to protect their online privacy. Others try to limit their activity on the Internet, which in turn may inhibit the acceptance of online shopping. Yet another group of users turns to personal information protection, for example by deploying privacy-enhancing technologies, e.g., ad-blockers, e-mail encryption, etc. Ad-blockers can reduce the revenue of online publishers, who provide content to their users for free and receive no income from advertisers when the user has blocked ads. The economics of privacy studies the trade-offs related to the positive and negative economic consequences of personal information use by data subjects and its protection by data holders, and aims at balancing the interests of both parties by optimising the expected utilities of the various stakeholders. As technology penetrates every aspect of human life, raising numerous privacy issues and affecting a large number of interested parties, including business, policy-makers, and legislative regulators, the outcome of this research is expected to have a great impact on individual economic markets, consumers, and society as a whole. The first essay provides an extensive literature review and combines theoretical and empirical evidence on the impact of advertising in both traditional and digital media in order to gain insights about the effects of ad-blocking privacy-enhancing technologies on consumers' welfare. It first studies the views of the two main schools of advertising, informative and persuasive. The informative school emphasizes the positive effects of advertising on sales, competition, product quality, and consumers' utility and satisfaction: it matches buyers to sellers, informs potential customers about available goods and enhances their informed purchasing decisions. In contrast, the advocates of the persuasive school view advertising as a generator of irrational brand loyalty that distorts consumers' preferences, inflates product prices, and creates entry barriers. I pay special attention to targeted advertising, which is typically assumed to have a positive impact on consumers' welfare as long as it does not decrease product quality and does not extract consumers' surplus by exploiting reservation prices for discriminatory practices.
Moreover, the utility of personalized advertising appears to be a function of its accuracy: the more relevant a targeted offer is, the more valuable it is for the customer. I then review the effects of online advertising on the main stakeholders and users, and show that the low cost of online advertising leads to excessive advertising volumes, causing information overload, psychological discomfort and reactance, privacy concerns, decreased exploration activities and opinion diversity, and market inefficiency. Finally, as ad-blocking technologies filter advertising content and limit advertising exposure, I analyze the consequences of ad-blocking deployment through the lens of models of advertising restrictions. Controlling advertising volume and partially restricting it would benefit both consumers and businesses more than a complete ban on advertising. For example, advertising exposure caps, which limit the number of times the same ad is shown to a particular user, a general reduction of advertising slots, control of advertising quality standards, and limitation of tracking would result in a better market equilibrium than an arms race between ad-blockers and anti-ad-blockers can offer. Finally, I review solutions alternative to the blocking of advertising content, which include self-regulation, non-intrusive ads programs, paywalls, the intention economy approach, which promotes business models in which the user, not the marketer, initiates the trade, and active social movements aimed at increasing social awareness and consumer education. The second essay describes a model of the factors affecting Internet users' perceptions of websites' trustworthiness with respect to their privacy, and of the intentions to purchase from such websites. Using a focus group method, I calibrate a list of website attributes that represent those factors. I then run an online survey with 117 adult participants to validate the research model. I find that privacy (including awareness, information collection and control practices), security, and reputation (including background and feedback) have a strong effect on trust and willingness to buy, while website quality plays a marginal role. Although trustworthiness perceptions and purchase intentions are generally positively correlated, in some cases participants are likely to purchase from websites they have judged as untrustworthy. I discuss how behavioral biases and decision-making heuristics may explain this discrepancy between perceptions and behavioral intentions. Finally, I analyze and suggest which factors, particular website attributes, and individual characteristics have the strongest effect on hindering or advancing customers' trust and willingness to buy. In the third essay I investigate the decision of experimental subjects to incur the risk of revealing personal information to other participants. I do so by using a novel method to generate personal information that reliably induces privacy concerns in the laboratory. I show that individual decisions to incur privacy risk are correlated with decisions to incur monetary risk. I find that partially depriving subjects of control over the revelation of their personal information does not lead them to lose interest in protecting it. I also find that making subjects think about privacy decisions after financial decisions reduces their aversion to privacy risk.
Finally, surveyed attitudes to privacy and explicit willingness to pay for, or accept payments for, personal information correlate with willingness to incur privacy risk. Having shown that privacy loss can be assimilated to a monetary loss, I compare decisions to incur risk in privacy lotteries with risk attitudes in monetary lotteries to derive estimates of the implicit monetary value of privacy. The average implicit monetary value of privacy is about equal to the average willingness to pay to protect private information, but the two measures do not correlate at the individual level. I conclude by underlining the need to know individual attitudes to risk in order to properly evaluate individual attitudes to privacy as such.
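As an illustration of how lottery choices can bound an implicit monetary value, the toy sketch below compares switching points in two hypothetical multiple-price-list tasks. This is a generic elicitation device with invented numbers, not the specific experimental design or data of the essay.

```python
def switch_point(choices, sure_offers):
    """Return the first sure payment the subject accepts instead of playing
    the lottery; it brackets their certainty equivalent from above."""
    for offer, takes_sure in zip(sure_offers, choices):
        if takes_sure:
            return offer
    return None

sure_offers = [0.5, 1.0, 2.0, 4.0, 8.0]               # euros, illustrative
monetary_lottery = [False, False, True, True, True]   # switches at 2.0
privacy_lottery = [False, True, True, True, True]     # switches at 1.0

ce_money = switch_point(monetary_lottery, sure_offers)
ce_privacy = switch_point(privacy_lottery, sure_offers)
# A subject who settles for less to avoid the privacy lottery implicitly
# prices the privacy risk; the gap is one crude proxy for that value.
print(ce_money - ce_privacy)   # 1.0
```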
A Lexi-ontological Resource for Consumer Healthcare: The Italian Consumer Medical Vocabulary
Cardillo, Elena. January 2011
In the era of Consumer Health Informatics, healthcare consumers and patients play an active role: they increasingly explore health-related information sources on their own, and they become more responsible for their personal healthcare, trying to find information on the web, consulting decision-support healthcare systems, interpreting clinical notes or test results provided by their physician, or filling in parts of their own Personal Health Record (PHR).
In spite of the advances in Healthcare Informatics in answering consumer needs, it is still difficult for laypersons without a good level of health literacy to find, understand, and act on health information, due to the communication gap that still persists between consumer and professional language (in terms of lexicon, semantics, and explanation). Significant effort has been devoted to promoting access to and integration of medical information, and many standard terminologies have been developed for this aim, some of which have been formalized into ontologies.
Many of these terminological resources are used in healthcare information systems, but one of the most important problems is that they have been developed according to the physicians' perspective and thus cannot provide sufficient support when integrated into consumer-oriented applications, such as Electronic Health Records, Personal Health Records, etc. This highlights the need for intermediate, consumer-understandable terminologies or ontologies to be integrated with more technical ones in order to support communication between patient applications and those designed for experts. The aim of this thesis is to develop a lexical-ontological resource for consumer-oriented healthcare applications based on the construction of a Consumer-oriented Medical Vocabulary for Italian, able to reflect the different ways consumers and patients express and think about health topics and thus help bridge the vocabulary gap. By means of Semantic Web technologies, this vocabulary is integrated with standard medical terminologies/ontologies used by professionals in general practice for representing the process of care, yielding a coherent semantic medical resource useful both for professionals and for consumers.
The feasibility of this consumer-oriented resource and of the integration framework has been tested by applying it to an Italian Personal Health Record, in order to help consumers and patients query healthcare information and easily describe their problems, complaints and clinical history.
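A minimal sketch of the kind of consumer-to-professional mapping such a vocabulary enables is shown below, using SKOS-style match relations. The entries and their ICPC-2-style codes are illustrative examples, not records from the actual Italian vocabulary.

```python
# Illustrative mapping entries: a lay Italian expression linked to a
# professional concept via a SKOS-style typed relation. Codes follow the
# ICPC-2 pattern but should be treated as examples, not vocabulary records.
MAPPINGS = [
    {"consumer_term": "mal di testa", "relation": "skos:exactMatch",
     "concept": {"scheme": "ICPC-2", "code": "N01", "label": "Cefalea"}},
    {"consumer_term": "bruciore di stomaco", "relation": "skos:closeMatch",
     "concept": {"scheme": "ICPC-2", "code": "D03", "label": "Pirosi"}},
]

def to_professional(term):
    """Return the professional concepts a lay term maps to, so a PHR can
    index consumer-entered complaints against standard terminology."""
    term = term.strip().lower()
    return [m["concept"] for m in MAPPINGS if m["consumer_term"] == term]

print(to_professional("Mal di testa"))
```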
Renewable Energy and the Smart Grid: Architecture Modelling, Communication Technologies and Electric Vehicles Integration
Wang, Qi. January 2015
Renewable energy is considered an effective solution for relieving the energy crisis and reducing greenhouse gas emissions. It is also recognized as an important energy resource for power supply in the next-generation power grid, the smart grid. For a long time, the intermittent and unstable nature of renewable energy generation has been the main challenge to combining renewable energy with the smart grid. Utilities' limited remote control capabilities cause low-efficiency power scheduling in the distribution area and increase the difficulty of connecting locally generated renewable energy to the grid. Furthermore, with the rapid growth in the number of electric vehicles and the wide deployment of fast charging stations in urban and rural areas, unpredictable charging demand will become another challenge for the power grid within a few years. In this thesis we propose solutions to the challenges enumerated above. Based on the architecture of the terminal power consumer's residence, we introduce a local renewable energy system into the residential environment. Such a system can typically support part of the consumer's power demand, or even more. We establish the architecture of a local smart grid community based on the structure of the distribution network of the smart grid, including terminal power consumers, a secondary power substation, communication links and a sub data management center. Communication links are employed as the data transmission channels in our scheme. A local power scheduling algorithm and an optimal path selection algorithm are designed to meet power scheduling requirements and to support stable expansion of the power supply area. Acknowledging that the information flow of the smart grid needs appropriate communication technologies as its communication standards, we explore the available communication technologies as well as the communication requirements and performance metrics of smart grid networks. A power saving mechanism for smart devices in the advanced metering infrastructure is also proposed, based on a two-state-switch scheduling algorithm and an improved 802.11ah-based data transmission model. A renewable energy system can be employed not only in residential environments but also in public ones, such as fast charging stations and public parking areas. Given the current battery capacity of electric vehicles (EVs), fast charging stations are demanded not just by EV drivers but also by the related enterprises. We propose an upgraded fast charging station with a locally deployed renewable energy system in a public parking area. Based on a queueing model, we derive a stochastic control model for the fast charging station. A new status called "Service Jumped" is introduced to express the real-time service state of the fast charging station with and without support from the local renewable energy.
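As one illustration of the queueing building blocks such a control model can rest on, the sketch below computes the Erlang-C waiting probability for a station modeled as an M/M/c queue. The rates and station size are invented, and the thesis model (with its "Service Jumped" state and renewable-energy coupling) is richer than this textbook formula.

```python
import math

def erlang_c(arrival_rate, service_rate, chargers):
    """Probability that an arriving EV must wait in an M/M/c model of a
    fast charging station (Erlang-C formula)."""
    a = arrival_rate / service_rate            # offered load in erlangs
    rho = a / chargers                         # per-charger utilization
    if rho >= 1.0:
        return 1.0                             # unstable: everyone waits
    idle_terms = sum(a ** k / math.factorial(k) for k in range(chargers))
    wait_term = a ** chargers / (math.factorial(chargers) * (1.0 - rho))
    return wait_term / (idle_terms + wait_term)

# 10 EVs/hour, 20-minute average fast charge (3/hour), 5 charging points.
print(round(erlang_c(10.0, 3.0, 5), 3))
```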
Towards Uncovering the True Use of Unlabeled Data in Machine Learning
Sansone, Emanuele. January 2018
Knowing how to exploit unlabeled data is a fundamental problem in machine learning. This dissertation provides contributions in different contexts, including semi-supervised learning, positive unlabeled learning and representation learning. In particular, we ask (i) whether it is possible to learn a classifier in the context of limited data, (ii) whether it is possible to scale existing models for positive unlabeled learning, and (iii) whether it is possible to train a deep generative model with a single minimization problem.
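For context on question (ii), a classic positive unlabeled baseline is the Elkan & Noto (2008) estimator, sketched below on synthetic data; this is a standard reference method, not the dissertation's own model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def elkan_noto(X, s):
    """Positive-unlabeled learning in the style of Elkan & Noto (2008):
    s[i] = 1 for labeled positives, 0 for unlabeled. Returns a function
    estimating p(y=1 | x) under the 'selected completely at random'
    labeling assumption."""
    g = LogisticRegression().fit(X, s)
    # c = p(s=1 | y=1), estimated as the mean score over labeled positives.
    c = g.predict_proba(X[s == 1])[:, 1].mean()
    return lambda Xq: np.clip(g.predict_proba(Xq)[:, 1] / c, 0.0, 1.0)

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, (200, 2))
neg = rng.normal(-2.0, 1.0, (200, 2))
X = np.vstack([pos, neg])
s = np.zeros(400, int)
s[rng.choice(200, 50, replace=False)] = 1   # only 50 positives are labeled
predict = elkan_noto(X, s)
print(predict(np.array([[2.0, 2.0], [-2.0, -2.0]])).round(2))
```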
Semantic Image Interpretation - Integration of Numerical Data and Logical Knowledge for Cognitive Vision
Donadello, Ivan. January 2018
Semantic Image Interpretation (SII) is the process of generating a structured description of the content of an input image. This description is encoded as a labelled directed graph where nodes correspond to objects in the image and edges to semantic relations between objects. Such a detailed structure allows more accurate searching and retrieval of images. In this thesis, we propose two well-founded methods for SII. Both methods exploit background knowledge about the domain of the images, in the form of logical constraints of a knowledge base. The first method formalizes SII as the extraction of a partial model of a knowledge base. Partial models are built with a clustering and reasoning algorithm that considers both low-level and semantic features of images. The second method uses the Logic Tensor Networks framework to build the labelled directed graph of an image. This framework is able to learn from data in the presence of the logical constraints of the knowledge base. The graph construction is therefore performed by predicting the labels of the nodes and the relations according to the logical constraints and the features of the objects in the image. These methods improve the state of the art by introducing two well-founded methodologies that integrate low-level and semantic features of images with logical knowledge. Indeed, other methods do not deal with low-level features or use only statistical knowledge coming from training sets or corpora. Moreover, the second method surpasses the state-of-the-art performance on the standard task of visual relationship detection.
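A minimal sketch of the SII output structure, a labelled directed graph over detected objects, might look as follows; the classes, bounding boxes and the relation are invented for illustration.

```python
# A toy labelled directed graph for one image: objects as nodes, semantic
# relations as edges.
graph = {
    "nodes": {
        "o1": {"class": "person", "bbox": (30, 40, 120, 300)},
        "o2": {"class": "horse", "bbox": (100, 80, 380, 320)},
    },
    "edges": [("o1", "rides", "o2")],   # (subject, predicate, object)
}

def triples(g):
    """Flatten the graph into (subject class, predicate, object class)
    triples, the form scored in visual relationship detection."""
    return [(g["nodes"][s]["class"], p, g["nodes"][o]["class"])
            for s, p, o in g["edges"]]

print(triples(graph))   # [('person', 'rides', 'horse')]
```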
THz Radiation Detection Based on CMOS Technology
Khatib, Moustafa. January 2019
The Terahertz (THz) band of the electromagnetic spectrum, also referred to as sub-millimeter waves, covers the frequency range from 300 GHz to 10 THz. Radiation in this frequency range has several unique characteristics, such as its non-ionizing nature: the associated photon energy is low, and it is therefore considered a safe technology in many applications. THz waves can penetrate several materials such as plastics, paper, and wood. Moreover, they provide higher resolution than conventional mmWave technologies thanks to their shorter wavelengths.
The most promising applications of the THz technology are medical imaging, security/surveillance imaging, quality control, non-destructive materials testing and spectroscopy.
The potential advantages in these fields provide the motivation to develop room-temperature THz detectors. In terms of low cost, high volume, and high integration capabilities, standard CMOS technology has been considered as an excellent platform to achieve fully integrated THz imaging systems.
In this Ph.D. thesis, we report on the design and development of field effect transistor (FET) THz direct detectors operating at low THz frequencies (e.g. 300 GHz) as well as at higher ones (e.g. 800 GHz – 1 THz). In addition, we investigate the implementation issues that limit the power coupling efficiency with the integrated antenna, as well as the antenna-detector impedance-matching condition. The implemented antenna-coupled FET detector structures aim to improve the detection performance in terms of responsivity and noise equivalent power (NEP) for CMOS-based imaging applications.
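For orientation, the two figures of merit mentioned above relate as NEP = output noise density / responsivity; the sketch below works through this relation with toy numbers that are illustrative, not measured values from the fabricated detectors.

```python
import math

def nep_w_per_sqrt_hz(noise_density_v, responsivity_v_per_w):
    """NEP = output voltage noise density / voltage responsivity."""
    return noise_density_v / responsivity_v_per_w

def min_detectable_power_w(nep, bandwidth_hz):
    """Smallest detectable power at SNR = 1 for a given post-detection
    bandwidth."""
    return nep * math.sqrt(bandwidth_hz)

# Illustrative figures only: 30 nV/sqrt(Hz) output noise, 1 kV/W responsivity.
nep = nep_w_per_sqrt_hz(30e-9, 1000.0)
print(nep)                                  # 3e-11 W/sqrt(Hz) = 30 pW/sqrt(Hz)
print(min_detectable_power_w(nep, 100.0))   # 3e-10 W in a 100 Hz bandwidth
```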
Since the THz signals detected with this approach are extremely weak and of limited bandwidth, the next section of this work presents a pixel-level readout chain containing a cascade of a pre-amplification and noise reduction stage, based on a parametric chopper amplifier, and a direct analog-to-digital conversion by means of an incremental Sigma-Delta converter. The readout circuit is designed to perform a lock-in operation with modulated sources. The in-pixel readout chain provides simultaneous signal integration and noise filtering for multi-pixel FET detector arrays, achieving a sensitivity similar to that of an external lock-in amplifier.
Next, based on the experimental THz characterization and measurement results of a single pixel (antenna-coupled FET detector + readout circuit), the design and implementation of a multispectral imager containing a 10x10 THz focal plane array (FPA) as well as a 50x50 array of visible (3T-APS) pixels is presented. The readout circuit for the visible pixels is realized as a column-level correlated double sampler. All of the designed chips have been implemented and fabricated in 0.15-μm standard CMOS technology. The physical implementation, fabrication and electrical testing preparation are discussed.
Novel data-driven analysis methods for real-time fMRI and simultaneous EEG-fMRI neuroimaging
Soldati, Nicola. January 2012
Real-time neuroscience can be described as the use of neuroimaging techniques to extract and evaluate brain activations during their ongoing development. The possibility of tracking these activations opens the door to new research modalities as well as practical applications in both clinical and everyday life. Moreover, the combination of different neuroimaging techniques, i.e. multimodality, may reduce several limitations present in each single technique. Due to the intrinsic difficulties of real-time experiments, advanced signal processing algorithms are needed to fully exploit their potential. In particular, since brain activations are free to evolve in an unpredictable way, data-driven algorithms are potentially more suitable than model-driven ones. For example, in neurofeedback experiments brain activation tends to change its properties due to training or task effects, evidencing the need for adaptive algorithms. Blind Source Separation (BSS) methods, and in particular Independent Component Analysis (ICA) algorithms, are naturally suited to such conditions. Nonetheless, their applicability in this framework needs further investigation. The goals of the present thesis are: i) to develop a working real-time set-up for performing experiments; ii) to investigate different state-of-the-art ICA algorithms with the aim of identifying the most suitable ones (along with their optimal parameters) to be adopted in a real-time MRI environment; iii) to investigate novel ICA-based methods for performing real-time MRI neuroimaging; iv) to investigate novel methods to fuse EEG and fMRI data acquired simultaneously. The core of this thesis is organized around four "experiments", each addressing one of these specific aims. The main results can be summarized as follows. Experiment 1: a data analysis software was implemented along with the hardware acquisition set-up for performing real-time fMRI. The set-up was developed to provide a framework in which the novel real-time fMRI methods proposed here could be tested and run. Experiment 2: to select the most suitable ICA algorithm to be implemented in the system, we investigated theoretically and compared empirically the performance of 14 different ICA algorithms, systematically sampling different growing window lengths, model orders and a priori conditions (none, spatial or temporal). Performance was evaluated by computing the spatial and temporal correlation to a target component of brain activation, as well as computation time. Four algorithms were identified as best performing without prior information (constrained ICA, fastICA, jade-opac and evd), with their corresponding parameter choices. Both spatial and temporal priors were found to almost double the similarity to the target at no extra computation cost for the constrained ICA method. Experiment 3: the results and the suggested parameter choices from experiment 2 were used to monitor ongoing activity in a sliding-window approach, investigating different ways in which ICA-derived a priori information could be used to monitor a target independent component: i) back-projection of constant spatial information derived from a functional localizer, ii) dynamic use of temporal, iii) spatial, or iv) spatial-temporal ICA-constrained data.
The methods were evaluated based on the spatial and/or temporal correlation with the monitored target IC component, computation time and the intrinsic stochastic variability of the algorithms. The results show that the back-projection method offers the highest performance both in terms of time course reconstruction and speed. This method is very fast and effective as long as the monitored IC has a strong and well defined behavior, since it relies on an accurate description of the spatial behavior. The dynamic methods offer comparable performance at the cost of higher computation time. In particular, the spatio-temporal method performs comparably to back-projection in terms of computation time, while offering more variable performance in the reconstruction of spatial maps and time courses. Experiment 4: finally, a Higher Order Partial Least Squares based method combined with ICA is proposed and investigated to integrate EEG-fMRI data acquired simultaneously. This method proved promising, although more experiments are needed.
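A minimal sketch of the winning back-projection idea: with fixed spatial maps from a localizer ICA, the time point of each component in a new volume follows from a least-squares projection. The data below are synthetic; the real pipeline operates on preprocessed fMRI volumes in a sliding window.

```python
import numpy as np

def backproject(spatial_maps, volume):
    """Least-squares estimate of per-component activation for one new fMRI
    volume, by projecting it onto fixed spatial maps (voxels x components)
    taken from a functional-localizer ICA."""
    return np.linalg.pinv(spatial_maps) @ volume

rng = np.random.default_rng(0)
n_voxels, n_components = 5000, 10
maps = rng.normal(size=(n_voxels, n_components))
true_activity = rng.normal(size=n_components)
volume = maps @ true_activity + 0.1 * rng.normal(size=n_voxels)

estimate = backproject(maps, volume)
print(np.allclose(estimate, true_activity, atol=0.05))   # True
```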
Test-retest Reliability of Intrinsic Human Brain Default-Mode fMRI Connectivity: Slice Acquisition and Physiological Noise Correction Effects
Marchitelli, Rocco. January 2016
This thesis aims at evaluating, in two separate studies, strategies for physiological noise and head motion correction in resting-state brain FC-fMRI. In particular, as a general marker of noise correction performance we use the test-retest reproducibility of the DMN. The guiding hypothesis is that methods that improve reproducibility should reflect more efficient corrections and thus be preferable in longitudinal studies. The physiological denoising study evaluated longitudinal changes in a 3T harmonized multisite fMRI study of healthy elderly participants from the PharmaCog Consortium (Jovicich et al., 2016). Retrospective physiological noise correction (rPNC) methods were implemented to investigate their influence on several DMN reliability measures within and between 13 MRI sites. Each site scanned five different healthy elderly participants twice, at least a week apart. fMRI data analysis was performed once without rPNC and then with WM/CSF regression, with physiological estimation by temporal ICA (PESTICA) (Beall & Lowe, 2007), and with FMRIB's ICA-based X-noiseifier (FSL-FIX) (Griffanti et al., 2014; Salimi-Khorshidi et al., 2014). These methods differ in their data-based computational approach to identifying physiological noise fluctuations, and need to be applied at different stages of data preprocessing. As a working hypothesis, physiological denoising was in general expected to improve DMN reliability. The head motion study evaluated longitudinal changes in DMN connectivity in a 4T single-site study of 24 healthy young volunteers who were scanned twice within a week. Within each scanning session, RS-fMRI scans were acquired once using interleaved and then sequential slice-order acquisition. Furthermore, brain volumes were corrected for motion using first rigid-body volumetric and then slice-wise methods. The effects of these choices were evaluated by computing multiple DMN reliability measures and investigating single regions within the DMN to assess the existence of inter-regional effects associated with head motion. In this case, we expected to find slice-order acquisition effects in reliability estimates under standard volumetric motion correction and no slice-order acquisition effect under 2D slice-based motion correction. Both studies used ICA to characterize the DMN, with group-ICA and dual regression procedures (Beckmann et al., 2009). This methodology has proved successful at defining consistent DMN connectivity metrics in longitudinal and clinical RS-fMRI studies (Zuo & Xing, 2014). Automatic DMN selection procedures and other quality assurance analyses were performed to supervise ICA performance. Both studies considered several test-retest (TRT) reliability estimates (Vilagut, 2014) for the DMN connectivity measurements: the absolute percent error between sessions, intraclass correlation coefficients (ICC) between sessions and across sites, and the Jaccard index to evaluate the degree of voxel-wise spatial activation pattern overlap between sessions.
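A sketch of two of these reliability estimates on toy numbers is given below. The ICC variant shown is a standard two-way absolute-agreement form, ICC(2,1), which may differ in detail from the exact estimator used in the studies; the connectivity values are invented.

```python
import numpy as np

def icc_2_1(session1, session2):
    """Two-way random, absolute-agreement, single-measure ICC(2,1) from the
    classic ANOVA mean squares (n subjects x k=2 sessions)."""
    x = np.column_stack([session1, session2]).astype(float)
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def jaccard(mask1, mask2):
    """Voxel-wise spatial overlap between two binarized DMN maps."""
    a, b = np.asarray(mask1, bool), np.asarray(mask2, bool)
    return (a & b).sum() / (a | b).sum()

s1 = np.array([0.41, 0.35, 0.50, 0.46, 0.38])   # session 1 DMN strength
s2 = np.array([0.43, 0.33, 0.52, 0.44, 0.40])   # session 2 DMN strength
print(icc_2_1(s1, s2), jaccard(s1 > 0.4, s2 > 0.4))
```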