331. BugBits: Making tangibles with children. Conci, Andrea. January 2018.
The thesis presents and discusses the process that led to the development of a tangible toolkit for supporting design workshops aimed at building tangible interfaces with children. The toolkit, called BugBits, was used to explore and instantiate participatory design workshops with children, enabling them to be creative and develop new prototypes. BugBits was tested in three case studies with children of different ages. The first study was conducted in a modern art museum, where children aged 13 to 15 (N=185) built personalised artefacts with the toolkit; the artefacts were then used in an augmented visit to some of the museum's exhibition rooms. The second study (N=31) was conducted in a kindergarten with children aged 3 to 6, where the toolkit was used for two educational exercises about colour characteristics. The third study (N=24) explored how the toolkit can be used to instantiate creative processes during participatory design workshops with children aged 7 to 11. During the studies, qualitative and quantitative data were collected and analysed.
The outcomes of the analysis show that the toolkit can successfully keep children engaged (studies 1, 2 and 3), obtain active and effective participation (study 3), and allow children to build new and evolving TUI prototypes (study 3). By reflecting retrospectively on the process, the thesis presents the KPW process to guide and instantiate the design of generative tools for TUI design with children. The KPW process pays particular attention to children's roles and to how technological choices affect the design.
332. Smartphone Data Transfer Protection According to Jurisdiction Regulations. Eskandari, Mojtaba. January 2017.
The prevalence of mobile devices and their ability to access high-speed Internet have transformed them into a portable pocket interface to the cloud. The sensitivity of a user's personal data demands an adequate level of protection in the cloud. In this regard, the European Union data protection regulations (e.g., Article 25.1) restrict the transfer of European users' personal data to certain locations. The matter of concern, however, is the enforcement of such regulations. Since cloud service provision is independent of physical location and data can travel to various servers, determining the location of data and enforcing jurisdiction policies is a challenging task. In this dissertation, we first demonstrate how mobile apps mishandle personal data collection and transfer by analyzing a wide range of popular Android apps in Europe. We then investigate approaches to monitor and enforce location restrictions on collected personal data. Since multiple entities, such as mobile devices, mobile apps, data controllers and cloud providers, take part in collecting and transferring data, we study each one separately. We introduce the design and prototyping of a suitable approach to perform, or at least facilitate, the enforcement procedure with respect to the duty of each entity. Cloud service providers offer their infrastructure to data controllers in the form of virtual machines or containers; therefore, we designed and implemented a tool, named VLOC, to verify the physical location of a virtual machine in the cloud. Since VLOC requires the collaboration of the data controller, we also designed a framework, called DLOC, which enables end users to determine the location of their data after it has been transferred to the cloud and possibly replicated. DLOC is a distributed framework that does not require the data controller or cloud provider to participate or modify their systems; it is thus economical to implement and to use widely.
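The abstract does not specify how VLOC verifies a machine's physical location. One common building block for such verification is delay-based geolocation: a round-trip time measured from a landmark with a known position puts a hard upper bound on how far away the responding host can be. A minimal sketch of that bound, purely illustrative (the constants and helper functions below are assumptions, not VLOC's actual protocol):

```python
# Hedged sketch: a generic delay-based location bound, in the spirit of
# latency-based verification tools such as VLOC. The fibre propagation
# speed and the helper names are illustrative assumptions.

def max_distance_km(rtt_ms, fiber_speed_km_per_ms=200.0):
    """Upper-bound the distance to a host from a round-trip time.

    Light in optical fibre travels at roughly 200 km/ms, so the one-way
    distance can be at most (rtt / 2) * speed.
    """
    return (rtt_ms / 2.0) * fiber_speed_km_per_ms

def consistent_with_claim(rtt_ms, claimed_distance_km):
    """A claimed location is refuted if it lies beyond the RTT bound."""
    return claimed_distance_km <= max_distance_km(rtt_ms)

# A VM answering a landmark in 4 ms cannot be more than 400 km away,
# so a claim of 1200 km is refuted.
print(max_distance_km(4.0))
print(consistent_with_claim(4.0, 1200))
```

Measurements from several geographically spread landmarks intersect such bounds into a region; a location claim outside the intersection can be rejected even without the provider's cooperation.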
333. Correspondence among connectomes as combinatorial optimization. Sharmin, Nusrat. January 2017.
Diffusion magnetic resonance imaging (dMRI) data allows the reconstruction of the neural pathways of the white matter of the brain as a set of 3D polylines, by means of tractography algorithms. The neuronal axons within the white matter form the anatomical links between regions of the brain, referred to as anatomical or structural connectivity. The complete collection of structural connectivity is referred to as the structural connectome, which helps in understanding human brain function. The 3D polylines are called streamlines, and the set of all streamlines is called a tractogram, which represents the structural connectome of the brain. In neurological studies, it is often important to identify the group of streamlines belonging to the same anatomical structure, called a tract or bundle, such as the cortico-spinal tract or the arcuate fasciculus. The statistical analysis of the diffusion data of tracts is used in multiple applications, for example, to study gender differences, to observe changes with age and to correlate with diseases. This kind of study requires the analysis of groups of subjects, which raises two important problems: aligning tractography data across subjects, a problem called tractogram alignment, and extracting tracts of interest, known as the tract segmentation problem. Due to the anatomical variability across subjects, these two problems are difficult to solve. In this thesis, we investigate both problems and propose a novel approach and efficient algorithms for their solution.
Typically, the alignment of two tractograms is obtained with registration methods, which transform the tractograms in order to increase their mutual similarity. In the literature, the best practice for tractogram registration is to find one global affine transformation that minimizes their differences. Unfortunately, this approach fails to reconcile local differences between the tractograms. In contrast to transformation-based registration methods, we propose the concept of tractogram correspondence, whose aim is to find which streamline of one tractogram corresponds to which streamline in another tractogram, i.e., a map from one tractogram to the other. As a further contribution, we propose to use the relational information of each streamline, i.e., its distances from the other streamlines in its own tractogram, as the building block to define the optimal correspondence. We provide an operational procedure to find the optimal correspondence through a combinatorial optimization problem and discuss its similarity to the graph matching problem. Finally, we adapted an approximate solution of graph matching to solve the correspondence problem.
Several automatic tract segmentation methods have been developed in recent years. Segmentation approaches can be categorized as unsupervised or supervised. A common criticism of unsupervised methods, such as clustering, is that there is no guarantee of obtaining anatomically meaningful tracts. For this reason, in this thesis we focus on supervised tract segmentation, which is based on prior knowledge. We propose a novel supervised tract segmentation method that segments a tract of interest, e.g. the arcuate fasciculus, by exploiting a set of example tracts from different subjects. In analogy with tractogram alignment, our supervised segmentation approach is based on the concept of streamline correspondence, i.e. on finding which streamline in one tractogram corresponds to which streamline in the other. We show that streamline correspondence can be a powerful principle for transferring the anatomical knowledge of a given bundle from one subject to another. In the literature on supervised segmentation, streamline correspondence has been addressed with a nearest neighbour strategy. We observed that segmenting tracts with a nearest neighbour strategy has a number of limitations. In this thesis we instead address the tract segmentation problem as a linear assignment problem (LAP), a cornerstone of combinatorial optimization. With respect to nearest neighbour, the LAP introduces a one-to-one correspondence constraint that substantially improves the quality of segmentation. We draw from the literature of algorithms for solving the LAP and adopt one of the most efficient solutions available. To this, we add a strategy for merging correspondences coming from different examples, i.e. from different subjects, in order to account for the anatomical variability across the population.
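The LAP-based idea above can be sketched in a few lines with an off-the-shelf solver: each streamline of an example tract is matched one-to-one to a streamline of the target tractogram, so no target streamline is reused. This is a toy illustration, not the thesis's implementation; the streamlines are assumed to be already embedded as fixed-size vectors, and the data is random.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Toy data: streamlines assumed already embedded as 40-d vectors
# (e.g. via a dissimilarity representation); values are random here.
rng = np.random.default_rng(0)
example_tract = rng.normal(size=(20, 40))        # 20 example streamlines
target_tractogram = rng.normal(size=(500, 40))   # 500 candidate streamlines

cost = cdist(example_tract, target_tractogram)   # pairwise distances
rows, cols = linear_sum_assignment(cost)         # optimal one-to-one matching

# `cols` indexes the target streamlines that, together, best correspond
# to the example tract under the one-to-one constraint.
segmented = set(cols)
assert len(segmented) == len(example_tract)      # no streamline reused
```

Replacing `linear_sum_assignment` with a per-row `argmin` would give the nearest-neighbour baseline the thesis criticizes: several example streamlines could then collapse onto the same target streamline, which the one-to-one constraint rules out.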
In order to apply graph matching and the LAP to the alignment and segmentation of tractograms, we needed to address the very large computational cost due to the large number of streamlines involved. To reduce the amount of computation, in both cases we represented streamlines as vectors through a Euclidean embedding technique called the dissimilarity representation. With such a representation, we obtained fast nearest neighbour queries through the use of kd-trees, which were instrumental in dramatically reducing the amount of computation: from months to minutes.
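The dissimilarity representation described above can be sketched as follows: each streamline is mapped to the vector of its distances from a small set of prototype streamlines, after which standard Euclidean machinery (here a kd-tree) applies. The streamline distance below is a simplified placeholder (mean pointwise Euclidean distance between equally resampled streamlines), not necessarily the thesis's exact choice, and the prototype selection is naive.

```python
import numpy as np
from scipy.spatial import cKDTree

def streamline_distance(s, t):
    # Simplified placeholder distance between two equally sampled streamlines.
    return np.mean(np.linalg.norm(s - t, axis=1))

def dissimilarity_embed(streamlines, prototypes):
    # Each streamline becomes the vector of its distances to the prototypes.
    return np.array([[streamline_distance(s, p) for p in prototypes]
                     for s in streamlines])

rng = np.random.default_rng(1)
streamlines = [rng.normal(size=(30, 3)) for _ in range(200)]  # 30 points each
prototypes = streamlines[:10]            # naive prototype choice, for illustration

X = dissimilarity_embed(streamlines, prototypes)  # (200, 10) Euclidean vectors
tree = cKDTree(X)                                 # fast nearest-neighbour index
dist, idx = tree.query(X[0], k=3)                 # 3 nearest streamlines to the first
```

Once streamlines live in this Euclidean space, a kd-tree answers nearest-neighbour queries in roughly logarithmic time per query instead of scanning all pairwise streamline distances, which is the source of the months-to-minutes speed-up claimed above.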
334. Events based Multimedia Indexing and Retrieval. Ahmad, Kashif. January 2017.
Event recognition is one of the multimedia applications that has been gaining ground recently; however, it has received scarce attention relative to other applications. The methodologies presented here are aimed at event-based analysis of multimedia content from three perspectives, namely (i) event recognition in single images, (ii) event recognition in personal photo collections and (iii) fusion of social media information and satellite imagery for natural disaster detection. A close look at the relevant literature suggests that most attention has been paid to event recognition in single images. Event recognition in personal photo collections has also received a number of interesting solutions. Natural disaster detection in images from social media and satellite imagery, however, is relatively new, and many issues remain unsolved, mostly due to the heterogeneity, multi-modality and unstructured nature of the data. In this dissertation, such open problems are presented and analyzed, and new perspectives and approaches are suggested, alongside detailed experimental validation and analysis. In detail, our contribution is multi-fold. On the one hand, we aim to demonstrate that the fusion of different feature extraction and classification strategies can outperform single methods by jointly exploiting the learning capabilities of individual deep models. On the other hand, we analyze the importance of event-salient objects and local image regions in event recognition. We also present a novel framework for event recognition in personal photo collections, as well as our system JORD and our fusion of social media and satellite images, based on Convolutional Neural Networks (CNNs) and a Generative Adversarial Network (GAN), for natural disaster detection. A thorough experimental analysis of each proposed solution is provided on benchmark datasets, along with potential directions for future work.
335. Participatory Design For Community Energy - Designing the Renewable Energy Commons. Capaccioli, Andrea. January 2018.
The energy sector is facing a major paradigm shift from centralised production and management to distributed energy generation and management. Digital technologies play a crucial role in enabling such a scenario; emphasis and attention have been given to Smart Grids and to new energy management systems both for final users and for companies. Energy, its consumption, and its production are at the centre of our everyday lives and are connected to everyday practices and habits. However, while this scenario can be seen as mundane, new spaces can be created for citizens and communities to participate and be empowered. This thesis presents the work done by the author within a three-year European project used as his main research field. The focal points were: (i) the participatory design process of a community energy digital platform; and (ii) the advantages and disadvantages of a commons-based approach to renewable energy management for the development and empowerment of local communities. The thesis first presents how a participatory design process opens a new space for citizen participation in designing an alternative energy management model. It then presents the energy budgeting framework designed within this process, discussing how social acceptance of technology affected the design and how energy has been translated into a new kind of value within this framework. Afterwards, it discusses how the participatory process and the framework contributed to the construction-in-practice of energy justice, and how this process reconfigured the relationships among civil society, the energy sector, and politics. Finally, the whole three-year project experience is analysed retrospectively using the interaction spaces framework, highlighting how participatory configurations evolved over time and how cross-participation is crucial for the boundary-spanning of design issues.
Concluding reflections are then drawn from this material, considering lessons learned, limitations of the experience and possible future work to continue exploring the relationship between energy, digital technologies and participatory design.
336. A framework for integrating user-centred Design and agile Development in small Companies. Bordin, Silvia. January 2017.
The integration of user-centred design (UCD) and Agile development is gaining increasing momentum in industry: the two approaches show promising complementarities, and their convergence can lead to a more holistic software engineering approach than the application of just one of them. However, the practicalities of this integration are not trivial, and the topic is currently of interest to a variety of research communities. This thesis aims at understanding the integration of user-centred design and Agile development and at supporting its adoption in small companies. Based on a qualitative approach, it is positioned at the intersection of Computer Supported Cooperative Work and software engineering, and is grounded in several empirical studies performed in industry. The work is organised in three stages inspired by the action research approach. The first stage was dedicated to understanding software development practice: through a literature review and two ethnographically-informed studies, it resulted in the first contribution of this thesis, namely a set of communication breakdowns that may hinder the integration of user-centred design and Agile development. The second stage was dedicated to deliberating improvements of practice: the set of communication breakdowns was elaborated into the second contribution of this thesis, namely a framework of focal points meant to help an organisation diagnose and assess communication breakdowns in its work practice. The third stage was dedicated to implementing and evaluating improvements. The framework was further elaborated into a training programme on the adoption of the framework itself. This training was instantiated in two iterations of action research performed in small development organisations, with the aim of establishing a supportive organisational environment and mitigating communication breakdowns. Once validated through these cases, the training constituted the third contribution of this thesis.
Results show that the intervention has benefited companies at several levels, enriching work practice with fresh techniques, favouring team collaboration and cooperation, and resulting in a shift from a technology-centred mindset to a more user-centred one.
337. The Connective Power of Reminiscence: Designing a Reminiscence-based Tool to Increase Social Interactions in Residential Care. Ibarra, Francisco. January 2018.
Reminiscence powers therapeutic interventions such as life review and reminiscence therapy, with well-known positive outcomes for the wellbeing of older adults. In particular, reminiscence therapy supported by technology can increase self-esteem, facilitate social interactions and increase opportunities for conversation. Much research on reminiscence technology has in fact focused on improving interactions and conversation, mainly for people with dementia. Nonetheless, the potential of reminiscence to discover common life points among residents in residential care facilities, and especially to use this information to foster bonding between residents, has been little explored. The focus of this thesis is the design of a reminiscence-based tool to be used in nursing homes to stimulate interactions among older adults, family members, and nursing home staff.
We start by describing early work that reinforces the potential of ICT interventions for improving the wellbeing of older adults. These studies highlight the importance of social interactions for social wellbeing and of doing activities together for engagement and motivation. Through review work and exploratory studies we confirm the positive effects of social interaction on the wellbeing of older adults, the benefits associated with contributing, and the opportunities to improve social interactions, not only at a distance but also in co-located settings.
In nursing homes we find a context that requires improving social interactions, and in reminiscence we find an ideal activity to make contributors out of older adults, stimulate conversation, and possibly increase connectedness between older adults and their networks. A series of studies was conducted with nursing home stakeholders to define and design a tool suited to their current practices, one that could be used and adopted in nursing homes to stimulate co-located interactions.
In this thesis, we present the work carried out to define and validate the concept of a reminiscence-based tool, and describe how input from nursing home stakeholders has been integrated into the design of a tool to improve social interactions in residential care facilities.
338. A methodology for the design and security assessment of mobile identity management: applications to real-world scenarios. Sciarretta, Giada. January 2018.
The widespread use of digital identities in our everyday life, along with the release of sensitive data in many online transactions, calls for Identity Management (IdM) solutions that are secure, privacy-aware, and compatible with new technologies, such as mobile and cloud computing. While there exist many secure IdM solutions for web applications, their adaptation to the mobile context is a new and open challenge. The majority of mobile IdM solutions currently in use are based on proprietary protocols, and their security analyses lack standardization in structure, in definitions of notions and entities, and in the specific considerations needed to identify the attack surface, which turns out to be quite different from well-understood web scenarios. This makes a comparison among different solutions very complex or, in the worst case, misleading. To overcome these difficulties, we propose a novel methodology for the design and security assessment of mobile IdM solutions. The design space is characterized by the identification of: (i) national (e.g., SPID for Italy) and European (e.g., eIDAS) laws, regulations and guideline principles that are particularly relevant to digital identity and privacy; (ii) a list of security and usability requirements that are related to IdM solutions (e.g., single sign-on and multi-factor authentication); (iii) a set of implementation mechanisms that are relevant to authentication and authorization on mobile devices and simplify the satisfaction of the requirements in (ii). All the designed solutions use as blueprint a reference model resulting from a rational reconstruction of the mobile IdM solution adopted by Facebook and a study of the OAuth specification for native applications. Regarding the security assessment, our methodology supports
analyses ranging from semi-formal to formal. For the former, an IdM designer is required to specify the security relevant parts of the protocol using message sequence charts, the threat model and the security properties; these offer the starting point to argue whether the protocol satisfies the specified properties. For the latter, an IdM designer is required to specify the protocol flow, the attacker properties and the security properties using one of the available formal specification languages for the description of cryptographic and browser-based protocols, and verify the security property violations using an automated tool for protocol analysis. To validate our approach, we applied it to four different real-world scenarios that represent different functional and usability requirements:
1. TreC: a multi-factor authentication solution with a single sign-on experience for mobile e-Health applications.
2. Smart Community: a secure delegated access solution in the context of smart-cities.
3. DigiMat-Lab (Istituto Poligrafico e Zecca dello Stato): a mobile multi-factor authentication solution that uses as second factor the Italian electronic identity card.
4. FIDES: an IdM solution that combines federation and cross-border aspects in the context of the European single digital market.
The custom designs obtained by applying our methodology in the four scenarios above show its generality and effectiveness.
When using formal analysis, we re-used the specification languages and tools developed in the context of the AVANTSSAR EU-funded project.
339. Controlling the effect of crowd noisy annotations in NLP Tasks. Abad, Azad. January 2017.
Natural Language Processing (NLP) is a sub-field of Artificial Intelligence and Linguistics that studies problems in the automatic generation and understanding of natural language. It involves identifying and exploiting linguistic rules and variation with code, in order to translate unstructured language data into information with a schema. Empirical methods in NLP employ machine learning techniques to automatically extract linguistic knowledge from large textual data, instead of hard-coding the necessary knowledge. Such intelligent machines require input data to be prepared in such a way that the computer can more easily find patterns and inferences, which is feasible by adding relevant metadata to a dataset. Any metadata tag used to mark up elements of the dataset is called an annotation over the input. In order for the algorithms to learn efficiently and effectively, the annotation done on the data must be accurate and relevant to the task the machine is being asked to perform. In other words, supervised machine learning methods intrinsically cannot handle inaccurate and noisy annotations, and the performance of the learners is highly correlated with the quality of the input data labels. Hence, the annotations have to be prepared by experts. However, collecting labels for a large dataset is impractical for a small group of qualified experts, or when the experts are unavailable. This is especially crucial for recent deep learning methods, whose algorithms are starving for big supervised data. Crowdsourcing has emerged as a new paradigm for obtaining labels for training machine learning models inexpensively and at high data volumes. The rationale behind this concept is to harness the "wisdom of the crowd", where groups of people pool their abilities to show collective intelligence.
Although crowdsourcing is cheap and fast, collecting high-quality data from a non-expert crowd requires careful attention to task quality control. The quality control process consists of selecting appropriately qualified workers, providing clear instructions or training that are understandable to non-experts, and performing sanitation on the results to reduce the noise in annotations or eliminate low-quality workers. This thesis is dedicated to controlling the effect of noisy crowd annotations used for training machine learning models in a variety of natural language processing tasks, namely relation extraction, question answering and recognizing textual entailment. The first part of the thesis deals with designing a benchmark for evaluating Distant Supervision (DS) for the relation extraction task. We propose a baseline which involves training a simple yet accurate one-vs-all SVM classifier. Moreover, we exploit an automatic feature extraction technique based on convolutional tree kernels, and study several example filtering techniques for improving the quality of the DS output. In the second part, we focus on the problem of noisy crowd annotations in training two important NLP tasks, i.e., question answering and recognizing textual entailment. We propose two learning methods to handle the noisy labels: (i) taking into account the disagreement between crowd annotators, as well as their skills, for weighting instances in the learning algorithms; and (ii) learning an automatic label selection model that jointly combines annotator characteristics and a syntactic structure representation of the task as features. Finally, we observe that in fine-grained tasks like relation extraction, where the annotators need deeper expertise, training the crowd workers has more impact on the results than simply filtering out low-quality crowd workers.
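The instance-weighting idea in (i) can be sketched very simply: weight each training instance by how strongly the crowd agreed on its label, so that contested instances pull the model less. The agreement measure (majority fraction), the toy data and the plain SVM below are illustrative assumptions, not the thesis's exact scheme, which also models annotator skill.

```python
import numpy as np
from sklearn.svm import LinearSVC

def agreement_weights(crowd_labels):
    """crowd_labels: (n_instances, n_annotators) array of label votes.

    Returns a per-instance weight (fraction of annotators agreeing with
    the majority) and the majority-vote aggregated label.
    """
    weights, majority = [], []
    for votes in crowd_labels:
        values, counts = np.unique(votes, return_counts=True)
        weights.append(counts.max() / len(votes))   # agreement fraction
        majority.append(values[counts.argmax()])    # aggregated label
    return np.array(weights), np.array(majority)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
true_y = (X[:, 0] > 0).astype(int)
# Simulate 5 noisy annotators: each flips the true label 20% of the time.
crowd = np.array([np.where(rng.random(5) < 0.2, 1 - y, y) for y in true_y])

w, y_hat = agreement_weights(crowd)
clf = LinearSVC().fit(X, y_hat, sample_weight=w)  # unanimous items weigh most
```

With five binary votes the weight takes values 0.6, 0.8 or 1.0, so unanimous instances count most and three-to-two splits least; the same `sample_weight` hook works for most scikit-learn estimators.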
Training crowd workers usually requires high-quality labeled data (namely, a gold standard) to provide instruction and feedback to the workers. We conversely introduce a self-training strategy for crowd workers, in which the training examples are automatically selected via a classifier. Our study shows that even without using any gold standard we can still train workers, which opens the door to inexpensive crowd training procedures for different NLP tasks.
340. Computational Systems Biology Applied To Human Metabolism. Mathematical Modelling and Network Analysis. Misselbeck, Karla. January 2019.
Human metabolism, an essential and highly organized process, which is required to run and maintain cellular processes and to respond to shifts in external and internal conditions, can be described as a complex and interconnected network of metabolic pathways.
Computational systems biology provides a suitable framework to study the mechanisms and interactions of this network and to address questions that are difficult to reproduce in vitro or in vivo.
This dissertation contributes to the development of computational strategies which help to investigate aspects of human metabolism and metabolic-related disorders.
In the first part, we introduce mathematical models of folate-mediated one-carbon metabolism in the cytoplasm and subsequently in the nucleus.
A hybrid-stochastic framework is applied to investigate the behavior and stability of the complete metabolic network in response to genetic and nutritional factors.
We analyse the effect of a common polymorphism of MTHFR, B12 and folate deficiency, as well as the role of the 5-formyltetrahydrofolate futile cycle on network dynamics.
Furthermore, we study the impact of multienzyme complex formation and substrate channelling, which are key aspects related to nuclear folate-mediated one-carbon metabolism.
Model simulations of the nuclear model highlight the importance of these two factors for the normal functioning of the network, and further identify folate status and enzyme levels as important influencing factors for network dynamics.
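The stochastic half of a hybrid-stochastic scheme like the one mentioned above is typically a Gillespie-style simulation: reaction events fire one at a time, with exponentially distributed waiting times set by the current propensities. A minimal sketch on a toy two-reaction system (constant production and first-order degradation of a single metabolite); the reactions and rate constants are illustrative, not the thesis's folate model.

```python
import random

# Hedged sketch: Gillespie stochastic simulation of a toy system
#   (1) ∅ → X  at rate k_prod        (production)
#   (2) X → ∅  at rate k_deg * X     (degradation)
def gillespie(k_prod=10.0, k_deg=0.1, x0=0, t_end=50.0, seed=42):
    rng = random.Random(seed)
    t, x, trace = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a1, a2 = k_prod, k_deg * x   # reaction propensities
        a0 = a1 + a2                 # total propensity (always > 0 here)
        t += rng.expovariate(a0)     # exponential time to next event
        if rng.random() * a0 < a1:
            x += 1                   # production fires
        else:
            x -= 1                   # degradation fires
        trace.append((t, x))
    return trace

trace = gillespie()
# The copy number fluctuates around the deterministic steady state
# k_prod / k_deg = 100, with fluctuations of order sqrt(100) = 10.
print(trace[-1])
```

In a hybrid framework, only species with low copy numbers are simulated this way, while abundant metabolites are propagated deterministically with ODEs between stochastic events, which keeps large networks tractable.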
In the second part, we focus on metabolic syndrome, a highly prevalent cluster of metabolic disorders.
We develop a computational workflow based on network analysis to characterise underlying molecular mechanisms of the disorder and to explore possible novel therapeutic strategies by means of drug repurposing.
To this end, genetic data, text mining results, drug expression profiles and drug target information are integrated in the setting of tissue-specific background networks, and a proximity score based on topological distance and functional similarity measurements is defined to identify potential new therapeutic applications of already approved drugs. A filtering and prioritization analysis allows us to identify ibrutinib, an inhibitor of Bruton tyrosine kinase, as the most promising repurposing candidate.
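The topological half of such a proximity score can be sketched as the average shortest-path distance from each drug target to its nearest disease-module gene in the background network. This is a simplified illustration: the thesis additionally combines functional similarity, which is omitted here, and the tiny graph and gene set below are hypothetical, not the thesis's tissue-specific networks.

```python
import networkx as nx

def proximity(graph, drug_targets, disease_genes):
    # Mean over targets of the shortest-path distance to the
    # nearest disease gene (smaller = closer to the disease module).
    dists = []
    for t in drug_targets:
        d = min(nx.shortest_path_length(graph, t, g) for g in disease_genes)
        dists.append(d)
    return sum(dists) / len(dists)

# A hypothetical toy interaction network around BTK.
G = nx.Graph([("BTK", "PLCG2"), ("PLCG2", "PRKCB"), ("PRKCB", "NFKB1"),
              ("NFKB1", "IL6"), ("IL6", "INSR"), ("INSR", "IRS1")])

# A drug hitting BTK (e.g. ibrutinib) scored against a hypothetical
# disease module {IL6, IRS1}: BTK is 4 hops from IL6, 6 from IRS1.
print(proximity(G, ["BTK"], ["IL6", "IRS1"]))  # 4.0
```

In practice such raw proximities are compared against a degree-preserving random reference (a z-score) before ranking repurposing candidates, so that hub drugs do not look spuriously close to every disease module.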