51

Competitive Robotic Car: Sensing, Planning and Architecture Design

Rizano, Tizar January 2013
Research towards fully autonomous cars has been pushed forward by industry, as autonomy offers numerous advantages such as improved traffic flow, vehicle and pedestrian safety, and car efficiency. One of the main challenges in this area is dealing with the uncertainty in what the sensors perceive about the current state of the car and its environment. An autonomous car needs an efficient planning algorithm that generates the vehicle trajectory in real time from this environmental sensing. A motion planning algorithm is complete when it returns a valid solution in finite time if one exists and reports that no path exists when none does; it is optimal when the path it returns is optimal with respect to some criterion. In this thesis we work on a special case of the motion planning problem: finding an optimal trajectory for a robotic car in order to win a car race. We propose an efficient real-time vision-based technique for localization and path reconstruction. To win the race, we identify a characterization of the alphabet of optimal maneuvers for the car, an optimal local planning strategy, and an optimal graph-based global planning strategy with obstacle avoidance. We have also implemented the hardware and software of this approach as a testbed for the planning strategy.
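To make the graph-based global planning concrete, here is a minimal sketch of a complete grid planner with obstacle avoidance (a generic A* search, not the thesis's maneuver-alphabet planner; the grid and costs are illustrative):

```python
import heapq

def astar(grid, start, goal):
    """Complete A* search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle cell; returns a path or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path                       # valid trajectory found
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None                               # completeness: no path exists

# Example: plan around an obstacle in the middle of a 3x3 grid.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 2)))  # e.g. [(0,0), (1,0), (2,0), (2,1), (2,2)]
```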
52

Non-Redundant Overlapping Clustering: Algorithms and Applications

Truong, Duy Tin January 2013
Given a dataset, traditional clustering algorithms often provide only a single partitioning, i.e., a single view of the dataset. On complex tasks many different clusterings of a dataset exist, so alternative clusterings are sought that are both of high quality and different from given trivial clusterings, thereby offering complementary views. The task is therefore a clear multi-objective optimization problem. However, most approaches in the literature optimize these objectives sequentially (one after another) or indirectly (by some heuristic combination). This can result in solutions which are not Pareto-optimal. The problem is even more difficult for high-dimensional datasets, as clusters can be located in various subspaces of the original feature space. Besides, many practical applications require that subspace clusters may still overlap but that the overlap stay below a predefined threshold. Nonetheless, most state-of-the-art subspace clustering algorithms can only generate a set of disjoint or significantly overlapping subspace clusters. To deal with the above issues, for full-space alternative clustering we develop an algorithm which fully acknowledges the multiple objectives, optimizes them directly and simultaneously, and produces solutions approximating the Pareto front. For non-redundant subspace clustering, we propose a general framework for generating K overlapping subspace clusters where the maximum overlap between them is guaranteed to be below a predefined threshold. In both cases, our algorithms can be applied to several domains, as different analysis models can be used without modifying the main parts of the algorithms.
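As an illustration of the "directly and simultaneously" optimization the abstract refers to, here is a minimal sketch of Pareto-front filtering over candidate alternative clusterings; the objective values and names are invented for the example:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated solutions; each entry is (clustering, objectives)."""
    return [s for s in solutions
            if not any(dominates(t[1], s[1]) for t in solutions if t is not s)]

# Toy objectives: (clustering quality, dissimilarity from the given clustering).
candidates = [("C1", (0.9, 0.2)), ("C2", (0.7, 0.8)), ("C3", (0.6, 0.3))]
print([name for name, _ in pareto_front(candidates)])  # ['C1', 'C2']; C3 is dominated
```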
53

Evolutionary Test Case Generation via Many Objective Optimization and Stochastic Grammars

Kifetew, Fitsum Meshesha January 2015
In search-based test case generation, most research focuses on the single-objective formulation of the test case generation problem. However, there is a wide variety of multi- and many-objective optimization strategies that could offer advantages not yet investigated when addressing test case generation. Furthermore, existing techniques and available tools mainly handle test generation for programs with primitive inputs, such as numeric or string inputs; they often do not scale up effectively to large sizes and complex inputs. In this thesis, at the unit level, branch coverage is reformulated as a many-objective optimization problem, as opposed to the state-of-the-art single-objective formulation, and a novel algorithm is proposed for the generation of branch-adequate test cases. At the system level, the thesis proposes a test generation approach that combines stochastic grammars with genetic programming for the generation of branch-adequate test cases. Furthermore, the combination of stochastic grammars and genetic programming is also investigated in the context of field failure reproduction for programs with highly structured input.
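To make the many-objective reformulation concrete, here is a minimal sketch (illustrative, not the thesis's algorithm): each uncovered branch contributes its own objective, a normalized branch distance that a many-objective search minimizes simultaneously.

```python
def branch_distance(x, target):
    """Toy branch distance for a predicate `x == target` (0 when satisfied)."""
    return abs(x - target)

def fitness_vector(x, targets):
    """One objective per uncovered branch, each normalized into [0, 1) so a
    many-objective algorithm (e.g., one using Pareto ranking) can minimize
    all branch objectives at once instead of a single aggregated score."""
    return [branch_distance(x, t) / (branch_distance(x, t) + 1.0) for t in targets]

# Example: three branches guarded by x == 0, x == 50, x == 100.
print(fitness_vector(42, [0, 50, 100]))  # the middle branch is closest
```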
54

Large-scale Structural Reranking for Hierarchical Text Categorization

JU, QI January 2013
Current hierarchical text categorization (HTC) methods mainly fall into three directions: (1) the flat one-vs.-all approach, which flattens the hierarchy into independent nodes and trains a binary one-vs.-all classifier for each node; (2) the top-down method, which uses the hierarchical structure to decompose the entire problem into a set of smaller sub-problems and deals with them in top-down fashion along the hierarchy; and (3) the big-bang approach, which learns a single (but generally complex) global model for the class hierarchy as a whole with a single run of the learning algorithm. These methods were shown to provide relatively high performance in previous evaluations. However, they still suffer from two main drawbacks: (1) relatively low accuracy, as they disregard category dependencies, or (2) low computational efficiency when considering such dependencies. In order to build an accurate and efficient model we adopted the following strategy. First, we design advanced global reranking models (GR) that exploit structural dependencies in hierarchical multi-label text classification (TC). They are based on two algorithms: (1) generating the k-best classification hypotheses from the decision probabilities of the flat one-vs.-all and top-down methods; and (2) encoding dependencies in the reranker by (i) modeling hypotheses as trees derived from the hierarchy itself and (ii) applying tree kernels (TK) to them. Such a TK-based reranker selects the best hierarchical test hypothesis, which is naturally represented as a labeled tree. Additionally, to better investigate the role of category relationships, we consider two interesting cases: (i) traditional schemes, in which parent nodes include all the documents of their child categories; and (ii) more general schemes, in which children can include documents not belonging to their parents. Second, we propose an efficient local incremental reranking model (LIR), which combines the top-down method with a local reranking model for each sub-problem. These local rerankers improve accuracy by absorbing the local category dependencies of sub-problems, which alleviates the errors of the top-down method in the higher levels of the hierarchy. LIR deals with the sub-problems recursively, applying the corresponding local rerankers in top-down fashion, resulting in high efficiency. In addition, we further optimize LIR by (i) improving the top-down method by creating local dictionaries for each sub-problem; (ii) using LIBLINEAR instead of LIBSVM; and (iii) adopting a compact representation of hypotheses for learning the local reranking model. This makes LIR applicable to large-scale hierarchical text categorization. Experiments on different hierarchical datasets have shown promising improvements from exploiting structural dependencies in large-scale hierarchical text categorization.
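As a schematic illustration of the k-best generation plus reranking pipeline described above (a toy scorer stands in for the learned tree-kernel reranker; all labels and numbers are invented):

```python
from itertools import islice

def k_best(hypotheses, k):
    """Keep the k classification hypotheses with the highest base probability."""
    return list(islice(sorted(hypotheses, key=lambda h: -h["prob"]), k))

def rerank(hyps, score):
    """Select the hypothesis preferred by the (stand-in) tree-kernel reranker."""
    return max(hyps, key=score)

# Hypotheses are labeled trees (parent, child edges) with a flat one-vs.-all
# probability; the toy score rewards structurally consistent trees, so the
# reranker can override the base probability.
hyps = [
    {"tree": [("root", "Sports"), ("Sports", "Tennis")], "prob": 0.60},
    {"tree": [("root", "Sports"), ("Politics", "Tennis")], "prob": 0.65},
]
agreement = lambda h: sum(p == "root" or any(c == p for _, c in h["tree"])
                          for p, _ in h["tree"])
best = rerank(k_best(hyps, 2), lambda h: h["prob"] + 0.1 * agreement(h))
print(best["tree"])  # the consistent tree wins despite its lower probability
```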
55

Black-Box Security Testing of Browser-Based Security Protocols

Sudhodanan, Avinash January 2017
Millions of computer users worldwide use the Internet every day to consume web-based services (e.g., purchasing products from online stores, storing sensitive files in cloud-based file storage web sites, etc.). Browser-based security protocols (i.e. security protocols that run over the Hypertext Transfer Protocol and are executable by commercial web browsers) are used to ensure the security of these services. Multiple parties are often involved in these protocols. For instance, a browser-based security protocol for Single Sign-On (SSO in short) typically involves a user (controlling a web browser), a Service Provider web site and an Identity Provider (who authenticates the user). Similarly, a browser-based security protocol for the Cashier-as-a-Service (CaaS) scenario involves a user, a Service Provider web site (e.g., an online store) and a Payment Service Provider (who authorizes payments). The design and implementation of browser-based security protocols are usually so complex that several vulnerabilities remain even after intensive inspection. This is witnessed, for example, by the vulnerabilities found in browser-based security protocols such as SAML SSO v2.0 and OAuth Core 1.0 even years after their publication, implementation, and deployment. Although techniques such as formal verification and white-box testing can be used to analyse the security of browser-based security protocols, they currently have limitations: the necessity of formal models that can cope with the complexity of web browsers (e.g., cookies, client-side scripting, etc.) and the poor support offered for certain programming languages by white-box testing tools, to name a few. What remains is black-box security testing. However, currently available black-box security testing techniques for browser-based security protocols are either scenario-specific (i.e. specific to SSO or CaaS, not both) or do not support well the detection of vulnerabilities enabling replay attacks (commonly referred to as logical vulnerabilities) and Cross-Site Request Forgery (CSRF in short). The goal of this thesis is to overcome these drawbacks. First, the thesis presents an attack-pattern-based black-box testing technique for detecting vulnerabilities enabling replay attacks and social login CSRF in multi-party web applications (i.e. web applications utilizing browser-based security protocols involving multiple parties). These attack patterns are inspired by the similarities in the attack strategies of previously discovered attacks against browser-based security protocols. Second, we present manual and semi-automatic black-box security testing strategies for detecting 7 different types of CSRF attacks targeting the authentication and identity management functionalities of web sites. We also provide proof-of-concept implementations of our ideas, based on OWASP ZAP (a prominent, free and open-source penetration testing tool). Since this thesis was carried out in the context of an industrial doctorate, we had the opportunity to analyse the use cases provided by our industrial partner, SAP, to further improve our approach.
In addition, to assess the effectiveness of the proposed techniques, we applied them to the browser-based security protocols of many prominent web sites and discovered nearly 340 serious security vulnerabilities affecting more than 200 web sites, including those of prominent vendors such as Microsoft and eBay.
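As a flavour of the black-box strategy described above, here is a minimal sketch (plain Python with the `requests` library, not the thesis's OWASP ZAP-based implementation) of one CSRF-style check: replay a captured state-changing request with the anti-CSRF token stripped and flag the endpoint if it still succeeds. The endpoint, cookie, and field names are hypothetical.

```python
import requests

def csrf_replay_check(session_cookies, url, form_data, token_field="csrf_token"):
    """Replay a captured state-changing request without its anti-CSRF token."""
    data = {k: v for k, v in form_data.items() if k != token_field}
    resp = requests.post(url, data=data, cookies=session_cookies,
                         allow_redirects=False)
    # A non-error response without the token suggests a missing CSRF defence;
    # a real tester would also verify that the state change actually happened.
    return resp.status_code < 400

# Hypothetical usage against a social-login account-linking endpoint:
# vulnerable = csrf_replay_check({"session": "..."},
#                                "https://example.com/account/link",
#                                {"provider": "social-idp", "csrf_token": "x"})
```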
56

Touching Autism Spectrum Disorder: Somatosensory Abnormalities in Shank3b and Cntnap2 Mouse Models

Balasco, Luigi 27 February 2023
Autism spectrum disorders (ASDs) represent a heterogeneous group of neurodevelopmental disorders characterised by deficits in social interaction and communication and by restricted and stereotyped behaviour. The diagnosis of autism is based on behavioural observation of the subject, as research has not yet identified specific markers. Today, several studies show that disturbances in sensory processing are a crucial feature of autism: around 90% of individuals diagnosed with autism show atypical responses to various sensory stimuli. These sensory abnormalities (described as hyper- or hypo-reactivity to sensory stimulation) are currently recognised as diagnostic criteria for autism. Among the sensory defects, tactile abnormalities are a very common finding impacting the life of autistic individuals. It has been shown that abnormal responses to tactile stimuli not only correlate with the diagnosis of autism but also predict its severity; indeed, hypo-responsiveness to tactile stimuli is associated with greater severity of the main symptoms of autism. To date, the neural substrates of these behaviours are still poorly understood. Over the years, the use of genetically modified animal models has enabled a major step forward in the study of the aetiology of autism spectrum disorders. Interestingly, several animal models that carry autism-related mutations also show deficits of a sensory nature. This is the case for the Shank3b-/- and Cntnap2-/- mouse models, strains in which the expression of the gene in question is suppressed. The SHANK3 gene encodes a protein crucial to the structure of the postsynaptic density of glutamatergic synapses. In humans, haploinsufficiency of SHANK3 causes Phelan-McDermid syndrome, a neurodevelopmental disorder characterised by ASD-like behaviour, developmental delay, intellectual disability and absent or severely delayed speech. Individuals with Phelan-McDermid syndrome often show dysfunctions in somatosensory processing, including disturbances in tactile sensitivity. CNTNAP2 codes for CASPR2, a transmembrane protein of the neurexin superfamily involved in neuron-glia interactions and in the clustering of potassium channels in myelinated axons. Missense mutations in CNTNAP2 cause cortical dysplasia-focal epilepsy syndrome (CDFE), a rare disorder characterized by epileptic seizures, language regression, intellectual disability, and autism. Consistent with these findings, mice lacking the Shank3b isoform (Shank3b-/-) or the Cntnap2 gene (Cntnap2-/-) show autistic-like behaviours. In this study, we used an interdisciplinary approach (behavioural, molecular, and imaging techniques) to study the neuronal substrates of whisker-mediated behaviours in genetic mouse models of ASD. We performed two behavioural tests, namely the textured novel object recognition test (tNORT) and the whisker nuisance test (WN), to gain in-depth insight into whisker-dependent behaviours. Following behavioural assessment, we investigated the neural underpinnings of this aberrant behaviour with a molecular approach, evaluating neuronal activation in key brain areas involved in the processing of sensory stimuli via c-fos mRNA in situ hybridization. Finally, using a seed-based approach in resting-state functional magnetic resonance imaging (rsfMRI), we probed the functional connectivity phenotype of these mutant mice. The contribution of the peripheral nervous system to sensory processing was also assessed via RT-qPCR at the level of the trigeminal ganglion.
The sensory abnormalities that characterize ASDs are a symptom of primary relevance in the life of autistic individuals. Scientific research has only recently addressed this important aspect, and animal models are a useful preclinical tool for investigating the causal role of genetic mutations in the aetiology of ASDs. In this context, the complementary approach used in this work represents a crucial step toward understanding the sensory-related deficits that characterize ASD.
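For readers unfamiliar with the seed-based rsfMRI analysis mentioned above, here is a minimal sketch of the underlying computation (synthetic data, not the study's pipeline): the seed region's BOLD time series is correlated with every other region's time series.

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Seed-based functional connectivity: Pearson correlation between the
    seed time series (shape (T,)) and each region's time series (shape (R, T))."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    rois = (roi_ts - roi_ts.mean(axis=1, keepdims=True)) / roi_ts.std(axis=1, keepdims=True)
    return rois @ seed / seed.size

# Synthetic example: one coupled region, two uncoupled ones, 200 time points.
rng = np.random.default_rng(0)
seed = rng.standard_normal(200)
rois = np.vstack([seed + 0.5 * rng.standard_normal(200),   # coupled region
                  rng.standard_normal((2, 200))])          # uncoupled regions
print(np.round(seed_connectivity(seed, rois), 2))  # high value only for region 0
```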
57

A Large Scale Distributed Knowledge Organization System

Noori, Sheak Rashed Haider January 2011
The revolution of the Internet and the Web has taken computer and information technology into a new age, and information on the web is growing very fast. The progress of information and communication technologies has made accessible a large amount of information, providing each of us with access to far more than we can comprehend or manage, and emphasizing the difficulty posed by the semantic heterogeneity of the diverse sources. Human knowledge is a living organism and as such evolves in time; different people hold different viewpoints and use different terminology, and differences of culture and language intensify the heterogeneity of the sources even more. This gives rise to concrete problems such as natural language disambiguation, information retrieval and information integration. The problem is quite well known in almost every branch of knowledge and has been independently approached by several communities for several decades. To make this huge amount of existing information accessible and manageable while also solving the semantic heterogeneity problem, namely the problem of diversity in knowledge, and thereby supporting interoperability, it is essential to have a large-scale, high-quality collaborative knowledge base with a suitable structure: a common ground on which interoperability among people and different systems becomes possible. Such a knowledge base plays the role of a reference point for communication, assigning clear meaning to exchanged information through accurate disambiguation and enabling the automation of complex tasks. However, successfully building a large-scale knowledge base with maximum coverage is not possible for a single person or a small group of people without collaborative support; it depends heavily on the support of expert communities. It is therefore necessary for experts to work together on building the knowledge base, and it is natural that these experts will be geographically distributed. Web 2.0 has the potential to support information sharing, interoperability and collaboration on the Web: its simplicity, flexibility and easy-to-use services make it an interactive and collaborative platform that allows users to create and edit content. The exponential growth in Web users and the potential of Web 2.0 make it the natural platform of choice for developing knowledge bases collaboratively. We propose a highly flexible knowledge base system which takes into account the diversity of knowledge and its evolution in time. The work presented in this thesis is part of a larger project; more specifically, the goal of this thesis is to create a powerful and easy-to-use knowledge base management system that helps people build and organize a high-quality knowledge base, make their knowledge accessible, and support interoperability in real-world scenarios.
58

Identifiability of small rank tensors and related problems

Santarsiero, Pierpaola 01 April 2022
In this thesis we work on problems related to tensor decomposition from a geometrical perspective. In the first part of the thesis we focus on the identifiability problem, which amounts to understanding in how many ways a tensor can be decomposed as a minimal sum of elementary tensors. In particular, we completely classify the identifiability of any tensor up to rank 3. In the second part of the thesis we continue to work with specific elements and introduce the notion of the r-th Terracini locus of a Segre variety. This is the locus containing all points for which the differential of the map between the r-th abstract secant variety and the r-th secant variety of a Segre variety drops rank. We completely determine the r-th Terracini locus of any Segre variety in the cases r = 2, 3.
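For orientation, here is a sketch of the rank-drop condition such a locus encodes, phrased via Terracini's lemma with illustrative notation (the thesis's own definitions may differ in detail):

```latex
% Sketch: for a Segre variety $X \subseteq \mathbb{P}^N$ of dimension $n$,
% Terracini's lemma identifies the tangent space to the $r$-th secant variety
% at a general point with the span of tangent spaces at $r$ points of $X$;
% the $r$-th Terracini locus collects the tuples where this span drops below
% the expected dimension:
\[
  \mathbb{T}_r(X) = \Bigl\{ (p_1,\dots,p_r) \in X^r \;:\;
  \dim \langle T_{p_1}X, \dots, T_{p_r}X \rangle
  < \min\{\, r(n+1) - 1,\; N \,\} \Bigr\}.
\]
```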
59

CHD mutations in autism spectrum disorders and epilepsy: alterations of epigenetic landscape and new approaches for therapeutic development.

Arnoldi, Michele 28 April 2022
Recurrent disruptive mutations in the chromodomain helicase DNA-binding proteins 2 and 8 (CHD2 and CHD8) are emerging as prominent risk factors for epilepsy and ASD, respectively. While both CHD2 and CHD8 play important roles in chromatin regulation and transcription, the molecular consequences of the inactivating mutations described in patients have not been fully dissected. Here, we first investigated how chromatin reacts to CHD8 suppression by analysing a panel of histone modifications in human induced pluripotent stem cell-derived neural progenitors (hiNPC). CHD8 suppression led to a significant reduction (47.82%) in histone H3K36me3 peaks at gene bodies, particularly impacting transcriptional elongation chromatin states. The H3K36me3 reduction specifically affected highly expressed, CHD8-bound genes. Strikingly, transcription levels in cells presenting reduced H3K36me3 in the gene body appeared unchanged; alternative splicing, however, was significantly affected. In particular, we found aberrant alternative splicing patterns (mainly alternative first exon and exon skipping events) in ~2000 protein-coding genes implicated in "RNA splicing", "mitotic cell cycle phase transition" and "mRNA processing". Mechanistically, mass-spectrometry analysis uncovered a novel interaction between CHD8 and the splicing regulator Heterogeneous Nuclear Ribonucleoprotein L (hnRNPL), providing the first mechanistic insight into the CHD8-suppression splicing phenotype and partially implicating SETD2, the H3K36me3 methyltransferase. Most mutations in the CHD2 and CHD8 genes are disruptive, leading to protein haploinsufficiency; thus, any molecular manipulation eliciting an increase in CHD2/CHD8 protein could prove beneficial for therapeutic development. Here, we intended to provide a proof of principle that SINEUPs, a recently described class of non-coding RNAs able to augment the expression of target proteins in a specific and controlled way, can increase the translation of the CHD2/CHD8 proteins and possibly rescue haploinsufficiency-associated phenotypes. With this purpose, we designed and cloned SINEUPs targeting human CHD2/CHD8 isoforms and tested their efficacy in human induced pluripotent stem (iPS) cells and induced neural progenitor cells (hiNPC) expressing normal and reduced levels of the target proteins, as well as in patients' fibroblasts bearing CHD8 heterozygous loss-of-function mutations. While stimulation by different CHD2/CHD8-SINEUP molecules did not elicit any effect in wild-type cells with physiological levels of CHD2/CHD8, the SINEUPs were fully effective under haploinsufficient conditions, when reduced levels of the target proteins were expressed. Functionally, CHD8-SINEUPs were able to revert molecular phenotypes associated with CHD8 suppression, i.e. the transcriptional dysregulation of the ASD-related genes MBD3 and SHANK3, and to restore the genome-wide reduction of H3K36me3 enrichment. Strikingly, chd8-SINEUP injection in vivo into chd8-morpholino-treated developing zebrafish embryos confirmed that stimulation of translation from the internal methionine could rescue the macrocephaly phenotype induced by chd8 suppression. In conclusion, CHD2/CHD8-SINEUP molecules represent a proof of concept towards the development of an RNA-based therapy for neurodevelopmental syndromes, with implications for and beyond ASD and epilepsy, and relevance to a large repertoire of presently incurable genetic brain diseases.
60

An Effective End-User Development Approach through Domain-Specific Mashups for Research Impact Evaluation

Imran, Muhammad January 2013
Over the last decade, there has been growing interest in assessing the performance of researchers, research groups, universities and even countries. The assessment of productivity is an instrument to select and promote personnel, assign research grants and measure the results of research projects. One particular assessment approach is bibliometrics, i.e., the quantitative analysis of scientific publications through citation and content analysis. However, there is little consensus today on how research evaluation should be performed, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. The process is very often highly subjective, and there are no universally accepted criteria. A number of different scientific data sources available on the Web (e.g., DBLP, Microsoft Academic Search, Google Scholar) are used for such analyses. Taking data from these diverse sources, performing the analysis and visualizing the results in different ways is not a trivial, straightforward task. Moreover, the data taken from these sources cannot be used as-is because of the name disambiguation problem: many researchers share identical names, or different name variations of the same author appear in the data. We believe that the personalization of the evaluation process is a key element for the appropriate use and practical success of research impact evaluation tasks. Moreover, the people involved in such evaluation processes are not always IT experts, and hence are not capable of crawling data sources, merging them and computing the needed evaluation procedures. The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to produce their own applications. Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) do not convey enough meaning to non-programmers. We believe that the heart of the problem is that it is impractical to design tools that are generic enough to cover a wide range of application domains, powerful enough to enable the specification of non-trivial logic, and simple enough to be actually accessible to non-programmers. At some point, we need to give up something. In our view, this something is generality, since reducing expressive power would mean supporting only the development of toy applications, which is useless, while simplicity is our major aim. This thesis presents a novel approach to effective end-user development, specifically for non-programmers. That is, we introduce a domain-specific approach to mashups that "speaks the language of the user", i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup platform means, which roles the mashup meta-model and the domain model play, and how these can be merged into a domain-specific mashup meta-model. We illustrate the approach by implementing a generic mashup platform whose capabilities are based on our proposed mashup meta-model. Further, we illustrate how the generic mashup platform can be tailored to a specific domain through the development of ResEval Mash, a tool specifically built for the research evaluation domain.
Moreover, the thesis proposes an architectural design for mashup platforms; specifically, it presents a novel approach for data-intensive mashup-based web applications, which proved to be a substantial contribution. The proposed approach is suitable for applications that deal with large amounts of data travelling between client and server. To evaluate our work and determine the effectiveness and usability of our mashup tool, we performed two separate user studies. Their results confirm that domain-specific mashup tools indeed lower the entry barrier for non-technical users in mashup development. The methodology presented in this thesis is generic and can be applied to other domains; following it, the developed mashup platform is likewise generic and can be tailored to other domains.
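To illustrate the composition idea behind domain-specific mashups, here is a minimal sketch (not the ResEval Mash implementation; the components and data are invented) in which domain components are plain functions and a mashup is simply their pipeline:

```python
def fetch_publications(author):
    """Hypothetical source component (e.g., a DBLP-style lookup), stubbed here."""
    return [{"title": "Paper A", "citations": 10},
            {"title": "Paper B", "citations": 3}]

def h_index(publications):
    """Domain-specific metric component: the author's h-index."""
    cites = sorted((p["citations"] for p in publications), reverse=True)
    return sum(1 for i, c in enumerate(cites, 1) if c >= i)

def pipeline(*stages):
    """Compose components left-to-right, as a mashup canvas would."""
    def run(value):
        for stage in stages:
            value = stage(value)
        return value
    return run

evaluate = pipeline(fetch_publications, h_index)
print(evaluate("Jane Doe"))  # h-index = 2 for the toy data
```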
