About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Autonomous resource management for cloud-assisted peer-to-peer based services

Kavalionak, Hanna January 2013 (has links)
Peer-to-Peer (P2P) and Cloud Computing are two of the latest trends in the Internet arena. Both can be labelled large-scale distributed systems, yet their approaches are completely different: the former is based on fully decentralized protocols that exploit edge resources, while the latter is built around huge data centres. Several Internet startups have quickly reached stardom by exploiting cloud resources; P2P applications, instead, still lack a well-defined business model. Recently, companies like Spotify and Wuala have started to explore how the two worlds could be merged by exploiting (free) user resources whenever possible, aiming to reduce the cost of renting cloud resources. However promising, this model presents challenging issues, in particular regarding the autonomous regulation of the usage of P2P and cloud resources. Next-generation services need to guarantee a minimum level of service when peer resources are insufficient, and to exploit P2P resources as much as possible when they are abundant. In this thesis, we answer these research questions in the form of new algorithms and systems. We designed a family of mechanisms to self-regulate the amount of cloud resources used when peer resources are not enough. We applied and adapted these mechanisms to support different Internet applications, including storage, video streaming and online gaming. To support a replication service, we designed an algorithm that self-regulates the cloud resources used for storing replicas by orchestrating their provisioning. We presented CLive, a P2P video streaming framework that meets real-time constraints on video delay by autonomously regulating the number of cloud helpers as needed. We proposed an architecture to support large-scale online games, where the load generated by the interaction of players is strategically migrated between P2P and cloud resources in an autonomous way.
Finally, we proposed a solution to the NAT traversal problem that employs cloud resources to make a node behind a NAT reachable from outside. Using extensive simulations, we showed that hybrid infrastructures can reduce the economic burden on service providers while offering a level of service comparable with centralized architectures. The results of this thesis show that the combination of Cloud Computing and P2P is one of the milestones for next-generation distributed P2P-based architectures.
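The self-regulation principle shared by the storage, streaming and gaming systems described above can be sketched as a simple capacity controller that rents cloud helpers only to cover the gap peers leave. The function and its parameters are illustrative, not taken from the thesis:

```python
import math

def cloud_helpers_needed(required_capacity: float,
                         peer_capacity: float,
                         helper_capacity: float) -> int:
    """Number of cloud helpers to rent so that peer plus cloud capacity
    covers the required service level (0 when peers alone suffice)."""
    deficit = required_capacity - peer_capacity
    if deficit <= 0:
        return 0  # peer resources are abundant: no paid cloud resources
    # round up: a fractional deficit still needs one whole helper
    return math.ceil(deficit / helper_capacity)
```

For example, a service needing 100 units of capacity with peers contributing 70 and each helper providing 20 would rent 2 helpers; re-evaluating this periodically as peers churn gives the autonomous regulation behaviour.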
112

Secure Business Process Engineering: a socio-technical approach

Salnitri, Mattia January 2016 (has links)
Dealing with security is a central activity for today's organizations. Security breaches impact the activities executed in organizations, preventing them from executing their business processes and, therefore, causing millions of dollars in losses. Security-by-design principles underline the importance of considering security as early as the design phase of organizations, to avoid expensive fixes during later stages of their lifecycle. However, the design of secure business processes cannot take into account only the security aspects of the sequences of activities. Security reports in recent years show that breaches are increasingly caused by attacks that exploit social vulnerabilities. Therefore, these aspects should be analyzed in order to design business processes that are robust to both technical and social attacks. Still, the mere design of business processes does not guarantee their correct execution: such business processes have to be correctly implemented and performed. We propose SEcure Business process Engineering (SEBE), a method that considers social and organizational aspects for designing and implementing secure business processes. SEBE provides an iterative and incremental process and a set of verification and transformation rules, supported by a software tool, that integrate the different modeling languages used to specify social security aspects, business processes and the implementation code. In particular, SEBE provides a new modeling language which permits specifying business processes with security concepts and complex security constraints. We evaluated the effectiveness of SEBE for engineering secure business processes with two empirical evaluations and with applications of the method to three real scenarios.
113

Museum Visits for Older Adults with Mobility Constraints: Sharing and Participation through Technology

Kostoska, Galena January 2015 (has links)
The aim of this thesis is to study how older adults with mobility constraints can enjoy the museum experiences of their family members (by providing methods and tools for family members to “save” and share memories of museum visits with older adults at home) and to investigate how older adults can remotely participate in museum visits through technology. We employed face-to-face interviews and questionnaires inside two different museum settings to understand whether and what visitors share with non-visitors, and which technology they use for this purpose. The results showed that only a small number of visitors share their museum visits, through materials like pictures they took or books bought in the shop. Although visitors have the intention and would like to share information, they rarely do so. In order to support sharing with non-visitors, we provided several ways of “saving” museum content. Visitors were able to bookmark objects during a museum visit and received by email a link to the bookmarked content in the form of a digital booklet. We tested whether people would use these features, and whether they would access and share the “saved” content after the visit. The results suggested that our approach can significantly increase sharing: at least half of the participants shared the digital booklet with someone. We adapted the booklet for older adults and performed a usability study on it, in order to understand whether older adults with and without cognitive decline can use it. We measured and compared performance on four tasks: opening the booklet, browsing the content, zooming in on the content and closing the content after zooming in. The results show that the booklet enables older adults to consume content to some extent and allows additional in-depth exploration.
We studied the factors influencing the feasibility of remote participation for older adults, measuring the impact of different designs and interaction techniques on participants' ability to understand, follow and engage in remote museum visits. Interactive navigation was found to be the most suitable interaction paradigm for active older adults, whereas frail adults could participate only through interaction-free tours. While almost all of the participants were able to understand the tours in our experimental setting, the ability to follow a visit was strongly influenced by the interaction type. We investigated the levels of experienced presence, social closeness, engagement and enjoyment when older adults join the museum visit of onsite visitors in a drama-based approach. The remote participant and the onsite participants were connected by an audio link, and the information about the objects was presented in the form of a story connecting all the objects in the exhibition. The constructs of closeness, engagement and enjoyment correlated significantly: we found that both the audio channel and the interactive story were important elements for creating an affective virtual experience; the audio channel increased the sense of togetherness, while the interactive story made the visit more enjoyable and fun. A virtual tour was designed and developed to engage older adults in an immersive visit through part of the Louvre, led by a distant real-life guide. An initial diary study and a creative workshop were conducted to learn how to better support the needs and values of older adults, and which approaches would work best for the scenario of remote participation. Visitors' experienced levels of social and spatial presence, immersion and engagement were quite high independently of the level of interactivity of the guide or the presence of others. We discuss further recommendations for video-mediated remote participation for older adults.
114

Competitive Robotic Car: Sensing, Planning and Architecture Design

Rizano, Tizar January 2013 (has links)
Research towards a completely autonomous car has been pushed forward by industry, as it offers numerous advantages such as improvements to traffic flow, vehicle and pedestrian safety, and car efficiency. One of the main challenges in this area is how to deal with the uncertainties perceived by the sensors about the current state of the car and the environment. An autonomous car needs to employ an efficient planning algorithm that generates the vehicle trajectory in real time, based on environmental sensing. A complete motion planning algorithm returns a valid solution in finite time if one exists, and reports that no path exists otherwise; the algorithm is optimal when it returns an optimal path with respect to some criteria. In this thesis we work on a special case of the motion planning problem: finding an optimal trajectory for a robotic car in order to win a car race. We propose an efficient real-time vision-based technique for localization and path reconstruction. For our purpose of winning a car race, we identify a characterization of the alphabet of optimal maneuvers for the car, an optimal local planning strategy, and an optimal graph-based global planning strategy with obstacle avoidance. We have also implemented the hardware and software of this approach as a testbed for the planning strategy.
115

Non-Redundant Overlapping Clustering: Algorithms and Applications

Truong, Duy Tin January 2013 (has links)
Given a dataset, traditional clustering algorithms often provide only a single partitioning, or a single view, of the dataset. On complex tasks, many different clusterings of a dataset exist, so alternative clusterings, of high quality and different from given trivial clusterings, are sought to provide complementary views. The task is therefore a clear multi-objective optimization problem. However, most approaches in the literature optimize these objectives sequentially (one after another) or indirectly (by some heuristic combination), which can result in solutions that are not Pareto-optimal. The problem is even more difficult for high-dimensional datasets, as clusters can be located in various subspaces of the original feature space. Besides, many practical applications require that subspace clusters may still overlap but that the overlap stay below a predefined threshold. Nonetheless, most state-of-the-art subspace clustering algorithms can only generate a set of disjoint or significantly overlapping subspace clusters. To deal with these issues, for full-space alternative clustering, we develop an algorithm which fully acknowledges the multiple objectives, optimizes them directly and simultaneously, and produces solutions approximating the Pareto front. For non-redundant subspace clustering, we propose a general framework for generating K overlapping subspace clusters where the maximum overlap between them is guaranteed to stay below a predefined threshold. In both cases, our algorithms can be applied in several domains, as different analysis models can be used without modifying the main parts of the algorithms.
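The bounded-overlap constraint described above can be made concrete with a small check: treat each cluster as a set of object ids and verify that the largest pairwise Jaccard overlap stays below the threshold. This is only an illustrative sketch of the constraint, not the thesis's framework:

```python
from itertools import combinations

def max_pairwise_overlap(clusters):
    """Largest Jaccard overlap over all pairs of clusters,
    where each cluster is a set of object ids."""
    worst = 0.0
    for a, b in combinations(clusters, 2):
        union = a | b
        if union:
            worst = max(worst, len(a & b) / len(union))
    return worst

def respects_threshold(clusters, max_overlap):
    """True iff every pair of clusters overlaps at most max_overlap."""
    return max_pairwise_overlap(clusters) <= max_overlap
```

For instance, the clusters {1,2,3} and {3,4,5} overlap with Jaccard score 1/5 = 0.2, so they pass a threshold of 0.25 but violate a threshold of 0.1.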
116

Evolutionary Test Case Generation via Many Objective Optimization and Stochastic Grammars

Kifetew, Fitsum Meshesha January 2015 (has links)
In search-based test case generation, most research focuses on the single-objective formulation of the test case generation problem. However, there is a wide variety of multi- and many-objective optimization strategies that could offer advantages not yet investigated for test case generation. Furthermore, existing techniques and available tools mainly handle test generation for programs with primitive inputs, such as numeric or string inputs; they often do not scale up effectively to large sizes and complex inputs. In this thesis, at the unit level, branch coverage is reformulated as a many-objective optimization problem, as opposed to the state-of-the-art single-objective formulation, and a novel algorithm is proposed for the generation of branch-adequate test cases. At the system level, the thesis proposes a test generation approach that combines stochastic grammars with genetic programming for the generation of branch-adequate test cases. Furthermore, the combination of stochastic grammars and genetic programming is also investigated in the context of field failure reproduction for programs with highly structured inputs.
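The many-objective reformulation mentioned above can be sketched as follows: each uncovered branch contributes its own objective (a branch distance), and candidate tests are compared by Pareto dominance over the whole vector rather than by a single aggregated score. The toy program and distance heuristic are illustrative assumptions, not the thesis's algorithm:

```python
def branch_distance_gt(lhs, rhs):
    """Heuristic distance to satisfying the predicate lhs > rhs
    (0 when already satisfied; a common search-based testing heuristic)."""
    return 0.0 if lhs > rhs else (rhs - lhs) + 1.0

def fitness_vector(x):
    """One objective per branch of a toy program with two branches:
    (x > 10) and (x > 100)."""
    return [branch_distance_gt(x, 10), branch_distance_gt(x, 100)]

def dominates(u, v):
    """Pareto dominance between fitness vectors (lower is better):
    u is no worse on every objective and strictly better on at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))
```

A search algorithm would keep a population of inputs, retain the non-dominated ones per objective, and evolve them until every branch distance reaches zero (i.e., every branch is covered).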
117

Large-scale Structural Reranking for Hierarchical Text Categorization

JU, QI January 2013 (has links)
Current hierarchical text categorization (HTC) methods mainly fall into three directions: (1) the flat one-vs.-all approach, which flattens the hierarchy into independent nodes and trains a binary one-vs.-all classifier for each node; (2) the top-down method, which uses the hierarchical structure to decompose the entire problem into a set of smaller sub-problems and deals with these sub-problems in top-down fashion along the hierarchy; and (3) the big-bang approach, which learns a single (but generally complex) global model for the class hierarchy as a whole with a single run of the learning algorithm. These methods were shown to provide relatively high performance in previous evaluations. However, they still suffer from two main drawbacks: (1) relatively low accuracy, when they disregard category dependencies, or (2) low computational efficiency, when they consider such dependencies. To build an accurate and efficient model we adopted the following strategy. First, we designed advanced global reranking models (GR) that exploit structural dependencies in hierarchical multi-label text classification (TC). They are based on two algorithms: (1) generating the k-best classification hypotheses from the decision probabilities of the flat one-vs.-all and top-down methods; and (2) encoding dependencies in the reranker by (i) modeling hypotheses as trees derived from the hierarchy itself and (ii) applying tree kernels (TK) to them. Such a TK-based reranker selects the best hierarchical test hypothesis, which is naturally represented as a labeled tree. Additionally, to better investigate the role of category relationships, we consider two interesting cases: (i) traditional schemes, in which father nodes include all the documents of their child categories; and (ii) more general schemes, in which children can include documents not belonging to their fathers.
Second, we propose an efficient local incremental reranking model (LIR), which combines the top-down method with a local reranking model for each sub-problem. These local rerankers improve accuracy by absorbing the local category dependencies of the sub-problems, which alleviates the errors the top-down method makes in the higher levels of the hierarchy. LIR recursively deals with the sub-problems by applying the corresponding local rerankers in top-down fashion, resulting in high efficiency. In addition, we further optimize LIR by (i) improving the top-down method through local dictionaries for each sub-problem; (ii) using LIBLINEAR instead of LIBSVM; and (iii) adopting a compact representation of hypotheses for learning the local reranking model. This makes LIR applicable to large-scale hierarchical text categorization. Experiments on different hierarchical datasets have shown promising improvements from exploiting structural dependencies in large-scale hierarchical text categorization.
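The top-down decomposition that both reranking models build on can be sketched in a few lines: walk the category tree from the root, descend only into children whose local classifier accepts the document, and collect the accepted labels. The tree encoding and the classifiers here are illustrative, not from the thesis:

```python
def top_down_classify(doc, node, classifiers):
    """Top-down hierarchical classification sketch.

    `node` is a dict with "name" and "children"; `classifiers` maps a
    category name to a predicate over documents. A child (and its
    subtree) is explored only if its local classifier accepts the doc.
    """
    labels = []
    for child in node.get("children", []):
        if classifiers[child["name"]](doc):
            labels.append(child["name"])
            labels.extend(top_down_classify(doc, child, classifiers))
    return labels
```

A reranker in the spirit of GR/LIR would then take the k best label trees produced by such a base classifier and reorder them using dependencies between categories.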
118

Black-Box Security Testing of Browser-Based Security Protocols

Sudhodanan, Avinash January 2017 (has links)
Millions of computer users worldwide use the Internet every day to consume web-based services (e.g., purchasing products from online stores, storing sensitive files in cloud-based file storage web sites, etc.). Browser-based security protocols (i.e., security protocols that run over the Hypertext Transfer Protocol and are executable by commercial web browsers) are used to ensure the security of these services. Multiple parties are often involved in these protocols. For instance, a browser-based security protocol for Single Sign-On (SSO for short) typically involves a user (controlling a web browser), a Service Provider web site and an Identity Provider (who authenticates the user). Similarly, a browser-based security protocol for the Cashier-as-a-Service (CaaS) scenario involves a user, a Service Provider web site (e.g., an online store) and a Payment Service Provider (who authorizes payments). The design and implementation of browser-based security protocols are usually so complex that several vulnerabilities remain even after intensive inspection. This is witnessed, for example, by the vulnerabilities found in browser-based security protocols such as SAML SSO v2.0 and OAuth Core 1.0 even years after their publication, implementation, and deployment. Although techniques such as formal verification and white-box testing can be used to perform security analysis of browser-based security protocols, they currently have limitations: the necessity of formal models that can cope with the complexity of web browsers (e.g., cookies, client-side scripting, etc.) and the poor support offered for certain programming languages by white-box testing tools, to name a few. What remains is black-box security testing. However, currently available black-box security testing techniques for browser-based security protocols are either scenario-specific (i.e., specific to SSO or CaaS, not both) or do not support well the detection of vulnerabilities enabling replay attacks (commonly referred to as logical vulnerabilities) and Cross-Site Request Forgery (CSRF for short). The goal of this thesis is to overcome these drawbacks. First, the thesis presents an attack-pattern-based black-box testing technique for detecting vulnerabilities enabling replay attacks and social login CSRF in multi-party web applications (i.e., web applications utilizing browser-based security protocols involving multiple parties). These attack patterns are inspired by the similarities in the attack strategies of previously discovered attacks against browser-based security protocols. Second, we present manual and semi-automatic black-box security testing strategies for detecting 7 different types of CSRF attacks targeting the authentication and identity management functionalities of web sites. We also provide proof-of-concept implementations of our ideas, based on OWASP ZAP (a prominent, free and open-source penetration testing tool). This thesis being in the context of an industrial doctorate, we had the opportunity to analyse the use cases provided by our industrial partner, SAP, to further improve our approach. In addition, to assess the effectiveness of the proposed techniques, we applied them against the browser-based security protocols of many prominent web sites and discovered nearly 340 serious security vulnerabilities affecting more than 200 web sites, including those of prominent vendors such as Microsoft and eBay.
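The idea behind attack-pattern-based testing for replay vulnerabilities can be illustrated with a minimal sketch: capture a protocol message (here a token) from one session and check whether the service also accepts it in a different session. The toy service below stands in for a real HTTP endpoint and is purely an assumed example, not the thesis's tooling:

```python
def is_replayable(service, captured_token):
    """Replay a token captured in one session inside another session;
    acceptance in both sessions is evidence of a replay vulnerability."""
    accepted_once = service(captured_token, session="victim")
    accepted_again = service(captured_token, session="attacker")
    return accepted_once and accepted_again

def make_service(one_time_tokens):
    """Toy protocol endpoint: optionally enforces one-time tokens.
    A real black-box tester would send HTTP requests instead."""
    used = set()
    def service(token, session):
        if one_time_tokens and token in used:
            return False  # token already consumed: replay rejected
        used.add(token)
        return True
    return service
```

A naive endpoint that accepts a token any number of times is flagged as replayable, while an endpoint enforcing one-time tokens is not; an attack-pattern catalogue generalizes checks of this shape across protocol messages and parties.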
119

A Large Scale Distributed Knowledge Organization System

Noori, Sheak Rashed Haider January 2011 (has links)
The revolution of the Internet and the Web has taken computer and information technology into a new age. The information on the Web is growing very fast. The progress of information and communication technologies has made a large amount of information accessible, providing each of us with access to far more information than we can comprehend or manage. This emphasizes the difficulty arising from the semantic heterogeneity of the diverse sources. Human knowledge is a living organism and as such evolves in time; different people hold different viewpoints and use different terminology, and differences among cultures and languages intensify the heterogeneity of the sources even more. This introduces concrete problems such as natural language disambiguation, information retrieval and information integration. The problem is well known in almost every branch of knowledge and has been approached independently by several communities for decades. To make this huge amount of existing information accessible and manageable while also solving the semantic heterogeneity problem, namely the problem of diversity in knowledge, and therefore to support interoperability, it is essential to have a large-scale, high-quality collaborative knowledge base with a suitable structure as a common ground on which interoperability among people and different systems becomes possible. Such a knowledge base plays the role of a reference point for communication, assigning clear meaning to exchanged information through accurate disambiguation and automating complex tasks. However, successfully building large-scale knowledge bases with maximum coverage is not possible for a single person or a small group of people without collaborative support; it depends critically on support from communities of experts. Therefore, it is necessary for experts to work together on knowledge base building.
Furthermore, it is natural that these expert users will be geographically distributed. Web 2.0 has the potential to support information sharing, interoperability and collaboration on the Web. Simplicity, flexibility and ease of use make it an interactive and collaborative platform that allows users to create or edit content. The exponential expansion of Web users and the potential of Web 2.0 make it the natural platform of choice for developing knowledge bases collaboratively. We propose a highly flexible knowledge base system which takes into account the diversity of knowledge and its evolution in time. The work presented in this thesis is part of a larger project. More specifically, the goal of this thesis is to create a powerful and easy-to-use knowledge base management system that helps people build and organize a high-quality knowledge base, makes their knowledge accessible, and supports interoperability in real-world scenarios.
120

An Effective End-User Development Approach through Domain-Specific Mashups for Research Impact Evaluation

Imran, Muhammad January 2013 (has links)
Over the last decade, there has been growing interest in assessing the performance of researchers, research groups, universities and even countries. The assessment of productivity is an instrument to select and promote personnel, assign research grants and measure the results of research projects. One particular assessment approach is bibliometrics, i.e., the quantitative analysis of scientific publications through citation and content analysis. However, there is little consensus today on how research evaluation should be performed, and it is commonly acknowledged that the quantitative metrics available today are largely unsatisfactory. The process is very often highly subjective, and there are no universally accepted criteria. A number of different scientific data sources available on the Web (e.g., DBLP, Microsoft Academic Search, Google Scholar) are used for such analysis purposes. Taking data from these diverse sources, performing the analysis and visualizing results in different ways is not a trivial and straightforward task. Moreover, the data taken from these sources cannot be used as is, due to the problem of name disambiguation, where many researchers share identical names or different name variations of an author appear in the data. We believe that the personalization of evaluation processes is a key element for the appropriate use and practical success of research impact evaluation tasks. Moreover, the people involved in such evaluation processes are not always IT experts and hence are not capable of crawling data sources, merging them and computing the needed evaluation procedures. The recent emergence of mashup tools has refueled research on end-user development, i.e., on enabling end-users without programming skills to produce their own applications.
Yet, similar to what happened with analogous promises in web service composition and business process management, research has mostly focused on technology and, as a consequence, has failed its objective. Plain technology (e.g., SOAP/WSDL web services) or simple modeling languages (e.g., Yahoo! Pipes) do not convey enough meaning to non-programmers. We believe that the heart of the problem is that it is impractical to design tools that are generic enough to cover a wide range of application domains, powerful enough to enable the specification of non-trivial logic, and simple enough to be actually accessible to non-programmers. At some point, we need to give up something. In our view, this something is generality: reducing expressive power would mean supporting only the development of toy applications, which is useless, while simplicity is our major aim. This thesis presents a novel approach to effective end-user development, specifically for non-programmers. That is, we introduce a domain-specific approach to mashups that "speaks the language of users", i.e., that is aware of the terminology, concepts, rules, and conventions (the domain) the user is comfortable with. We show what developing a domain-specific mashup platform means, which roles the mashup meta-model and the domain model play, and how these can be merged into a domain-specific mashup meta-model. We illustrate the approach by implementing a generic mashup platform whose capabilities are based on our proposed mashup meta-model. Further, we illustrate how the generic mashup platform can be tailored to a specific domain through the development of the ResEval Mash tool, built specifically for the research evaluation domain. Moreover, the thesis proposes an architectural design for mashup platforms; specifically, it presents a novel approach for data-intensive mashup-based web applications, which proved to be a substantial contribution.
The proposed approach is suitable for applications that deal with large amounts of data traveling between client and server. For the evaluation of our work and to determine the effectiveness and usability of our mashup tool, we performed two separate user studies. The results of the user studies confirm that domain-specific mashup tools indeed lower the entry barrier for non-technical users in mashup development. The methodology presented in this thesis is generic and can be applied to other domains. Moreover, following this methodological approach, the developed mashup platform is also generic, in that it can be tailored to other domains.
