681

Impact of DDoS Attack on the Three Common Hypervisors (Xen, KVM, VirtualBox)

Sheinidashtegol, Pezhman 01 July 2016 (has links)
Cloud computing is a technology of interconnected servers and resources that uses virtualization to provide resource utilization, flexibility, and scalability, and it is accessible through the network. This accessibility and utilization have their own benefits and drawbacks. Utilization and scalability make the technology economical and affordable even for small businesses, and flexibility drastically reduces the risk of starting a business. Accessibility means cloud customers are not restricted to a specific location, so long as they have access to the network, in most cases through the internet. These significant traits, however, have their own disadvantages. Easy accessibility also makes it more convenient for malicious users to reach servers in the cloud, and the virtualization provided by the middleware known as Virtual Machine Managers (VMMs), or hypervisors, comes with its own vulnerabilities, adding to the pre-existing vulnerabilities of networks, operating systems, and applications. In this research we try to identify the most resistant hypervisor among Xen, KVM, and VirtualBox against Distributed Denial of Service (DDoS) attacks: attempts to saturate a victim's resources, making them unavailable to legitimate users, or to shut down services, by using more than one machine as an attacker and targeting three different resources (network, CPU, memory). This research shows how hypervisors behave differently under the same attacks and conditions.
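The three resource targets named in the abstract (network, CPU, memory) can be illustrated with a minimal load-generator sketch in Python. This is a hypothetical illustration of the attack surfaces only, not the author's test harness; the function names and parameters are invented, and a real DDoS experiment coordinates many such processes across multiple attacking machines.

```python
import socket
import time

def stress_cpu(seconds):
    # Busy-loop for the given duration, emulating CPU exhaustion inside
    # a guest VM (one copy per core would saturate the host's scheduler).
    end = time.time() + seconds
    while time.time() < end:
        pass

def stress_memory(mb):
    # Allocate a large buffer, emulating memory pressure on the hypervisor.
    return bytearray(mb * 1024 * 1024)

def stress_network(host, port, attempts):
    # Open many short-lived connections, emulating a connection flood;
    # returns how many connections actually succeeded.
    opened = 0
    for _ in range(attempts):
        try:
            conn = socket.create_connection((host, port), timeout=0.5)
            conn.close()
            opened += 1
        except OSError:
            pass
    return opened
```

In an experiment of the kind described, many such processes would run concurrently from several attacking machines while the victim guest's throughput and responsiveness are measured under each hypervisor.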
682

Architecture of Databases for Mineralogy and Astrobiology

Lafuente Valverde, Barbara January 2016 (has links)
This dissertation is focused on the design of the Open Data Repository's Data Publisher (ODR), a web-based central repository for scientific data, primarily focused on mineralogical properties, but also applicable to other data types, including for instance, morphological, textural and contextual images, chemical, biochemical, isotopic, and sequencing information. Using simple web-based tools, the goal of ODR is to lower the cost and training barrier so that any researcher can easily publish their data, ensure that it is archived for posterity, and comply with the mandates for data sharing. There are only a few databases in the mineralogical community, including RRUFF (http://rruff.info) for professionals, and mindat.org (http://www.mindat.org) for amateurs. These databases contain certain specific mineral information, but none, however, provide the ability to include, in the same platform, any of the many datatypes that characterize the properties of minerals. The ODR framework provides the flexibility required to include unforeseen data without the need for additional software programming. Once ODR is completed, the RRUFF database will be migrated into ODR and populated with additional data using other analytical techniques, such as Mössbauer data from Dr. Richard Morris and NVIR data from Dr. Ralf Milliken. 
The current ODR pilot studies are also described here, including 1) a database of the XRD analysis performed by the CheMin instrument on the Mars Science Laboratory rover Curiosity, 2) the NASA-AMES Astrobiology Habitable Environments Database (AHED), which aims to provide a central, high quality, long-term data repository for relevant astrobiology information, 3) the University of Arizona Mineral Museum (UAMM), with over 21,000 records of minerals and fossils from the museum collection, and 4) the Mineral Evolution Database (MED), that uses the ages of mineral species and their localities to correlate the diversification of mineral species through time with Earth's physical, chemical and biological processes. A good database design requires understanding the fundamentals of its content, so part of this thesis is also focused on developing my skills in mineral analysis and characterization, through the study of the crystal-chemistry of diverse minerals using X-ray diffraction, Raman spectroscopy and microprobe analysis, as principal techniques.
683

Multiple representation databases for topographic information

Dunkars, Mats January 2004 (has links)
No description available.
684

MABIC: Mobile Application Builder for Interactive Communication

Nguyen, Huy Manh 01 October 2016 (has links)
Nowadays, web services and mobile technology have advanced to a whole new level. These technologies make modern communication faster and more convenient than traditional means: people can easily share data, pictures, images, and video instantly, saving time and money. For example, sending an email or text message is cheaper and faster than a letter. Interactive communication allows the instant exchange of feedback and enables two-way communication between people, or between people and computers, increasing the engagement of sender and receiver. Although systems such as REDCap and Taverna are built to improve interactive communication between servers and clients, these systems share common drawbacks: they lack support for branching logic and two-way communication, and they require programming skills from administrators to operate adequately. These issues motivate this project, whose goal is to build a framework that speeds up prototype development of mobile applications. MABIC supports complex workflows by providing conditional logic, instantaneous interactivity between administrators and participants, and mobility. These features improve interaction because they engage participants to communicate more with the system. MABIC provides mobile electronic communication by sending a text message or pushing a notification to a mobile device. Moreover, the MABIC application supports multiple mobile platforms, which helps reduce development time and cost. This thesis describes an overview of the MABIC system, its implementation, and a related application.
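The branching logic described above can be sketched as a small workflow graph in which a participant's answer selects the next question. This is a hypothetical illustration of the concept, not MABIC's actual data model; the workflow structure, question names, and answers are invented.

```python
# Hypothetical branching-logic workflow: each node names the next
# node to visit depending on the participant's answer.
workflow = {
    "q1": {"prompt": "Do you smoke?", "yes": "q2", "no": "end"},
    "q2": {"prompt": "How many per day?", "some": "end", "many": "end"},
}

def next_question(workflow, current, answer):
    # Conditional logic: the answer selects the outgoing edge; any
    # unrecognized answer falls through to the terminal "end" node.
    return workflow[current].get(answer, "end")

# A "yes" answer to q1 branches to the follow-up question q2;
# a "no" answer skips it entirely.
assert next_question(workflow, "q1", "yes") == "q2"
assert next_question(workflow, "q1", "no") == "end"
```

A server holding such a graph can drive a two-way exchange by pushing each selected prompt to the participant's device and branching on the reply.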
685

Inheritance Problems in Object-Oriented Database

Auepanwiriyakul, Raweewan 05 1900 (has links)
This research is concerned with inheritance as used in object-oriented databases. More specifically, partial bi-directional inheritance among classes is examined. In partial inheritance, a class can inherit a proper subset of instance variables from another class; two subclasses of the same superclass need not inherit the same proper subset of instance variables from their superclass. Bi-directional partial inheritance additionally allows a class to inherit instance variables from its own subclass. A prototype of an object-oriented database that supports both full and partial bi-directional inheritance among classes was developed on top of an existing relational database management system. The prototype was tested with two database applications: one required full and partial inheritance, the other required bi-directional inheritance. The results of this testing suggest both advantages and disadvantages of partial bi-directional inheritance. Future areas of research are also suggested.
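The two mechanisms described above can be sketched with a toy schema model in Python. This is a hypothetical illustration of the idea, not the thesis's prototype; the class name, method, and example variables are invented, and a real object-oriented database would track these relationships in its schema catalog.

```python
class PartialClass:
    """Toy model of a database class whose instances carry named variables."""
    def __init__(self, name, variables):
        self.name = name
        self.variables = dict(variables)  # variable name -> type

    def inherit_from(self, other, subset):
        # Partial inheritance: copy only a chosen proper subset of the
        # other class's instance variables. Because 'other' may itself be
        # a subclass of this class, the same operation models
        # bi-directional (upward) inheritance.
        for var in subset:
            if var not in other.variables:
                raise KeyError(f"{other.name} has no variable {var!r}")
            self.variables[var] = other.variables[var]

person = PartialClass("Person", {"name": str, "age": int, "salary": float})
student = PartialClass("Student", {"gpa": float})

# Partial: Student inherits only 'name' and 'age', a proper subset of
# Person's variables ('salary' is deliberately left out).
student.inherit_from(person, ["name", "age"])

# Bi-directional: Person inherits 'gpa' back from its subclass Student.
person.inherit_from(student, ["gpa"])
```

Note that a sibling subclass, say "Employee", could call `inherit_from(person, ["name", "salary"])` and thereby inherit a different proper subset than Student did, which is exactly the flexibility the abstract describes.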
686

Autorskoprávní aspekty webových stránek / Copyright aspects of web pages

Král, Samuel January 2015 (has links)
This thesis deals with copyright aspects of webpages: the protection of computer programs under the present statutory provisions of Act No. 121/2000 Sb., on copyright and rights related to copyright (the Copyright Act), and the extension and application of those provisions to one of the fastest-developing areas of law, websites and web presentations. Another important objective of this thesis is the critical analysis and application of the most recent judgements of Czech and foreign courts, as well as judgements of the Court of Justice of the European Union, in the area of computer programs and internet law. The first chapter deals with the history and development of copyright, with particular focus on the protection of computer programs, databases, and the legal aspects of behaviour on the internet, or more precisely the World Wide Web. The second chapter defines the terms used in the following chapters for the purpose of applying the provisions of the Copyright Act; it also provides a detailed description of how web presentations function and of the unique parts that together create a web presentation. The following chapter applies the conditions of statutory law and the most recent jurisprudence in the area of websites and the Internet to web presentations. First part of the chapter...
687

Sesuvy, sutě a další méně obvyklé terénní prvky v topografických databázích a digitální kartografii / Landslides, Scree and the Other Unusual Terrain Features in Topographic Databases and Digital Cartography

Šákrová, Michaela January 2014 (has links)
Topographic maps capture detailed information about terrain. The traditional, analogue way of creating these maps used understandable and illustrative cartographic symbology. However, certain parts of that symbology were modified in the transition to digital topographic databases and digital cartography: they now carry less information and are less illustrative. The main cause of this deficiency is the imperfection of cartographic software, which is unable to create appropriate symbology. This diploma thesis attempts to solve this problem for specific terrain objects such as scree and landslides. These shapes are distinctive geomorphological phenomena in terrain, but they are often neglected because their occurrence in our territory is infrequent. Key words: topographic maps, digital cartography, scree, landslide, specific terrain object
688

Addressing scaling challenges in comparative genomics / Adresser les défis de passage à l'échelle en génomique comparée

Golenetskaya, Natalia 09 September 2013 (has links)
Comparative genomics is essentially a form of data mining in large collections of n-ary relations between genomic elements. Increases in the number of sequenced genomes create a stress on comparative genomics that grows, at worst geometrically, with every increase in sequence data. Even modestly-sized labs now routinely obtain several genomes at a time and, like large consortia, expect to be able to perform all-against-all analyses as part of these new multi-genome strategies. In order to address these needs at all levels it is necessary to rethink the algorithmic frameworks and data storage technologies used for comparative genomics. 
To meet these challenges of scale, in this thesis we develop novel methods based on NoSQL and MapReduce technologies. Using a characterization of the kinds of data used in comparative genomics, and a study of usage patterns for their analysis, we define a practical formalism for genomic Big Data, implement it using the Cassandra NoSQL platform, and evaluate its performance. Furthermore, using two quite different global analyses in comparative genomics, we define two strategies for adapting these applications to the MapReduce paradigm and derive new algorithms. For the first, identifying gene fusion and fission events in phylogenies, we reformulate the problem as a bounded parallel traversal that avoids high-latency graph-based algorithms. For the second, consensus clustering to identify protein families, we define an iterative sampling procedure that quickly converges to the desired global result. We implement both of these new algorithms in the Hadoop MapReduce platform and evaluate their performance. The performance is competitive and scales much better than existing solutions, but requires particular (and future) effort in devising specific algorithms.
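The MapReduce paradigm the thesis adapts can be sketched with a minimal in-process skeleton: map each record to key-value pairs, shuffle by key, then reduce each group. This is a toy stand-in for Hadoop, not the thesis's implementation; the genome records, family labels, and aggregation below are invented for illustration.

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    # Minimal in-process MapReduce skeleton mirroring Hadoop's model:
    # the mapper emits (key, value) pairs, the shuffle groups values
    # by key, and the reducer folds each group into one result.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy stand-in for an all-against-all aggregation: count how many
# genomes carry each (hypothetical) protein-family label.
records = [("genomeA", ["fam1", "fam2"]), ("genomeB", ["fam1"])]
mapper = lambda rec: [(fam, 1) for fam in rec[1]]
reducer = lambda fam, counts: sum(counts)
counts = map_reduce(records, mapper, reducer)  # {'fam1': 2, 'fam2': 1}
```

The appeal for comparative genomics is that the mapper and reducer see only local data, so the same two functions can be distributed across a cluster as the number of genomes grows.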
689

Object-oriented parallel paradigms

17 March 2015 (has links)
M.Sc. (Computer Science) / This report is primarily concerned with highlighting fmdings of a research recently undertaken towards completing the requirements for the M.Sc. degree of 1994 at the Rand Afrikaans University (RAU). The research is aimed at striving to investigate what benefits (if any) exist in Object-Oriented Parallel Systems. The area of research revolves around the Object-Oriented Parallel Paradigm (OOPP) which is currently under development by the author. One primary aim of this research is to investigate numerous current trends in Object-Oriented Parallel Systems and Language Developments with the objective of providing an indication as to whether the Object-Oriented methodology can be (or has been) successfully married with existing Parallel Processing mechanisms. New benefits may come about while attempting to combine these methodologies, and this expectation will also be reflected upon. The Object-Oriented methodology allows a system designer the ability to approach a problem with a good degree of problem space understanding; while Parallel Processing allows the system designer the ability to create extremely fast algorithms for solving problems amenable to Parallel Processing techniques. The question we attempt to answer is whether the Object-Oriented methodology can be successfully married to the Parallel Processing field (whilst maintaining a high degree of benefits encountered in both methodologies) so as to gain the best of both worlds. Certain papers have laid claim to their proposed system encompassing both the Object-Oriented methodology, as well as the Parallel Processing methodology. In view of this fact, we shall furthermore examine papers to see if any of these systems are candidates for successfully marrying Object-Oriented and Parallel Processing into one homogeneous body. Criticism will be given on the shortcomings of unsuccessful candidates. 
Based on the findings of the research, the report will culminate to the proposal of the Object-Oriented Parallel Paradigm (OOPP). OOPP will speculate on the most probable features that system designers can expect to see in an almost ideal Object-Oriented Parallel System. It is very important at this stage to mention that, at its current state of development, OOPP is only a paradigm; thus OOPP should be viewed merely as an abstract model intended to establish a solid foundation for building more formal Object-Oriented Parallel Methodologies. Furthermore, OOPP is intended to be suitable for present day systems and amenable (possibly with a few minor adjustments) to future systems. The author trusts OOPP to generate sufficient interest to warrant further research being commissioned. In this event, OOPP should be expected to undergo modifications and enhancements...
690

Quantifying Performance Costs of Database Fine-Grained Access Control

Kumka, David Harold 01 January 2012 (has links)
Fine-grained access control is a conceptual approach to addressing database security requirements. In relational database management systems, fine-grained access control refers to access restrictions enforced at the row, column, or cell level. While a number of commercial implementations of database fine-grained access control are available, there are presently no generalized approaches to implementing fine-grained access control for relational database management systems. Fine-grained access control is potentially a good solution for database professionals and system architects charged with designing database applications that implement granular security or privacy protection features. However, in the oral tradition of the database community, fine-grained access control is spoken of as imposing significant performance penalties, and is therefore best avoided. Regardless, there are current and emerging social, legal, and economic forces that mandate the need for efficient fine-grained access control in relational database management systems. In the study undertaken, the author was able to quantify the performance costs associated with four common implementations of fine-grained access control for relational database management systems. Security benchmarking was employed as the methodology to quantify performance costs. Synthetic data from the TPC-W benchmark as well as representative data from a real-world application were utilized in the benchmarking process. A simple graph-based performance model for Fine-grained Access Control Evaluation (FACE) was developed from benchmark data collected during the study. The FACE model is intended for use in predicting throughput and response times for relational database management systems that implement fine-grained access control using one of the common fine-grained access control mechanisms: authorization views, the Hippocratic Database, label-based access control, and transparent query rewrite. 
The author also addresses the issue of scalability for fine-grained access control mechanisms that were evaluated in the study.
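Of the four mechanisms named above, transparent query rewrite is the easiest to sketch: the system silently appends a row-level predicate so that callers' SQL is unchanged but only authorized rows are visible. The snippet below is a hypothetical illustration (the table, `owner_id` column, and user id are invented, and it operates on strings for brevity); a production rewriter works on the parsed query tree and would not use naive text substitution.

```python
def rewrite_query(sql, table, user_id):
    # Transparent query rewrite: replace the protected table with a
    # subquery carrying a row-level predicate, so only rows owned by
    # the requesting user are visible. int() guards against injection
    # through the user_id parameter in this toy version.
    secured = f"(SELECT * FROM {table} WHERE owner_id = {int(user_id)})"
    return sql.replace(table, secured)

q = rewrite_query("SELECT order_id FROM orders", "orders", 42)
# "SELECT order_id FROM (SELECT * FROM orders WHERE owner_id = 42)"
```

The performance cost the study measures comes from exactly this extra predicate (or view, or label check) being evaluated on every query, which is why benchmarking each mechanism under realistic workloads matters.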
