101

Kartläggning av tvärfunktionella verksamhetsbehov för framtida utveckling av OAS / Mapping of cross-functional user needs for future development of OAS

Eriksson, Martin, Lindgren, Mikael January 2012 (has links)
The management of information is one of the key aspects of a successful and efficient product development process, particularly for complex products. Scania CV AB is currently developing a new IT system, OAS, which aims to manage the company's product data. Against this background, the purpose of this master thesis is to identify the cross-functional user needs within Scania's organization concerning product data and its management. To fulfill this purpose, an empirical study consisting of 40 personal interviews with 50 representatives from different functions within Scania's organization was carried out. The empirical data was then analyzed with a focus on identifying cross-functional needs and issues. The study shows that there is great potential for improving the management of product data. For example, users spend a great deal of time finding the information they need and copying data manually from one IT system to another. The most important findings, in terms of cross-functional user needs within Scania's organization, are the integration of Scania's many IT systems and making information more accessible. Further needs concern the ability to follow a product's entire lifecycle, better support for the user's understanding of the complex product, enhanced management of Engineering Change Orders, and better support concerning the product structure.
102

Data Management in an Object-Oriented Distributed Aircraft Conceptual Design Environment

Lu, Zhijie 16 January 2007 (has links)
Aircraft conceptual design, as the first design stage, provides the greatest opportunity to compress design cycle time and is the cheapest stage at which to make design changes. However, traditional aircraft conceptual design programs, which are monolithic programs, cannot provide satisfactory functionality to meet new design requirements due to their lack of domain flexibility and analysis scalability. Therefore, a next-generation aircraft conceptual design environment (NextADE) is needed. To build the NextADE, the framework and the data management problem are the two major problems that must be addressed at the forefront. Solving these two problems, particularly the data management problem, is the focus of this research. In this dissertation, a distributed object-oriented framework is first formulated and tested for the NextADE. To improve interoperability and simplify the integration of heterogeneous application tools, data management is one of the major problems to be tackled. To solve it, taking into account the characteristics of aircraft conceptual design data, a robust, extensible object-oriented data model is then proposed in accordance with the distributed object-oriented framework. By overcoming the shortcomings of the traditional approach to modeling aircraft conceptual design data, this data model makes it possible to capture specific, detailed information of aircraft conceptual design without sacrificing generality. Based upon this data model, a prototype of the data management system, one of the fundamental building blocks of the NextADE, is implemented using state-of-the-art information technologies. Using a general-purpose integration software package to demonstrate the efficacy of the proposed framework and the data management system, the NextADE is initially implemented by integrating the prototype of the data management system with the other building blocks of the design environment. As experiments, two case studies are conducted in the integrated design environment: one is a simplified conceptual design of a notional conventional aircraft; the other is a simplified conceptual design of an unconventional aircraft. The experiments show the proposed framework and data management approach to be feasible solutions to the research problems.
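
As one way to picture the kind of robust, extensible object-oriented data model described above, consider the minimal sketch below. Every class, attribute, and type name is an illustrative assumption, not the dissertation's actual model: the point is only that free-form attributes plus recursive composition let discipline-specific detail be added without schema changes.

```python
from dataclasses import dataclass, field

@dataclass
class DesignObject:
    """Generic node in a design-data tree (illustrative, not the NextADE model)."""
    name: str
    obj_type: str                                   # e.g. "Aircraft", "Wing"
    attributes: dict = field(default_factory=dict)  # free-form, per-discipline detail
    children: list = field(default_factory=list)    # recursive composition

    def find(self, obj_type):
        """Depth-first search for all sub-objects of a given type."""
        hits = [c for c in self.children if c.obj_type == obj_type]
        for c in self.children:
            hits.extend(c.find(obj_type))
        return hits

aircraft = DesignObject("concept-1", "Aircraft")
aircraft.children.append(DesignObject("main-wing", "Wing", {"span_m": 34.1}))
print([w.name for w in aircraft.find("Wing")])  # ['main-wing']
```
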
103

Case Study of Implementing PLM system Based on Adaptive Structuration Theory: A Case of H Company

Li, Chu-wen 15 February 2011 (has links)
No abstract available.
104

STORI: selectable taxon ortholog retrieval iteratively

Stern, Joshua Gallant 08 June 2015 (has links)
Speciation and gene duplication are fundamental evolutionary processes that enable biological innovation. For over a decade, biologists have endeavored to distinguish orthology (homology caused by speciation) from paralogy (homology caused by duplication). Disentangling orthology and paralogy is useful to diverse fields such as phylogenetics, protein engineering, and genome content comparison. A common step in ortholog detection is the computation of Bidirectional Best Hits (BBH). However, we found this computation impractical for more than 24 Eukaryotic proteomes. Attempting to retrieve orthologs in less time than previous methods require, we developed a novel algorithm and implemented it as a suite of Perl scripts. This software, Selectable Taxon Ortholog Retrieval Iteratively (STORI), retrieves orthologous protein sequences for a set of user-defined proteomes and query sequences. While the time complexity of the BBH method is O(#taxa^2), we found that the average CPU time used by STORI may increase linearly with the number of taxa. To demonstrate one aspect of STORI’s usefulness, we used this software to infer the orthologous sequences of 26 ribosomal proteins (rProteins) from the large ribosomal subunit (LSU), for a set of 115 Bacterial and 94 Archaeal proteomes. Next, we used established tree-search methods to seek the most probable evolutionary explanation of these data. The current implementation of STORI runs on Red Hat Enterprise Linux 6.0 with installations of Moab 5.3.7, Perl 5 and several Perl modules. STORI is available at: <http://github.com/jgstern/STORI>.
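
For readers unfamiliar with the computation of Bidirectional Best Hits (BBH) mentioned above, a minimal sketch follows. The toy similarity function stands in for a real aligner such as BLAST, and all names and sequences are illustrative; this is not STORI's code.

```python
from difflib import SequenceMatcher

def similarity(seq1, seq2):
    """Toy similarity score standing in for a real aligner (e.g. BLAST)."""
    return SequenceMatcher(None, seq1, seq2).ratio()

def bidirectional_best_hits(proteome_a, proteome_b, score=similarity):
    """Return (id_a, id_b) pairs whose proteins are each other's best hit.

    proteome_a / proteome_b map protein ids to sequences. The all-vs-all
    search inside each max() is what makes BBH costly as proteomes are added.
    """
    best_in_b = {a: max(proteome_b, key=lambda b: score(sa, proteome_b[b]))
                 for a, sa in proteome_a.items()}
    best_in_a = {b: max(proteome_a, key=lambda a: score(proteome_a[a], sb))
                 for b, sb in proteome_b.items()}
    # Keep only mutual (bidirectional) best hits.
    return [(a, b) for a, b in best_in_b.items() if best_in_a[b] == a]

# Two made-up miniature "proteomes", for illustration only.
human = {"H1": "MKLVVNAGG", "H2": "MTEYKLVVVG"}
yeast = {"Y1": "MTEYKLVIVG", "Y2": "MKLVINAGG"}
print(bidirectional_best_hits(human, yeast))  # expected: [('H1', 'Y2'), ('H2', 'Y1')]
```
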
105

Networking infrastructure and data management for large-scale cyber-physical systems

Han, Song, doctor of computer sciences 25 February 2013 (has links)
A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system's computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem and interconnect these physically separated subsystems to achieve secure, reliable and real-time communication is a major challenge. In this thesis, we first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication, and adjustable real-time service support. We then describe the network management techniques designed to ensure reliable routing and real-time services inside the subsystems, and data management techniques for maintaining the quality of the data sampled from the physical world. To evaluate the proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. We also present a lightweight and scalable solution for interconnecting heterogeneous CPS subsystems through a slim IP adaptation layer and a constrained application protocol layer. This approach makes the underlying connectivity technologies transparent to application developers, enabling rapid application development and efficient migration among different CPS platforms. At the end of this thesis, we present a semi-autonomous robotic system called the cyberphysical avatar. The cyberphysical avatar is built on our proposed network infrastructure and data management techniques. By integrating recent advances in body-compliant control in robotics and neuroevolution in machine learning, the cyberphysical avatar can adjust to an unstructured environment and perform physical tasks subject to critical timing constraints while under human supervision.
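
To make the TDMA idea concrete, here is a toy fixed-slot schedule: each node owns one slot per superframe, so transmissions cannot collide and worst-case latency is bounded by the frame length. The slot length and node names are made-up assumptions; the thesis protocol layers security, topology control, and adjustable real-time services on top of this basic mechanism.

```python
SLOT_MS = 10                                    # assumed slot length, for illustration
NODES = ["sensor-1", "sensor-2", "actuator-1", "gateway"]

def slot_owner(t_ms):
    """Map a time (in ms) to the node allowed to transmit in that TDMA slot."""
    frame_pos = (t_ms // SLOT_MS) % len(NODES)  # position inside the superframe
    return NODES[frame_pos]

# One full superframe (40 ms) plus the start of the next:
for t in range(0, 50, SLOT_MS):
    print(f"t={t:2d}ms -> {slot_owner(t)}")
```
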
106

Chukchi Sea environmental data management in a relational database

Yang, Fengyan 29 October 2013 (has links)
Environmental data hold important information regarding humanity's past, present, and future, and are managed in various ways. The database structure most commonly used in contemporary applications is the relational database, yet its use for managing environmental data in the scientific world is not as widespread as in business enterprises. The diverse nature and rapidly growing volume of environmental data have drawn increasing attention in recent years. Environmental data for the Chukchi Sea, with its potential oil resources, have become important for characterizing the physical, chemical, and biological environment. Substantial data have been collected recently by researchers from the Chukchi Sea Offshore Monitoring in the Drilling Area: Chemical and Benthos (COMIDA CAB) project. A modified Observations Data Model was employed for storing, retrieving, visualizing, and sharing the data. Throughout this project-based study, the processes of reconciling environmental data heterogeneity and of modifying and implementing the relational database model were carried out. Data were transformed into shareable information, which improves data interoperability between different software applications (e.g., ArcGIS and SQL Server). The results confirm the feasibility and extensibility of employing relational databases for environmental data management.
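
As an illustration of what an Observations-Data-Model-style relational schema looks like, the sketch below builds a minimal version in SQLite. All table and column names, and the sample row, are illustrative assumptions, not the COMIDA CAB schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sites (
    site_id   INTEGER PRIMARY KEY,
    site_code TEXT NOT NULL,
    latitude  REAL NOT NULL,          -- decimal degrees
    longitude REAL NOT NULL
);
CREATE TABLE variables (
    variable_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,        -- e.g. 'water_temperature'
    unit        TEXT NOT NULL         -- e.g. 'degC'
);
CREATE TABLE observations (
    observation_id INTEGER PRIMARY KEY,
    site_id        INTEGER NOT NULL REFERENCES sites(site_id),
    variable_id    INTEGER NOT NULL REFERENCES variables(variable_id),
    observed_at    TEXT NOT NULL,     -- ISO 8601 timestamp
    value          REAL NOT NULL
);
""")
# Made-up sample row, for illustration only.
conn.execute("INSERT INTO sites VALUES (1, 'CAB-12', 71.25, -163.10)")
conn.execute("INSERT INTO variables VALUES (1, 'water_temperature', 'degC')")
conn.execute("INSERT INTO observations VALUES (1, 1, 1, '2010-08-03T14:00:00Z', -1.4)")
for row in conn.execute("""
    SELECT s.site_code, v.name, o.observed_at, o.value, v.unit
    FROM observations o
    JOIN sites s ON s.site_id = o.site_id
    JOIN variables v ON v.variable_id = o.variable_id
"""):
    print(row)
```
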
107

Scalable data-management systems for Big Data

Tran, Viet-Trung 21 January 2013 (has links) (PDF)
Big Data can be characterized by three V's:
* Big Volume refers to the unprecedented growth in the amount of data.
* Big Velocity refers to the growth in the speed of moving data into and out of management systems.
* Big Variety refers to the growth in the number of different data formats.
Managing Big Data requires fundamental changes in the architecture of data management systems. Data storage systems must keep evolving to adapt to the growth of data: they need to be scalable while maintaining high performance for data accesses. This thesis focuses on building scalable data management systems for Big Data. Our first and second contributions address the challenge of providing efficient support for Big Volume in data-intensive high performance computing (HPC) environments. In particular, we address the shortcoming of existing approaches in handling atomic, non-contiguous I/O operations in a scalable fashion. We propose and implement a versioning-based mechanism that can be leveraged to offer isolation for non-contiguous I/O without the need to perform expensive synchronizations. In the context of parallel array processing in HPC, we introduce Pyramid, a large-scale, array-oriented storage system. It revisits the physical organization of data in distributed storage systems for scalable performance. Pyramid favors multidimensional-aware data chunking, which closely matches the access patterns generated by applications. Pyramid also favors distributed metadata management and versioning-based concurrency control to eliminate synchronization under concurrency. Our third contribution addresses Big Volume at the scale of geographically distributed environments. We consider BlobSeer, a distributed versioning-oriented data management service, and we propose BlobSeer-WAN, an extension of BlobSeer optimized for such geographically distributed environments. BlobSeer-WAN takes the latency hierarchy into account by favoring local metadata accesses. BlobSeer-WAN features asynchronous metadata replication and a vector-clock implementation for collision resolution. To cope with the Big Velocity characteristic of Big Data, our last contribution features DStore, an in-memory document-oriented store that scales vertically by leveraging the large memory capacity of multicore machines. DStore demonstrates fast and atomic processing of complex transactions for data writes while maintaining high-throughput read access. DStore follows a single-threaded execution model, executing update transactions sequentially while relying on versioning concurrency control to support a large number of simultaneous readers.
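
As a rough sketch of the single-writer, versioning-based concurrency control that DStore relies on, the class below serializes update transactions while publishing immutable snapshots that readers can use without blocking. The names and interface are assumptions for illustration, not DStore's actual API.

```python
import threading

class VersionedStore:
    """Single-writer, multi-reader store sketch (illustrative, not DStore)."""

    def __init__(self):
        self._snapshot = {}                  # current immutable version
        self._write_lock = threading.Lock()

    def read(self):
        # Readers grab the current snapshot; later commits never mutate it.
        return self._snapshot

    def commit(self, transaction):
        # Update transactions run one at a time (single-threaded writes).
        with self._write_lock:
            new_version = dict(self._snapshot)   # copy-on-write new version
            transaction(new_version)             # apply the whole transaction
            self._snapshot = new_version         # atomic publish

store = VersionedStore()
store.commit(lambda doc: doc.update({"user:1": {"name": "ada"}}))
old = store.read()
store.commit(lambda doc: doc.update({"user:2": {"name": "lin"}}))
print(len(old), len(store.read()))  # 1 2: the reader's version is untouched
```
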
108

Record Linkage for Web Data

Hassanzadeh, Oktie 15 August 2013 (has links)
Record linkage refers to the task of finding and linking records (in a single database or in a set of data sources) that refer to the same entity. Automating the record linkage process is a challenging problem and has been the topic of extensive research for many years. However, the changing nature of the linkage process and the growing size of data sources create new challenges for this task. This thesis studies the record linkage problem for Web data sources. Our hypothesis is that a generic and extensible set of linkage algorithms, combined within an easy-to-use framework that allows these algorithms to be tailored and combined, can effectively link large collections of Web data from different domains. To this end, we first present a framework for record linkage over relational data, motivated by the fact that many Web data sources are powered by relational database engines. This framework is based on a declarative specification of the linkage requirements by the user and allows linking records in many real-world scenarios. We present algorithms for translating these requirements into queries that can run over a relational data source, potentially using a semantic knowledge base to enhance the accuracy of link discovery. Effective specification of requirements for linking records across multiple data sources requires understanding the schema of each source, identifying attributes that can be used for linkage, and finding their corresponding attributes in other sources. Schema or attribute matching is often done with the goal of aligning schemas, so attributes are matched if they play semantically related roles in their schemas. In contrast, we seek attributes that can be used to link records between data sources, which we refer to as linkage points. In this thesis, we define the notion of linkage points and present the first linkage point discovery algorithms. We then address the novel problem of how to publish Web data in a way that facilitates record linkage. We hypothesize that careful use of existing, curated Web sources (their data and structure) can guide the creation of conceptual models for semi-structured Web data that in turn facilitate record linkage with these curated sources. Our solution is an end-to-end framework for data transformation and publication, which includes novel algorithms for identifying entity types and their relationships in semi-structured Web data. A highlight of this thesis is the application of the proposed algorithms and frameworks in real applications and the publication of the results as high-quality data sources on the Web.
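
To give a flavor of the linkage point idea: a crude way to find candidate linkage points is to rank attribute pairs across two sources by the overlap of their value sets. The sketch below does exactly that with Jaccard similarity; it is a simplified stand-in, not the discovery algorithms of the thesis, and all names are illustrative.

```python
def linkage_points(source_a, source_b, threshold=0.5):
    """Rank attribute pairs across two record collections by value overlap.

    source_a / source_b are lists of dict records. A high-overlap pair of
    attributes is a candidate linkage point (simplified illustration only).
    """
    def values(records, attr):
        return {str(r[attr]).lower() for r in records if r.get(attr) is not None}

    attrs_a = {a for r in source_a for a in r}
    attrs_b = {b for r in source_b for b in r}
    candidates = []
    for a in attrs_a:
        for b in attrs_b:
            va, vb = values(source_a, a), values(source_b, b)
            if va and vb:
                jaccard = len(va & vb) / len(va | vb)
                if jaccard >= threshold:
                    candidates.append((jaccard, a, b))
    return sorted(candidates, reverse=True)

movies = [{"title": "Alien", "year": "1979"}, {"title": "Brazil", "year": "1985"}]
films = [{"name": "alien", "released": "1979"}, {"name": "brazil", "released": "1985"}]
print(linkage_points(movies, films))  # 'title'<->'name' and 'year'<->'released' score 1.0
```
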
110

Régularisation d'images sur des surfaces non planes / Image regularization on non-flat surfaces

Lopez Perez, Lucero 15 December 2006 (has links) (PDF)
We are interested in PDE-based approaches to the regularization of scalar and multi-valued images defined on non-flat supports, and in their applications to image processing problems. We study the relationship between existing methods and compare them in terms of performance and implementation complexity. We develop new numerical methods to handle divergence-type operators used in PDE-based regularization methods on triangulated surfaces. We generalize the Beltrami flow regularization technique to images defined on implicit and explicit surfaces. Implementations are proposed for these methods, and experiments are presented. We also show a concrete application of these methods to a retinotopic mapping problem.
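
As a simple picture of PDE-based regularization on a triangulated surface, the sketch below runs explicit heat-diffusion steps using a uniform-weight (umbrella) graph Laplacian on the mesh connectivity. This is only a crude stand-in for the divergence-type operators and the Beltrami flow developed in the thesis; all names and parameters are illustrative.

```python
import numpy as np

def smooth_on_mesh(values, neighbors, dt=0.1, steps=50):
    """Diffuse vertex values over a mesh: one explicit Euler step per iteration.

    values: image samples at the n mesh vertices (length-n sequence).
    neighbors: for each vertex, the list of its 1-ring neighbor indices.
    """
    v = np.asarray(values, dtype=float)
    for _ in range(steps):
        # Umbrella Laplacian: neighbor average minus the vertex value.
        lap = np.array([v[nb].mean() - v[i] for i, nb in enumerate(neighbors)])
        v = v + dt * lap
    return v

# A tiny fan mesh: vertex 0 is linked to vertices 1..4.
neighbors = [[1, 2, 3, 4], [0, 2], [0, 1, 3], [0, 2, 4], [0, 3]]
noisy = [5.0, 1.0, 1.2, 0.9, 1.1]
print(smooth_on_mesh(noisy, neighbors).round(2))  # the outlier at vertex 0 diffuses away
```
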
