71

GIS ANALÝZY V PROSTŘEDÍ INFORMAČNÍCH MODELŮ STAVEB / GIS ANALYSIS IN BUILDING INFORMATION MODELS

Černý, Martin Unknown Date (has links)
This dissertation is concerned with Geographic Information Systems (GIS) and spatial analyses inside Building Information Models (BIM). BIM and GIS are compared from different points of view, such as data organization, geometry types, and modelling paradigms. The results of this comparison are applied in the first case study, where BIM data in the IFC format are converted to the SHP format usable for spatial analyses in GIS. The results of the case study are evaluated, and based on them a new approach for spatial analyses in BIM models is proposed, built on the semantic relations existing in BIM models. The thesis concludes with directions for future development, which are in line with the author's current focus.
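A minimal sketch of the IFC-to-SHP conversion described in the first case study, assuming the Python libraries ifcopenshell and pyshp (the thesis does not name its toolchain, and the placement shortcut used here is a simplification):

```python
# Sketch of an IFC -> SHP export in the spirit of the case study above.
# Both libraries and the local-placement shortcut are assumptions for
# illustration, not the thesis's actual pipeline.
import ifcopenshell   # pip install ifcopenshell
import shapefile      # pip install pyshp

model = ifcopenshell.open("building.ifc")   # hypothetical input file

shp = shapefile.Writer("walls", shapeType=shapefile.POINT)
shp.field("GUID", "C", size=22)
shp.field("NAME", "C", size=64)
for wall in model.by_type("IfcWall"):
    # Local placement origin only; a real converter would resolve the full
    # placement chain and tessellate the geometry (e.g. via ifcopenshell.geom).
    loc = wall.ObjectPlacement.RelativePlacement.Location.Coordinates
    shp.point(loc[0], loc[1])
    shp.record(wall.GlobalId, wall.Name or "")
shp.close()
```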
72

Ontologies and Methods for Interoperability of Engineering Analysis Models (eams) in an E-Design Environment

Kanuri, Neelima 01 January 2007 (has links) (PDF)
Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this work is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge; the instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a proof-of-concept test bed. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-base tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is automatic technical report generation, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge-sharing method which allocates permissions to portions of knowledge in order to control knowledge access and sharing. Acting together, the two methods enable recipient-specific, fine-grained control of knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD, helping to reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
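The beam/shell switching logic described above can be illustrated with a short sketch; the class names, costs, and error estimates below are invented for illustration and are not the thesis's knowledge-base implementation:

```python
# Fidelity switching: use the cheap beam-element model while its estimated
# error is tolerable, fall back to the shell-element model near convergence.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AnalysisModel:
    name: str
    cost: float                        # relative CPU cost per evaluation
    est_error: float                   # estimated accuracy limit (fraction)
    evaluate: Callable[[dict], float]  # design variables -> max stress

def select_model(models, required_accuracy):
    feasible = [m for m in models if m.est_error <= required_accuracy]
    if feasible:
        return min(feasible, key=lambda m: m.cost)   # cheapest adequate model
    return min(models, key=lambda m: m.est_error)    # otherwise, most accurate

beam  = AnalysisModel("beam",  cost=1.0,  est_error=0.10, evaluate=lambda d: ...)
shell = AnalysisModel("shell", cost=50.0, est_error=0.01, evaluate=lambda d: ...)

# Loose tolerance early in the optimization -> beam; tight near the optimum -> shell.
assert select_model([beam, shell], 0.20).name == "beam"
assert select_model([beam, shell], 0.02).name == "shell"
```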
73

Limitations Of Micro And Macro Solutions To The Simulation Interoperability Challenge: An Ease Case Study

Barry, John 01 January 2013 (has links)
This thesis explored the history of military simulations and linked it to the current challenges of interoperability. The research illustrated the challenge of interoperability in integrating different networks, databases, standards, and interfaces, and how it results in U.S. Army organizations constantly spending time and money to create and implement irreproducible Live, Virtual, and Constructive (LVC) integrating architectures to accomplish comparable tasks. Although the U.S. Army has made advancements in interoperability, it has struggled with this challenge since the early 1990s. These improvements have been inadequate due to the evolving and growing needs of the user, coupled with the technical complexities of interoperating legacy systems with emergent systems arising from advances in technology. To better understand the impact of the continued evolution of simulations, this paper mapped Maslow's Hierarchy of Needs to Tolk's Levels of Conceptual Interoperability Model (LCIM). This mapping illustrated a common relationship between the Hierarchy of Needs and the LCIM: in both, each level increases in complexity, and the lower level must first be achieved before the next can be reached. Understanding this continuum of interoperability complexity, as requirements or needs, helped to determine why previous funding and technical efforts have been inadequate in mitigating the interoperability challenges within U.S. Army simulations. As the U.S. Army's simulation programs continue to evolve while the military and contractor personnel turnover rate remains near constant, a method of capturing and passing on tacit knowledge from one personnel staffing life cycle to the next must be developed in order to economically and quickly reproduce complex simulation events. This thesis explored a potential solution to this challenge: the Executable Architecture Systems Engineering (EASE) research project managed by the U.S. Army's Simulation and Training Technology Center in the Army Research Laboratory within the Research, Development and Engineering Command. However, there are two main drawbacks to EASE: it is still in the prototype stage, and it has not been fully tested and evaluated as a simulation tool within the community of practice. In order to determine whether EASE has the potential to reduce micro as well as macro interoperability challenges, an EASE experiment was conducted as part of this thesis. The following three alternative hypotheses were developed, tested, and accepted as a result of the research: Ha1 = expert stakeholders believe the EASE prototype has potential as a U.S. Army technical solution to help mitigate the M&S interoperability challenge; Ha2 = expert stakeholders believe the EASE prototype has potential as a U.S. Army managerial solution to help mitigate the M&S interoperability challenge; Ha3 = expert stakeholders believe the EASE prototype has potential as a U.S. Army knowledge management solution to help mitigate the M&S interoperability challenge. To conduct this experiment, eleven participants representing ten different organizations across the three M&S domains were selected to test EASE using a modified Technology Acceptance Model (TAM) approach developed by Davis. Indexes were created from the participants' responses covering both the quality of participants and the research questions. Cronbach's alpha was used to test the reliability of the adapted TAM. The Wilcoxon signed-rank test provided the statistical analysis that formed the basis of the research, which determined that the EASE project has the potential to help mitigate the interoperability challenges in the U.S. Army's M&S domains.
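For reference, the mechanics of the two statistical tools named above can be sketched on invented 5-point Likert data (the thesis's actual survey responses are not reproduced here):

```python
# Cronbach's alpha for scale reliability, then a one-sample Wilcoxon
# signed-rank test against the neutral scale midpoint. Data are invented.
import numpy as np
from scipy import stats

# rows = 11 participants, cols = items of one index (hypothetical responses)
X = np.array([[4, 5, 4], [5, 5, 4], [3, 4, 4], [4, 4, 5], [5, 4, 4],
              [4, 3, 4], [5, 5, 5], [4, 4, 4], [3, 4, 3], [5, 4, 5], [4, 5, 4]])

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
k = X.shape[1]
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# Are responses shifted above the neutral midpoint (3) of the scale?
scores = X.mean(axis=1)
w_stat, p_value = stats.wilcoxon(scores - 3, alternative="greater")
print(f"alpha={alpha:.2f}, W={w_stat:.1f}, p={p_value:.4f}")
```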
74

BIM Software Capability and Interoperability Analysis : An analytical approach toward structural usage of BIM software (S-BIM)

Abdulhasan Taher, Ali January 2016 (has links)
This study focused on the structural analysis of BIM models. Different commercial software packages (Autodesk products and Rhinoceros) are presented through the modelling and analysis of different structures with varying complexity, section properties, geometry, and material. Besides the commercial software, different architectural tools and different tools for structural analysis are evaluated (Dynamo, Grasshopper, add-on tools, direct links, indirect links via IFC). / BIM and Structural BIM (S-BIM)
75

Definition, Analysis, And An Approach For Discrete-Event Simulation Model Interoperability

Wu, Tai-Chi 10 December 2005 (has links)
Even though simulation technology provides great benefits to industry, it is largely underutilized. One of the biggest barriers to utilizing simulation is the lack of interoperability between simulation models. This is especially true when simulation models that need to interact with each other span an enterprise or supply chain. These models are likely to be distributed and developed in disparate simulation application software. In order to analyze the dynamic behavior of the systems they represent, the models must interoperate; however, this interoperability is currently nearly impossible. The interaction of models also refers to the understanding of them among stakeholders in the different stages of the models' lifecycles. The lack of interoperability also makes it difficult to share the knowledge within disparate models. This research first investigates the problem by identifying, defining, and analyzing the types of simulation model interactions. It then identifies and defines possible approaches to allow models to interact. Finally, a framework that adopts the strengths of Structured Modeling (SM) and the Object-Oriented (OO) concept is proposed for representing discrete-event simulation models. The framework captures the most common simulation elements and serves as an intermediate language between disparate simulation models. Because of the structured nature of the framework, the resulting model representation is concise and easily understandable. Tools are developed to implement the framework. A Common User Interface (CUI) with software-specific controllers is developed for using the proposed framework with various commercial simulation software packages. The CUI is also used to edit simulation models in a neutral environment. A graphical modeling tool is also developed to facilitate conceptual modeling; the resulting graphic can be translated into the common model representation automatically. This not only increases the understanding of models for all stakeholders, but also shifts model interactions to the "formulating" stage, which can prevent problems later in the model's lifecycle. An illustration of the proposed framework and the tools is given, as well as future work needs.
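One way such a neutral intermediate representation might look, sketched with invented element and field names (this is not the thesis's actual schema):

```python
# Common discrete-event elements captured as plain data and serialized to a
# package-independent format. Element set and fields are illustrative only.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Resource:
    name: str
    capacity: int = 1

@dataclass
class Activity:
    name: str
    duration_dist: str                 # e.g. "exponential(mean=5.0)"
    resources: list = field(default_factory=list)

@dataclass
class SimulationModel:
    name: str
    resources: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    routing: dict = field(default_factory=dict)   # activity -> next activity

model = SimulationModel(
    name="single_server_queue",
    resources=[Resource("server")],
    activities=[Activity("arrive", "exponential(mean=4.0)"),
                Activity("serve", "exponential(mean=3.0)", ["server"])],
    routing={"arrive": "serve"},
)
# Any simulation package with an importer for this format could rebuild the model.
print(json.dumps(asdict(model), indent=2))
```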
76

Dynamic Inevitability in Computational Design

Boldrin, Niccolo 03 June 2014 (has links)
No description available.
77

Interoperability of Data and Mined Knowledge in Clinical Decision Support Systems

Kazemzadeh, Reza Sherafat 08 1900 (has links)
The constantly changing and dynamic nature of medical knowledge has proven to be challenging for healthcare professionals. Due to its reliance on human knowledge, the practice of medicine is in many cases subject to errors that endanger patients' health and cause substantial financial loss to both public and governmental health sectors. Computer-based clinical guidelines have been developed to help healthcare professionals in practicing medicine. Currently, the decision-making steps within most guideline modeling languages are limited to the evaluation of basic logic expressions. On the other hand, data mining analyses aim at building descriptive or predictive mining models that contain valuable knowledge, and researchers in this field have been active in applying data mining techniques to health data. However, this type of knowledge cannot be represented using the current guideline specification standards. In this thesis, the focus is on encoding, sharing, and finally using the results obtained from a data mining study in the context of clinical care, and in particular at the point of care. For this purpose, a knowledge management framework is proposed that addresses the issues of data and knowledge interoperability. Standards are adopted to represent both data and data mining results in an interoperable manner, and the incorporation of data mining results into guideline-based Clinical Decision Support Systems is then elaborated. A prototype tool has been developed as part of this thesis that serves as a proof of concept, providing an environment for clinical guideline authoring and execution. Finally, three real-world clinical case studies are presented. / Thesis / Master of Applied Science (MASc)
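The shift described above, from decision steps that evaluate basic logic expressions to steps that consult mined models, can be sketched as follows; the tiny decision tree and clinical thresholds are invented, and a real interoperable system would load the model from a standard representation such as PMML rather than hard-coding it:

```python
# A guideline decision node that consults a mined predictive model instead of
# a hand-written logic expression. Tree structure and thresholds are invented.
def mined_risk_model(patient):
    """Stand-in for a data-mining result, e.g. an exported decision tree."""
    if patient["systolic_bp"] > 160:
        return "high"
    if patient["age"] > 65 and patient["hba1c"] > 7.5:
        return "high"
    return "low"

def guideline_step(patient):
    """Decision node: route the care plan on the model's prediction."""
    if mined_risk_model(patient) == "high":
        return "refer to specialist; schedule follow-up in 2 weeks"
    return "continue standard monitoring"

print(guideline_step({"systolic_bp": 150, "age": 70, "hba1c": 8.1}))
```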
78

Open Digital Libraries

Suleman, Hussein 26 November 2002 (has links)
Digital Libraries (DLs) are software systems specifically designed to assist users in information-seeking activities. Stemming from the intersection of library science and computer networking, traditional DL systems impose library philosophies of structure and management on the sprawling collections of data that are made possible through the Internet. DLs evolve to keep pace with innovation on the Internet, so there is little standardization in the architecture of such systems. However, in attempting to provide users with the highest possible levels of service with the minimum possible effort, many systems work collaboratively with others, e.g., meta-search engines. This type of system interoperability is encouraged by the emergence of simple data transfer protocols such as the Open Archives Initiative's Protocol for Metadata Harvesting (OAI-PMH). Open Digital Libraries are an extension of the work of the OAI. It is proposed in this dissertation that the philosophy and approach adopted by the OAI can easily be extended to support inter-component interaction within a componentized DL. In particular, DLs can be built by connecting small components that communicate through a family of lightweight protocols, using XML as the data interchange mechanism. In order to test the feasibility of this, a set of protocols was designed based on a generalization of the work of the OAI. Components adhering to these protocols were implemented and integrated into production and research DLs. These systems were then evaluated for simplicity, reusability, and performance. On the whole, this study has shown promise in the approach of applying the fundamental concepts of the OAI protocol to the task of DL component design and implementation. Further, it has shown the feasibility of building componentized DL systems using techniques that are a precursor to the Web Services approach to system design. / Ph. D.
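For reference, a plain OAI-PMH 2.0 harvesting request, the protocol these components generalize, looks like this; the verb, parameters, and XML namespaces follow the OAI-PMH specification, while the repository URL is a placeholder:

```python
# A minimal OAI-PMH 2.0 ListRecords harvest over HTTP, parsed with the
# standard library. The base URL is a placeholder for any compliant repository.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE = "http://archive.example.org/oai"      # placeholder repository endpoint
NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

url = BASE + "?" + urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
tree = ET.parse(urlopen(url))
for record in tree.findall(".//oai:record", NS):
    title = record.find(".//dc:title", NS)
    print(title.text if title is not None else "(no title)")
```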
79

Improving the Interoperability of the OpenDSA eTextbook System

Wonderly, Jackson Daniel 07 October 2019 (has links)
In recent years there has been considerable adoption of the IMS Learning Tools Interoperability (LTI) standard among both Learning Management Systems (LMS) and learning applications. The LTI standard defines a way to securely connect learning applications and tools with platforms like LMS, enabling content from external learning tools to appear as if it were a native part of the LMS, and enabling these learning tools to send users' scores directly to the gradebook in the LMS. An example of such a learning tool is the OpenDSA eTextbook system, which provides materials that cover a variety of Computer Science-related topics, incorporating hundreds of interactive visualizations and auto-graded exercises. Previous work turned OpenDSA into an LTI tool provider, allowing OpenDSA eTextbooks to be integrated with the Canvas LMS. In this thesis, we further explore the problem of connecting educational systems while documenting challenges, issues, and design rationales. We expand upon the existing OpenDSA LTI infrastructure by turning OpenDSA into an LTI tool consumer, thus enabling OpenDSA to better integrate content from other LTI tool providers. We also describe how we expanded OpenDSA's LTI tool provider functionality to increase the level of granularity at which OpenDSA content can be served, and how we implemented support for several LMS, including challenges faced and remaining issues. Finally, we discuss the problem of sharing analytics data among educational systems, and outline an architecture that could be used for this purpose. / Master of Science / In recent years there has been considerable adoption of the IMS Learning Tools Interoperability (LTI) standard among Learning Management Systems (LMS) like Blackboard and Canvas, and among learning tools. The LTI standard allows learning tools to be securely connected with platforms like LMS, enabling content from external learning tools to appear as if it were built into the LMS, and enabling these learning tools to send users' scores directly to the gradebook in the LMS. An example of such a learning tool is the OpenDSA online textbook system, which provides materials that cover a variety of Computer Science-related topics, incorporating hundreds of interactive visualizations and auto-graded exercises. Previous work enabled OpenDSA textbooks to be connected with the Canvas LMS using LTI. In this thesis, we further explore the problem of connecting educational systems while documenting challenges, issues, and design rationales. We expand the existing OpenDSA system to allow OpenDSA to better integrate content from other learning tools. We also describe how we expanded OpenDSA's features to increase the number of ways that OpenDSA content can be consumed, and how we implemented support for adding OpenDSA content to several LMS, including challenges faced and remaining issues. Finally, we discuss the problem of sharing analytics data among educational systems, and outline a potential way to connect educational systems for this purpose.
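For context, an LTI 1.1 launch is an OAuth 1.0a-signed form POST (RFC 5849); the signature check a tool provider performs can be sketched as follows, with an invented key, secret, and parameter set (a generic illustration, not OpenDSA's actual code):

```python
# OAuth 1.0a HMAC-SHA1 signing of LTI 1.1 launch parameters, per RFC 5849.
# The tool provider recomputes this signature and compares it to the
# oauth_signature sent with the launch. Key, secret, and values are invented.
import base64, hashlib, hmac
from urllib.parse import quote

def percent(s):
    return quote(str(s), safe="")   # RFC 3986: only A-Za-z0-9 _ . - ~ unencoded

def oauth_signature(method, url, params, consumer_secret):
    # Signature base string: METHOD & encoded URL & encoded sorted parameters
    pairs = sorted((percent(k), percent(v)) for k, v in params.items()
                   if k != "oauth_signature")
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base = "&".join([method.upper(), percent(url), percent(param_str)])
    key = percent(consumer_secret) + "&"   # no token secret in LTI launches
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

launch = {   # minimal subset of LTI 1.1 launch parameters (values invented)
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "chapter-3-quiz",
    "oauth_consumer_key": "demo-key",
    "oauth_nonce": "abc123", "oauth_timestamp": "1700000000",
    "oauth_signature_method": "HMAC-SHA1", "oauth_version": "1.0",
}
print(oauth_signature("POST", "https://tool.example.org/launch", launch, "demo-secret"))
```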
80

Learning Schemes for Adaptive Spectrum Sharing Radar

Thornton, Charles E. III 08 June 2020 (has links)
Society's newfound dependence on wireless transmission systems has driven demand for access to the electromagnetic (EM) spectrum to an all-time high. In particular, wireless applications related to the fifth generation (5G) of cellular technology, along with statically allocated radar systems, have contributed to the increasing scarcity of the sub-6 GHz frequency bands. As a result, development of Dynamic Spectrum Access (DSA) techniques for sharing these frequencies has become a critical research area for the greater wireless community. Among incumbent systems, radars are the largest consumers of spectrum in the sub-6 GHz regime, and they are being used increasingly for civilian applications such as traffic control, adaptive cruise control, and collision avoidance. This has created a need for radars that can adaptively tune specific transmission parameters in an intelligent manner to promote coexistence with other systems. Thus, fully aware, dynamic, cognitive radar has been proposed as a target for radars to evolve towards. In this thesis, we extend current research thrusts towards cognitive radar to utilize Reinforcement Learning (RL) techniques, which allow a radar system to learn desired behavior using information obtained from past transmissions. Since radar systems inherently interact with their electromagnetic environment, it is natural to view the use of reinforcement learning techniques as a straightforward extension of previous adaptive techniques. However, in designing learning algorithms for radar systems, we must carefully define goal-driven rewards, formalize the learning process, and consider an appropriate amount of environmental information. In this thesis, we apply well-established and emerging reinforcement learning approaches to meet the demands of modern radar coexistence problems. In particular, function estimation using deep neural networks is examined, as Deep RL presents a scalable learning framework which allows many environmental states to be considered in the decision-making process. We then show how these techniques can be used to improve traditional radar performance metrics, such as interference avoidance, spectral efficiency, and target detectability, with simulated and experimental results. We also compare the learning techniques to each other and to naive approaches, such as fixed-bandwidth radar and reactive interference avoidance. Finally, online learning strategies are considered, which aim to balance the fundamental learning trade-off between exploration and exploitation. We show that online learning techniques can be used to select individual waveforms or applied as a high-level controller in a hierarchical learning scheme based on the biologically inspired concept of metacognition. The general use of RL techniques provides a robust framework for decision making under uncertainty that is more flexible than previously proposed cognitive radar strategies. Further, the wide array of RL models and algorithms allows the underlying structure to be applied to both small- and large-scale radar scenarios. / Master of Science / Society's newfound dependence on wireless transmission systems has driven demand for control of the electromagnetic (EM) spectrum to an all-time high. In particular, federal spectrum auctions and the fifth generation of wireless technologies have contributed to the scarcity of frequency bands below 6 GHz. These frequencies are widely used by both radar and communications systems due to favorable propagation characteristics. However, current radar systems typically occupy a fixed bandwidth and tend to be poorly equipped to share their allocated spectrum with other users, which has become a necessity given the growth of wireless traffic. In this thesis, we study learning algorithms which enable a radar to optimize its electromagnetic pulses based on feedback from received signals. In particular, we are interested in reinforcement learning algorithms which allow a radar to learn optimal behavior based on rewards defined by a human. Using these algorithms, radar system designers can choose which metrics may be most important for a given radar application, which can then be optimized for the given setting. However, scaling reinforcement learning to real-world problems such as radar optimization is often difficult due to the massive scope of the problem. Here we attempt to identify potential issues with the implementation of each algorithm and narrow in on algorithms that are well-suited for real-time radar operation.
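The exploration/exploitation trade-off mentioned above can be illustrated with a minimal epsilon-greedy bandit that selects a radar sub-band each pulse interval; the interference statistics below are invented:

```python
# Epsilon-greedy bandit: the radar picks one of several sub-bands per pulse
# interval and is rewarded for interference-free transmission. Illustrates
# the exploration/exploitation trade-off only; statistics are invented.
import numpy as np

rng = np.random.default_rng(0)
p_clear = np.array([0.9, 0.5, 0.2, 0.7])   # prob. each sub-band is interference-free
n_bands, eps = len(p_clear), 0.1
counts, values = np.zeros(n_bands), np.zeros(n_bands)

for t in range(5000):
    if rng.random() < eps:                         # explore a random sub-band
        band = int(rng.integers(n_bands))
    else:                                          # exploit current estimates
        band = int(values.argmax())
    reward = float(rng.random() < p_clear[band])   # 1 if the transmission was clean
    counts[band] += 1
    values[band] += (reward - values[band]) / counts[band]  # running-mean update

print("estimated clear rates:", values.round(2))   # should approach p_clear
print("preferred band:", int(values.argmax()))     # should converge to band 0
```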
