11 |
Ontology Driven Development For HLA Federates. Koksal Algin, Ceren Fatma. 01 June 2010 (has links) (PDF)
This thesis puts forth a process for ontology driven distributed simulation through a case
study. Ontology is regarded as a domain model, including objects, attributes, methods and
object relations. The case study involves trajectory simulation. A trajectory simulation is a
piece of software that calculates the flight path and other parameters of a munition, such as
its orientation and angular rates, from launch to impact. Formal specification of trajectory
simulation domain is available as a domain model in the form of an ontology, called
Trajectory Simulation ONTology (TSONT). The ontology driven federation development
process proposed in this thesis is executed in three steps. The first step is to analyze the
TSONT and to create instances of individuals guided by the requirements of the targeted
simulation application, called Puma Trajectory Simulation. Puma is the simulation of a
fictitious air-to-ground guided bomb. The second step is to create the High Level
Architecture (HLA) Federation Object Model (FOM) using the Puma Simulation individuals.
The FOM will include the required object and interaction definitions to enable information
exchange among federation members, including the Puma federate and the Exercise
Manager federate. Transformation from the ontology to FOM is realized in two ways:
manually, and by using a tool called OWL2OMT. The third step is to implement the
Trajectory Simulation federation based on the constructed FOM. Thus, the applicability of
developing HLA federates and the federation under the guidance of an ontology is
demonstrated.
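To illustrate the flavour of the second step (transforming ontology individuals into an HLA FOM), the sketch below maps a few hypothetical ontology-derived classes to a minimal FOM-like XML module. The class names, attributes and XML element names are assumptions for illustration only; they follow the general shape of an HLA OMT object model rather than the actual TSONT content, the OMT DIF schema, or the behaviour of the OWL2OMT tool.

```python
import xml.etree.ElementTree as ET

# Hypothetical classes derived from ontology individuals (illustrative, not TSONT).
object_classes = {
    "Munition": ["position", "orientation", "angularRates"],
    "ExerciseManager": ["exerciseState"],
}
interaction_classes = {
    "LaunchCommand": ["launchTime", "targetPosition"],
}

def build_fom(objects, interactions):
    """Build a simplified FOM-like XML module from ontology-derived classes."""
    root = ET.Element("objectModel")

    objs = ET.SubElement(root, "objects")
    for cls, attrs in objects.items():
        oc = ET.SubElement(objs, "objectClass")
        ET.SubElement(oc, "name").text = cls
        for attr in attrs:
            ET.SubElement(oc, "attribute").text = attr

    ints = ET.SubElement(root, "interactions")
    for cls, params in interactions.items():
        ic = ET.SubElement(ints, "interactionClass")
        ET.SubElement(ic, "name").text = cls
        for param in params:
            ET.SubElement(ic, "parameter").text = param

    return ET.tostring(root, encoding="unicode")

print(build_fom(object_classes, interaction_classes))
```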
|
12 |
Patterned Versus Conventional Object-Oriented Analysis Methods: A Group Project Experiment. KUROKI, Hiroaki, YAMAMOTO, Shuichiro. 20 December 1998 (has links)
No description available.
|
13 |
An Object-Process Methodology for Implementation a Distribution Information System. Lu, Liang-Yu. 16 July 2001 (has links)
Component-based software development has been one of the most important technological shifts in the software industry in recent years, gradually moving the industry from largely manual construction toward automated, tool-assisted production. Component-based development makes business information systems easier to build and to adapt: developers can assemble software components according to user requirements, and system capability can be adjusted at any time by adding or removing components, affecting only the components involved rather than the whole system.
This thesis proposes an object-process methodology for developing a distributed business information system. The methodology identifies business objects from business processes and divides system analysis into two parts and eight steps: the user requirements are analyzed and the information system is then designed, yielding stable software objects and a system framework. Modeling the business system through objects and processes helps to establish a model of a complex business system, mapping real-world activities and abstract concepts into the system model, so that distributed objects can be analyzed and designed efficiently for the distributed operating environment. In the next step, the software model is transformed and packaged into Distributed Component Object Model (DCOM) components, which are placed in the system's application layer, allowing the business information system to fit user requirements flexibly.
|
14 |
TENA Implementation at Pacific Missile Range Facility (PMRF) Paper. Wigent, Mark, McKinley, Robert A. 10 1900 (has links)
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV / PMRF provides a volume of space, which may include any combination of below-surface, surface, and above-surface environments, to safely test, gather data on, and monitor in real time the performance of systems being developed. This paper discusses how TENA implementation in range instrumentation, including radar, optics, video, GPS, and telemetry systems, will enhance data acquisition and distribution for systems under test. While the details of this implementation plan are specific to PMRF, the approach can serve as a blueprint for TENA implementation at other ranges throughout the DoD.
|
15 |
ITC TENA-Enabled Range Roadmap Paper. Schoberg, Paul, Beatty, Harry, McKinley, Robert A. 10 1900 (has links)
This paper discusses the Department of Defense (DoD) direction to provide an environment for realistic Test & Evaluation in a Joint operational context and to enhance interoperability and reuse with other test ranges and facilities through the use of the Test and Training Enabling Architecture (TENA) and connectivity to the Joint Mission Environment Test Capability (JMETC) joint test infrastructure. The intent of the "TENA-Enabled Range Roadmap" is to describe how TENA would be incorporated into PMRF's range infrastructure through both near-term upgrades and long-term system replacement. While details of this implementation plan are specific to PMRF, this roadmap can serve as a blueprint for TENA implementation at other ranges throughout the DoD.
|
16 |
Steps towards the object semantic hierarchy. Xu, Changhai, 1977-. 17 November 2011 (has links)
An intelligent robot must be able to perceive and reason robustly about its world in terms of objects, among other foundational concepts. The robot can draw on rich data for object perception from continuous sensory input, in contrast to the usual formulation that focuses on objects in isolated still images. Additionally, the robot needs multiple object representations to deal with different tasks and/or different classes of objects. We propose the Object Semantic Hierarchy (OSH), which consists of multiple representations with different ontologies. The OSH factors the problems of object perception so that intermediate states of knowledge about an object have natural representations, with relatively easy transitions from less structured to more structured representations. Each layer in the hierarchy builds an explanation of the sensory input stream, in terms of a stochastic model consisting of a deterministic model and an unexplained "noise" term. Each layer is constructed by identifying new invariants from the previous layer. In the final model, the scene is explained in terms of constant background and object models, and low-dimensional dynamic poses of the observer and objects.
The OSH contains two types of layers: the Object Layers and the Model Layers. The Object Layers describe how the static background and each foreground object are individuated, and the Model Layers describe how the model for the static background or each foreground object evolves from less structured to more structured representations. Each object or background model contains the following layers: (1) 2D object in 2D space (2D2D): a set of constant 2D object views, and the time-variant 2D object poses, (2) 2D object in 3D space (2D3D): a collection of constant 2D components, with their individual time-variant 3D poses, and (3) 3D object in 3D space (3D3D): the same collection of constant 2D components but with invariant relations among their 3D poses, and the time-variant 3D pose of the object as a whole.
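A minimal data-structure sketch of the three model layers is given below; the field and type names are assumptions chosen to mirror the description above, not the thesis's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple

Image = Any                          # e.g. a pixel array holding a 2D view or component
Pose2D = Tuple[float, float, float]  # x, y, in-plane rotation
Pose3D = Tuple[float, ...]           # 6-DOF pose (x, y, z, roll, pitch, yaw)

@dataclass
class Model2D2D:
    """2D object in 2D space: constant 2D views plus time-variant 2D object poses."""
    views: List[Image] = field(default_factory=list)
    pose_per_frame: List[Pose2D] = field(default_factory=list)

@dataclass
class Model2D3D:
    """2D object in 3D space: constant 2D components, each with its own time-variant 3D pose."""
    components: List[Image] = field(default_factory=list)
    component_poses_per_frame: List[List[Pose3D]] = field(default_factory=list)

@dataclass
class Model3D3D:
    """3D object in 3D space: the same 2D components, invariant relative poses among them,
    and a single time-variant 3D pose of the object as a whole."""
    components: List[Image] = field(default_factory=list)
    relative_poses: List[Pose3D] = field(default_factory=list)   # fixed, one per component
    object_pose_per_frame: List[Pose3D] = field(default_factory=list)

@dataclass
class ObjectModel:
    """One foreground object (or the static background), refined layer by layer."""
    layer_2d2d: Model2D2D = field(default_factory=Model2D2D)
    layer_2d3d: Model2D3D = field(default_factory=Model2D3D)
    layer_3d3d: Model3D3D = field(default_factory=Model3D3D)
```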
In building 2D2D object models, a fundamental problem is to segment out foreground objects in the pixel-level sensory input from the background environment, where motion information is an important cue to perform the segmentation. Traditional approaches for moving object segmentation usually appeal to motion analysis on pure image information without exploiting the robot's motor signals. We observe, however, that the background motion (from the robot's egocentric view) has stronger correlation to the robot's motor signals than the motion of foreground objects. Based on this observation, we propose a novel approach to segmenting moving objects by learning homography and fundamental matrices from motor signals.
In building 2D3D and 3D3D object models, estimating camera motion parameters plays a key role. We propose a novel method for camera motion estimation that takes advantage of both planar features and point features and fuses constraints from both homography and essential matrices in a single probabilistic framework. Using planar features greatly improves estimation accuracy over using point features only, and with the help of point features, the solution ambiguity from a planar feature is resolved. Compared to the two classic approaches that apply the constraint of either homography or essential matrix, the proposed method gives more accurate estimation results and avoids the drawbacks of the two approaches.
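As a rough illustration of the two-view geometry involved (not the thesis's probabilistic fusion framework), the sketch below recovers relative camera motion from matched points using both an essential-matrix model and a homography model with OpenCV, then simply prefers the model with more inliers; that selection heuristic, and the assumption of calibrated, pre-matched points, are illustrative simplifications.

```python
import cv2
import numpy as np

def estimate_camera_motion(pts1, pts2, K):
    """Sketch: relative rotation/translation between two views from point matches.
    pts1, pts2: Nx2 arrays of corresponding image points; K: 3x3 camera matrix."""
    # Essential-matrix model (general scenes with depth variation).
    E, mask_E = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R_E, t_E, _ = cv2.recoverPose(E, pts1, pts2, K)

    # Homography model (dominant planar structure).
    H, mask_H = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    # Decomposing H yields several candidate (R, t, n) solutions; this ambiguity
    # is what the point-feature constraints help resolve in the thesis.
    _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)

    # Naive model selection by inlier count (illustrative only).
    if mask_H.sum() > mask_E.sum():
        return Rs[0], ts[0]   # first (still ambiguous) planar solution
    return R_E, t_E
```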
|
17 |
Implementace algoritmu pro vizuální segmentaci www stránek / Implementation of Algorithm for Visual Web Page Segmentation. Popela, Tomáš. January 2012 (has links)
Segmentation of WWW pages, i.e. dividing a page into semantically different blocks, is one of the disciplines of information extraction. This Master's thesis deals with the Vision-based Page Segmentation (VIPS) method, which divides a page based on the visual properties of its elements. The method is presented in the context of other prominent segmentation procedures, and its key steps are shown and described with examples. The VIPS method needs to cooperate with a WWW page rendering engine in order to obtain the Document Object Model (DOM) of a page; the four most important rendering engines for the Java programming language are presented and described. The output of this work is an implementation of the VIPS algorithm in Java using the CSSBox core. The original algorithm implementation from Microsoft's labs is presented, the development stages of the library implementing the VIPS method and the approach to its solution are described, and at the end the outcome is demonstrated on the segmentation of several pages.
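The core divide step of VIPS can be illustrated with a simplified sketch: a rendered block is split into its children unless it is visually coherent enough, i.e. its degree of coherence (DoC) exceeds a threshold. The Block structure, the toy coherence heuristic and the threshold below are assumptions for illustration; the actual implementation works on the DOM and visual information obtained from the CSSBox rendering engine and uses much richer rules.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    """A rendered block of a page: a DOM subtree with a few visual properties."""
    tag: str
    bgcolor: str
    font_size: float
    children: List["Block"] = field(default_factory=list)

def degree_of_coherence(block: Block) -> float:
    """Toy DoC heuristic: children sharing background colour and font size
    make a block coherent (the real VIPS rules are far richer)."""
    if not block.children:
        return 1.0
    same_bg = all(c.bgcolor == block.bgcolor for c in block.children)
    same_font = all(c.font_size == block.font_size for c in block.children)
    return 0.5 * same_bg + 0.5 * same_font

def segment(block: Block, doc_threshold: float = 0.9) -> List[Block]:
    """Recursively divide a block until every returned segment is visually coherent."""
    if not block.children or degree_of_coherence(block) >= doc_threshold:
        return [block]
    segments = []
    for child in block.children:
        segments.extend(segment(child, doc_threshold))
    return segments

# Tiny example: a page body with a distinctly styled header and a uniform text section.
page = Block("body", "white", 12, [
    Block("header", "blue", 16),
    Block("div", "white", 12, [Block("p", "white", 12), Block("p", "white", 12)]),
])
print([b.tag for b in segment(page)])   # ['header', 'div']
```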
|
18 |
Algoritmy pro segmentaci webových stránek / Web Page Segmentation Algorithms. Laščák, Tomáš. January 2016 (has links)
Segmentation of web pages is one of the disciplines of information extraction: it allows a page to be divided into different semantic blocks. This thesis deals with segmentation as such and with the implementation of a segmentation method. We describe various examples of methods such as VIPS, DOM PS, etc., and give a theoretical description of the chosen method as well as of the FITLayout Framework, which is extended by this method. The implementation of the chosen method is described in detail, focusing on the different problems we had to solve. We also describe the testing that helped to reveal some weaknesses. The conclusion summarises the results and possible ideas for extending this work.
|
19 |
A multi-objective approach to incorporate indirect costs into optimisation models of waterborne sewer systems. Bester, Albertus J. 03 1900 (has links)
Thesis (MScEng (Civil Engineering))--University of Stellenbosch, 2011. / ENGLISH ABSTRACT: Waterborne sewage system design and expansion objectives are often focused on
minimising initial investment while increasing system capacity and meeting
hydraulic requirements. Although these objectives make good sense in the short
term, the solutions obtained might not represent the optimal cost-effective
solution to the complete useful life of the system. Maintenance and operation of
any system can have a significant impact on the life-cycle cost. The costing
process needs to be better understood, which include maintenance and operation
criteria in the design of a sewer system. Together with increasing public
awareness regarding global warming and environmental degradation,
environmental impact, or carbon cost, is also an important factor in decision-making
for municipal authorities. This results in a multiplicity of different
objectives, which can complicate the decisions faced by waterborne sewage
utilities.
Human settlement and migration is seen as the starting point of expansion
problems. An investigation was conducted into the current growth prediction
models for municipal areas in order to determine their impact on future planning
and to assess similarities between the models available. This information was used
as a platform to develop a new method incorporating indirect costs into models
for planning waterborne sewage systems.
The need to balance competing objectives such as minimum cost, optimal
reliability, and minimum environmental impact was identified. Different models
were developed to define the necessary criteria, thus minimising initial
investment, operating cost and environmental impact, while meeting hydraulic
constraints. A non-dominated sorting genetic algorithm (NSGA-II), which simulates the
evolutionary processes of genetic selection, crossover, and mutation, was applied to
certain waterborne sewage system (WSS) scenarios to find a number of suitable solutions
that balance all of the given objectives. Stakeholders could in the future apply the
optimisation results derived in this thesis in the decision-making process to find a solution that best fits their concerns and priorities.
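For illustration only, the snippet below shows the non-dominated (Pareto) sorting at the heart of NSGA-II applied to randomly generated three-objective scores; the candidate tuples and their value ranges are hypothetical placeholders, not the cost, reliability and environmental models developed in the thesis.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return indices grouped into successive Pareto fronts, as used by NSGA-II."""
    dominated = [set() for _ in objs]      # dominated[i] = solutions that i dominates
    dom_count = [0] * len(objs)            # how many solutions dominate i
    fronts = [[]]
    for i, a in enumerate(objs):
        for j, b in enumerate(objs):
            if dominates(a, b):
                dominated[i].add(j)
            elif dominates(b, a):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Hypothetical candidates scored on (capital cost, unreliability, carbon cost), all minimised.
random.seed(1)
candidates = [(random.uniform(1, 10), random.uniform(0, 1), random.uniform(5, 50))
              for _ in range(20)]
print(non_dominated_sort(candidates)[0])   # indices of the current Pareto-optimal set
```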
Different models for each of the above-mentioned objectives were installed into a
multi-objective NSGA and applied to a hypothetical baseline sewer system
problem. The results show that the triple-objective optimisation approach supplies
the best solution to the problem. This approach is currently not applied in practice
due to its inherent complexities. However, in the future this approach may become
the norm. / AFRIKAANSE OPSOMMING (Afrikaans summary): Waterborne sewer system design and expansion objectives are often focused on reducing initial investment while increasing system capacity and meeting hydraulic requirements. Although these objectives make good sense in the short term, the solutions obtained will often not represent the optimal cost-effective solution over the complete useful life of the system. Operation and maintenance of a system can have a significant impact on the life-cycle cost, and the costing process must be better understood and the necessary criteria included in the design of a sewer system. Together with increasing public awareness of global warming and environmental degradation, environmental impact, or carbon cost, is an important factor in decision-making for municipal authorities. As a result, the diversity of the different objectives can further complicate the decisions facing municipal decision-makers.
Human settlement and migration are seen as the starting point of the expansion problem. An investigation into the current growth prediction models for municipal areas was launched to determine their impact on future planning and to assess the similarities between the available models. This information was used as a platform to develop a new method that incorporates indirect costs into the models for planning waterborne sewer systems.
The need was identified to balance competing objectives such as minimum initial cost, optimal reliability and minimum environmental impact. Different models were developed to define the above-mentioned criteria, striving to minimise initial investment, operating cost and environmental impact while subject to hydraulic constraints. A non-dominated sorting genetic algorithm (NSGA-II) was applied to certain waterborne sewer system scenarios, using the simulated evolutionary processes of genetic selection, crossover and mutation to balance a number of suitable solutions against all of the given objectives. Stakeholders can in future use the results derived in this thesis in decision-making processes to find the solution that best fits their concerns and priorities. Different models for each of the above-mentioned objectives were installed in the non-dominated sorting genetic algorithm and applied to a hypothetical baseline sewer system problem. The results show that the triple-objective optimisation approach delivers the best solution to the problem. This approach is currently not applied in practice because of its inherent complexities. Nevertheless, this approach may become the norm in the future.
|
20 |
Gerenciamento de variabilidade de linha de produtos de software com utilização de objetos adaptáveis e reflexão. / Variability management of software product line using adaptive object model and reflection. Burgareli, Luciana Akemi. 04 August 2009 (has links)
The software product line approach offers benefits to software development such as economy, quality and rapid development, since it is based on reuse of software architecture that is more planned and directed at a specific domain. In this context, variability management is a key and challenging issue, since this activity supports the identification, design and implementation of the new products derived from the software product line. The objective of this work is to define a variability management process for software product lines. This process, called GVLPS, identifies the variability, extracting the variants from use case diagrams and modeling them through features; specifies the identified variability; and uses, as support in the creation of variants, a variability mechanism based on adaptive object models and on reflection. The process is applied through a case study on the software of a hypothetical space vehicle, the Lançador de Satélites Brasileiro (Brazilian Satellites Launcher, LSB). / The Software Product Line approach offers benefits such as savings, large-scale productivity and increased product quality to software development because it is based on software architecture reuse which is more planned and aimed at a specific domain. The management of variability is a key and challenging issue, since this activity helps in identifying, designing and implementing new products derived from the software product line. This work defines a process for the variability management of software product lines, called GVLPS. After modeling the variability, extracting the variants from use case diagrams and features, the next step is to specify the variability that was identified. Finally, the proposed process uses a variability mechanism based on adaptive object models and reflection as support in the creation of variants. The proposed process uses as a case study the software system of a hypothetical space vehicle, the Brazilian Satellites Launcher (LSB).
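As a minimal sketch of the kind of variability mechanism described (an adaptive object model combined with reflection), the code below treats the configurable features of a product variant as data rather than hard-coded classes and resolves operations by name at runtime; every class, feature and property name here is a hypothetical illustration, not the GVLPS process or the LSB software.

```python
class FeatureType:
    """Type object in an adaptive object model: a variant feature described as data."""
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties              # e.g. {"rate_hz": [1, 10, 100]}

class ProductVariant:
    """A product whose structure comes from FeatureType metadata; operations are
    looked up reflectively instead of being fixed at compile time."""
    def __init__(self, feature_types):
        self.feature_types = {f.name: f for f in feature_types}
        self.values = {}

    def configure(self, feature, prop, value):
        allowed = self.feature_types[feature].properties[prop]
        if value not in allowed:
            raise ValueError(f"{value!r} is not a valid {prop} for {feature}")
        self.values[(feature, prop)] = value

    def invoke(self, operation, *args):
        # Reflection: resolve the operation by name at runtime.
        return getattr(self, operation)(*args)

    def describe(self):
        return ", ".join(f"{f}.{p}={v}" for (f, p), v in self.values.items())

# Hypothetical telemetry feature of a launcher variant (names are illustrative only).
telemetry = FeatureType("telemetry", {"rate_hz": [1, 10, 100]})
variant = ProductVariant([telemetry])
variant.configure("telemetry", "rate_hz", 10)
print(variant.invoke("describe"))                 # telemetry.rate_hz=10
```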
|