111

Formalisation, acquisition et mise en œuvre de connaissances pour l’intégration virtuelle de bases de données géographiques : les spécifications au cœur du processus d’intégration / Formalisation, acquisition and implementation of specifications knowledge for geographic databases integration

Abadie, Nathalie 20 November 2012 (has links)
Cette thèse traite de l'intégration de bases de données topographiques qui consiste à expliciter les relations de correspondance entre bases de données hétérogènes, de sorte à permettre leur utilisation conjointe. L'automatisation de ce processus d'intégration suppose celle de la détection des divers types d'hétérogénéité pouvant intervenir entre les bases de données topographiques à intégrer. Ceci suppose de disposer, pour chacune des bases à intégrer, de connaissances sur leurs contenus respectifs. Ainsi, l'objectif de cette thèse réside dans la formalisation, l'acquisition et l'exploitation des connaissances nécessaires pour la mise en œuvre d'un processus d'intégration virtuelle de bases de données géographiques vectorielles. Une première étape du processus d'intégration de bases de données topographiques consiste à apparier leurs schémas conceptuels. Pour ce faire, nous proposons de nous appuyer sur une source de connaissances particulière : les spécifications des bases de données topographiques. Celles-ci sont tout d'abord mises à profit pour la création d'une ontologie du domaine de la topographie. Cette ontologie est utilisée comme ontologie de support, dans le cadre d'une première approche d'appariement de schémas de bases de données topographiques, fondée sur des techniques d'appariement terminologiques et structurelles. Une seconde approche, inspirée des techniques d'appariement fondées sur la sémantique, met en œuvre cette ontologie pour la représentation des connaissances sur les règles de sélection et de représentation géométrique des entités géographiques issues des spécifications dans le langage OWL 2, et leur exploitation par un système de raisonnement / This PhD thesis deals with topographic databases integration. This process aims at facilitating the use of several heterogeneous databases by making the relationships between them explicit. To automatically achieve databases integration, several aspects of data heterogeneity must be detected and solved. Identifying heterogeneities between topographic databases implies comparing some knowledge about their respective contents. Therefore, we propose to formalise and acquire this knowledge and to use it for topographic databases integration. Our work focuses on the specific problem of topographic databases schema matching, as a first step in an integration application. To reach this goal, we propose to use a specific knowledge source, namely the databases specifications, which describe the data implementing rules. Firstly, they are used as the main resource for the knowledge acquisition process in an ontology learning application. As a first approach for schema matching, the domain ontology created from the texts of IGN's databases specifications is used as a background knowledge source in a schema matching application based on terminological and structural matching techniques. In a second approach, this ontology is used to support the representation, in the OWL 2 language, of topographic entities selection and geometry capture rules described in the databases specifications. This knowledge is then used by a reasoner in a semantic-based schema matching application
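
As a rough illustration of the terminological matching step described above, a label-similarity pass through an intermediate ontology can be sketched as follows; the feature-class names, the 0.75 threshold and the use of Python's standard difflib are assumptions for illustration only, not IGN's actual matcher.

```python
# Hedged sketch: terminological matching of schema element labels through an
# intermediate list of ontology concept labels. All names and the 0.75
# threshold are illustrative assumptions, not taken from the thesis.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalised edit-based similarity between two labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_via_ontology(schema_a, schema_b, ontology_concepts, threshold=0.75):
    """Anchor each schema label to its closest ontology concept, then pair
    labels from the two schemas that share the same anchor concept."""
    def anchor(label):
        best = max(ontology_concepts, key=lambda c: label_similarity(label, c))
        return best if label_similarity(label, best) >= threshold else None

    anchors_a = {lbl: anchor(lbl) for lbl in schema_a}
    anchors_b = {lbl: anchor(lbl) for lbl in schema_b}
    return [(la, lb)
            for la, ca in anchors_a.items() if ca is not None
            for lb, cb in anchors_b.items() if cb == ca]

# Toy usage with invented feature-class labels:
print(match_via_ontology(["BuildingArea", "RoadSegment"],
                         ["Building", "Road_Section"],
                         ["Building", "Road", "Watercourse"]))
# -> [('BuildingArea', 'Building')]
```
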
112

Framework formal para composição automática de serviços em sistemas de internet das coisas. / Formal framework for automatic service composition in internet of things system.

André Luís Meneses Silva 01 March 2018 (has links)
É cada vez mais notável o desenvolvimento da indústria micro-eletrônica. A criação de dispositivos eletrônicos menores, que apresentam maior autonomia de energia, aliados ao aumento do poder de processamento, armazenamento e comunicação sem fio de alta velocidade favoreceram o surgimento e disseminação de novas tecnologias e paradigmas, dentre elas a Internet das Coisas (IoT). Do ponto de vista tecnológico, IoT é uma rede de objetos físicos que possuem tecnologia embarcada de sensoriamento e atuação. Agências de consultoria empresarial, tais como a McKinsey & Company, afirmam que IoT apresenta valor de mercado bilionário e poderá ultrapassar a casa dos trilhões antes de 2020. Dessa forma, o mercado de IoT vem se apresentando como um dos mercados mais promissores para os próximos anos. Alguns dos problemas que podem postergar este crescimento são os problemas decorrentes da dificuldade de integração e escalabilidade das aplicações de IoT. Em IoT, problemas de interoperabilidade são corriqueiros, seja pela alta diversidade de dispositivos empregados, seja pela incompatibilidade entre fabricantes. Em relação a escalabilidade, sistemas de IoT possuem uma demanda natural por alta escala, visto que buscam atender demandas comuns a vários setores, seja na indústria, transporte, domótica, segurança pública, comércio, entre outros. Este trabalho apresenta uma solução para esses problemas através do SWoTPAD, um framework formal que auxilia o projetista no desenvolvimento de soluções para IoT. SWoTPAD oferece uma linguagem para especificar dispositivos e serviços, descrever o ambiente e realizar requisições. Adicionalmente, ele gera o módulo de descoberta, composição automática de serviços e execução. Aplicações SWoTPAD são facilmente integráveis, pois usam e estendem um mesmo conjunto de ontologias, o que garante a compatibilidade nos dados gerados e consumidos por essas aplicações. A escalabilidade advém da associação de anotações semânticas a cada um dos elementos que compõem a aplicação de IoT. Essas anotações permitem ao SWoTPAD descobrir, classificar, selecionar e compor automaticamente serviços do ambiente. Dessa forma, SWoTPAD pode procurar por soluções alternativas, quando o serviço original apto a atender uma determinada demanda se encontra sobrecarregado ou indisponível. Para validação do framework, foram adotados dois estudos de caso. O primeiro deles, o problema de implantação de serviços em um ambiente de nuvem, e o segundo, uma aplicação de segurança residencial. O estudo de caso demonstrou que é possível desenvolver aplicações completas de IoT no framework proposto. Adicionalmente, o mecanismo de composição automática gerado pelo framework para essas aplicações apresenta uma piora média de 45% de desempenho quando comparado à composição manual. / The development of the micro-electronics industry is becoming more and more remarkable. The creation of smaller electronic devices, with higher degree of autonomy, processing, storage, and wireless communication favor the emergence and dissemination of new technologies and paradigms, such as the Internet of Things (IoT ). From the technological point of view, IoT is a network of physical objects that have embedded technology of sensing and actuation. McKinsey & Company says the IoT market is already reaching billionaire numbers and may exceed the trillions by 2020. Thus, the IoT market is proving to be one of the most promising markets in the next years. 
Problems that can delay this growth come from the difficulty of integration and scalability of IoT applications. In IoT, interoperability problems are common, either because of the high diversity of devices used or because of incompatibility between manufacturers. Regarding scalability, IoT systems have a natural demand for high scale, since they seek to meet common demands in various sectors, whether in industry, transportation, home automation, public safety or commerce, among others. This work addresses these problems through SWoTPAD, a formal framework that assists the designer in developing solutions for IoT. SWoTPAD provides a language for specifying devices and services, describing the environment, and performing requests. Additionally, it generates the discovery, automatic service composition, and execution module. SWoTPAD applications are easily integrable, since they use and extend the same set of ontologies, which guarantees compatibility in the data generated and consumed by these applications. Scalability comes from the association of semantic annotations with each of the elements that compose the IoT application. These annotations allow SWoTPAD to discover, rank, select, and automatically compose services. In this way, SWoTPAD can search for alternative solutions when the service originally able to meet a particular demand is overloaded or unavailable. Two case studies were developed for validation of the framework: the first, the problem of deploying services in a cloud environment, and the second, a home security system. The case studies demonstrated that it is possible to develop complete IoT applications in the proposed framework. Additionally, the automatic service composition module generated by SWoTPAD for these applications shows an average performance penalty of 45% compared to manual composition.
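
A minimal sketch of the idea that semantic annotations drive discovery and fallback is given below; the Service fields, the tag vocabulary and the load threshold are invented for illustration and do not reflect SWoTPAD's actual API.

```python
# Hedged sketch: semantic tags attached to services drive discovery, ranking
# and fallback to an alternative provider when the preferred one is overloaded.
# The tag vocabulary, Service fields and 0.8 load threshold are assumptions.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    tags: set          # semantic annotations, e.g. {"sensing", "temperature"}
    load: float = 0.0  # fraction of capacity currently in use

def discover(registry, required_tags, max_load=0.8):
    """Return services whose annotations cover the request, least loaded first,
    skipping providers that are overloaded."""
    candidates = [s for s in registry
                  if required_tags <= s.tags and s.load < max_load]
    return sorted(candidates, key=lambda s: s.load)

registry = [
    Service("thermo-A", {"sensing", "temperature"}, load=0.95),  # overloaded
    Service("thermo-B", {"sensing", "temperature"}, load=0.20),
    Service("cam-1",    {"sensing", "video"},       load=0.10),
]

# The overloaded preferred provider is skipped and thermo-B is selected.
print([s.name for s in discover(registry, {"sensing", "temperature"})])
```
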
113

Projeto de sistemas de controle multivariáveis robustos com especificações no domínio do tempo. / Robust multivariable control systems design with time domain specifications.

Leonardi, Fabrizio 29 November 2002 (has links)
Este trabalho discute o projeto de compensadores multivariáveis robustos com especificações no domínio do tempo. Primeiramente faz-se a análise dos compensadores por observadores de estados como forma de atingir tais objetivos. Mostra-se que, em certas condições, essa estrutura equivale à dos observadores proporcionais-integrais e apresentam-se as condições de estabilidade nominal. Evidencia-se também que é possível tratar esse problema de controle como um problema de "model matching" ou como um problema de controle com dois graus de liberdade. Mostra-se também que o projeto do compensador é equivalente ao projeto de sistemas de controle por realimentação estática da saída. Essa equivalência implica que, embora os compensadores por observadores sejam cômodos à incorporação de especificações temporais, sua estrutura é limitada para garantir que especificações gerais sejam satisfeitas. Contorna-se então essa limitação estendendo-se o estudo ao caso dos compensadores sem essa restrição estrutural. O problema de "model matching" e o problema de controle 2D são considerados como forma indireta de incorporar-se as especificações temporais e condições de projeto são obtidas reduzindo-se os possíveis conservadorismos dos projetos usuais. Ainda neste sentido, formula-se o problema denominado de "model tracking", podendo ser capaz de reduzir ainda mais esses aspectos de conservadorismo. Nessa estrutura, o erro de rastreamento entre a saída do modelo de referência e a saída medida da planta é realimentado, fazendo com que o modelo de referência faça parte explícita do controlador, permitindo ajuste pós-projeto da mesma forma que faz com a estrutura 2D. Todas as condições de projeto são obtidas no domínio da freqüência, em termos de restrições da malha aberta e da malha fechada, permitindo que técnicas padrões de projeto multivariável como LQG/LTR e H∞ possam ser diretamente utilizadas na obtenção do compensador. Obtiveram-se assim metodologias de projeto capazes de garantir robustamente a rejeição do erro de rastreamento de um modelo de referência, além de permitir que especificações usuais como a rejeição dos erros de medida e rejeição dos distúrbios sejam incorporadas no mesmo procedimento de projeto. O controle de um tanque de mistura é usado como exemplo numérico para ilustrar as metodologias de projeto. / This work is concerned with the design of robust multivariable controllers with time domain specifications. As a first step, an analysis of controllers based on state observers has been done. It has been shown that this structure is equivalent to the proportional-integral observers. The conditions for nominal stability have then been presented. It has also been shown that this control problem can be seen as a model matching control design with two degrees of freedom. Furthermore, it has been shown that compensator design can be reduced to the traditional static output feedback problem. This fact implies that although compensators based on state observers seem to be suitable to deal with time domain specifications, their structure has limitations to guarantee that more general specifications be satisfied. In order to overcome such limitations both the model matching and the 2-D control structures have been considered to include time domain specifications in the design problem. Design conditions have been derived in order to reduce the eventual conservatism associated with the usual design procedure.
A control design problem named model tracking has also been formulated with the aim of reducing even more such conservatism. In this new structure the tracking error between the output of the reference model and the measured plant output is fed back. The reference model is then an explicit part of the controller. As a consequence, small adjustments can be made to it during system start-up in the same way as with the 2-D structure. All the design conditions have been written in the frequency domain as constraints on both open-loop and closed-loop transfer matrices. Standard multivariable design techniques like LQG/LTR and H∞ can then be used. The proposed design methodology can robustly guarantee tracking of the reference model output as well as both measurement error and disturbance rejection. The control of a mixture tank has been used as a numerical example to illustrate the design methodologies.
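
For orientation, a generic weighted model-matching objective of the kind referred to above can be written as follows; the notation is assumed here, not taken from the thesis: T_m is the reference model, T_yr(K) the closed-loop map from reference to measured output under controller K, and W a frequency-dependent weight encoding the time-domain specification.

```latex
% Generic weighted model-matching objective (notation assumed, not the
% thesis's exact formulation).
\min_{K\ \text{stabilising}} \; \bigl\| \, W \bigl( T_m - T_{yr}(K) \bigr) \bigr\|_{\infty}
```

In the model-tracking structure described above, the fed-back signal is the tracking error e = y_m - y, so T_m appears explicitly inside the controller and can be retuned after the design, just as with the 2-D structure.
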
114

Modelling and verification of web services protocols.

Ramsokul, Pemadeep Kumar, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
Among the plethora of solutions to the Business-to-Business interoperability problem, no other solution has obtained as much attention as Web Services Technology (WST), which allows entities to exchange data regardless of their underlying platforms. WST also allows services to be composed in order to provide high quality customer service over the web. In order to perform transactions across different service providers, standard protocols need to be supported by participating providers. Many useful protocols are coming into the market, but are often ambiguously specified by protocol designers and not fully verified. Furthermore, even if the specifications are reasonably clear, programmers often make subtle assumptions, possibly leading to errors that are hard to detect and locate, especially when the number of participating entities is dynamic. Consequently, these can lead to interoperability problems among implementations of the same protocol and high software maintenance costs. To address these issues, a hierarchical automata-based framework is proposed to model the functional aspects of Web Services (WS) protocols that also assists in verifying their correctness. The modelling formalism has a sound mathematical foundation and aims to reconcile desirable features while still maintaining syntactic and semantic simplicity. The properties to be verified are specified using a pattern system and/or 'observer' states, which have been adapted for WS protocols. In particular, always remaining in a positive observer state implies proper termination and partial functional correctness, while reachability of a negative observer state signifies deadlock and/or violation of a safety property. Verification itself is handled by automatic translation of the model and its properties into a model-checker's input code and interpretation of the output produced by the model-checker. A test-bed is proposed to check the conformance of a protocol implementation to its specification. It helps in locating errors in the implementations of WS protocols, especially where the number of participating entities is dynamic. Conformance checking is achieved by capturing sequences of exchanged messages of the actual implementations and checking them against the formal specification. Experience using the framework is also described and illustrated using two non-trivial WS protocols, namely WS-BusinessActivity and WS-AtomicTransaction.
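
The 'observer state' convention can be illustrated with a toy explicit-state reachability check; the two-party handshake model, the state names and the transition relation below are invented and are far simpler than the hierarchical automata used in the thesis.

```python
# Hedged sketch: breadth-first reachability over a tiny finite protocol model.
# Reaching a state marked "negative" signals a deadlock / safety violation,
# mirroring the observer-state convention described above. The two-party
# handshake model is invented for illustration.
from collections import deque

transitions = {
    ("idle", "idle"):         [("req_sent", "idle")],
    ("req_sent", "idle"):     [("req_sent", "req_rcvd")],
    ("req_sent", "req_rcvd"): [("done", "done"), ("req_sent", "aborted")],
    ("req_sent", "aborted"):  [],          # no outgoing moves: deadlock
    ("done", "done"):         [],
}
negative = {("req_sent", "aborted")}       # the 'observer' flags this state

def violating_trace(initial):
    """Return a path to a negative observer state, or None if unreachable."""
    queue, seen = deque([[initial]]), {initial}
    while queue:
        path = queue.popleft()
        if path[-1] in negative:
            return path
        for nxt in transitions.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(violating_trace(("idle", "idle")))   # prints the counterexample trace
```
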
115

應用同步選擇網路在派翠網路之分析 / Application of SNC (Synchronized Choice Net) to the analysis of Petri nets

巫亮宏 Unknown Date (has links)
Well-behaved SNC covers well-behaved and various classes of FC (free-choice) nets and is not included in AC (asymmetric choice). An SNC allows internal choices and concurrency and hence is powerful for modeling. Any SNC is bounded and its liveness conditions are simple. An integrated algorithm has been presented for verifying that a net is an SNC and for checking its liveness, with polynomial time complexity. Scholars often need to verify properties of nets appearing in the literature. Verification by a CAD tool is less desirable than verification by hand because of the extra effort needed to input the model and learn to use the tool. We propose to manually search for the maximum SNC component and then locate bad siphons in an incremental manner. We then apply Lautenbach's Marking Condition (MC) for liveness to verify the liveness property. There are two drawbacks associated with this MC. First, it guarantees only deadlock-freeness, not necessarily liveness. We have identified the structural cause for this and developed corresponding liveness conditions. Second, a net may be live even if the MC is not satisfied. We have identified the structural cause for this as well, and the MC has been adjusted accordingly based on our proposed new theory.
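
For readers unfamiliar with siphons, the basic structural check referred to above can be sketched as follows; the toy net is invented and the sketch covers only the siphon test, not the SNC search or the marking condition.

```python
# Hedged sketch: check whether a set of places S in a Petri net is a siphon,
# i.e. every transition that outputs into S also takes input from S
# (preset of S contained in postset of S). The small net below is invented.
def is_siphon(S, pre, post):
    """pre[t]  = set of input places of transition t,
       post[t] = set of output places of transition t."""
    preset_S  = {t for t in post if post[t] & S}   # transitions feeding S
    postset_S = {t for t in pre  if pre[t]  & S}   # transitions fed by S
    return preset_S <= postset_S

# Toy net: t1: p1 -> p2, t2: p2 -> p1, t3: p3 -> p2
pre  = {"t1": {"p1"}, "t2": {"p2"}, "t3": {"p3"}}
post = {"t1": {"p2"}, "t2": {"p1"}, "t3": {"p2"}}

print(is_siphon({"p1", "p2"}, pre, post))        # False: t3 feeds p2 from outside
print(is_siphon({"p1", "p2", "p3"}, pre, post))  # True
```
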
116

Application of the 13th edition AISC direct analysis method to heavy industry industrial structures

Modugno, Jennifer L. 02 July 2010 (has links)
The objective of this study was to understand and develop procedures for the use of the AISC 2005 Specification's Direct Analysis Method for the analysis and design of heavy-industry industrial structures, to lay out a systematic approach for the engineer to analyze and design using this method, and to determine whether there are any consequences for the practicing engineer in using this method. The relevant 13th Edition AISC stability analysis methods (Effective Length, First-Order, and Direct Analysis Methods) were researched in the 2005 Specification as well as in the available technical literature, and were then critically evaluated in terms of their applicability and limitations. This study will help serve as a guide to a systematic approach for the practicing engineer to apply this method to the analysis and design of such complex steel frame structures using the computer-aided analysis software GTSTRUDL. To accomplish this purpose, two analytical models were studied using the Direct Analysis Method. The first model was a simple industrial structure and the second was a more complex nuclear power plant boiler building.
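
Two ingredients of the Direct Analysis Method lend themselves to a short numerical sketch: the flexural stiffness reduction factor and the notional lateral load. The expressions below paraphrase Appendix 7 of the 2005 Specification from memory and the member forces are invented, so this is an illustration, not a design tool.

```python
# Hedged sketch of two Direct Analysis Method ingredients (2005 AISC
# Specification, Appendix 7): the stiffness reduction factor tau_b and the
# notional lateral load. Numerical values are invented; verify against the
# Specification before any real design use.
def tau_b(alpha_Pr, Py):
    """Stiffness reduction factor applied to EI (alpha = 1.0 LRFD, 1.6 ASD)."""
    ratio = alpha_Pr / Py
    return 1.0 if ratio <= 0.5 else 4.0 * ratio * (1.0 - ratio)

def notional_load(Yi, coefficient=0.002):
    """Notional lateral load N_i = 0.002 * Y_i (Y_i = gravity load at level i)."""
    return coefficient * Yi

# Example: column with alpha*Pr = 700 kips, Py = 1000 kips; storey gravity
# load of 2500 kips.
print(tau_b(700.0, 1000.0))    # 0.84 -> analysis uses 0.8 * tau_b * EI
print(notional_load(2500.0))   # 5.0 kips applied laterally at that level
```
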
117

Logistikos sistemų formalus imitacinis modeliavimas / Formal simulation modeling for logistic systems

Liorenšaitytė, Vilma 16 August 2007 (has links)
Šiuo metu egzistuoja įvairiausių sudėtingų sistemų imitacinių modelių kūrimo technologijų ir priemonių. Šiame darbe aiškinama, kaip imitacinis modelis gali būti integruojamas į verslo valdymo sistemą. Taip pat įvertinama, kad verslo valdymo sistemos gali būti paskirstytos atskirose kompanijose ar kompiuteriuose. Šiame darbe bus išanalizuota koncepcija, kaip apjungti atskirus imitacinius modelius. Išanalizuota modelių integravimo į informacinę sistemą technologija. Pateikiami gamybos ir paskirstymo imitacinių modelių matematiniai aprašai, naudojant DEVS (Discrete Event Simulation) ir PLA (Piece-Linear Aggregates) specifikacijų metodus. / At present, a large number of modeling and simulation techniques and tools have been developed to deal with complex business systems. In this paper, we concentrate on a scenario illustrating how simulation models can be integrated into a business management system. Different infrastructure forms are possible, because services may be implemented on a single machine or distributed throughout several companies’ networks. In this paper a concept is analyzed: how to combine separate simulation models. The technology for integrating simulation models into information systems is developed. The problem of transforming simulation models into program code is also solved. For model formalization we use the method of Piece-Linear Aggregates (PLA), which belongs to the class of timed automata models. The PLA method is close to the Discrete Event Simulation (DEVS) formalism, which is used to create general-purpose simulation models.
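
A minimal atomic model in the spirit of the DEVS/PLA formalisms mentioned above might be sketched as follows; the generator example, the class interface and the simulation loop are illustrative assumptions, not the formal specifications developed in the thesis.

```python
# Hedged sketch: a minimal DEVS-style atomic model (state, time advance,
# internal transition, output) driving a toy simulation loop. The generator
# example and method names are illustrative assumptions only.
class GeneratorModel:
    """Emits a 'part' event at a fixed period, counting how many were made."""
    def __init__(self, period=2.0):
        self.period = period
        self.count = 0                 # state

    def time_advance(self):            # ta(s): time until next internal event
        return self.period

    def output(self):                  # lambda(s): output at an internal event
        return ("part", self.count + 1)

    def internal_transition(self):     # delta_int(s): state after the event
        self.count += 1

def simulate(model, end_time):
    t, trace = 0.0, []
    while t + model.time_advance() <= end_time:
        t += model.time_advance()
        trace.append((t, model.output()))
        model.internal_transition()
    return trace

print(simulate(GeneratorModel(period=2.0), end_time=7.0))
# [(2.0, ('part', 1)), (4.0, ('part', 2)), (6.0, ('part', 3))]
```
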
118

Addressing aspect interactions in an industrial setting: experiences, problems and solutions

Zambrano Polo y La Borda, Arturo Federico January 2013 (has links)
Aspect oriented programming (AOP) introduces new and powerful modularization constructs. The aspect module is used to encapsulate crosscutting concerns, which otherwise would remain tangled and scattered. The idea of encapsulating crosscutting concerns rapidly expanded to earlier phases in the development cycle, including requirement analysis (aspect oriented requirement engineering, AORE) and design (aspect oriented modeling, AOM). The overall application of aspect orientation concepts is known as aspect oriented software development (AOSD). AOP is not yet a mainstream practice. In particular, AOSD is still in its early stages. This is reflected in the lack of reports of full development cycles using aspect oriented approaches, especially industrial case studies. Furthermore, the power of aspects comes at the price of new challenges, one of which is that systems built using aspects are more difficult to understand. The crosscutting nature of aspects allows them to alter the behavior of many other modules. As a result, aspects may interact in unintended and unanticipated ways. This problem is known as aspect interactions. In this work we deal with the aspect interaction problem in the context of an industrial domain: slot machines. We perform a complete development cycle of the slot machine software. This is, to the best of our knowledge, the first complete industrial case study of aspect orientation. Through this experience we discovered the limitations, with regard to aspect interactions, of some emblematic aspect oriented approaches for requirement engineering, design and implementation. The contribution of this work is threefold. Firstly, we contribute evaluations of, and extensions to, some AORE and AOM approaches, in order to provide explicit support for aspect interactions in the requirement analysis and design phases. We also evaluate the implementation of interactions using a static and a dynamic AOP language, and propose an AspectJ extension that copes with aspect interactions. Secondly, this work is the first report of a complete aspect oriented development cycle of an industrial case study. Thirdly, this work provides a complex case study that presents several business-logic crosscutting concerns, which in turn exhibit numerous aspect interactions, and that serves as a challenging test bed for upcoming AOSD approaches.
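
Outside AspectJ, the order-dependence at the heart of aspect interactions can be illustrated with a deliberately simple Python analogy, where two crosscutting concerns are implemented as decorators; the logging/authorization pair and the payout function are invented and are not taken from the slot-machine case study.

```python
# Hedged sketch: two crosscutting concerns implemented as decorators.
# Their composition order changes observable behaviour: a minimal analogue
# of the aspect-interaction problem discussed above. The concerns and the
# payout function are invented examples.
import functools

def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"LOG: calling {fn.__name__}{args}")
        return fn(*args, **kwargs)
    return wrapper

def authorized(fn):
    @functools.wraps(fn)
    def wrapper(user, *args, **kwargs):
        if user != "operator":
            raise PermissionError("unauthorized")
        return fn(user, *args, **kwargs)
    return wrapper

@logged            # applied last: logs even rejected calls
@authorized
def payout_a(user, amount):
    return f"paid {amount}"

@authorized        # applied last: rejected calls are never logged
@logged
def payout_b(user, amount):
    return f"paid {amount}"

for fn in (payout_a, payout_b):
    try:
        fn("guest", 10)            # only payout_a produces a log line
    except PermissionError:
        pass
```
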
119

Speciation of mercury by chromatography coupled with atomic spectrometry

Armstrong, Helen Elisabeth Louise January 2000 (has links)
A commercial GC-AFS instrument has been developed and optimised for the speciation of organomercury. This instrument couples a GC oven to a modified atomic fluorescence detector via a ceramic pyrolyser. Organomercury compounds in dichloromethane solvent were directly injected through a Programmable Temperature Vaporiser Injector onto a DB-1 Megabore column. Once separated, the compounds eluted from the column, were atomised in the pyrolyser and then detected by AFS. The direct injection technique, ceramic pyrolyser design and argon-purged detector have improved on previous instrument designs by enhancing and maintaining sensitivity. The instrumental limit of detection was determined to be 0.25 pg Hg absolute. Methods were developed for the extraction of methylmercury from a variety of marine samples. The techniques were validated using mussel homogenate and dogfish liver (IAEA 142, SRM 8044 and DOLT-2) certified reference materials. The laboratory took part in an interlaboratory comparison exercise, and a method was developed for the determination of methylmercury in Fucus sea plant (IAEA 140). A concentration of 0.63 ± 0.006 ng g-1 was reported. The material is now certified at 0.626 ± 0.139 ng g-1. Of all the participating laboratories, this was the closest result to the certified value. The instrument and methods were also applied to soil and sediment samples. Once again validation was performed with a CRM sediment, IAEA 356. Although this material has been reported to give positive artifact formation when using a steam distillation sample preparation procedure, good agreement and no artifacts were observed upon analysis. A further contaminated land sample, an uncontaminated soil and a sediment sample were also studied. For all the samples studied by GC-AFS, total mercury measurements were also made by CV-AFS following an appropriate digestion procedure. A gas chromatograph was also coupled with ICP-MS, and HPLC was coupled to CV-AFS, as comparative techniques. Both approaches were optimised and validated with CRMs. The GC-ICP-MS had the advantage of providing additional elemental information and confirmed the presence of methylmercury bromide in the final mussel homogenate extract. The HPLC approach was found to be much less sensitive than the GC techniques and also suffered from vapour generation interferences. The PTV injector was considered for large-volume injection and thermal desorption techniques. Injector breakdown problems were overcome by optimising the conditions and the solid phase adsorbent for cold splitless injection. A recovery of 70% was achieved for a 50 µl large-volume injection of methylmercury chloride in DCM. This technique indicated the possibility that LVI may in the future offer increased method sensitivity.
120

Reach Control Problems on Polytopes

Helwa, Mohamed 07 August 2013 (has links)
As control systems become more integrated with high-end engineering systems as well as consumer products, they are expected to achieve specifications that may include logic rules, safety constraints, startup procedures, and so forth. Control design for such complex specifications is a relatively unexplored research area. One possible design approach is based on partitioning the state space into polytopic regions, and then formulating a certain control problem on each polytope, with the intention that the set of all controllers so obtained would collectively achieve the specification. The control problem which must be solved for each polytope is called the reach control problem, and it has been identified as turnkey to the further development of this approach. The reach control problem (RCP) is to find a state feedback to make the closed-loop trajectories of an affine (or linear) control system defined on a polytope reach and exit a prescribed facet of the polytope in finite time. This dissertation studies a number of aspects of the reach control problem, and it uses tools from convex analysis, nonsmooth analysis, and computational geometry for this study. The dissertation has three main themes. First, we formulate and solve a variant of RCP in which trajectories exit the polytope in a monotonic sense; this provides a triangulation-independent solution of RCP. Second, we develop a Lyapunov-like theory for verifying if RCP is solved using a given candidate controller. This involves the introduction of the notion of generalized flow functions, a LaSalle Principle for RCP, and several converse theorems on existence of generalized flow functions. Third, we study the relationship between affine feedbacks and continuous state feedbacks for RCP on simplices. Although the two feedback classes have been shown to be equivalent under an assumption on the triangulation of the state space, we show by a counterexample that the equivalence is no longer true under arbitrary triangulations. Then we provide for single-input systems a constructive method for the synthesis of multi-affine feedbacks for RCP on simplices.
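
For orientation, the vertex conditions commonly used for affine feedback on a simplex can be stated as follows; the notation is assumed here, not copied from the dissertation. For a simplex S = conv{v_0, ..., v_n} with exit facet F_0, outward facet normals h_j, dynamics x' = Ax + Bu + a, and an affine feedback fixed by its values u_i at the vertices:

```latex
% Generic invariance-type vertex conditions for RCP on a simplex
% (notation assumed, not the dissertation's).
h_j \cdot \bigl( A v_i + B u_i + a \bigr) \le 0
\qquad \text{for every vertex } v_i \text{ and every non-exit facet } F_j \text{ containing } v_i,
```

together with a condition ruling out closed-loop equilibria in S, so that every trajectory leaves through F_0 in finite time.
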
