About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
251

Three-Level Multiple Imputation: A Fully Conditional Specification Approach

January 2015 (has links)
abstract: Currently, there is a clear gap in the missing data literature for three-level models. To date, the literature has only focused on the theoretical and algorithmic work required to implement three-level imputation using the joint model (JM) method of imputation, leaving virtually no work done on the fully conditional specification (FCS) method. Moreover, the literature lacks any methodological evaluation of three-level imputation. Thus, this thesis serves two purposes: (1) to develop an algorithm in order to implement FCS in the context of a three-level model and (2) to evaluate both imputation methods. The simulation investigated a random intercept model under both 20% and 40% missing data rates. The findings of this thesis suggest that the estimates for both JM and FCS were largely unbiased, gave good coverage, and produced similar results. The sole exception for both methods was the slope for the level-3 variable, which was modestly biased. The bias exhibited by the methods could be due to the small number of clusters used. This finding suggests that future research ought to investigate and establish clear recommendations for the number of clusters required by these imputation methods. To conclude, this thesis serves as a preliminary start in tackling a much larger issue and gap in the current missing data literature. / Dissertation/Thesis / Masters Thesis Psychology 2015
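The core of FCS is easy to illustrate: each incomplete variable is imputed in turn from a model conditioned on all the others, and the cycle repeats until the imputations stabilize. Below is a minimal sketch for a flat, single-level data matrix with a linear imputation model; a deliberate simplification, since the thesis' contribution is precisely extending this loop to three-level (clustered) data.

```python
# Hedged sketch of FCS (chained equations) for a flat data matrix.
# The three-level extension studied in the thesis is NOT implemented here.
import numpy as np
from sklearn.linear_model import LinearRegression

def fcs_impute(X, n_iter=10, rng=None):
    """Cycle through variables, regressing each on all the others and
    drawing imputations from the estimated predictive distribution."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):          # initialize with column means
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            model = LinearRegression().fit(others[obs], X[obs, j])
            resid_sd = np.std(X[obs, j] - model.predict(others[obs]))
            pred = model.predict(others[miss[:, j]])
            # Draw from the predictive distribution, not just the mean,
            # so the imputations carry the right amount of noise.
            X[miss[:, j], j] = pred + rng.normal(0.0, resid_sd, pred.shape)
    return X
```

Running this several times with different seeds yields the multiple imputations whose estimates are then pooled in the usual way.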
252

Méthode pour la spécification de responsabilité pour les logiciels : Modélisation, Traçabilité et Analyse de dysfonctionnements / Method for software liability specifications: Modelisation, Traceability and Incident Analysis

Sampaio Elesbao Mazza, Eduardo 26 June 2012 (has links)
Malgré les progrès importants effectués en matière de conception de logiciels et l'existence de méthodes de développement éprouvées, il faut reconnaître que les défaillances de systèmes causées par des logiciels restent fréquentes. Il arrive même que ces défaillances concernent des logiciels critiques et provoquent des dommages significatifs. Considérant l'importance des intérêts en jeu, et le fait que la garantie de logiciel "zéro défaut" est hors d'atteinte, il est donc important de pouvoir déterminer en cas de dommages causés par des logiciels les responsabilités des différentes parties. Pour établir ces responsabilités, un certain nombre de conditions doivent être réunies: (i) on doit pouvoir disposer d'éléments de preuve fiables, (ii) les comportements attendus des composants doivent avoir été définis préalablement et (iii) les parties doivent avoir précisé leurs intentions en matière de répartition des responsabilités. Dans cette thèse, nous apportons des éléments de réponse à ces questions en proposant un cadre formel pour spécifier et établir les responsabilités en cas de dysfonctionnement d'un logiciel. Ce cadre formel peut être utilisé par les parties dans la phase de rédaction du contrat et pour concevoir l'architecture de logs du système. Notre première contribution est une méthode permettant d'intégrer les définitions formelles de responsabilité et d'éléments de preuves dans le contrat juridique. Les éléments de preuves sont fournis par une architecture de logs dite "acceptable" qui dépend des types de griefs considérés par les parties. La seconde contribution importante est la définition d'une procédure incrémentale, qui est mise en œuvre dans l'outil LAPRO, pour l'analyse incrémentale de logs distribués. / Despite the effort made to define methods for the design of high quality software, experience shows that failures of IT systems due to software errors remain very common and one must admit that even critical systems are not immune from that type of errors. One of the reasons for this situation is that software requirements are generally hard to elicit precisely and it is often impossible to predict all the contexts in which software products will actually be used. Considering the interests at stake, it is therefore of prime importance to be able to establish liabilities when damages are caused by software errors. Essential requirements to define these liabilities are (1) the availability of reliable evidence, (2) a clear definition of the expected behaviors of the components of the system and (3) the agreement between the parties with respect to liabilities. In this thesis, we address these problems and propose a formal framework to precisely specify and establish liabilities in a software contract. This framework can be used to assist the parties both in the drafting phase of the contract and in the definition of the architecture to collect evidence. Our first contribution is a method for the integration of a formal definition of digital evidence and liabilities in a legal contract. Digital evidence is based on distributed execution logs produced by "acceptable log architectures". The notion of acceptability relies on a formal threat model based on the set of potential claims. Another main contribution is the definition of an incremental procedure, which is implemented in the LAPRO tool, for the analysis of distributed logs.
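The contractual role of execution logs can be illustrated with a toy check: given a log and the behavior the contract expects from each component, find the first component whose logged event deviates. This is a minimal sketch under a hypothetical trace format; the acceptable log architectures and the LAPRO tool described in the thesis handle distributed, incremental logs, which this ignores.

```python
# Hypothetical trace format; illustrates evidence-based liability analysis.
from dataclasses import dataclass

@dataclass(frozen=True)
class LogEntry:
    time: int
    component: str
    event: str

def first_deviation(log, expected):
    """Return (component, time) of the first logged event that deviates
    from the contractually expected behavior, or None if the log complies.
    `expected` maps a timestamp to the (component, event) the contract
    requires at that point."""
    for entry in sorted(log, key=lambda e: e.time):
        want = expected.get(entry.time)
        if want is not None and want != (entry.component, entry.event):
            return entry.component, entry.time
    return None

log = [LogEntry(1, "A", "send"), LogEntry(2, "B", "drop")]
expected = {1: ("A", "send"), 2: ("B", "ack")}
print(first_deviation(log, expected))  # ('B', 2): B breached its spec
```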
253

Dimensionamento de elementos estruturais em concreto leve / Design of structural elements in lightweight concrete

Ferreira, Cláudia Nunes Gomes 22 April 2015 (has links)
Não recebi financiamento / Lightweight concrete for structural purposes is a material with great potential for application in various areas of construction. Its main characteristic is its reduced density compared to conventional concrete, a considerable advantage given that one of the main shortcomings of conventional concrete is its high self-weight. The lower density reduces foundation loads and transportation costs, lowers cement consumption and, in some cases, cost, and can improve thermal and acoustic performance. However, despite the great application potential of lightweight concrete, there is a lack of studies on design criteria for elements made of this type of material. Lightweight concretes have physical and mechanical properties that differ from those of conventional concrete and therefore require special design criteria. As there is no specific Brazilian standard, design follows principles similar to those for conventional concrete. In this context, the present study evaluates design criteria for lightweight concrete elements in compression, shear and bending available in national and international standards and in the specific technical literature. The study covers both the design criteria for the Ultimate Limit State (ULS) and the checks for the Serviceability Limit State (SLS). Finally, design cases of beams, slabs and walls are compared across four different types of lightweight concrete, taking into account factors such as material consumption, cost, weight and cement consumption. / O concreto leve com finalidade estrutural é um material que apresenta grande potencial de aplicação nas mais diversas áreas da construção civil. Sua principal característica é a reduzida massa específica em comparação ao concreto convencional. Tal característica torna-se uma grande vantagem, uma vez que uma das principais deficiências do concreto convencional é seu elevado peso próprio. Com isso há redução nas cargas na fundação e no peso de transporte, menor consumo de cimento e custo em algumas situações, melhoria do desempenho térmico e acústico. Entretanto, apesar do grande potencial de aplicação dos concretos leves, observa-se uma carência de estudos no que tange a critérios de dimensionamento de elementos feitos com esse material. Os concretos leves apresentam diferentes propriedades físicas e mecânicas quando comparados aos concretos convencionais, diante disso, exigem critérios especiais de dimensionamento. Como não existe norma brasileira específica para este fim, o dimensionamento segue princípios similares ao dimensionamento de concreto convencional.
Nesse contexto, a presente pesquisa tem como objetivo fazer um levantamento sobre a utilização de concretos leves, suas propriedades e critérios para dimensionamento à compressão, cisalhamento e flexão disponíveis em normalização nacional e internacional e em literatura técnica específica. O estudo inclui tanto a análise dos critérios para dimensionamento no Estado Limite Último (ELU) quanto para verificação no Estado Limite de Serviço (ELS). Por fim, são comparados casos de dimensionamento de vigas, lajes e paredes em quatro diferentes tipos de concreto leve, levando em conta fatores como consumo de materiais, custo, peso e consumo de cimento.
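One concrete example of the density-dependent criteria such a survey compares: the elastic modulus of concrete falls sharply with density. The sketch below uses the ACI 318 estimate Ec = 0.043·wc^1.5·√fck (Ec in MPa, wc in kg/m³), chosen here purely for illustration; the thesis reviews several national and international criteria, and this is not necessarily the governing one.

```python
# Illustrative comparison of elastic modulus for normal-weight vs.
# lightweight concrete of the same strength class, using the ACI 318
# density-dependent estimate (an assumption for this sketch).
import math

def elastic_modulus_aci(wc_kg_m3: float, fck_mpa: float) -> float:
    """Ec = 0.043 * wc^1.5 * sqrt(fck), in MPa; the ACI expression is
    stated for densities roughly between 1440 and 2560 kg/m3."""
    return 0.043 * wc_kg_m3 ** 1.5 * math.sqrt(fck_mpa)

print(elastic_modulus_aci(2400, 30) / 1000)  # ~27.7 GPa, conventional
print(elastic_modulus_aci(1800, 30) / 1000)  # ~18.0 GPa, lightweight
```

The much lower stiffness is one reason serviceability (deflection) checks become more critical for lightweight-concrete members.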
254

Uma abordagem de exploração volumétrica baseada em agrupamento e redução dimensional para apoiar a definição de funções de transferência multidimensionais / A volume exploration approach based on clustering and dimensional reduction to support the definition of multidimensional transfer functions

Santos, Rafael Silva 27 March 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Funções de transferência (FTs) são uma parte crucial do processo de exploração volumétrica em Visualização Direta de Volumes. Nesse processo, FTs desempenham duas tarefas principais: a classificação de materiais e o mapeamento de informações presentes nos dados para propriedades visuais. A busca por uma solução que lide com ambas as tarefas envolve uma série de fatores que, em conjunto, constituem um dos maiores desafios da visualização volumétrica. Neste trabalho, propomos uma abordagem de exploração que tem por objetivo envolver todo o escopo e simplificar tanto a definição de FTs multidimensionais quanto a manipulação de datasets. A abordagem se organiza em três componentes: uma heurística baseada em entropia e correlação que guia a seleção de atributos para formação do espaço de entrada; um método de classificação que emprega a técnica de redução de dimensionalidade FastMap e a técnica de agrupamento DBSCAN para proporcionar a descoberta semiautomática de características volumétricas; e uma interface simplificada que, atrelada ao método de classificação, produz um gráfico de dispersão 2D de características para a exploração do volume. Inicialmente, o usuário deve analisar o ranking de atributos para formar um espaço multidimensional. Depois, deve escolher parâmetros para gerar o gráfico de características. Finalmente, deve navegar por esse gráfico a fim de identificar materiais ou estruturas relevantes. Nos experimentos realizados para avaliar a abordagem, os mecanismos disponibilizados permitiram encontrar e isolar de forma efetiva características inseridas em todos os datasets investigados. Aponta-se ainda como contribuição o baixo custo computacional: na prática, a complexidade de tempo do método de classificação é de O(n log n). O tempo de execução foi inferior a 11 segundos, mesmo quando utilizados datasets formados por cerca de 10 milhões de instâncias e com mais de 10 dimensões. / Transfer functions (TFs) are a crucial part of the volume exploration process in Direct Volume Rendering. In this process, TFs perform two main tasks: material classification and the mapping of data information to visual properties. The search for a solution that copes with both tasks involves a number of factors that, together, constitute one of the greatest challenges of volume visualization. In this work, we propose an exploration approach that aims to cover the entire scope and to simplify both the definition of multidimensional TFs and the manipulation of datasets.
The approach is organized into three components: a heuristic based on entropy and correlation that guides the selection of attributes to form the input space; a classification method that uses the dimensionality reduction technique FastMap and the clustering technique DBSCAN to provide semiautomatic discovery of volumetric features; and a simplified interface that, linked to the classification method, produces a 2D scatter plot of features for volume exploration. Initially, the user analyzes the ranking of attributes to form a multidimensional space. Next, the user chooses parameters to generate the scatter plot. Finally, the user navigates through this chart to identify relevant materials and structures. In the experiments performed to evaluate the approach, the available mechanisms allowed features embedded in all investigated datasets to be found and isolated effectively. A further contribution is the low computational cost: in practice, the time complexity of the classification method is O(n log n). The runtime was under 11 seconds even for datasets with about 10 million instances and more than 10 dimensions.
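The two building blocks of the classification method are compact enough to sketch: FastMap projects the high-dimensional attribute space down to 2D, and DBSCAN groups the projected points into candidate features. The sketch below uses synthetic blobs in place of real voxel attributes and implements the Euclidean special case of FastMap, where the projection reduces to successive orthogonal projections onto pivot axes.

```python
# FastMap-to-2D followed by DBSCAN clustering, on synthetic data.
import numpy as np
from sklearn.cluster import DBSCAN

def fastmap_2d(X, rng=None):
    """FastMap (Faloutsos & Lin) specialized to Euclidean distances:
    project onto the line through two far-apart pivots, then recurse
    in the orthogonal complement."""
    rng = rng if rng is not None else np.random.default_rng(0)
    R = np.asarray(X, dtype=float).copy()
    coords = np.zeros((len(R), 2))
    for k in range(2):
        a = rng.integers(len(R))  # heuristic pivots: a far-apart pair
        b = int(np.argmax(np.linalg.norm(R - R[a], axis=1)))
        a = int(np.argmax(np.linalg.norm(R - R[b], axis=1)))
        axis = R[b] - R[a]
        norm = np.linalg.norm(axis)
        if norm == 0:
            break
        axis /= norm
        x = (R - R[a]) @ axis          # coordinate along the pivot axis
        coords[:, k] = x
        R = R - np.outer(x, axis)      # residualize for the next axis
    return coords

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (200, 10)),   # two synthetic "materials"
               rng.normal(3, 0.3, (200, 10))])  # in a 10-D attribute space
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(fastmap_2d(X))
print(np.unique(labels))  # two cluster ids (-1 marks noise points)
```

Both steps run in roughly O(n log n) for fixed parameters, consistent with the low computational cost reported above.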
255

Čínský pacient v českém zdravotnictví / A Chinese patient in the Czech health care service

SCHOLZ, Pavel January 2008 (has links)
The title of this thesis is "A Chinese patient in the Czech health care service". China, with approximately 1.3 billion inhabitants, is among the largest countries in the world. Migration and tourism from the People's Republic of China and the Republic of China towards Central Europe grow year by year. As the Chinese minority in the Czech Republic grows, so does the amount of health care provided to its members. Different conceptions of health and disease markedly affect the approach to, and the understanding of, a patient/client from another culture, both from the professional's point of view and from that of the patient/client. Medical personnel should be aware of the main differences between the behaviour of minority-group members and that of the majority population, and not only in connection with health care. This work uses qualitative research methods. Its research goal was to identify the particularities of nursing care that members of the Chinese minority wish to have respected. Data were collected through semi-structured interviews with members of the Chinese minority group in the Czech Republic. The interview questions are based chiefly on the conceptual nursing models developed by M. Gordon and M. Leininger and on the Giger–Davidhizar model. Five research questions were specified to accomplish this goal. The results were treated by means of case studies and a slightly modified version of the general analysis approach of Ritchie and Spencer. The leading results concern nursing care particularities in blood sampling, in communicating the physician's diagnosis (especially towards the family), in apprehension about medicament addiction, and in pain and the response to it. A further result is a description of how members of the Chinese minority perceive Czech nursing and the nurse as a provider of professional nursing care. Hypotheses for further research were suggested on the basis of these results. This thesis and its results are intended for professional health care purposes and for the direct provision of nursing care in health service facilities in the Czech Republic. One of its outcomes is a proposal for a nursing care standard for members of the Chinese minority living in the Czech Republic, together with an information summary for nurses concerning the trans-cultural specificities of the Chinese patient/client in the Czech health services.
256

Definição e especificação formal do jogo diferencial Lobos e Cordeiro / Definition and formal specification of the differential game wolves and lamb

Sulzbach, Sirlei Ines January 2005 (has links)
No presente trabalho serão apresentadas questões usuais em jogos diferenciais, nos quais os jogadores envolvidos têm objetivos diferentes; ou seja, enquanto um dos jogadores tenta fugir, o outro tenta pegar. Além disso, será definido um modelo de especificação para o jogo diferencial lobos e cordeiro. As Redes de Petri foram escolhidas como forma de especificação para o jogo proposto. Assim, o objetivo será estabelecer estratégias eficientes para o jogo lobos e cordeiro para que se possa realizar um estudo da complexidade das questões apresentadas para este jogo, levando-se em consideração a especificação formal apresentada para tal jogo. / In this work, usual questions in differential games are presented, in which the players involved have different objectives: while one player tries "to run away", the other tries "to catch". Moreover, a specification for the differential game "wolves and lamb" is defined. Petri nets were chosen as the specification formalism for the proposed game. The objective is thus to establish efficient strategies for the wolves-and-lamb game so that the complexity of the questions posed can be studied, taking into consideration the formal specification presented for the game.
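The firing rule that gives the specification its semantics is compact enough to sketch: a transition is enabled when every input place holds a token, and firing consumes input tokens and produces output tokens. The toy net below is an invented two-move fragment, not the actual wolves-and-lamb net of the thesis.

```python
# Minimal place/transition Petri net with the standard firing rule.
from dataclasses import dataclass

@dataclass
class PetriNet:
    marking: dict      # place -> token count
    transitions: dict  # name -> (input places, output places)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"transition {name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet(
    marking={"wolf_at_a": 1, "lamb_at_b": 1},
    transitions={
        "wolf_moves_a_to_b": (["wolf_at_a"], ["wolf_at_b"]),
        "wolf_catches_lamb": (["wolf_at_b", "lamb_at_b"], ["captured"]),
    },
)
net.fire("wolf_moves_a_to_b")
net.fire("wolf_catches_lamb")
print(net.marking)  # {'wolf_at_a': 0, 'lamb_at_b': 0, 'wolf_at_b': 0, 'captured': 1}
```

Questions about the game, such as whether the wolves can always force a capture, then become reachability questions over the net's markings.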
257

Infrastructures virtuelles dynamiquement approvisionnées : spécification, allocation et exécution / Dynamically provisioned virtual infrastructures: specification, allocation and execution

Koslovski, Guilherme Piêgas 08 July 2011 (has links)
Les Infrastructures Virtuelles (VIs) ont émergé de la combinaison de l'approvisionnement des ressources informatiques et des réseaux virtuels dynamiques. Grâce à la virtualisation combinée des ressources de calcul et de réseau, le concept de VI transforme l'Internet en un réservoir mondial de ressources interconnectées. Avec l'innovation des VIs viennent aussi de nouveaux défis nécessitant le développement de modèles et technologies, pour assister la migration d'applications existantes d'infrastructures traditionnelles vers des VIs. L'abstraction complète des ressources physiques et l'indéterminisme dans les besoins des applications, en termes de ressources de calcul et de communication, ont fait de la composition de VI un problème difficile. En outre, l'allocation d'un ensemble des VIs sur un substrat distribué est un problème NP-difficile. En plus des objectifs traditionnels (par exemple un coût minimal, un revenu croissant), un algorithme d'allocation doit également satisfaire les attentes des utilisateurs (par exemple la qualité de l'allocation). Ce manuscrit contribue aux initiatives de recherche en cours avec les propositions suivantes : i) le Virtual Infrastructure Description Language (VXDL), qui permet aux utilisateurs et aux systèmes de décrire les composants pertinents d'une VI ; ii) un mécanisme qui traduit un flux de travail en une spécification de VI pour faciliter l'exécution d'applications distribuées ; iii) une solution pour réduire l'espace de recherche d'une façon automatique qui accélère le processus d'allocation ; et iv) un service offert par des fournisseurs d'infrastructure avec lequel un utilisateur peut déléguer les besoins en fiabilité. / Virtual Infrastructures (VIs) have emerged as result of the combined on-demand provisioning of IT resources and dynamic virtual networks. By combining IT and network virtualization, the VI concept is turning the Internet into a worldwide reservoir of interconnected resources, where computational, storage, and communication services are available on-demand for different users and applications. The innovation introduced by VIs posed a set of challenges requiring the development of new models, technologies, and procedures to assist the migration of existing applications from traditional infrastructures to VIs. The complete abstraction of physical resources, coupled with the indeterminism of required computing and communication resources to execute applications, turned the specification and composition of a VI into a challenging task. In addition, mapping a set of VIs onto a distributed substrate is an NP-hard problem. Besides considering common objectives of infrastructure providers (e.g., efficient usage of the physical substrate, cost minimization, increasing revenue), an allocation algorithm should consider the users' expectations (e.g., allocation quality, data location and mobility). This thesis contributes to related research initiatives by proposing the following: i) Virtual Infrastructure Description Language (VXDL), a descriptive and declarative language that allows users and systems to model the components of a VI; ii) a mechanism for composing VI specifications to execute distributed applications; iii) an approach to reduce the search space in an automatic way, accelerating the process of VI allocation; and iv) a mechanism for provisioning reliable VIs.
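The kind of information a VXDL description carries can be sketched as a small data model: virtual nodes and links annotated with the resource demands an allocation algorithm must satisfy. The field names below are illustrative stand-ins, not actual VXDL syntax.

```python
# Illustrative data model for a virtual infrastructure specification.
from dataclasses import dataclass

@dataclass(frozen=True)
class VNode:
    name: str
    cpu_cores: int
    ram_gb: float

@dataclass(frozen=True)
class VLink:
    src: str
    dst: str
    bandwidth_mbps: float
    max_latency_ms: float

vi = {
    "nodes": [VNode("frontend", 2, 4.0), VNode("worker", 8, 32.0)],
    "links": [VLink("frontend", "worker", 1000.0, 5.0)],
}

def total_cpu_demand(spec):
    """One of many aggregates an allocator checks against the substrate."""
    return sum(n.cpu_cores for n in spec["nodes"])

print(total_cpu_demand(vi))  # 10 cores requested in total
```

Mapping such a specification onto a physical substrate while meeting every per-node and per-link constraint is the NP-hard allocation problem the thesis addresses.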
258

HMBS: Um modelo baseado em Statecharts para a especificação formal de hiperdocumentos / HMBS: a statechart-based model for hyperdocuments formal specification

Marcelo Augusto Santos Turine 01 June 1998 (has links)
Um novo modelo para a especificação de hiperdocumentos denominado HMBS - Hyperdocument Model Based on Statecharts - é proposto. O HMBS adota como modelo formal subjacente a técnica Statecharts, cuja estrutura e semântica operacional são utilizadas para especificar a estrutura organizacional e a semântica de navegação de hiperdocumentos grandes e complexos. A definição do HMBS, bem como a semântica de navegação adotada, são apresentadas. Na definição apresenta-se como o modelo permite separar as informações referentes à estrutura organizacional e navegacional das representações físicas do hiperdocumento. Também são discutidas características do modelo que possibilitam ao autor analisar a estrutura do hiperdocumento, encorajando a especificação de hiperdocumentos estruturados. Para provar e validar a viabilidade prática do uso do HMBS num contexto real foi desenvolvido um ambiente de autoria e navegação de hiperdocumentos denominado HySCharts - Hyperdocument System based on Statecharts. Esse ambiente fornece facilidades de prototipação rápida e simulação interativa de hiperdocumentos. Para ilustrar como o modelo HMBS e o HySCharts podem ser utilizados no contexto de uma abordagem de projeto sistemática, é utilizada como estudo de caso a especificação de um hiperdocumento que apresenta o Parque Ecológico de São Carlos. / A new model for hyperdocument specification called HMBS - Hyperdocument Model Based on Statecharts - is proposed. HMBS uses the Statecharts formalism as its underlying model: its structure and operational semantics are used to specify the organizational structure and the browsing semantics of large and complex hyperdocuments. The definition of HMBS is presented and its browsing semantics is described. It is shown how the model allows the separation of information related to the organizational and navigational structure from the hyperdocument's physical representation. Model features that allow authors to analyze the hyperdocument structure, encouraging the specification of structured hyperdocuments, are also discussed. As a proof of concept, and to evaluate the feasibility of using HMBS in real-life applications, a system called HySCharts - Hyperdocument System based on Statecharts - was developed. HySCharts comprises an authoring and a browsing environment, supporting rapid prototyping and interactive simulation of hyperdocuments. A case study, the specification of a hyperdocument presenting the Ecological Park of São Carlos, illustrates the use of HMBS and the HySCharts environment within a systematic design approach.
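The browsing semantics HMBS inherits from Statecharts can be caricatured in a few lines: pages are states, anchors are events, and following a link fires a transition. A real HMBS specification also exploits the hierarchy and orthogonality of Statecharts, both omitted from this sketch; the page names are invented.

```python
# Flat (non-hierarchical) caricature of statechart-based browsing.
transitions = {
    ("home", "click_intro"): "introduction",
    ("introduction", "click_map"): "park_map",
    ("park_map", "click_back"): "home",
}

def browse(state, event):
    """Fire the transition enabled for `event`, or stay put if none is."""
    return transitions.get((state, event), state)

state = "home"
for event in ("click_intro", "click_map", "click_back"):
    state = browse(state, event)
    print(state)  # introduction, then park_map, then home
```

Because the navigation structure is an explicit state machine, an author can analyze it (e.g., find unreachable pages) before committing to a physical representation.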
259

Checagem de arquiteturas de controle de veículos submarinos: uma abordagem baseada em especificações formais. / Model checking underwater vehicle control architectures: a formal specification based approach.

Fábio Henrique de Assis 08 July 2009 (has links)
O desenvolvimento de arquiteturas de controle para veículos submarinos é uma tarefa complexa. Estas podem ser caracterizadas pelos seguintes atributos: tempo real, multitarefa, concorrência e comunicações distribuídas em rede. Neste cenário, existem múltiplos processos sendo executados em paralelo, possivelmente distribuídos, e se comunicando uns com os outros. Neste contexto, o modelo comportamental pode levar a fenômenos como deadlocks, livelocks, disputa por recursos, entre outros. A fim de se tentar minimizar os efeitos de tais dificuldades, neste trabalho será apresentado um método para checagem de modelos de arquiteturas de controle de veículos submarinos baseado em Especificações Formais. A linguagem de especificação formal escolhida foi CSP-OZ, uma combinação de CSP e Object-Z. Object-Z é uma extensão orientada a objetos da linguagem Z para a especificação de predicados, tipicamente pré e pós condições, além de invariantes de dados. CSP (Communicating Sequential Processes) é uma álgebra de processos desenvolvida para descrever modelos comportamentais de processos paralelos. A checagem de modelos especificados formalmente consiste na análise das especificações para verificar se um sistema possui certas propriedades através de uma busca exaustiva em todos os estados em que este pode entrar durante sua execução. Neste contexto, é possível checar corretude, livelocks, deadlocks, etc. Além disso, pode-se relacionar duas especificações diferentes a fim de se checar relações de refinamento. Para as especificações, o verificador de modelos FDR da Formal Systems Ltd. será utilizado. A implementação é desenvolvida utilizando um perfil da linguagem Ada denominado RavenSPARK, uma junção do perfil Ravenscar (desenvolvido na Universidade de York) com a linguagem SPARK (um subconjunto da linguagem Ada desenvolvido pela Praxis, Inc.). O Ravenscar é um perfil para desenvolvimento de processos, e portanto os processos de CSP, incluindo seus canais de comunicação, podem ser facilmente criados. Por outro lado, SPARK é uma linguagem onde podem ser inseridos predicados para os dados (originalmente especificados em Object-Z) utilizando anotações da própria linguagem. A linguagem SPARK possui uma ferramenta, o Examinador, que pode checar código com base nestas anotações. Em resumo, o método proposto permite tanto a checagem de modelos em CSP quanto a checagem no nível de código. Para isso, as especificações em Object-Z devem inicialmente ser convertidas em um código na linguagem SPARK juntamente com suas respectivas anotações, para que então a checagem do modelo possa ser realizada no código. O desenvolvimento de uma arquitetura de controle reativa para um ROV denominado VSOR (Veículo Submarino Operado Remotamente) é utilizado como exemplo de uso do método proposto. Toda a arquitetura de controle é codificada utilizando a linguagem Ada com o perfil RavenSPARK e embarcada em um computador do tipo PC104 com o sistema operacional de tempo real VxWorks, da Wind River, Inc. / The development of control architectures for underwater vehicles is a complex task. These control architectures might be characterised by the following attributes: real-time, multitasking, concurrency, and distributed over communication networks. In this scenario, we have multiple processes running in parallel, possibly distributed, and engaging in communication between each other. In this context, the behavioural model might lead to phenomena like deadlocks, livelocks, race conditions, among others.
In order to minimize the effects of such difficulties, this work presents a method for model checking control architectures of underwater vehicles based on formal specifications. The chosen formal specification language is CSP-OZ, a combination of CSP and Object-Z. Object-Z is an object-oriented extension of Z for the specification of predicates, typically data preconditions, postconditions and invariants. CSP (Communicating Sequential Processes) is a process algebra developed to describe behavioural models of parallel processes. Model checking of formal specifications consists of reasoning about the specifications to verify whether a system has certain properties, by means of an exhaustive search of all states the system can enter during its execution. In this context, it is possible to check correctness, liveness, deadlocks, etc. One can also relate two different specifications in order to check a refinement ordering. For the specifications, the model checker FDR of Formal Systems Ltd. is used. The implementation is developed using an Ada language profile called RavenSPARK, a union of the Ravenscar profile (developed at the University of York) and the SPARK language (a subset of Ada developed by Praxis, Inc.). Ravenscar is a profile for developing processes, so CSP processes, including their message channels, can be easily deployed. SPARK, in turn, is a language in which data predicates (originally specified in Object-Z) can be inserted as language annotations. The SPARK toolset includes the Examiner, which checks code against these annotations. In summary, the proposed method supports both the model checking of CSP processes and checking at the code level: Object-Z specifications are first converted into SPARK code with the proper annotations, and the checking is then carried out on the code. The development of a real-time reactive control architecture for an ROV named VSOR (Veículo Submarino Operado Remotamente) is used as an example of the proposed method. The whole control architecture is coded in Ada with the RavenSPARK profile and deployed on a PC104 computer running the VxWorks real-time operating system of Wind River, Inc.
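At its core, the exhaustive search a model checker like FDR performs can be illustrated on a toy transition system: enumerate every reachable state and flag those with no successors that are not normal termination. The two processes below acquire two locks in opposite order, a deliberately classic deadlock example, not the thesis' CSP-OZ vehicle model, and written in plain Python rather than CSPm.

```python
# Toy exhaustive state-space search for deadlocks (illustrative stand-in
# for what FDR does over CSP specifications).
from collections import deque

def successors(state):
    """State = (pc of P, pc of Q, frozenset of free locks).
    P acquires l1 then l2; Q acquires l2 then l1; each then releases both."""
    p, q, free = state
    moves = []
    for who, pc, order in (("p", p, ("l1", "l2")), ("q", q, ("l2", "l1"))):
        if pc < 2 and order[pc] in free:       # acquire the next lock
            pc2, free2 = pc + 1, free - {order[pc]}
        elif pc == 2:                          # release both locks
            pc2, free2 = 3, free | set(order)
        else:
            continue                           # blocked or finished
        moves.append((pc2, q, frozenset(free2)) if who == "p"
                     else (p, pc2, frozenset(free2)))
    return moves

def find_deadlocks(init):
    """Breadth-first exploration of every reachable state."""
    seen, frontier, deadlocks = {init}, deque([init]), []
    while frontier:
        s = frontier.popleft()
        nxt = successors(s)
        if not nxt and (s[0], s[1]) != (3, 3):  # (3, 3) = both terminated
            deadlocks.append(s)
        for t in nxt:
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return deadlocks

init = (0, 0, frozenset({"l1", "l2"}))
print(find_deadlocks(init))  # [(1, 1, frozenset())]: each holds one lock
```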
260

Sintonia ótima de controladores. / Optimal controller tuning.

Rodrigo Juliani Correa de Godoy 14 August 2012 (has links)
Estuda-se o problema de sintonia de controladores, objetivando-se a formulação do problema de sintonia ótima de controladores. Busca-se uma formulação que seja geral, ou seja, válida para qualquer estrutura de controlador e qualquer conjunto de especificações. São abordados dois temas principais: especificação de controladores e sintonia ótima de controladores. São compiladas as principais formas de especificação e avaliação de controladores e é feita a formulação do problema de sintonia de controladores como um problema padrão de otimização. A abordagem proposta e os conceitos apresentados são então aplicados em um conjunto de exemplos. / The problem of controller tuning is studied, aiming at the formulation of the optimal controller tuning problem. A general formulation is sought, valid for any controller structure and any set of specifications. Two main themes are addressed: controller specification and optimal controller tuning. The main forms of controller specification and assessment are compiled, and the optimal controller tuning problem is formulated as a standard optimization problem. The proposed approach and the presented concepts are then applied to a set of examples.
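The central move of casting tuning as a standard optimization problem can be made concrete with a toy instance: choose controller parameters that minimize a performance index computed from a simulated response. The sketch below tunes a PI controller on an illustrative first-order plant by minimizing the integral of absolute error (IAE) of the step response; the plant, the controller structure and the index are stand-in choices, not the thesis' examples.

```python
# Controller tuning as a standard optimization problem (illustrative).
from scipy.optimize import minimize

def iae(params, dt=0.01, t_end=10.0):
    """Integral of absolute error of the closed-loop unit-step response,
    simulated by forward Euler for the toy plant dy/dt = -y + u."""
    kp, ki = params
    y, e_int, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y              # unit step reference
        e_int += e * dt
        u = kp * e + ki * e_int  # PI control law
        y += dt * (-y + u)       # plant update
        cost += abs(e) * dt
    return cost

# Any specification expressible as a cost can be tuned the same way.
res = minimize(iae, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x, res.fun)            # tuned (kp, ki) and the achieved IAE
```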
