361

Seismic Analysis of Steel Wind Turbine Towers in the Canadian Environment

Nuta, Elena 06 April 2010 (has links)
The seismic response of steel monopole wind turbine towers is investigated and their risk is assessed in the Canadian seismic environment. This topic is of concern because wind turbines are increasingly being installed in seismic areas while design codes do not clearly address this aspect of design. An implicit finite element model of a 1.65 MW tower was developed and validated. Incremental dynamic analysis was carried out to evaluate its behaviour under seismic excitation, to define several damage states, and to develop a framework for determining its probability of damage. This framework was implemented for two Canadian locations, where the risk was found to be low at the seismic hazard level prescribed for buildings. However, the design of wind turbine towers is subject to change, as is the design spectrum. A methodology is therefore outlined for thoroughly investigating the probability of reaching predetermined damage states under seismic loading in future assessments.
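The damage-probability framework described in this abstract is in the spirit of fragility analysis built on incremental dynamic analysis (IDA) results. The thesis's actual equations are not given here, so the sketch below is only a generic illustration: hypothetical IDA capacities and a lognormal fragility fit, a common but assumed choice.

```python
import numpy as np
from scipy import stats

# Hypothetical IDA output: for each ground-motion record, the spectral
# acceleration (in g) at which a chosen damage state was first reached.
sa_at_damage = np.array([0.42, 0.55, 0.37, 0.61, 0.48, 0.52, 0.45, 0.58])

# Fit a lognormal fragility curve (method of moments on log capacities).
ln_sa = np.log(sa_at_damage)
theta = np.exp(ln_sa.mean())   # median capacity
beta = ln_sa.std(ddof=1)       # logarithmic standard deviation

def p_damage(sa):
    """Probability of reaching the damage state at intensity sa (g)."""
    return stats.norm.cdf(np.log(sa / theta) / beta)

print(p_damage(0.5))  # probability of damage at Sa = 0.5 g
```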
362

A review of the economic consequences of a policy of universal leucodepletion as compared to existing practices

Clare, Virginia Mary January 2009 (has links)
Leucodepletion, the removal of leucocytes from blood products, improves the safety of blood transfusion by reducing adverse events associated with the incidental, non-therapeutic transfusion of leucocytes. Leucodepletion has been shown to have clinical benefit for immunosuppressed patients who require transfusion, and selective leucodepletion of blood products by bedside filtration for these patients has been widely practised. This study investigated the economic consequences in Queensland of moving from a policy of selective leucodepletion to one of universal leucodepletion, that is, providing all transfused patients with blood products leucodepleted during the manufacturing process. A cost-effectiveness analysis was conducted using an analytic decision model, yielding an incremental cost-effectiveness ratio (ICER) of $16.3M per life year gained. Sensitivity analysis found this result to be robust to uncertainty in the model parameters. This result argues against moving to a policy of universal leucodepletion. However, during the course of the study the policy decision for universal leucodepletion was made and implemented in Queensland in October 2008. The study concludes that cost-effectiveness is not an influential factor in policy decisions regarding quality and safety initiatives in the Australian blood sector.
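For readers unfamiliar with the metric, the ICER quoted above is simply the difference in cost between the two policies divided by the difference in health outcome. The figures in the snippet below are invented purely to show the arithmetic; they are not taken from the study.

```python
# Hypothetical totals, for illustration only (not from the study).
cost_universal, cost_selective = 50_000_000.0, 20_000_000.0  # total cost ($)
ly_universal, ly_selective = 1_001.84, 1_000.0               # life years gained

icer = (cost_universal - cost_selective) / (ly_universal - ly_selective)
print(f"ICER = ${icer:,.0f} per life year gained")  # roughly $16.3M per LY
```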
363

Development of a computational tool for cost estimation in the presence of censoring using the Inverse Probability Weighting method

Sientchkovski, Paula Marques January 2016 (has links)
Introduction: Cost data needed for cost-effectiveness analysis (CEA) are often obtained from primary longitudinal studies. In this context, censoring is common: cost data are unavailable from a certain point onwards because individuals leave the study before it is finished. The idea of Inverse Probability Weighting (IPW) has been extensively studied in the literature on this problem, but the availability of computational tools for this context is unknown. Objective: To develop computational tools in Excel and R for estimating costs with the IPW method, as proposed by Bang and Tsiatis (2000), in order to deal with the problem of censoring in cost data. Methods: By creating spreadsheets in Excel and programs in R, and using hypothetical databases covering a variety of situations, we seek to give the researcher a better understanding of how to use this estimator and how to interpret its results. Results: By making the application of the IPW method intuitive, the developed tools proved to facilitate cost estimation in the presence of censoring, making it possible to calculate the ICER from the resulting cost estimates. Conclusion: The developed tools give the researcher a practical understanding of the method as well as the ability to apply it at larger scale, and can be considered a satisfactory alternative in the face of the difficulties posed by censoring in CEA.
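The estimator referred to above is the simple weighted estimator of Bang and Tsiatis: each uncensored subject's total cost is weighted by the inverse of the Kaplan-Meier estimate of the probability of remaining uncensored up to that subject's follow-up time. The sketch below is a minimal, assumed implementation in Python (the thesis's tools are in Excel and R); ties and zero-weight edge cases are handled only crudely.

```python
import numpy as np

def ipw_mean_cost(cost, follow_up, uncensored):
    """Simple inverse-probability-weighted estimator of mean cost
    under right censoring (Bang & Tsiatis, 2000), a minimal sketch."""
    cost = np.asarray(cost, dtype=float)
    t = np.asarray(follow_up, dtype=float)
    delta = np.asarray(uncensored, dtype=int)
    n = len(t)

    def K(u):
        # Kaplan-Meier estimate of P(censoring time > u), treating
        # censoring (delta == 0) as the "event" of interest.
        surv = 1.0
        for tj in np.unique(t[t <= u]):
            at_risk = np.sum(t >= tj)
            censored_here = np.sum((t == tj) & (delta == 0))
            surv *= 1.0 - censored_here / at_risk
        return surv

    weights = np.array([K(ti) for ti in t])
    contrib = np.where(delta == 1, cost / np.maximum(weights, 1e-12), 0.0)
    return contrib.sum() / n

# Example: subject 2 left the study (was censored) before follow-up ended.
print(ipw_mean_cost(cost=[10_000, 4_000, 7_500],
                    follow_up=[24, 9, 30],
                    uncensored=[1, 0, 1]))
```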
364

HIGMN : an IGMN-based hierarchical architecture and its applications for robotic tasks

Pereira, Renato de Pontes January 2013 (has links)
The recent field of Deep Learning has introduced to Machine Learning new methods based on distributed, abstract representations of the training data organized in hierarchical structures. The hierarchical organization of layers allows these methods to store distributed information about sensory signals and to create concepts at different levels of abstraction to represent the input data. This work investigates the impact of a hierarchical structure inspired by ideas from Deep Learning and based on the Incremental Gaussian Mixture Network (IGMN), a probabilistic neural network with online, incremental learning that is especially suitable for robotic tasks. As a result, a hierarchical architecture called the Hierarchical Incremental Gaussian Mixture Network (HIGMN) was developed, which combines two levels of IGMNs. The HIGMN first-level layers are able to learn concepts from data of different domains, which are then related in the second-level layer. The proposed model was compared with the IGMN on robotic tasks, in particular the task of learning and reproducing a wall-following behavior based on a Learning from Demonstration (LfD) approach. The experiments showed how the HIGMN can perform three different tasks in parallel (concept learning, behavior segmentation, and learning and reproducing behaviors) and demonstrated its ability to learn a wall-following behavior and to perform it in unknown environments with new sensory information. The HIGMN could reproduce the wall-following behavior after a single, simple, and short demonstration of the behavior. Moreover, it acquired knowledge of different types: information about the environment, the robot kinematics, and the target behavior.
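The two-level composition described above can be pictured with a toy stand-in: batch Gaussian mixtures replace the online, incremental IGMN, and random arrays replace the robot's sensory streams, so the sketch only illustrates how first-level, per-domain models feed a second-level model that relates the domains; it is not the HIGMN itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
laser = rng.normal(size=(500, 8))   # hypothetical laser readings
odom = rng.normal(size=(500, 3))    # hypothetical odometry readings

# First level: one mixture per domain learns domain-specific "concepts".
level1 = {"laser": GaussianMixture(n_components=4, random_state=0).fit(laser),
          "odom": GaussianMixture(n_components=4, random_state=0).fit(odom)}

# Second level: relate the domains by modelling the concatenated posterior
# responsibilities produced by the first-level models.
features = np.hstack([level1["laser"].predict_proba(laser),
                      level1["odom"].predict_proba(odom)])
level2 = GaussianMixture(n_components=3, random_state=0).fit(features)

print(level2.predict(features[:5]))  # second-level, cross-domain concepts
```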
365

Assessment of cardiorespiratory fitness using the Incremental Shuttle Walking Test in asymptomatic male children and adolescents

Gomes, Andreza Letícia 10 November 2017 (has links)
The Incremental Shuttle Walking Test (ISWT) has been used to evaluate the cardiorespiratory fitness (CRF) of children and adolescents with different health conditions. It is unknown whether the cardiorespiratory response of asymptomatic adolescents to the ISWT is similar to that induced by the cardiopulmonary exercise test (CEPT). The aims of this study were: (1) to evaluate whether the ISWT is a maximal test for asymptomatic male adolescents; (2) to propose a mathematical equation to predict peak oxygen consumption (VO2 peak); and (3) to test the reliability of this equation in this population. Methods: In the first stage of the study, 26 participants performed the ISWT and the CEPT. In the second stage, 33 participants performed the ISWT twice. In both stages, peak VO2, maximal heart rate (HR max) and peak respiratory exchange ratio (peak R) were evaluated. In the third stage, the peak VO2 values predicted by the proposed equation were compared with those obtained directly in the ISWT. Results: There was no significant difference in peak VO2 (p > 0.05), peak R (p > 0.05) or maximal HR obtained in the ISWT and the CEPT. The values found for peak VO2 (r = 0.44, p = 0.002) and peak R (r = -0.53, p < 0.01) in the ISWT and the CEPT showed a moderate, significant correlation, as well as agreement in the Bland-Altman analysis. Gait speed was the variable that explained 48% (R2 = 0.48, p = 0.000) of the variation in peak VO2. The equation predicted VO2 = 5.490 + (17.093 x gait speed) was created. The results obtained with the equation were compared with the values obtained by the gas analyzer, and no significant difference was found between them (p > 0.05). Conclusion: The ISWT produced a cardiorespiratory response comparable to the CEPT in asymptomatic male adolescents, and the proposed equation is a viable estimate of peak VO2 in this population. / Dissertation (Professional Master's degree), Graduate Program in Rehabilitation and Functional Performance, Universidade Federal dos Vales do Jequitinhonha e Mucuri, 2017.
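The prediction equation reported above can be applied directly, as in the short sketch below. The units are an assumption here (peak VO2 in mL/kg/min and gait speed in km/h), since the abstract does not state them, and the example input is hypothetical.

```python
def predict_vo2_peak(gait_speed):
    """Peak VO2 predicted from the ISWT regression reported in the abstract.
    Units assumed: VO2 in mL/kg/min, gait speed in km/h."""
    return 5.490 + 17.093 * gait_speed

print(predict_vo2_peak(2.5))  # hypothetical peak ISWT gait speed -> ~48.2
```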
366

Algorithmes pour la dynamique moléculaire restreinte de manière adaptative / Algorithms for adaptively restrained molecular dynamics

Singh, Krishna Kant 08 November 2017 (has links)
Molecular Dynamics (MD) is often used to simulate large and complex systems. However, simulating such systems over experimentally relevant time scales is still computationally challenging; the most expensive step in MD is the computation of forces between particles. Adaptively Restrained Molecular Dynamics (ARMD) is a recently introduced particle simulation method that switches positional degrees of freedom on and off during the simulation. Since force computations depend mainly on inter-atomic distances, force computations between particles whose positional degrees of freedom are off (restrained particles) can be avoided, while forces involving active particles (particles with positional degrees of freedom on) are computed. To take advantage of the adaptivity of ARMD, we designed novel algorithms to compute and update forces efficiently. We designed algorithms not only to construct neighbor lists but also to update them incrementally. Additionally, we designed a single-pass incremental force-update algorithm that is almost twice as fast as the previously designed two-pass incremental algorithm.
The proposed algorithms are implemented and validated in the LAMMPS MD simulator, but they can be applied to other MD simulators. We assessed our algorithms on diverse benchmarks in both the microcanonical (NVE) and canonical (NVT) ensembles. In the NVE ensemble, ARMD allows users to trade precision for speed, while in the NVT ensemble it makes it possible to compute statistical averages faster. Finally, we introduce parallel algorithms for single-pass incremental force computation that take advantage of adaptive restraints using the Message Passing Interface (MPI) standard.
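The core saving in ARMD is that a pair interaction only needs recomputing when at least one of its two particles is active. The sketch below illustrates that filtering on a naive O(N²) pair loop with a toy Lennard-Jones force; neighbor lists and the incremental updates that are the thesis's actual contribution are omitted, and forces between fully restrained pairs are simply left at zero here instead of being reused from the previous step.

```python
import numpy as np

def lj_force(rij, eps=1.0, sigma=1.0):
    """Lennard-Jones force on particle i due to particle j (toy example)."""
    r2 = np.dot(rij, rij)
    sr6 = (sigma**2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * rij

def armd_forces(pos, active, cutoff=2.5):
    """Recompute forces only for pairs with at least one active particle."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            if not (active[i] or active[j]):
                continue                      # both restrained: skip the pair
            rij = pos[i] - pos[j]
            if np.dot(rij, rij) < cutoff**2:
                f = lj_force(rij)
                forces[i] += f
                forces[j] -= f
    return forces

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(100, 3))
active = rng.random(100) < 0.3        # roughly 30% of particles active
print(armd_forces(pos, active).shape)
```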
367

The effect of default risk on trading book capital requirements for public equities: an IRC application for the Brazilian market

Rodrigues, Matheus Pimentel 17 August 2015 (has links)
This is one of the first works to address the problem of evaluating the effect of default for capital allocation in the trading book in the case of public equities, and more specifically in the Brazilian market. The problem emerged from recent crises, which increased regulators' need to impose additional capital allocation on banking operations. For this reason, the Basel Committee recently introduced a new risk measure, the Incremental Risk Charge (IRC). This measure is essentially a one-year value-at-risk with a 99.9% confidence level, and it is intended to capture the effects of credit rating migrations and defaults on instruments in the trading book. In this dissertation the IRC is adapted to the equity case by disregarding the effect of credit rating migrations. For that reason, the most suitable model for evaluating credit risk was Moody's KMV, which is based on the Merton model. This model was used to calculate the probability of default (PD) for the issuers used as test cases. After calculating the issuers' PDs, returns were simulated with Monte Carlo after a principal component analysis (PCA); this made it possible to obtain correlated returns for simulating portfolio losses. Since the instruments are stocks, the loss given default (LGD) was held constant, with its value based on the Basel documentation. The results for the adapted IRC were compared with a 252-day VaR at a 99% confidence level. This comparison showed the relevance of the IRC measure, which is on the same scale as a 252-day VaR. Additionally, the adapted IRC was able to anticipate default events. All results were based on portfolios composed of stocks from the Ibovespa index.
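The Merton-style default probability that underlies the KMV approach mentioned above can be sketched as follows. The inputs are invented, and the mapping from distance to default to PD uses the plain normal CDF, whereas the commercial KMV model uses an empirical distance-to-default mapping, so this is an illustration rather than the dissertation's calibrated model.

```python
import numpy as np
from scipy.stats import norm

def merton_pd(asset_value, asset_vol, debt, horizon=1.0, drift=0.05):
    """One-period default probability in a simple Merton model: default
    occurs if the asset value falls below the debt level at the horizon."""
    d2 = (np.log(asset_value / debt)
          + (drift - 0.5 * asset_vol**2) * horizon) / (asset_vol * np.sqrt(horizon))
    return norm.cdf(-d2)  # d2 is the distance to default; PD = N(-d2)

# Hypothetical issuer: assets worth 120 against 100 of debt, 30% asset volatility.
print(merton_pd(asset_value=120.0, asset_vol=0.30, debt=100.0))
```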
368

Towards the elicitation of hidden domain factors from clients and users during the design of software systems

Friendrich, Wernher Rudolph 11 1900 (has links)
This dissertation focuses on how requirements for a new software system are elicited and on the pitfalls that could cause a software development project to fail if those requirements are not captured correctly. A number of existing requirements elicitation methods are covered, namely JAD (Joint Application Design), RAD (Rapid Application Development), a formal specification language (Z), natural language, UML (Unified Modelling Language) and prototyping. These techniques are then placed within existing software development life cycle models, such as the Waterfall, Rapid Prototyping, Build and Fix, Spiral, Incremental and V-Process models. Differences between the domain (knowledge and experience of an environment) of a client and that of the software development team are highlighted diagrammatically using Venn diagrams. The dissertation also refers to a case study highlighting a number of problems during the requirements elicitation process, among them the problem of tacit knowledge not surfacing during elicitation. Two new requirements elicitation methodologies are proposed: the SRE (Solitary Requirements Elicitation) and the DDI (Developer Domain Interaction) methodology. These two methods could be more time consuming than existing requirements elicitation methods, but the benefits could outweigh the cost of their implementation, since the proposed methods have the potential to further facilitate the successful completion of a software development project. The new methods are then applied to the aforementioned case study, showing how the hidden domain of the client may become more visible because the software development team has gained a deeper understanding of the client's working environment and therefore of how the final product needs to function in order to fulfil the stated requirements. The dissertation closes with a summary, conclusions and suggestions for future work in this area. / Computer Science / M. Sc. (Computer Science)
