  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

IMPROVING COVERAGE OF CIRCUITS BY USING DIFFERENT FAULT MODELS COMPLEMENTING EACH OTHER

Oindree Basu (11016006) 23 July 2021 (has links)
Various fault models, such as stuck-at, transition and bridging faults, have been developed to better model possible defects in manufactured chips. However, as device sizes have shrunk over the years, the probability of systematic defects occurring in chips has increased. To predict the sites where such defects occur, Design-for-Manufacturability (DFM) guidelines have been established, violations of which are modelled as DFM faults. Nonetheless, some faults corresponding to DFM as well as other fault models are undetectable, i.e., tests cannot be generated to detect their presence. Undetectable faults usually tend to cluster together, leaving large areas of a circuit uncovered. As a result, defects occurring there, even if detectable, go undetected because there are no tests covering those areas. This is therefore an important issue to address, and to resolve it we utilize gate-exhaustive faults to cover these areas. Gate-exhaustive faults provide exhaustive coverage to gates and can detect defects that are not modelled by any other fault model. However, the total number of gate-exhaustive faults in a circuit can be quite large and may require many test patterns for detection. Therefore, we use procedures to select only those faults which can provide additional coverage to the sites of undetectable faults. We define parameters that determine whether a gate associated with one or more undetectable faults is covered or not, depending on the number of detectable and useful gate-exhaustive faults present around the gate. Bridging faults are also added for extra coverage. These procedures are applied to benchmark circuits to obtain the experimental results. The results show that the sizes of clusters of undetectable faults are reduced upon the addition of gate-exhaustive faults to the fault set, both for single-cycle and for two-cycle faults.
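As a rough sketch of the selection idea described above, the hypothetical snippet below marks a gate that has undetectable faults as covered only if enough detectable gate-exhaustive faults exist at that gate, and collects those faults for the test set. The threshold and data layout are assumptions for illustration, not the parameters defined in the thesis.

```python
# Hypothetical sketch: deciding whether gates with undetectable faults are
# "covered" by detectable gate-exhaustive (GE) faults at that gate, and
# selecting only those GE faults that add coverage. The threshold and the
# data structures are illustrative assumptions, not the thesis' parameters.

def select_ge_faults(gates, ge_faults, min_ge_per_gate=2):
    """gates: dict gate_id -> set of undetectable fault ids at that gate.
    ge_faults: dict ge_fault_id -> (gate_id, detectable: bool).
    Returns (selected GE faults, gates still uncovered)."""
    selected = set()
    uncovered = set()
    for gate_id, undetectable in gates.items():
        if not undetectable:               # no undetectable faults here, nothing to cover
            continue
        # detectable GE faults located at this gate
        local = [f for f, (g, det) in ge_faults.items() if g == gate_id and det]
        if len(local) >= min_ge_per_gate:  # gate counts as covered
            selected.update(local)
        else:
            uncovered.add(gate_id)
    return selected, uncovered

# toy usage
gates = {"g1": {"sa0_g1"}, "g2": set()}
ge = {"ge1": ("g1", True), "ge2": ("g1", True), "ge3": ("g2", True)}
print(select_ge_faults(gates, ge))         # ge1/ge2 selected, no uncovered gates
```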
32

Research and development of triploid brown trout Salmo trutta (Linnaeus, 1758) for use in aquaculture and fisheries management

Preston, Andrew C. January 2014 (has links)
Freshwater sport fisheries contribute substantially to the economies of England and Wales. However, many trout fisheries rely partly or entirely on stocking farmed trout to maintain catches within freshwater fisheries. Farmed trout often differ genetically from their wild counterparts, and wild trout could be at risk of reduced fitness due to interbreeding or competition with farmed fish. Therefore, to protect remaining wild brown trout (Salmo trutta L.) populations and as a conservation measure, stocking policy has changed. Legislation introduced by the Environment Agency (EA, 2009) will now only give consent to stocking of rivers and some stillwaters with sterile, all-female triploid brown trout. There are reliable triploidy induction protocols for some other commercially important salmonid species; however, there is limited knowledge of triploid induction in brown trout. Previously, triploid brown trout have been produced by heat shocks, although reduced survivals were obtained, suggesting that an optimised heat shock had not been identified, or that heat shock gives less consistent success than hydrostatic pressure (HP) shock, which is now recognised as a more reliable technique to produce triploid fish. Thus, the overall aim of this thesis was to conduct novel research to support the aquaculture and freshwater fisheries sector within the United Kingdom by optimising the production and furthering the knowledge of triploid brown trout. Firstly, this PhD project investigated an optimised triploidy induction protocol using hydrostatic pressure (Chapter 2). To produce an optimised hydrostatic pressure induction protocol, three experiments were conducted to (1) determine the optimal timing of HP shock application post-fertilisation, (2) define the optimal pressure intensity and duration of the HP shock, and (3) study the effect of temperature (6-12 °C) on triploid yields. Results indicated high survival to the yolk sac absorption stage (69.2 - 93.6 %) and high triploid yields (82.5 - 100 %) across the range of treatments applied. Furthermore, no significant differences in triploid rates were shown when shock timings and durations were adjusted according to the temperature used. In all treatments, deformity prevalence remained low during incubation (<1.8 %) up to yolk sac absorption (~550 degree days post hatch). Overall, this study indicated that the optimised pressure shock for the induction of triploidy in brown trout, delivering high survival and a 100 % triploid rate (a prerequisite to brown trout restocking), is a shock with a magnitude of 689 Bar applied at 300 Centigrade Temperature Minutes (CTM) for a duration of 50 CTM. Regarding the assessment of triploid status, the second experimental chapter tested the accuracy and efficacy of three ploidy verification techniques (Chapter 3). The techniques studied were erythrocyte nuclei measurements (image analysis), flow cytometry (Becton Dickinson FACSCalibur flow cytometer) and DNA profiling (22 polymorphic microsatellite loci), used to assess the effectiveness of triploidy induction in brown trout. Results indicated the validity of using erythrocyte indices (major nuclear axis measurements), flow cytometric DNA distributions expressed as relative fluorescence (FL2-Area), and polymorphic microsatellite loci (Ssa410UOS, SSa197, Str2 and SsaD48) for assessing ploidy status in brown trout. The accuracy of each technique was assessed; all three correctly identified ploidy level, indicating a 100 % triploid rate for that commercial batch of brown trout.
These techniques may be utilised within aquaculture and freshwater fisheries to ensure compliance with the legislation introduced by the EA. As a result of this legislation, triploid brown trout will freely interact with diploid trout; there is therefore a need to assess feeding response and behavioural differences between diploid and triploid trout prior to release. Accordingly, in the third experimental chapter (Chapter 4), diploid and triploid brown trout were acclimated for six weeks to two feeding regimes (floating/sinking pellet). Thereafter, aggression and surface feeding response were compared between all-diploid, mixed diploid/triploid and all-triploid pairs of brown trout in a semi-natural stream (flume). In each pairwise matching, fish of similar size were placed in allopatry and rank was determined by the total number of aggressive interactions initiated. Dominant individuals initiated more aggression than subordinates, spent more time defending a territory and positioned themselves closer to the food source (Gammarus pulex), whereas subordinates occupied the peripheries. When ploidy was considered, diploid trout were more aggressive than triploids and dominated their siblings when placed in pairwise matchings. However, surface feeding did not differ statistically between ploidies, irrespective of feeding regime. Triploids adopted a sneak feeding strategy, while diploids expended more time defending a territory. In addition, an assessment was made of whether triploids exhibited a social dominance similar to that of diploids when placed in allopatry. Although aggression was lower in triploid pairs than in the diploid/triploid pairs, a dominance hierarchy was observed between individuals of the same ploidy. Dominant triploid fish were more aggressive and consumed more feed items than subordinate individuals. Subordinate fish displayed a darker colour index than dominant fish, suggesting increased stress levels. However, dominant triploid fish seemed more tolerant of subordinate individuals and did not display the same degree of invasive aggression as observed in the diploid/diploid or diploid/triploid matchings. These novel findings suggest that sterile triploid brown trout feed similarly to, but are less aggressive than, diploid trout and may therefore provide freshwater fishery managers with an alternative to stocking diploid brown trout. In addition to research at the applied level in triploid brown trout, this thesis also examined the fundamental physiological effects of ploidy in response to temperature regime. Triploid salmonids have been shown to differ in their tolerance to environmental temperature. The fourth experimental chapter (Chapter 5) therefore investigated whether temperature tolerance affected feed intake and exercise recovery. Diploid and triploid brown trout were exposed to an incremental temperature challenge (10 and 19 °C) and subsequent survival and feed intake rates were monitored. Triploids took longer to acclimate to the increase in temperature; however, feed intake was significantly greater in triploids at the higher temperature. In a follow-on study, post-exercise recovery processes were investigated under each temperature regime (10 and 19 °C). Exhaustion was induced by 10 minutes of forced swimming, with subsequent haematological responses measured to determine the magnitude of recovery from exercise.
Plasma parameters (alkaline phosphatase, aspartate aminotransferase, calcium, cholesterol, triglycerides, phosphorus, total protein, lactate, glucose, pH, magnesium, osmolality, potassium, sodium, chloride, lactate dehydrogenase) were measured for each ploidy. Basal samples were taken prior to exercise and then at 1, 4 and 24 hours post-exercise. Contrary to previous studies, there was no triploid mortality during or after the exercise at either temperature. Although both diploid and triploid brown trout responded metabolically to the exercise, the magnitude of the response was affected by ploidy and temperature. In particular, triploids had higher levels of plasma lactate and osmolality, and lower pH, than diploids at 1 hour post exhaustive exercise. By 4 hours post-exercise, the plasma parameters analysed had returned to near-basal levels. It was evident that the magnitude of the physiological disturbance post-exercise was greater in triploids than in diploids at 19 °C. This may have implications where catch and release is practiced on freshwater fisheries. Overall, this work aimed to develop and/or refine current industry induction and assessment protocols while better understanding the behaviour and physiology of diploid and triploid brown trout. The knowledge gained from this work provides aquaculture and freshwater fisheries with an optimised protocol that delivers 100 % triploid rates and profitability without compromising farmed trout welfare, thus ultimately leading towards a more sustainable brown trout industry within the United Kingdom.
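For readers unfamiliar with the Centigrade Temperature Minutes convention quoted in Chapter 2, the short sketch below converts the stated shock timing (300 CTM) and duration (50 CTM) into minutes, assuming one CTM equals one degree Celsius of water temperature held for one minute; this definition is an assumption of this sketch, not a statement from the thesis.

```python
# Sketch of the CTM convention, assuming CTM = water temperature (deg C) x minutes.
def ctm_to_minutes(ctm: float, temp_c: float) -> float:
    return ctm / temp_c

for temp in (6, 10, 12):
    start = ctm_to_minutes(300, temp)   # shock applied at 300 CTM post-fertilisation
    duration = ctm_to_minutes(50, temp) # shock held for 50 CTM
    print(f"{temp} degC: start at {start:.0f} min post-fertilisation, hold for {duration:.1f} min")
```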
33

Inferência de redes gênicas por agrupamento, busca exaustiva e análise de predição intrinsecamente multivariada. / Gene networks inference by clustering, exhaustive search and intrinsically multivariate prediction analysis.

Jacomini, Ricardo de Souza 09 June 2017 (has links)
Gene network (GN) inference from temporal gene expression data is a crucial and challenging problem in Systems Biology. Expression datasets usually consist of dozens of temporal samples, while networks consist of thousands of genes, rendering many inference methods unfeasible in practice. To improve the scalability of GN inference methods, this thesis proposes a framework called GeNICE, based on the probabilistic gene network model. The main novelty is the introduction of a procedure that clusters genes with related expression profiles, providing an approximate solution with reduced computational complexity. The defined clusters are used to reduce dimensionality, allowing a more efficient exhaustive search for the best subsets of predictor genes for each target gene according to multivariate criterion functions. GeNICE greatly reduces the search space because predictor candidates are restricted to one representative gene per cluster. Finally, a multivariate analysis is performed for each defined predictor subset to retrieve minimal subsets and simplify the inferred gene network. 
In experiments with synthetic datasets, GeNICE achieved a substantial reduction in computational time compared to a previous solution without the clustering step, while preserving gene expression prediction accuracy even when the number of clusters is small (about fifty) and the number of genes is large (on the order of thousands). For a real Plasmodium falciparum microarray dataset, the prediction accuracy achieved by GeNICE was roughly 97% on average. The networks inferred for the glycolysis and apicoplast target genes reflect topological properties of "small-world" and "scale-free" complex network models: most connections are established between genes of the same functional module, with a few connections bridging modules (small-world networks), and the degree distribution of the connections between genes follows a power law, in which most genes have few connections and a few genes (hubs) have a large number of connections (scale-free networks), as expected.
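A minimal, hypothetical sketch of the search strategy described above: restrict predictor candidates to one representative gene per cluster and exhaustively score predictor subsets for a target gene. The clustering placeholder and the least-squares criterion below are stand-ins for GeNICE's actual clustering step and multivariate criterion functions.

```python
# Illustrative sketch (not the GeNICE implementation): exhaustively score
# predictor subsets for one target gene, restricting candidates to one
# representative gene per cluster. The "clustering" just picks evenly spaced
# genes as placeholder representatives, and a linear least-squares error
# stands in for the thesis' multivariate criterion functions.
from itertools import combinations
import numpy as np

def cluster_representatives(expr, n_clusters):
    """Placeholder: a real implementation would cluster genes by expression
    profile similarity and return one representative index per cluster."""
    return list(np.linspace(0, expr.shape[0] - 1, n_clusters, dtype=int))

def best_predictors(expr, target, reps, max_size=3):
    y = expr[target, 1:]                            # target expression at time t+1
    best, best_err = None, np.inf
    for k in range(1, max_size + 1):
        for subset in combinations([r for r in reps if r != target], k):
            X = expr[list(subset), :-1].T           # predictor expression at time t
            A = np.c_[X, np.ones(len(y))]           # add intercept column
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.mean((A @ coef - y) ** 2)      # prediction error as criterion
            if err < best_err:
                best, best_err = subset, err
    return best, best_err

expr = np.random.default_rng(0).normal(size=(40, 12))   # toy data: 40 genes, 12 time samples
reps = cluster_representatives(expr, n_clusters=5)
print(best_predictors(expr, target=0, reps=reps))
```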
35

Emprego de diferentes algoritmos de árvores de decisão na classificação da atividade celular in vitro para tratamentos de superfícies de titânio / Use of different decision tree algorithms to classify in vitro cellular activity for titanium surface treatments

Fernandes, Fabiano Rodrigues January 2017 (has links)
Interest in the analysis and characterization of biomedical materials is growing, owing to the need to select the appropriate material for each use. Depending on the conditions to which the material will be subjected, characterization may involve the evaluation of mechanical, electrical, electronic, magnetic, optical, chemical and thermal properties, as well as bioactivity and immunogenicity. The literature reports the use of decision trees, with the SimpleCart (CART) and J48 algorithms, to classify a dataset generated from the results of scientific articles. That study was carried out to identify surface characteristics that optimize cellular activity; to this end, the effect of titanium surface treatments on in vitro cellular activity (MC3TE-E1 cells) was evaluated from published articles, and the SimpleCart algorithm was found to give a better response than J48. In this context, the present work aims to apply the CHAID (Chi-squared Automatic Interaction Detection) and Exhaustive CHAID algorithms to the same study and to compare the results with those obtained with SimpleCart. Validation of the results showed that Exhaustive CHAID obtained the best result in comparison with CHAID, with an accuracy estimate of 75.9% against 58.6% and a standard error of 7.9% against 9.1%, respectively, whereas SimpleCart (CART), already tested in the literature, yielded an accuracy estimate of 34.5% with a standard error of 8.8%. 
Regarding execution times measured over 22,000 records, Exhaustive CHAID presented the best times, with a gain of 0.02 seconds over CHAID and 14.45 seconds over SimpleCart (CART).
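The comparison above reports accuracy estimates with standard errors; the hedged sketch below shows how such figures can be obtained by cross-validation using scikit-learn's CART-like DecisionTreeClassifier. CHAID and Exhaustive CHAID are not available in scikit-learn, and the synthetic dataset stands in for the titanium surface-treatment data, so this only approximates the setup used in the thesis.

```python
# Hedged sketch: cross-validated accuracy estimate and its standard error for a
# CART-like decision tree. The dataset is synthetic, not the thesis data, and
# CHAID / Exhaustive CHAID are not part of scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=4, random_state=1)
scores = cross_val_score(DecisionTreeClassifier(random_state=1), X, y, cv=10)
accuracy = scores.mean()
std_error = scores.std(ddof=1) / np.sqrt(len(scores))   # standard error of the fold accuracies
print(f"accuracy estimate: {accuracy:.1%} +/- {std_error:.1%} (SE)")
```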
38

Algoritmo para restauração de sistemas de distribuição baseado em busca por alimentadores adjacentes / Algorithm for distribution system restoration based on a search over adjacent feeders

Costa, Bernardo Jacques Delgado 23 February 2017 (has links)
This work proposes a method capable of promoting, in real time and automatically, the restoration of a radial distribution system after a contingency at any point, using only data easily obtained from the available protection and control equipment, such as voltage, current, power factor and equipment status. Due to the autonomous operation of the method, only remotely controlled equipment installed in the network is considered. Many methods have been proposed for the distribution system restoration problem. However, most rely on network data that cannot be easily obtained, such as line impedances and load demands. In addition, they usually produce restoration plans that must be reviewed by the operator of the Distribution Operations Center (DOC), who then carries out the actions needed to restore the system. The proposed method can be summarized as follows. After the protection device acts against a fault, the algorithm starts and identifies the affected area, isolating the faulty sections. 
Once this is done, the possibilities for system restoration are analysed, based on selecting sections belonging to feeders adjacent to the affected area that meet certain restrictions. After the selection, all possible combinations of these sections are tested and the best solution is executed, chosen either by the largest number of loads restored or by the largest number of priority loads restored.
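A simplified, hypothetical sketch of the combination-testing step described above: every feasible combination of de-energised sections is evaluated against the spare capacity of an adjacent feeder, and the best one is chosen either by total load restored or by priority load restored. The section data and the single capacity restriction are illustrative assumptions, not the thesis' actual constraints.

```python
# Illustrative sketch of the selection step: test every combination of candidate
# sections and keep the one that restores the most load (or most priority load)
# without exceeding the adjacent feeder's spare capacity. Data and the capacity
# check are hypothetical simplifications.
from itertools import combinations

sections = [                                   # candidate sections in the de-energised area
    {"id": "S1", "load_kw": 120, "priority": 1},
    {"id": "S2", "load_kw": 200, "priority": 0},
    {"id": "S3", "load_kw": 80,  "priority": 2},
]
spare_capacity_kw = 320                        # headroom of the adjacent feeder

def best_combination(sections, capacity, prefer_priority=False):
    best, best_key = None, (-1, -1)
    for k in range(1, len(sections) + 1):
        for combo in combinations(sections, k):
            load = sum(s["load_kw"] for s in combo)
            if load > capacity:                # restriction: do not overload the feeder
                continue
            prio = sum(s["priority"] for s in combo)
            key = (prio, load) if prefer_priority else (load, prio)
            if key > best_key:
                best, best_key = combo, key
    return best

print([s["id"] for s in best_combination(sections, spare_capacity_kw)])                        # most load
print([s["id"] for s in best_combination(sections, spare_capacity_kw, prefer_priority=True)])  # most priority load
```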
39

Sustainable Public Procurement: Development and analysis of tools for construction works

Verzat, Benoit January 2008 (has links)
Embedded in the economic competition, public procurement has a major role to play as a driving force for the promotion of a globally positive competition that prizes the best sustainable products and services, rather than only the most economically efficient ones. Responsible for a huge part of the human pressure on natural resources, and having a large share of public funding, the built environment sector provides an important venue for the use of sustainable public procurement as a tool to enhance the sustainability of societies. Selecting the best sustainable offer is a challenging task requiring environmental and social assessments that can only be based on complex life cycle thinking analysis. Through the development of the "Exhaustive Sustainable public procurement clauses Manual", this paper analyses public procurement issues and their potential solutions, with a focus on the environmental performance in buildings procurement.
40

Automatic Development of Pharmacokinetic Structural Models

Hamdan, Alzahra January 2022 (has links)
Introduction: The current development strategy for population pharmacokinetic models is a complex and iterative process that is performed manually by modellers. Such a strategy is time-demanding, subjective, and dependent on the modellers' experience. This thesis presents a novel model building tool that automates the development of pharmacokinetic (PK) structural models.

Methods: Modelsearch is a tool in the Pharmpy library, an open-source package for pharmacometrics modelling, that searches for the best structural model using an exhaustive stepwise search algorithm. Given a dataset, a starting model and a pre-specified model search space of structural model features, the tool creates and fits a series of candidate models that are then ranked based on a selection criterion, leading to the selection of the best model. The Modelsearch tool was used to develop structural models for 10 clinical PK datasets (5 orally and 5 i.v. administered drugs). A starting model for each dataset was generated using the assemblerr package in R, which included first-order (FO) absorption without any absorption delay for oral drugs, one-compartment disposition, FO elimination, a proportional residual error model, and inter-individual variability (IIV) on the starting model parameters with a correlation between clearance (CL) and central volume of distribution (VC). The model search space included aspects of absorption and absorption delay (for oral drugs), distribution and elimination. In order to understand the effects of different IIV structures on structural model selection, five model search approaches were investigated that differ in the IIV structure of the candidate models: 1. naïve pooling, 2. IIV on starting model parameters only, 3. additional IIV on the mean delay time parameter, 4. additional diagonal IIVs on newly added parameters, and 5. full block IIVs. Additionally, the implementation of structural model selection in the workflow of fully automatic model development was investigated. Three strategies were evaluated: SIR, SRI and RSI, depending on the development order of the structural model (S), IIV model (I) and residual error model (R). Moreover, the NONMEM errors encountered when using the tool were investigated and categorized so that they can be handled in the automatic model building workflow.

Results: Differences in the final selected structural models for each drug were observed between the five model search approaches. The same distribution components were selected through Approaches 1 and 2 for 6/10 drugs. Approach 2 also identified an absorption delay component in 4/5 oral drugs, whilst the naïve pooling approach only identified an absorption delay model in 2 drugs. Compared to Approaches 1 and 2, Approaches 3, 4 and 5 tended to select more complex models and more often resulted in minimization errors during the search. For the SIR, SRI and RSI investigations, the same structural model was selected for 9/10 drugs, with a significantly higher run time for the RSI strategy than for the other strategies. The NONMEM errors were categorized into four categories based on the handling suggestions, which is valuable for further improving the tool's automatic error handling.

Conclusions: The Modelsearch tool was able to automatically select a structural model with different strategies for setting the IIV model structure. This novel tool enables the evaluation of numerous combinations of model components, which would not be possible using a traditional manual model building strategy. 
Furthermore, the tool is flexible and can support multiple research investigations into how best to implement structural model selection in a fully automatic model development workflow.
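As a rough illustration of the kind of search Modelsearch automates, the sketch below enumerates candidate structural-model feature combinations and ranks them with a BIC-like criterion. It is not the Pharmpy API: the feature names, the fit() stand-in (a real run would estimate each candidate in NONMEM) and the penalty are illustrative assumptions, and the actual tool uses an exhaustive stepwise algorithm rather than the flat enumeration shown here.

```python
# Hypothetical sketch, NOT the Pharmpy API: rank every combination of candidate
# structural-model features with a BIC-like criterion. Feature names, the fit()
# stand-in and the penalty are illustrative assumptions.
import math
from itertools import combinations

FEATURES = ["ZO_ABSORPTION", "TRANSIT_DELAY", "PERIPHERAL_CMT", "MM_ELIMINATION"]

def fit(features):
    """Stand-in for estimating a candidate model: returns (-2*log-likelihood,
    number of parameters). Here every added feature 'improves' the fit a bit."""
    return 1000.0 - 15.0 * len(features), 4 + len(features)

def search(features, n_obs):
    penalty = math.log(n_obs)                       # BIC-style penalty per parameter
    ranked = []
    for k in range(len(features) + 1):
        for combo in combinations(features, k):
            ofv, n_params = fit(combo)
            ranked.append((ofv + penalty * n_params, combo))
    ranked.sort()                                   # lowest criterion value wins
    return ranked[0]

print(search(FEATURES, n_obs=500))
```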
