71

Algorithmes de géolocalisation à l’intérieur d’un bâtiment en temps différé / Post-processing algorithms for indoor localization

Zoubert-Ousseni, Kersane 10 April 2018 (has links)
Real-time indoor geolocalization has been widely studied in recent years and has many applications. Off-line (post-processed) trajectory estimation is also of interest. Off-line indoor geolocalization makes it possible, for instance, to develop crowdsourcing approaches that take advantage of a large number of users to collect a large number of measurements: knowing the trajectory of a smartphone user can, for example, feed an attendance map of the building. Estimating this trajectory does not need to be performed in real time and can be done off-line, which offers two main benefits. Firstly, the real-time approach estimates the current position using present and past measurements only, whereas the off-line approach has access to the whole set of measurements and yields an estimated trajectory that is smoother and more accurate than the real-time one. Secondly, the estimation can be done on a server and does not need to run on the smartphone as in the real-time case, so more computing power and memory are available. The objective of this PhD is to provide an off-line estimate of the trajectory of a user moving with a smartphone that receives wifi or bluetooth signal-strength (RSS) measurements and records inertial (IMU) measurements.
First, without knowledge of the building floorplan, a parametric model is proposed, based on an adaptive pathloss model for the RSS measurements and on a piecewise parametrization of the inertial trajectory obtained from the IMU measurements. This off-line estimation achieves a mean error of 6.2 m, against 12.5 m in real time. Then, the displacement constraints induced by the walls of the building are added, refining the estimated trajectory with a particle technique, as is common in the state of the art. With this second approach we developed a particle smoother as well as a maximum a posteriori estimator based on the Viterbi algorithm. Other numerical heuristics were also introduced. A first heuristic adjusts the state model of the user, originally based on the IMU measurements alone, using the parametric model developed without the floorplan. A second heuristic runs several realizations of a particle filter and defines two score functions, based on the RSS measurements and on the continuity of the estimated trajectory; the scores are then used to select the best realization of the filter as the estimated trajectory.
A global algorithm combining all of these approaches achieves a mean error of 3.6 m, against 5.8 m in real time. Finally, a statistical machine learning model based on random forests makes it possible to distinguish correctly estimated trajectories using only a few variables, with a view to a crowdsourcing application.
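The abstract does not specify the form of the adaptive pathloss model; a common choice for relating RSS to distance is the log-distance model. The sketch below is a minimal illustration with hypothetical parameter values (`rss0`, `n`, `d0` are assumptions, not values from the thesis):

```python
import math

def rss_pathloss(d, rss0=-40.0, n=2.5, d0=1.0):
    """Log-distance pathloss: expected RSS (dBm) at distance d metres.
    rss0 is the RSS at reference distance d0; n is the pathloss exponent.
    All parameter values here are illustrative."""
    return rss0 - 10.0 * n * math.log10(d / d0)

def distance_from_rss(rss, rss0=-40.0, n=2.5, d0=1.0):
    """Invert the model to estimate distance from a measured RSS."""
    return d0 * 10.0 ** ((rss0 - rss) / (10.0 * n))
```

In an adaptive variant, `rss0` and `n` would themselves be estimated per access point from the collected measurements rather than fixed in advance.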
72

Análise quantitativa por ressonância magnética da epilepsia parcial sintomática de difícil controle com imagem qualitativa negativa para lesão epileptogênica / Multimodal quantitative analysis of magnetic resonance in refractory symptomatic partial epilepsy with negative qualitative MR image for epileptogenic lesion

Abud, Lucas Giansante 29 September 2017 (has links)
Conventional MRI may be inconclusive in about one-third of patients with refractory partial (focal) epilepsy. These patients with negative MRI, when indicated for surgery, represent a great challenge, since the identification of an epileptogenic structural lesion by this method can be considered the best prognostic factor for seizure freedom in the postoperative period. Our objective was to evaluate the yield and utility of quantitative MRI through individualized post-processing in this group of patients.
This is a prospective study of a cohort of 46 patients with drug-resistant focal epilepsy, non-lesional 3-Tesla MRI, and potential surgical candidacy. All patients underwent a new MRI protocol, including 3D T1 and advanced techniques, and were subsequently evaluated through individualized post-processing of five quantitative measures extracted from these sequences. The measures were cortical thickness (CT) and the gray-white matter junction signal (WGJ), both extracted from the 3D T1 sequence, as well as T2 relaxometry (RT2), magnetization transfer ratio (MTR) and mean diffusivity (MD). Data extracted from the whole brain were individually compared to a healthy control group, using surface-based analysis for CT and voxel-based analyses for the other measures. Surface video-EEG and seizure semiology were used to determine the possible epileptogenic zone (EZ); 31 patients were considered to have a suspected localizing focus (SLF). The quantitative measures individually detected signal changes in some region of the brain in 32.6% to 56.4% of patients. In the SLF subgroup, post-processing individually detected abnormalities inside the region of electroclinical seizure origin in 9.7% (3/31) to 31.0% (9/29) of patients. This yield was highest with MD (31.0%, 9/29) and RT2 (25.0%, 7/28) and lowest with CT and WGJ (9.7%, 3/31). Abnormalities observed outside the presumed EZ region were always more frequent, ranging from 25.8% (8/31) to 51.7% (15/29). In five patients (5/46, 10.9%) a structural abnormality could be identified on visual inspection guided by the post-processing localization.
Although individualized quantitative MRI analysis may suggest lesions hidden in the conventional protocol, caution is needed because of the apparently low specificity of these findings for the EZ. In this group of patients, these changes may reflect not only alterations in the EZ region but also microstructural abnormalities secondary to the seizures or, less likely, extensive cerebral malformations not visible in routine protocols. A potential practical utility of these methods is to assist the placement of intracranial electrodes in selected cases. On the other hand, the study showed some capacity to detect potentially epileptogenic lesions that went unnoticed on conventional MRI visual analysis, after re-evaluations directed by the post-processing, notably with the CT measure. This suggests that these techniques can be used as a screening tool, either to prevent a visible lesion from being overlooked or to guide a new visual inspection directed at a suspect region.
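The core of an individualized voxel-based comparison against a control group can be sketched as a per-voxel z-score map. This is a generic illustration of the technique, not the thesis pipeline; array shapes and the threshold value are assumptions:

```python
import numpy as np

def zscore_map(patient, controls, z_thresh=3.0):
    """Compare one patient's quantitative map to a control group,
    voxel by voxel. `patient` has shape (X, Y, Z); `controls` has
    shape (N, X, Y, Z). Returns the z-map and a boolean mask of
    voxels deviating beyond the threshold."""
    mu = controls.mean(axis=0)
    sigma = controls.std(axis=0, ddof=1)
    sigma = np.where(sigma > 0, sigma, np.inf)  # zero-variance voxels get z = 0
    z = (patient - mu) / sigma
    return z, np.abs(z) > z_thresh
```

In practice the maps would first be spatially normalized and smoothed, and the threshold corrected for multiple comparisons, which is what drives the specificity issue discussed above.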
73

Ultra-light weight design through additive manufacturing

Sauter, Barrett January 2019 (has links)
ABB Corporate Research was looking to redevelop one product to be manufactured via polymer additive manufacturing (AM), as opposed to its previous, traditional manufacturing method. The current product is cylindrical in shape and must withstand a certain amount of hydrostatic pressure. Due to the pressure and the current design, the canister is prone to buckling failure. The canister is currently produced from two cylindrical tube parts and two spherical end sections machined from solid blocks of the same material. For assembly, an inner assembly is inserted into one of the tube parts and then all parts are welded together. The product is also custom-dimensioned for each purchase order. The purpose of investigating this redevelopment for AM is to analyse whether an updated inner design unique to additive manufacturing can increase the performance of the product by increasing the pressure it withstands, from both a material-failure and a buckling-failure standpoint. The redevelopment also aims to determine whether the component count and process count can be decreased. Ultimately, two product solutions are suggested: one for low pressure ranges constructed in ABS, and one for high pressure ranges constructed in Ultem 1010. To accomplish this, relevant literature was consulted to gain insight into how to reinforce cylindrical shell structures against buckling. Design aspects unique to AM were also explored. Iterations in these two areas were designed and analysed, leading to a final design choice. The final design is ultimately based on the theory of strengthening cylindrical structures against buckling through the use of ring stiffeners, while also incorporating AM-unique design aspects in the form of hollow network structures. Using finite element analysis, the design was further developed until it held the required pressure.
Simulation results suggest that the ABS product can withstand 3 times higher pressure than the original design while being protected against buckling failure. The Ultem simulation results suggest that the product can withstand 12 times higher pressure than the current design while also being protected against buckling failure. Part count and manufacturing processes are also found to have decreased by half. Post-processing treatments were also explored, such as the performance of sealants under pressure and the effects of sealants on material mechanical properties. Results show that one sealant in particular, an acrylic spray, is most suitable for sealing the ABS product; it withstood a pressure of 8 bar during tests. The flexural tests showed that the sealant did increase certain mechanical properties, notably the yield strength, but did not significantly affect the flexural modulus. This work gives a clear indication that the performance of this product can feasibly be increased significantly by redeveloping it specifically for AM.
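The sensitivity of buckling to wall geometry explains why stiffening beats simply thickening the wall. A classic estimate for a long, thin, unstiffened cylinder under external pressure (a textbook sketch, not ABB's design calculation; the material and geometry values are illustrative assumptions):

```python
def critical_buckling_pressure(E, t, D, nu=0.35):
    """Elastic collapse pressure of a long, thin-walled, unstiffened
    cylinder under uniform external pressure (classic long-tube formula):
    p_cr = 2E/(1 - nu^2) * (t/D)^3.
    E: Young's modulus [Pa], t: wall thickness [m], D: mean diameter [m]."""
    return 2.0 * E / (1.0 - nu**2) * (t / D) ** 3

# The cubic dependence on t/D means doubling the wall thickness raises the
# collapse pressure eightfold; ring stiffeners achieve a similar gain by
# shortening the effective unsupported length instead of adding mass.
p_abs = critical_buckling_pressure(E=2.2e9, t=3e-3, D=0.1)  # ABS-like modulus
```

Ring-stiffened designs are usually checked with shorter-shell formulas (or FEA, as in this work), since the stiffeners break the shell into shorter bays with higher critical pressure.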
74

Fracture Failure of Solid Oxide Fuel Cells

Johnson, Janine B. 23 November 2004 (has links)
Among all existing fuel cell technologies, the planar solid oxide fuel cell (SOFC) is the most promising for high power density applications. A planar SOFC consists of two porous ceramic layers (the anode and cathode) through which the fuel and oxidant flow. These ceramic layers are bonded to a solid electrolyte layer to form a tri-layer structure called the PEN (positive-electrolyte-negative), across which the electrochemical reactions take place to generate electricity. Because SOFCs operate at high temperatures, the cell components (e.g., PEN and seals) are subjected to harsh environments and severe thermomechanical residual stresses. It has been reported repeatedly that, under combined thermomechanical, electrical and chemical driving forces, catastrophic failure often occurs suddenly due to material fracture or loss of adhesion at the material interfaces. Unfortunately, very few thermomechanical modeling techniques exist for assessing the reliability and durability of SOFCs. Therefore, modeling techniques and simulation tools applicable to SOFCs need to be developed. Such techniques and tools enable us to analyze new cell designs, evaluate the performance of new materials, virtually simulate new stack configurations, and assess the reliability and durability of stacks in operation. This research focuses on developing computational techniques for modeling fracture failure in SOFCs. The objectives are to investigate the failure modes and failure mechanisms due to fracture, and to develop a finite element based computational method to analyze and simulate fracture and crack growth in SOFCs. Using the commercial finite element software ANSYS as the basic computational tool, a MATLAB-based program has been developed. This MATLAB program takes the displacement solutions from ANSYS as input to compute fracture parameters.
The individual stress intensity factors are obtained using volume integrals in conjunction with the interaction integral technique. The software code developed here is the first of its kind capable of calculating stress intensity factors for three-dimensional cracks with curved fronts under both mechanical and non-uniform temperature loading conditions. These results provide new scientific and engineering knowledge on SOFC failure, and enable us to analyze the performance, operations, and life characteristics of SOFCs.
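The thermomechanical residual stresses driving PEN fracture arise largely from thermal-expansion mismatch between the bonded layers on cooling from fabrication. A first-order estimate for a thin layer on a much thicker substrate (a hedged back-of-the-envelope sketch; the material values below are illustrative assumptions, not from this thesis):

```python
def thermal_mismatch_stress(E_film, nu_film, alpha_film, alpha_sub, dT):
    """Equibiaxial residual stress in a thin layer bonded to a much
    thicker substrate after a temperature change dT:
    sigma = E/(1 - nu) * (alpha_sub - alpha_film) * dT.
    A positive result means tension in the layer."""
    return E_film / (1.0 - nu_film) * (alpha_sub - alpha_film) * dT

# Illustrative numbers: an electrolyte-like ceramic (lower expansion) on an
# anode-like substrate (higher expansion), cooled from sintering. The layer
# ends up in compression (negative stress).
sigma = thermal_mismatch_stress(E_film=200e9, nu_film=0.3,
                                alpha_film=10.5e-6, alpha_sub=12.5e-6,
                                dT=-1200.0)
```

Estimates of this kind set the magnitude of the residual stress field that the interaction-integral computation then converts into stress intensity factors along a crack front.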
75

ANÁLISE DA QUALIDADE DE RESULTADOS GPS EM PROGRAMAS COMERCIAIS / QUALITY ANALYSIS OF GPS RESULTS IN COMMERCIAL SOFTWARE

Moraes, Alarico Valls de 24 November 2005 (has links)
The Brazilian technical-scientific community dedicated to surveying is living through a new era with the standardization of technical parameters for referencing geodetic surveys to the Cadastro Nacional de Imóveis Rurais, in accordance with Brazilian Law 10.267/2001. The objective of this work is to analyse, by means of estimated statistical parameters, the quality of survey data collected with GPS receivers and post-processed in software packages available on the market. Among the analysed parameters, the most important is the standard deviation of the coordinates, because it measures precision and is a component of accuracy, which together express the quality of the results. The results produced by the commercial software packages are compared with the official data of the State GPS Landmark Network, supplied by the Instituto Brasileiro de Geografia e Estatística (IBGE).
This work also contributes to the user-manufacturer relationship, bringing the two closer so that the user, when operating the programs, has more information available about the methodology applied to produce the processing results. Using fundamental concepts of univariate and multivariate statistics, this work analyses how the commercial software packages process GPS data, and also identifies the minimum data the packages must supply to the user so that the estimated statistical parameters indicating the quality of each geodetic survey can be obtained.
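The distinction the abstract draws between precision (standard deviation of repeated solutions) and accuracy (agreement with the official reference coordinate) can be made concrete. A minimal sketch, with made-up coordinates rather than IBGE data:

```python
import math

def precision_and_accuracy(coords, reference):
    """Given repeated (E, N) coordinate solutions for one landmark and
    its official reference coordinate, return the sample standard
    deviation of each component (precision) and the RMSE against the
    reference (accuracy)."""
    n = len(coords)
    mean_e = sum(e for e, _ in coords) / n
    mean_n = sum(nn for _, nn in coords) / n
    sd_e = math.sqrt(sum((e - mean_e) ** 2 for e, _ in coords) / (n - 1))
    sd_n = math.sqrt(sum((nn - mean_n) ** 2 for _, nn in coords) / (n - 1))
    ref_e, ref_n = reference
    rmse = math.sqrt(sum((e - ref_e) ** 2 + (nn - ref_n) ** 2
                         for e, nn in coords) / n)
    return (sd_e, sd_n), rmse
```

A solution set can be precise (small standard deviations) yet inaccurate (large RMSE) if it is biased away from the reference, which is why the thesis argues the software must report both.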
76

Uso de medidas de desempenho e de grau de interesse para análise de regras descobertas nos classificadores / Use of performance and interestingness measures for the analysis of rules discovered by classifiers

Rocha, Mauricio Rêgo Mota da 20 August 2008 (has links)
The process of knowledge discovery in databases has become necessary because of the large amount of data currently stored in company databases. Properly explored, these data can help managers with decision-making in organizations. The process comprises several steps, among which data mining stands out: the step where techniques are applied to obtain knowledge that cannot be obtained through traditional methods of analysis. Besides the technique, the data mining step also involves choosing the mining task to be used. Data mining usually produces a large number of rules, many of which are not important, relevant or interesting to the end user. This makes it necessary to review the discovered knowledge in a data post-processing step. In the post-processing step, both performance measures and interestingness measures are used in order to single out the most interesting, useful and relevant rules.
In this work, using the WEKA tool (Waikato Environment for Knowledge Analysis), the mining techniques of decision trees and classification rules were applied through the classification algorithms J48.J48 and J48.PART, respectively. In the data post-processing step, a package of functions and procedures was implemented to calculate both performance and rule interestingness measures. At this stage, queries were also developed to select the most important rules according to the performance and interestingness measures.
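The abstract does not list which interestingness measures were implemented; three standard ones for a rule "antecedent → consequent" are support, confidence and lift. A minimal sketch over toy data (the transactions below are made up):

```python
def rule_measures(transactions, antecedent, consequent):
    """Support, confidence and lift of a rule antecedent -> consequent
    over a list of transactions (sets of items)."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_c = sum(1 for t in transactions if consequent <= t)
    n_ac = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = n_ac / n                               # P(A and C)
    confidence = n_ac / n_a if n_a else 0.0          # P(C | A)
    lift = confidence / (n_c / n) if n_c else 0.0    # P(C | A) / P(C)
    return support, confidence, lift
```

A lift above 1 indicates the antecedent genuinely raises the probability of the consequent, which is the kind of filter such post-processing queries apply to prune uninteresting rules.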
77

Análise quantitativa por ressonância magnética da epilepsia parcial sintomática de difícil controle com imagem qualitativa negativa para lesão epileptogênica / Multimodal quantitative analysis of magnetic resonance in refractory symptomatic partial epilepsy with negative qualitative MR image for epileptogenic lesion

Lucas Giansante Abud 29 September 2017 (has links)
A RM convencional de rotina pode ser inconclusiva a cerca de um terço dos pacientes com epilepsia parcial (focal) refratária. Esses pacientes com RM negativa, quando indicados para cirurgia, representam um grande desafio, visto que a identificação de uma lesão estrutural epileptogênica por esse método pode ser considerada o melhor fator prognóstico para eliminação das crises no período pósoperatório. O objetivo foi avaliar o rendimento e a utilidade da RM quantitativa por meio de pós-processamentos individualizados nesse grupo de pacientes. Trata-se de um estudo prospectivo de uma coorte de 46 pacientes com epilepsia focal farmacorresistente com RM-3 Teslas não lesional e potenciais candidatos a cirurgia. Todos os pacientes foram submetidos a um novo protocolo de RM, incluindo 3D T1 e técnicas avançadas, e, posteriormente, avaliados por pós-processamentos individualizados de cinco medidas quantitativas extraídas dessas sequências. Essas medidas consistiram em espessura cortical (EC) e do sinal de junção entre as substâncias branca e cinzenta (JBC), ambas extraídas da sequência 3D T1, assim como da relaxometria T2 (RT2), taxa de transferência de magnetização (TTM) e difusibilidade média (DM). Os dados extraídos de todo o cérebro foram individualmente comparados com um grupo de controle saudáveis, utilizando-se das técnicas de análise baseada em superfície para a EC e de análises baseadas em voxel para as demais medidas. Utilizou-se do videoencefalograma de superfície e semiologia das crises para determinar a possível zona epileptogênica (ZE), sendo que 31 pacientes foram considerados como foco localizatório suspeito (FLS). As medidas quantitativas detectaram individualmente mudanças de sinal em alguma região do cérebro de 32,6% a 56,4% dos pacientes. No subgrupo classificado como FLS, os pós-processamentos detectaram individualmente alterações na região de origem eletroclínica das crises em 9,7% (3/31) a 31,0% (9/29) dos pacientes. 
Esse rendimento foi mais alto com a DM (31,0% ou 9/29) e RT2 (25,0% ou 7/28) e mais baixo com a EC e JBC (9,7% ou 3/31). Alterações observadas fora da região presumida da ZE foram sempre superiores, variando de 25,8% (8/31) a 51,7% (15/29). Em cinco pacientes (5/46 ou 10,9%) foi possível identificar alteração estrutural após a avaliação visual com direcionamento localizatório pelos pósprocessamentos. Embora a análise quantitativa da RM individualizada possa sugerir lesões ocultas no protocolo convencional, é preciso ter cautela devido à aparente baixa especificidade dos achados. Nesse grupo de pacientes, essas alterações devem refletir não só as alterações na região da ZE, mas também anormalidades microestruturais secundárias às crises ou, menos provavelmente, malformações cerebrais extensas não visíveis nos protocolos de rotina. Uma potencial utilidade prática desses métodos é auxiliar na colocação de eletrodos intracranianos em casos selecionados. Por outro lado, o estudo mostrou capacidade de detectar lesões potencialmente epileptogênicas que passaram despercebidas na inspeção visual convencional da RM após reavaliações dirigidas pelos pós-processamentos, notadamente pela medida da EC. Isso sugere que essas técnicas podem ser usadas como uma ferramenta de triagem para evitar que qualquer lesão visível seja ignorada ou a fim de guiar uma nova inspeção visual dirigida para uma região suspeita. / Conventional MRI may be inconclusive in about one-third of patients with refractory partial epilepsy. These patients with negative MRI when indicated for surgery represent a great challenge since the identification of an epileptogenic structural lesion by this method can be considered the best prognostic factor for the elimination of the crises in the postoperative period. Our objective was to evaluate the yield and utility of quantitative MRI through individualized post-processing in this group of patients. 
The present thesis is a prospective study of a cohort of forty-six patients with drug-resistant partial epilepsy, with non-lesional 3-Teslas MRI and potential surgical candidates. All patients underwent a new MRI protocol, including 3D T1 and advanced techniques, and were subsequently evaluated through individualized post-processing of five quantitative measures extracted from these sequences. These measurements consisted of the cortical thickness (CT) and the signal between the white and gray matters junction (WGJ), both extracted from the 3D T1 sequence, as well as the T2 relaxometry (RT2), magnetization transfer rate (MTR) and mean diffusibility (MD). Data extracted from the whole brain were individually compared to a healthy control group using surface-based analysis (SBM) techniques for CT and voxel-based analyzes (VBA) for the other measures. Surface VEEG and seizure semiology were used to determine the possible epileptogenic zone (EZ). Consequently 31 patients were considered to have a suspect location for the Focus (SLF). Quantitative measurements individually detected abnormalities in some regions of the brain from 32.6% to 56.4% of patients. In the subgroup classified as FLS post-processing individually detected abnormalities inside the region of electroclinical origin of seizures in 9.7% (3/31) to 31.0% (9/29) of the patients. This yield was higher with MD (31.0% or 9/29) and RT2 (25.0% or 7/28) and lower with CT and WGJ (9.7% or 3/31). Abnormalities observed outside the presumed EZ region were always higher, ranging from 25.8% (8/31) to 51.7% (15/29). In five patients (5/46 or 10.9%) it was possible to identify some structural abnormality after the MRI visual inspection with orientation of the location by post-processing. 
Although quantitative MRI analysis through individualized post-processing may suggest structural lesions hidden in the conventional protocol, caution should be exercised because of the apparent low specificity of these findings for the EZ. In this group of patients, these abnormalities should reflect not only alterations in the EZ region, but also microstructural abnormalities secondary to the seizures or, less likely, extensive cerebral malformations not visible in routine protocols. A potential practical use of these methods is to assist in the placement of intracranial electrodes in selected cases. On the other hand, the study showed a certain capacity to detect potentially epileptogenic lesions that had gone unnoticed in conventional visual MRI analysis after re-evaluations directed by the post-processing, notably by the CT measurement. This suggests that these methods can be used either as a screening tool to prevent any visible lesion from being overlooked, or to guide a new visual inspection directed at a suspect region.
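The individual-versus-controls comparison described in the abstract can be sketched as a simple z-score map: each voxel (or surface vertex) of a patient's quantitative measure is compared to the mean and standard deviation of a healthy control group, and deviations beyond a threshold are flagged. This is a minimal illustration with hypothetical arrays, not the thesis's actual SBM/VBA pipeline:

```python
import numpy as np

def zscore_mask(patient, controls, threshold=3.0):
    """Flag voxels where the patient's quantitative measure deviates from
    the healthy-control distribution by more than `threshold` SDs."""
    mu = controls.mean(axis=0)
    sigma = controls.std(axis=0, ddof=1)
    sigma = np.where(sigma > 0, sigma, np.inf)   # guard against zero variance
    return np.abs((patient - mu) / sigma) > threshold

rng = np.random.default_rng(0)
# 20 hypothetical controls on a tiny 4x4x4 grid of, e.g., cortical thickness (mm)
controls = rng.normal(2.5, 0.2, size=(20, 4, 4, 4))
patient = controls.mean(axis=0).copy()   # start from a perfectly "normal" map
patient[0, 0, 0] = 5.0                   # implant one clearly abnormal voxel
mask = zscore_mask(patient, controls)
print(int(mask.sum()))                   # only the implanted voxel is flagged
```

In practice such maps are smoothed and corrected for multiple comparisons; the low specificity reported above is precisely what a raw thresholded map would exhibit.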
78

Surface integrity on post processed alloy 718 after nonconventional machining

Holmberg, Jonas January 2018 (has links)
There is a strong industrial driving force to find alternative production technologies in order to make the production of aero engine components in superalloys even more efficient than it is today. Introducing new and non-conventional machining technologies allows a giant leap in material removal rate and thereby a drastic increase in productivity. However, the end result must still meet the requirements set for today's machined surfaces. The present work has been dedicated to improving the knowledge of how the non-conventional machining methods Abrasive Water Jet Machining (AWJM), Laser Beam Machining (LBM) and Electrical Discharge Machining (EDM) affect surface integrity. The aim has been to understand how the surface integrity could be brought to an acceptable level. The results of this work have shown that both EDM and AWJM are possible candidates, but EDM is the better alternative, mainly due to the method's ability to machine complex geometries. It has further been shown that both methods require post-processing in order to clean the surface and to improve the topography, and in the case of EDM the generation of compressive residual stresses is also needed. Three cold-working post-processes have been evaluated to attain this: shot peening, grit blasting and high-pressure water jet cleaning (HPWJC). The results showed that a combination of two post-processes is required to reach the specified level of surface integrity in terms of cleaning, generation of compressive residual stresses, and low surface roughness. High-pressure water jet cleaning was the most effective method for removing the EDM wire residuals, while shot peening generated the highest compressive residual stresses and also improved the surface topography. To summarise, the most promising production flow alternative using non-conventional machining would be EDM followed by post-processing with HPWJC and shot peening.
79

Uma metodologia para exploração de regras de associação generalizadas integrando técnicas de visualização de informação com medidas de avaliação do conhecimento / A methodology for exploration of generalized association rules integrating information visualization techniques with knowledge evaluation measures

Magaly Lika Fujimoto 04 August 2008 (has links)
The data mining process aims to find the knowledge implicit in a data set in order to support decision making. From the user's point of view, several problems can be encountered during the post-processing and provision stage of the extracted knowledge, such as the enormous number of patterns generated by some extraction algorithms and the difficulty in understanding the models extracted from the data. Besides the problem of the number of rules, traditional association rule algorithms can lead to the discovery of very specific knowledge. Thus, association rules can be generalized in order to obtain more general knowledge. This project proposes an interactive methodology to aid in the evaluation of generalized association rules, aiming to improve comprehensibility and facilitate the identification of interesting knowledge. This aid is provided through the use of visualization techniques together with the application of objective and subjective evaluation measures, implemented in the generalized association rule visualization module called RulEE-GARVis, which is integrated into the rule exploration environment RulEE (Rule Exploration Environment). The RulEE environment is being developed at LABIC-ICMC-USP and supports the post-processing and provision of knowledge. In this context, it was also an objective of this research project to develop the Management Module of the RulEE rule exploration environment. Through the directed study carried out, it was verified that the proposed methodology really facilitates the understanding and identification of interesting generalized association rules / The data mining process aims at finding implicit knowledge in a data set to aid in a decision-making process. 
From the user's point of view, several problems can be found at the stage of post-processing and provision of the extracted knowledge, such as the huge number of patterns generated by some extraction algorithms and the difficulty in understanding the models extracted from the data. Besides the problem of the number of rules, traditional association rule algorithms may lead to the discovery of very specific knowledge. Thus, association rules can be generalized in order to obtain more general knowledge. In this project an interactive methodology is proposed to aid in the evaluation of generalized association rules, in order to improve comprehensibility and to facilitate the identification of interesting knowledge. This aid is accomplished through the use of visualization techniques along with the application of objective and subjective evaluation measures, which are implemented in the generalized association rule visualization module called RulEE-GARVis, integrated into the Rule Exploration Environment (RulEE). The RulEE environment is being developed at LABIC-ICMC-USP and supports the post-processing and provision of knowledge. In this context, it was also an objective of this research project to develop the Management Module of the RulEE rule exploration environment. Through the directed study carried out, it was verified that the proposed methodology really facilitates the understanding and identification of interesting generalized association rules.
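The objective evaluation measures and rule generalization mentioned above can be illustrated with a minimal sketch (hypothetical market-basket data and a toy taxonomy; RulEE-GARVis itself is not reproduced here): items are first mapped to more general categories through a taxonomy, then standard measures such as support, confidence and lift are computed for the generalized rule.

```python
# Hypothetical taxonomy mapping specific items to more general categories
taxonomy = {"milk": "dairy", "butter": "dairy", "bread": "bakery"}

def generalize(transaction):
    """Replace each item by its ancestor in the taxonomy (if any)."""
    return {taxonomy.get(item, item) for item in transaction}

def rule_measures(transactions, antecedent, consequent):
    """Objective interestingness measures for an association rule A -> C."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_c = sum(1 for t in transactions if consequent <= t)
    n_ac = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = n_ac / n
    confidence = n_ac / n_a if n_a else 0.0
    lift = confidence / (n_c / n) if n_c else 0.0
    return support, confidence, lift

transactions = [{"milk", "bread"}, {"milk", "bread", "butter"},
                {"bread"}, {"milk", "butter"}]
generalized = [generalize(t) for t in transactions]
sup, conf, lift = rule_measures(generalized, {"dairy"}, {"bakery"})
print(round(sup, 2), round(conf, 2), round(lift, 2))  # 0.5 0.67 0.89
```

The generalized rule "dairy -> bakery" covers transactions that no specific rule such as "milk -> bread" would cover on its own, which is why generalization reduces the number of rules while retaining coverage.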
80

Statistical Post-processing of Deterministic and Ensemble Wind Speed Forecasts on a Grid / Post-traitements statistiques de prévisions de vent déterministes et d'ensemble sur une grille

Zamo, Michaël 15 December 2016 (has links)
Errors of numerical weather prediction (NWP) models can be reduced by post-processing methods (known as model output statistics, MOS) that build a statistical relationship between observations and forecasts. The objective of this thesis is to build MOS for wind forecasts over France on the grid of several NWP models, for Météo-France's operational applications, addressing two main problems. First, building MOS on the grid of NWP models, i.e. several thousand grid points over France, requires developing methods fast enough for operational processing. Second, frequent changes to NWP models require updating the MOS, but training MOS requires an NWP model unchanged over several years, which is not possible in most cases. A new analysis of the 10 m mean wind was built on the grid of Météo-France's high-resolution (2.5 km) local model, AROME. This analysis is the sum of two terms: a spline of AROME's most recent forecast plus a correction by a spline of the coordinates of the point considered. The new analysis achieves better scores than the existing analysis and displays realistic spatio-temporal structures. This new analysis, available hourly over 4 years, is then used as a gridded observation to build MOS. MOS for wind over France were built for ARPEGE, Météo-France's global model. A comparative test bed designates random forests as the best method. This MOS requires a long time to load into memory the information needed to make a forecast. This loading time is divided by 10 by training the MOS on blocks of contiguous grid points and pruning them as much as possible. This optimization does not degrade forecast performance. 
This block MOS approach is currently being made operational. A preliminary study of the estimation of the continuous ranked probability score (CRPS) leads to recommendations for its estimation and generalizes existing theoretical results. Then, 6 MOS of 4 ensemble NWP models from the TIGGE database are combined with the raw models using several statistical methods. The best combination relies on the theory of prediction with expert advice, which guarantees good performance relative to a reference forecast. It quickly adjusts the combination weights, an advantage when the performance of the combined forecasts changes. This study raised contradictions between two criteria for choosing the best combination method: minimization of the CRPS and flatness of the rank histograms according to the Jolliffe-Primo tests. It is proposed to choose a model by first imposing flatness of the rank histograms. / Errors of numerical weather prediction (NWP) models can be reduced thanks to post-processing methods (model output statistics, MOS) that build a statistical relationship between the observations and the associated forecasts. The objective of the present thesis is to build MOS for wind speed forecasts over France on the grid of several NWP models, to be applied in operations at Météo-France, while addressing two main issues. First, building MOS on the grid of an NWP model, with thousands of grid points over France, requires methods fast enough for operational time constraints. Second, frequent updates of NWP models require updating the MOS, but training MOS requires an NWP model unchanged for years, which is usually not possible. A new analysis of the 10 m wind speed has been built on the grid of Météo-France's local-area, high-resolution (2.5 km) NWP model, AROME. 
The new analysis is the sum of two terms: a spline with AROME's most recent forecast as input plus a correction by a spline with the location coordinates as input. The new analysis outperforms the existing analysis, while displaying realistic spatio-temporal patterns. This new analysis, now available hourly over 4 years, is used as a gridded observation to build MOS in the remainder of this thesis. MOS for wind speed over France have been built for ARPEGE, Météo-France's global NWP model. A comparative test bed designates random forests as the most efficient MOS. Their loading time, that is, the time to load into memory the information needed to issue a forecast, is reduced by a factor of 10 by training random forests over blocks of nearby grid points and pruning them as much as possible. This optimization does not reduce forecast performance. This block MOS approach is currently being made operational. A preliminary study of the estimation of the continuous ranked probability score (CRPS) leads to recommendations for estimating it efficiently and generalizes existing theoretical results. Then 4 ensemble NWP models from the TIGGE database are post-processed with 6 methods and combined with the corresponding raw ensembles using several statistical methods. The best combination method is based on the theory of prediction with expert advice, which ensures good forecast performance relative to a reference forecast. This method quickly adapts its combination weights, an asset when the performance of the combined forecasts changes. This part of the work highlighted contradictions between two criteria for selecting the best combination method: minimization of the CRPS and flatness of the rank histogram according to the Jolliffe-Primo tests. It is proposed to choose a model by first imposing flatness of the rank histogram.
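The "prediction with expert advice" combination described above can be sketched with an exponentially weighted average forecaster, a standard member of that family (the thesis's exact weighting rule is not reproduced here; the toy data are hypothetical). Each forecast's weight decays exponentially with its cumulative loss, so the combination quickly shifts toward the currently best forecast:

```python
import numpy as np

def ewa_combine(forecasts, obs, eta=0.5):
    """Exponentially weighted average forecaster.

    forecasts: array (T, K), K expert forecasts at each of T steps
    obs:       array (T,), verifying observations
    Returns the (T,) array of combined forecasts; the weights at step t
    use only losses from steps < t, so the scheme is fully sequential."""
    T, K = forecasts.shape
    cum_loss = np.zeros(K)
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-eta * cum_loss)
        w /= w.sum()                              # normalize to a convex combination
        combined[t] = w @ forecasts[t]
        cum_loss += (forecasts[t] - obs[t]) ** 2  # update each expert's loss
    return combined

# Toy case: expert 0 is perfect, expert 1 is biased by +4 m/s
obs = np.full(30, 5.0)
forecasts = np.column_stack([obs, obs + 4.0])
combined = ewa_combine(forecasts, obs)
print(round(combined[0], 1), round(combined[-1], 1))  # 7.0 5.0
```

The first combined value is the plain average (no losses observed yet); by the end, nearly all weight sits on the unbiased expert, which illustrates the fast adaptation to performance changes mentioned in the abstract.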
