221

Širdies vainikinių arterijų susiaurėjimų vertinimo modeliai ir programinės priemonės / Models and software for estimation of heart coronary arteries stenosis

Astapenko, Dovilė 07 June 2005 (has links)
Coronary artery stenosis causes ischemic heart disease, the leading cause of death worldwide. Both invasive and noninvasive methods are used to diagnose arterial stenosis, but these methods are expensive and not all medical institutions can carry out such tests. Analysis of the electrocardiogram (ECG) could be one of the cheapest and most accessible ways to diagnose arterial stenosis. Although in some cases such analysis is not very informative, clinicians search for informative ECG parameters and combinations of them in order to predict stenosis. The goal of this work is to create statistical methods and software for the prognosis of coronary artery stenosis using digital ECG parameters. The data were collected and prepared for this research at the Clinic of Cardiology of Kaunas Medical University. This work presents: 1. Statistical analysis models for the prognosis of coronary artery stenosis. 2. Software for users. 3. A comparative analysis of the statistical models. 4. Results of real-data analysis obtained with the developed statistical models and software. The results will be used to improve the diagnosis of ischemic heart disease and arterial stenosis at the Clinic of Cardiology of Kaunas Medical University.
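As a hedged illustration of the kind of statistical model described above, the sketch below fits a logistic regression that predicts stenosis from ECG-derived parameters. The feature names, coefficients, and data are invented for illustration; they are not the thesis's actual model or variables.

```python
# Hypothetical sketch: logistic regression for stenosis prognosis from
# digital ECG parameters. Features, coefficients, and data are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 400
st_depression = rng.normal(0.5, 0.4, n)        # invented feature, mm
qt_dispersion = rng.normal(50.0, 15.0, n)      # invented feature, ms
# Synthetic ground truth: stenosis probability rises with ST depression.
logit = -2.0 + 3.0 * st_depression + 0.02 * (qt_dispersion - 50.0)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit by gradient ascent on the log-likelihood (plain NumPy).
X = np.column_stack([np.ones(n), st_depression, qt_dispersion - 50.0])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.01 * X.T @ (y - p) / n

p_hat = 1.0 / (1.0 + np.exp(-X @ w))           # predicted stenosis risk
accuracy = float(np.mean((p_hat > 0.5) == (y > 0.5)))
```

On real data, the fitted coefficients would indicate which ECG parameters carry prognostic information for stenosis.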
222

Interface Management for Complex Capital Projects

Shokri, Samin January 2014 (has links)
In recent years, Interface Management (IM) practices have been emerging to address the challenges of managing complex capital projects. These challenges include the added complexity and scale of such projects, globalization, geographical distribution and varied working cultures, and different internal and external risks. Oil sands, offshore, and nuclear projects are examples of this class. Despite an emerging consensus on the effectiveness of IM for facilitating complex project delivery, IM definitions, elements, and implementations vary widely across the construction industry. Furthermore, identifying key interface points, integrating IM with the project schedule, and the relationship between IM implementation and project performance are significant questions that owners and contractors wish to have addressed. Therefore, the objectives of this thesis are to develop a workflow-driven process for IM, study its current status in the industry, develop an algorithm to identify key interface points and integrate IM with the project schedule, and investigate the relationship between IM implementation and project performance. This research focuses mostly on industrial construction, though some data from other sectors is included. In this thesis, the elements and fundamental definitions of Interface Management are proposed. Then, a workflow-driven Interface Management System (IMS) is developed, which lays out a strategy to systematically identify and manage stakeholders' interfaces with the objective of more effective risk management in capital projects. Once the IMS ontology is defined, the current state of IM in the construction industry is studied through data collected on 46 projects via questionnaire-based interviews. The interviewed projects come from different sectors of the industry, with various sizes and geographical locations.
This study aims to identify the project characteristics that lead to formal IM implementation on a project, current common IM practices in the industry, and criteria to assess the status and effectiveness of IM. Furthermore, the relationship between IM implementation and project performance, in terms of cost and schedule growth, is investigated using descriptive and statistical analysis tools. One observation was that projects that implemented IM at a high level experienced lower cost growth and less variation in cost growth. This thesis also proposes a methodology to identify key interface points by recognizing the interdependency relationships between them and creating the Interface Points Network. By analyzing the network, two types of high-impact and risk-prone interface points are identified. Once the key interface points are recognized, they are linked to the interface milestones on the project schedule, to integrate the cyclic information of the IMS with conventional, sequential planning, scheduling, and control paradigms (e.g., CPM). The proposed algorithms are validated on a representative offshore model project. In summary, the proposed algorithms provide a framework to improve project performance through better alignment between stakeholders, enforcement of contract terms, and effective sharing and distribution of risk-related information within a formalized interface management framework. The empirical analysis also gives construction organizations a foundation for assessing their IM against current industry practice and a roadmap for improving their IM practices to more mature levels.
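The interface-point network step above can be sketched with a toy directed dependency graph. The interface point names and edges below are invented, and the descendant/ancestor counts used to flag "high impact" and "risk prone" nodes are a stand-in for the thesis's actual identification algorithm.

```python
# Hypothetical sketch: interface points (IPs) as a directed dependency
# network; edge u -> v means IP u must be resolved before IP v.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("IP1", "IP3"), ("IP2", "IP3"), ("IP3", "IP4"),
    ("IP3", "IP5"), ("IP4", "IP6"), ("IP5", "IP6"),
])

# High impact: many downstream IPs depend on this one.
impact = {n: len(nx.descendants(G, n)) for n in G.nodes}
# Risk prone: this IP depends on many upstream IPs.
risk = {n: len(nx.ancestors(G, n)) for n in G.nodes}

# Rank candidate key interface points by combined score.
key_ips = sorted(G.nodes, key=lambda n: impact[n] + risk[n], reverse=True)
```

In a real IMS the ranking would feed the linkage of key interface points to interface milestones on the project schedule.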
223

Development Of A Subsidence Model For Cayirhan Coal Mine

Haciosmanoglu, Esenay Meryem 01 October 2004 (has links) (PDF)
In this study, subsidence analyses were carried out for panels B14, B12, B10, B02, C12, C10, and C08 of the Çayirhan Lignite Mine using in-situ subsidence measurements. Using measurements from stations installed both parallel and perpendicular to the panel-advance direction, subsidence profiles were plotted as functions of time and of distance from the panel center. Horizontal displacement and strain curves were also plotted and compared with the subsidence profiles. Various methods are used for subsidence prediction; in this study, a subsidence model was developed from an empirical model obtained by nonlinear regression analysis. SPSS (v10.0) was used for the analyses, and the unknown parameters of the subsidence function were determined for the stations above panel B14. Since it was too complicated to take all influencing factors into consideration, only the parameters that could be estimated by statistical evaluation were taken into account. One significant contribution of this study to the subject of subsidence is the comparison of the subsidence values measured during this investigation with values predicted by other empirical methods. The structural damage to pylons installed on the ground surface above retreating longwall panels was also investigated with reference to previous studies, and the slope and horizontal strain changes caused by ground movements due to underground mining were determined. Finally, another significant contribution of this study is the substantial database of field measurements that was collected.
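The nonlinear-regression step can be sketched as fitting an empirical subsidence profile by least squares. The thesis used SPSS; the tanh-type profile function, parameters, and synthetic "measurements" below are assumptions for illustration only.

```python
# Hedged sketch: fit an assumed empirical subsidence profile s(x) to
# synthetic measurements with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def profile(x, s_max, b):
    """Hypothetical profile: maximum subsidence s_max at the panel centre,
    decaying with distance x via a tanh transition of width b."""
    return 0.5 * s_max * (1.0 - np.tanh(x / b))

x = np.linspace(0.0, 300.0, 31)        # distance from panel centre (m)
rng = np.random.default_rng(1)
s_obs = profile(x, 1.8, 80.0) + rng.normal(0.0, 0.02, x.size)  # synthetic data

(s_max_hat, b_hat), _ = curve_fit(profile, x, s_obs, p0=(1.0, 50.0))
```

The fitted parameters would then be compared, station by station, against the in-situ measurements, as done in the study's comparison with other empirical methods.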
224

Optimum Polarization States & their Role in UWB Radar Identification of Targets

Faisal Aldhubaib Unknown Date (has links)
Although the use of polarimetry techniques for recognition of military and civilian targets is well established in the narrowband context, it is not yet fully established in a broadband sense, in contrast to planetary research. The main theme of this thesis is combining polarimetry with certain areas of broadband technology to form a robust signature and feature set; basing the feature set on multiple types of signatures can increase the accuracy of the recognition process. The thesis examines radar target recognition based on a polarization signature in a broadband context. A proper UWB radar signal can excite the target's dominant resonances and consequently reveal information about its principal dimensions, while diversity in the polarization domain reveals information about its shape. The target dimensions are used to classify the target, and information about its shape is then used to identify it. It was verified that the polarization information at the dominant resonant frequencies, inferred from the target's characteristic polarization states, has both a physical interpretation and attributes (as seen in section 3.4.3) related to the target's symmetry, linearity, and orientation. In addition, this information can detect the presence of major scattering mechanisms, such as the strong specular reflection from a cylinder's flat ends. Throughout the thesis, simulated canonical targets with similar resonant frequencies were used, so that identification was based solely on polarization information.
In this framework, the resonant frequencies were identified either as peaks in the frequency response, for simple or lightly damped targets such as thin metal wires, or as the imaginary parts of the complex poles, for complex or highly damped targets with significant diameter and dielectric properties. The main contribution of this thesis is therefore the integration of the optimum polarization states in a broadband context for improved target recognition performance. The spectral dispersion arising from the broad nature of the radar signal, the limited accuracy of extracting the target resonances, the robustness of the polarization feature set, the representation of these states in the time domain, and the modelling of the feature set under spatial variation are among the important issues addressed, with several approaches presented to overcome them. The general approach considers a subset of "representative" times in the time domain or, correspondingly, "representative" frequencies in the frequency domain, and associates optimum polarization states with each member of the subset. The first approach, in chapter 3, represents polarization by a set of frequency bands associated with the target resonant frequencies. This description involves formulating a wideband scattering matrix to accommodate the broad nature of the signal, with an appropriate bandwidth selected for each resonance; good estimation of the optimum polarization states was achievable even at low signal-to-noise ratios. The second approach, in chapter 4, extends the work of chapter 3 by modifying the optimum polarization states by their associated powers, and includes an identification algorithm based on the nearest-neighbour technique.
To identify the target, the identification algorithm uses the states at a set of resonant frequencies to take a majority vote. A comparison of the modified polarization states with the original states demonstrated good improvement when the modified set is used. Generally, the resonance set is estimated more reliably in the time domain than in the frequency domain, especially for resonances well localized in time. Therefore, the third approach, in chapter 5, deals with the optimum states in the time domain, where the extension to a wideband context is possible by virtue of the polarization information embodied in the energies of the resonances. This procedure uses a model-based signature that models the target impulse response as a set of resonances. The relevant resonance parameters, here the resonant frequency and its associated energy, were extracted using the Matrix Pencil of Function algorithm. This sparse representation is necessary to find descriptors of the target impulse response that are time-invariant and, at the same time, relate robustly to the target's physical characteristics. A simple target such as a long wire showed that polarization information contained in the target resonance energies can indeed reflect the target's physical attributes. For noise-corrupted signals without pulse averaging, the accuracy of the estimated optimum states was sufficiently good for signal-to-noise ratios above 20 dB; below this level, extraction of some members of the resonance set is not possible. Using more complex wire models of aircraft, these time-based optimum states could distinguish between targets of similar dimensions with small structural differences, e.g. different wing dihedral angles. The results also showed that the dominant resonance set has members belonging to different structural sections of the target.
Incorporating a time-based polarization set can therefore capture the target's full physical characteristics. In the final procedure, a statistical kernel function estimates the variation of the feature set derived in chapter 3 with aspect angle. After sampling the feature set over a wide range of angular aspects, a criterion based on the Bayesian error bisects the target's global aspect into smaller sectors to decrease the variance of the estimate and, subsequently, the probability of error. In this way, discriminative features with an acceptable minimum probability of error were achievable. The minimum-probability-of-error criterion and the angular bisection of the target could separate the feature sets of two targets with similar resonances.
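For the simple-target case, where resonances appear as peaks in the frequency response, a minimal sketch is possible. The two-resonance synthetic response below, including the resonant frequencies and damping widths, is an invented illustration, not one of the thesis's simulated targets.

```python
# Hedged sketch: locate dominant resonances as peaks in a synthetic
# frequency-response magnitude (invented two-resonance target).
import numpy as np
from scipy.signal import find_peaks

def lorentzian(f, f0, width):
    """Magnitude of a single damped resonance centred near f0."""
    return 1.0 / np.sqrt((f**2 - f0**2) ** 2 + (width * f) ** 2)

f = np.linspace(0.05, 1.2, 2000)                     # frequency axis (GHz)
H = lorentzian(f, 0.3, 0.05) + 0.6 * lorentzian(f, 0.7, 0.08)

peaks, _ = find_peaks(H, prominence=1.0)
resonances = f[peaks]                                # estimated resonances (GHz)
```

For highly damped targets, as the text notes, peak picking fails and the complex poles must be extracted instead, e.g. with a pencil-type method.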
225

Análise dos sistemas de produção dos estabelecimentos rurais do município de Palmital/SP em busca de estratégias para o desenvolvimento rural / Analysis of production systems of rural establishments in the city of Palmital / SP in search of strategies for rural development

Bianchi, Vinícius Rafael [UNESP] 25 September 2015 (has links)
This study's main objective was to analyze the agrarian systems of the municipality of Palmital/SP, located in the center-west region of the state of São Paulo, in the Médio Paranapanema region, through historical contextualization, landscape analysis, and a review of existing studies on the municipality. A typology of Palmital's rural producers and of the production systems they practice was then established. The approach is systemic, based on the premises of the agrarian systems diagnosis tool. Once the typologies were defined, the distinct forms of income generation of those production systems were characterized. Given the importance of income for the maintenance and reproduction of agricultural production systems, multivariate statistical analysis (a multiple regression model) was used to identify the factors that positively or negatively affect the agricultural income of rural producers. Data were collected through a survey form applied to a stratified sample of rural producers, proportional to the condition of the rural establishment, covering the period from August 2013 to July 2014. The results reveal the transformations of the agrarian systems of the region and the presence of distinct types of rural producers, corporate and family, with diversified production systems, and identify the factors that positively or negatively influenced agricultural income. On this basis, rural development proposals for the municipality of Palmital/SP are put forward.
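The multiple-regression step above can be sketched with ordinary least squares. The explanatory variables (farm area and a diversification count), their effects, and the income data below are invented for illustration; they are not the study's variables or results.

```python
# Hedged sketch: multiple linear regression of farm income on invented
# candidate factors, fitted by ordinary least squares.
import numpy as np

rng = np.random.default_rng(2)
n = 120
area_ha = rng.uniform(5, 100, n)            # invented: farm area (hectares)
diversification = rng.integers(1, 6, n)     # invented: number of systems
income = 2.0 * area_ha + 5.0 * diversification + rng.normal(0, 10, n)

# Design matrix with an intercept column; beta[1:] are marginal effects.
X = np.column_stack([np.ones(n), area_ha, diversification])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
```

The sign and magnitude of each coefficient indicate whether a factor raises or lowers income, which is the form of conclusion the study draws from its regression model.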
226

Dégradation par électromigration dans les interconnexions en cuivre : étude des facteurs d'amélioration des durées de vie et analyse des défaillances précoces / Electromigration degradation in copper interconnects : study of lifetimes improvement factors and early failures analysis

Bana, Franck Lionel 22 November 2013 (has links)
Integrated circuits are part of every industrial sector and of everyday life, and their dimensions are continuously scaled down to increase performance. This downscaling leads in particular to denser and more complex interconnect architectures. Interconnects, the metal lines that carry the electrical signal in a circuit, are thus increasingly sensitive to electromigration failure because of the ever higher current densities they carry. In advanced technology nodes, it is therefore more and more difficult to guarantee the reliability level required for interconnects. The reduction of interconnect lifetime is linked both to the growing difficulty of performing all process steps at these very small feature sizes and to the increasing dispersion of failure times. In the first part of this work, we deepened the understanding of the mechanisms at play during electromigration degradation, highlighting the fundamental role played by the microstructure and chemical composition of the copper line in increasing its lifetime. The second part focused on improving the statistical analysis of failure times, with an emphasis on early failures and the bimodal failure-time distributions they generate. In addition, the multi-link test structure we designed addresses the fundamental question of increasing the test sample size, improving precision at low failure rates for robust extrapolation of lifetimes to use conditions.
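The early-failure point can be illustrated with synthetic lognormal lifetimes: a small early-failure population dominates the low percentiles that matter for reliability projection, which is why bimodal analysis and larger test samples are needed. All distribution parameters below are invented.

```python
# Hedged sketch: bimodal failure-time population = 2% early failures
# mixed with the main lognormal population (all parameters invented).
import numpy as np

rng = np.random.default_rng(3)
n = 5000
is_early = rng.random(n) < 0.02                      # 2% early-failure mode
t_main = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=n)
t_early = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=n)
ttf = np.where(is_early, t_early, t_main)            # time to failure (a.u.)

# The low-percentile lifetime is set by the early mode, not the main one.
t_1pct = np.quantile(ttf, 0.01)
t_main_1pct = np.quantile(t_main, 0.01)
```

Fitting only a single lognormal to such data would badly overestimate the lifetime at the low failure rates used for projections to use conditions.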
227

Estudo da reprodutibilidade do exame de microscopia especular de córnea em amostras com diferentes números de células / Reproducibility study of the corneal specular microscope in samples with different number of cells

Ricardo Holzchuh 19 August 2011 (has links)
INTRODUCTION: The corneal endothelium plays an essential role in the physiology of the cornea. Morphological data generated by the specular microscope, such as endothelial cell density (CD), average cell area (ACA), coefficient of variation (CV), and percentage of hexagonal cells (HEX), are important for assessing corneal status. For a standardized and reproducible analysis of these data, a sampling statistics software package, Cells Analyzer PAT. REQ. (CA), was used. PURPOSE: To determine normal reference values of CD, ACA, CV, and HEX; to analyze the percentage of endothelial cells marked and excluded when the examiner counts 40, 100, or 150 cells in a single image of the endothelial mosaic, and when the number of cells is determined by the statistical software; and to determine the confidence intervals (CI) of these morphological data and the sampling error of each study group.
METHODS: Cross-sectional study of 122 endothelial specular microscopy images (Konan NONCON ROBO® SP-8000 non-contact specular microscope) from 61 patients with cataract (63.97 ± 8.15 years old), analyzed statistically with CA. Each image underwent standard cell counting: 40 cells in Group 40, 100 cells in Group 100, and 150 cells in Group 150. In Group CA, the number of counted cells was determined by the statistical software so as to achieve the most reliable clinical information, with the planned relative error set at 0.05. The relative errors of the morphological data generated by the specular microscope were then analyzed with the CA software. RESULTS: The average normal reference value of CD was 2395.37 ± 294.34 cells/mm²; ACA, 423.64 ± 51.09 µm²; CV, 0.40 ± 0.04; and HEX, 54.77 ± 4.19%. The percentage of cells excluded from analysis was 51.20% in Group 40, 35.07% in Group 100, and 29.83% in Group 150. The average number of cells initially calculated by the statistical software was 247.48 ± 51.61, and the average number effectively included in the final sampling process was 425.25 ± 102.24. The average relative error was 0.157 ± 0.031 for Group 40, 0.093 ± 0.024 for Group 100, 0.075 ± 0.010 for Group 150, and 0.037 ± 0.005 for Group CA. Increasing the number of marked cells decreased the width of the confidence interval (right and left eyes, respectively) by 75.79% and 77.39% for CD, 75.95% and 77.37% for ACA, 72.72% and 76.92% for CV, and 75.93% and 76.71% for HEX. CONCLUSION: The average normal reference values were: CD, 2395.37 ± 294.34 cells/mm²; ACA, 423.64 ± 51.09 µm²; CV, 0.40 ± 0.04; and HEX, 54.77 ± 4.19%. The percentage of excluded cells was 51.20% in Group 40, 35.07% in Group 100, and 29.83% in Group 150.
The CA software considered the examinations reliable when 425.25 ± 102.24 cells were marked by the examiner across two to five specular images (calculated relative error of 0.037 ± 0.005). Increasing the number of marked cells decreased the width of the confidence interval for all morphological data generated by the specular microscope.
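The sampling logic behind these numbers is standard: for a coefficient of variation of 0.40 (the CV reported above), the cell count needed for a 5% relative error at 95% confidence comes out near the ~247 cells the CA software initially calculated. The formulas below are textbook sampling statistics, not the CA software itself.

```python
# Hedged sketch: relative sampling error of the mean vs. number of cells,
# using standard normal-approximation sampling formulas.
import math

cv = 0.40        # coefficient of variation of cell area (from the abstract)
z = 1.96         # 95% confidence

def relative_error(n_cells):
    """Approximate relative error of the mean for n_cells counted cells."""
    return z * cv / math.sqrt(n_cells)

# Cells needed to bring the relative error down to the planned 0.05.
n_needed = math.ceil((z * cv / 0.05) ** 2)
```

With 40 cells the relative error is roughly 0.12, and it shrinks with the square root of the count, which is why the confidence intervals narrowed as more cells were marked.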
228

VARIABILIDADE DA QUALIDADE DA ÁGUA E DO ESTADO TRÓFICO DO RESERVATÓRIO DO VACACAÍ MIRIM / VARIABILITY OF WATER QUALITY AND TROPHIC STATE OF THE VACACAÍ MIRIM RESERVOIR

Burin, Rodrigo 15 July 2011 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The construction of reservoirs to meet multiple uses has become essential to human societies. The Vacacaí Mirim reservoir is used primarily for human supply, accounting for 40% of the water supply of Santa Maria. Studying this reservoir provides a fundamental basis for managing its water quality and its watershed. The objective of this research is to evaluate the variability of the water quality and trophic state of the Vacacaí Mirim reservoir. Eight field campaigns were performed between 2010 and 2011, with water samples taken at four points distributed along the reservoir and one point located upstream on the main tributary river. At the reservoir points, samples were collected at the surface (S) and at depth (P). The main water quality parameters evaluated were: water temperature, pH, electrical conductivity, turbidity, DO, BOD5, COD, TSS, TDS, E. coli, chlorophyll-a, ammonium, nitrite, nitrate, and total phosphorus. The trophic state was assessed from the levels of phosphorus and chlorophyll-a and from biological indicators. The reservoir showed a long period of thermal stability, roughly consistent with the seasonal cycle, with full circulation only in winter, which classifies it as warm monomictic. The concentrations of most parameters varied more in time than in space. The parameter of greatest concern is the count of E. coli, which 16% of the time exceeds the limits for class 3 of CONAMA Resolution No. 357/05, making the water unsuitable for this purpose. Correlation analysis also showed strong relationships between some hydrological variables and some water quality parameters.
The seasonal variation of the values of the TSI and biological indicators, suggest the Vacacaí Mirim s reservoir as a mesotrophic ecosystem, sometimes tending to eutrophic, so showing an anthropogenic influence. It is suggested to the government, organizations and population, the proper attention to the degradation processes, being necessary the watershed land use control and ordination, the riparian vegetation reestablishment, as well as an adjustment on the effluents emission through appropriate treatment. / A construção de reservatórios visando atender usos múltiplos tornou-se essencial para as sociedades humanas. O reservatório do Vacacaí Mirim é utilizado primordialmente para abastecimento humano, sendo responsável por 40% do fornecimento de água para Santa Maria. O seu estudo proporciona uma base fundamental para o gerenciamento da qualidade de suas águas e de sua bacia hidrográfica. Assim, o objetivo desta pesquisa é avaliar a variabilidade da qualidade da água e do estado trófico do reservatório do Vacacaí Mirim. Foram realizadas oito campanhas de campo entre 2010 e 2011, sendo as coletas de água amostradas em quatro pontos distribuídos ao longo do reservatório e um ponto localizado à montante, no principal rio afluente. Nos pontos do reservatório foram coletadas amostras de superfície (S) e profundidade (P). Em laboratório, as características de qualidade da água foram avaliadas periodicamente, através dos parâmetros: temperatura da água, pH, condutividade elétrica, turbidez, OD, DBO5, DQO, SST, SDT, E. coli, Clorofila-a, Amônia, Nitrito, Nitrato, Fósforo Total e outros Íons. O estado trófico foi avaliado levando em consideração os teores de Fósforo, Clorofila-a e alguns indicadores biológicos. O reservatório do Vacacaí Mirim apresentou um longo período de estabilidade térmica, em maior ou menor grau coerentemente com o período sazonal, apresentando circulação completa somente no inverno, conferindo sua classificação como monomítico quente. 
As concentrações da maioria dos parâmetros analisados apresentaram maior variação a nível temporal que espacial. O parâmetro que inspira maior preocupação é o número de E. coli, que em 16% do tempo ultrapassa os limites estabelecidos para a classe 3 da Res. CONAMA N°357/05 tornando a água imprópria para este fim. As análises de correlação também mostraram forte relação entre algumas variáveis hidrológicas ambientais com alguns parâmetros de qualidade. A variação sazonal dos valores do IET e indicadores biológicos, sugerem o reservatório do Vacacaí Mirim como um ecossistema mesotrófico, tendendo algumas vezes à eutrofia, já evidenciando influência antropogênica. Sugere-se ao poder público, às organizações e à população, a devida atenção aos processos de degradação, sendo necessário o controle e a ordenação do uso do solo na bacia, o reestabelecimento da vegetação ripariana, bem como uma adequação nos lançamentos de efluentes através de um tratamento adequado.
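The trophic classification above rests on a trophic state index (TSI) computed from chlorophyll-a and total phosphorus. The abstract does not give the exact formulation used in the thesis (Brazilian reservoir studies often use Lamparelli's adaptation); as an illustration only, here is a minimal sketch of Carlson's (1977) original index:

```python
import math

def tsi_chl(chl_ug_l: float) -> float:
    """Carlson (1977) trophic state index from chlorophyll-a (ug/L)."""
    return 9.81 * math.log(chl_ug_l) + 30.6

def tsi_tp(tp_ug_l: float) -> float:
    """Carlson (1977) trophic state index from total phosphorus (ug/L)."""
    return 14.42 * math.log(tp_ug_l) + 4.15

def trophic_class(tsi: float) -> str:
    """Commonly used interpretation bands for Carlson's index."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    return "eutrophic"
```

For example, a chlorophyll-a concentration of 5 ug/L gives a TSI of about 46, which these bands classify as mesotrophic, consistent with the reservoir's reported status.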
229

Análise e Implementação de Algoritmos para a Aprendizagem por Reforço / Analysis and Implementation of Algorithms for Reinforcement Learning

Medeiros, Thiago Rodrigues 14 February 2014 (has links)
Reinforcement learning is a subfield of machine learning and can be defined as a learning problem: an intelligent system facing this problem learns from rewards whether the actions it performs in the environment are good or bad. Several methods and algorithms for solving reinforcement learning problems are found in the literature; however, each has its own advantages and disadvantages. From this starting point, this work presents a statistical analysis of some of these algorithms and a reinforcement learning library called AILibrary-RL. AILibrary-RL is a library whose objective is to facilitate, organize and promote code reuse when implementing systems that face this kind of problem. Before its development, a bibliographic survey of the main methods that solve the problem was carried out, together with a statistical analysis of those methods, in order to evaluate their advantages and disadvantages in varied environments. This dissertation describes the whole process of the work, from the bibliographic survey and the analysis of the methods to the mechanisms and construction of the library.
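The abstract does not show AILibrary-RL's actual API, but tabular Q-learning is one of the standard algorithms such a library would cover, and it illustrates the reward-driven learning loop described above. A minimal sketch, with a hypothetical corridor environment for demonstration:

```python
import random

def q_learning(step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy corridor environment: states 0..3, action 1 moves right, 0 moves left;
# reaching state 3 yields reward 1 and ends the episode.
def corridor_step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

random.seed(0)
Q = q_learning(corridor_step, n_states=4, n_actions=2)
```

After training, the greedy policy in every state prefers moving right, the action that leads to the reward.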
230

Search for the production of a Higgs boson in association with top quarks and decaying into a b-quark pair and b-jet identification with the ATLAS experiment at LHC / Recherche du boson de Higgs produit en association avec des quarks top dans le canal de désintégration bb et identification des jets de saveur b dans l’expérience Atlas au LHC

Calvet, Thomas 08 November 2017 (has links)
In July 2012, the ATLAS and CMS experiments announced the discovery of a new particle, with a mass of about 125 GeV, compatible with the Standard Model Higgs boson. In order to assess whether the observed particle is the one predicted by the Standard Model, the couplings of this Higgs boson to fermions have to be measured. In particular, the top quark has the strongest Yukawa coupling to the Higgs boson. The associated production of a Higgs boson with a pair of top quarks (ttH) gives direct access to this coupling. The ttH process became accessible for the first time in Run 2 of the LHC thanks to an upgrade of the detector and the increase of the centre-of-mass energy to 13 TeV. This thesis presents the search for ttH events with the Higgs boson decaying to a pair of b-quarks, using data collected by the ATLAS detector in 2015 and 2016. The description of the background and the extraction of the ttH signal from the data are obtained by a statistical fit of the predictions to the data. In particular, the tt+jets background is the main limitation to the signal sensitivity and is scrutinized. The identification of jets originating from b-quarks, called b-tagging, is a vital input to the ttH(H->bb) search because of the four b-quarks in the final state. For Run 2, the definition of b-flavoured jets in Monte Carlo simulations was revisited to improve the understanding of the b-tagging algorithms and their performance. Standard b-tagging algorithms do not separate jets originating from a single b-quark from those originating from two b-quarks, so a specific method has been developed and is reviewed in this thesis.
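The "statistical fit of the predictions to the data" mentioned above is, in the real ATLAS analysis, a profile-likelihood fit with many nuisance parameters. As a toy illustration only (with made-up bin counts), the core idea of extracting a signal strength mu from a binned Poisson likelihood can be sketched as:

```python
import math

def nll(mu, n_obs, s_exp, b_exp):
    """Binned Poisson negative log-likelihood for signal strength mu,
    with constant terms dropped: sum_i [mu*s_i + b_i - n_i*ln(mu*s_i + b_i)]."""
    total = 0.0
    for n, s, b in zip(n_obs, s_exp, b_exp):
        lam = mu * s + b  # expected yield in this bin
        total += lam - n * math.log(lam)
    return total

def fit_mu(n_obs, s_exp, b_exp, lo=-1.0, hi=5.0, steps=600):
    """Coarse grid scan for the mu that minimizes the NLL."""
    best_mu, best_val = lo, float("inf")
    for i in range(steps + 1):
        mu = lo + (hi - lo) * i / steps
        val = nll(mu, n_obs, s_exp, b_exp)
        if val < best_val:
            best_mu, best_val = mu, val
    return best_mu
```

With observed counts equal to background plus exactly one unit of signal in every bin, the scan recovers mu close to 1, i.e. a Standard Model-like signal.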
