1.
Implication de la protéine SG1 dans le maintien des épigénomes chez Arabidopsis thaliana / Involvement of SG1 protein in maintaining the epigenomes in Arabidopsis thaliana
Deremetz, Aurélie, 03 December 2015
La chromatine est le support de l’information génétique et sa structure, ainsi que son activité transcriptionnelle, peuvent être modulées par des modifications épigénétiques. Le maintien des marques répressives telles que la méthylation de l’ADN et des histones, hors du corps des gènes, est nécessaire pour le bon développement de la plante. Chez Arabidopsis thaliana, le mutant sg1 présente des défauts développementaux sévères caractéristiques de mutants affectés dans des mécanismes épigénétiques. Nous avons montré que le phénotype de sg1 est causé par une hyperméthylation CHG et H3K9me2 dans de nombreux gènes. En effet, SG1 contrôle la transcription de l’histone déméthylase IBM1 et les modifications de l’épigénome observées chez sg1 sont dues à une dérégulation de IBM1. Nous avons identifié sept protéines partenaires de SG1, dont certaines se lient aux marques chromatiniennes. Nous avons réalisé un crible suppresseur qui a permis d’identifier FPA, une protéine régulant la polyadénylation de certains transcrits, comme acteur impliqué dans le contrôle des cibles de SG1, dont IBM1. Nos résultats montrent que le complexe SG1 régule la transcription de ses cibles en influençant, par un mécanisme encore inconnu, le choix du site de polyadénylation, en lien avec les marques chromatiniennes présentes aux locus cibles. D’autre part, certaines épimutations induites par la mutation sg1 peuvent être maintenues pendant plusieurs générations. Pour rechercher un lien entre méthylation des gènes et conséquences phénotypiques, nous avons caractérisé des épimutations liées à un défaut de développement de la fleur et identifié un certain nombre de gènes candidats potentiellement responsables du phénotype. Les résultats obtenus au cours de ma thèse ont contribué à préciser le rôle joué par le complexe SG1 et à comprendre le lien entre celui-ci et les marques épigénétiques.
/ Chromatin carries the genetic information, and its structure and transcriptional state can be regulated by epigenetic modifications. Repressive marks such as DNA and histone methylation need to be kept away from gene bodies to enable the proper development of the plant. In Arabidopsis thaliana, sg1 mutants show a range of severe developmental defects similar to those observed in mutants affected in epigenetic pathways. We have shown that the sg1 mutant phenotype is caused by an increase of CHG and H3K9me2 methylation in many gene bodies. Indeed, SG1 regulates the transcription of the histone demethylase IBM1, and the impairment observed in sg1 mutant epigenomes is caused by IBM1 misregulation. We found seven proteins interacting with SG1, among which some partners are able to bind chromatin marks. Through a suppressor screen we identified FPA, already known to regulate the polyadenylation of some transcripts, as a player involved in the regulation of SG1 targets, including IBM1. Our results show that the SG1 complex regulates the transcription of its target genes by affecting polyadenylation site choice, in a way that remains to be determined, in a chromatin-mark-dependent manner. We also found that some of the sg1-induced epimutations can be maintained through several generations. To investigate the link between gene body methylation and phenotypic consequences, we characterized epimutations related to a defect in floral development and identified some candidate genes potentially responsible for the floral phenotype. Thus, our results contributed to clarifying the role of SG1 and to understanding its connection with epigenetic marks.
2.
Fabrication and Characterization of InP-Based Quantum Well Infrared Photodetectors
Torunoglu, Gamze, 01 July 2012
Quantum Well Infrared Photodetectors (QWIPs) have the advantages of excellent uniformity and mature material properties. Thanks to these properties, large format and low cost QWIP focal plane arrays (FPAs) can be fabricated. The standard material system used for QWIP FPAs is AlGaAs/GaAs in the long wavelength infrared (LWIR) band. AlGaAs/GaAs material system has some disadvantages such as low quantum and conversion efficiencies under high frame rate and/or low background conditions. These limitations of the standard material system give rise to research on alternative material systems for QWIPs. InP/InGaAs material system is an alternative to AlGaAs/GaAs for LWIR QWIPs. This thesis focuses on the development of InP/InGaAs QWIP FPAs. A large format (640x512) LWIR QWIP FPA constructed with strained InP/InGaAs system is demonstrated with high quantum and conversion efficiencies. The FPA fabricated with the 40-well epilayer structure yielded a peak quantum efficiency as high as 20% with a broad spectral response (15%). The responsivity peak and the cut-off wavelengths of the FPA are 8.5 and ~9 um, respectively. The peak responsivity of the FPA pixels is larger than 1 A/W with a conversion efficiency as high as ~17 % in the bias region where the detectivity is reasonably high. The FPA provides a background limited performance (BLIP) temperature higher than 65 K (f/1.5) and satisfies the requirements of most low integration time/low background applications. Noise equivalent temperature difference (NETD) of the FPA is as low as 25 mK with integration times as short as 2 ms (f/1.5, 68 K).
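The quoted responsivity and conversion-efficiency figures can be cross-checked with the textbook photodetector relation R = (λq/hc)·η_conv. A minimal sketch, using only the standard relation and the numbers quoted in the abstract (not taken from the thesis itself):

```python
# Cross-check: conversion efficiency (quantum efficiency x photoconductive gain)
# implied by a peak responsivity of 1 A/W at 8.5 um, via R = (lambda*q/(h*c)) * eta.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def conversion_efficiency(responsivity_a_per_w, wavelength_m):
    return responsivity_a_per_w * H * C / (wavelength_m * Q)

eta = conversion_efficiency(1.0, 8.5e-6)
print(f"{eta:.1%}")  # ~14.6%, in line with the ~17% quoted at responsivities above 1 A/W
```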
3.
Determination of intrinsic material flammability properties from material tests assisted by numerical modelling
Steinhaus, Thomas, January 2010
Computational Fluid Dynamics (CFD) codes are being increasingly used in the field of fire safety engineering. They provide, amongst other things, velocity, species and heat flux distributions throughout the computational domain. The various sub-models associated with these have been developed sufficiently to reduce the errors below 10%-15%, and work continues on reducing these errors yet further. However, the uncertainties introduced by using material properties as an input for these models are considerably larger than those from the other sub-models, yet little work is being done to improve these. Most of the data for these material properties comes from traditional (standard) tests. It is known that these properties are not intrinsic, but are test-specific. Thus, it can be expected that the errors incurred when using these in computations can be significant. Research has been held back by a lack of understanding of the basic factors that determine material flammability. The term “flammability” is currently used to encompass a number of definitions and “properties” that are linked to standardised test methodologies. In almost all cases, the quantitative manifestations of “flammability” are a combination of material properties and environmental conditions associated with the particular test method from which they were derived but are not always representative of parameters linked intrinsically with the tested material. The result is that even the best-defined parameters associated with flammability cannot be successfully introduced into fire models to predict ignition or fire growth. The aim of this work is to develop a new approach to the interpretation of standard flammability tests in order to derive the (intrinsic) material properties; specifically, those properties controlling ignition. 
This approach combines solid-phase and gas-phase modelling together with standard tests using computational fluid dynamics (CFD), the mass fraction of flammable gases and lean flammability limits (LFL). The back boundary condition is also better defined by introducing a heat sink with a high thermal conductivity and a temperature-dependent convective heat transfer coefficient. The intrinsic material properties can then be used to rank materials based on their susceptibility to ignition and, furthermore, can be used as input data for fire models. Experiments in a standard test apparatus (FPA) were performed and the resulting data fitted to a complex pyrolysis model to estimate the (intrinsic) material properties. With these properties, it should be possible to model the heating process, pyrolysis, ignition and related material behaviour for any adequately defined heating scenario. This was achieved, within bounds, during validation of the approach in the Cone Calorimeter and under ramped heating conditions in the Fire Propagation Apparatus (FPA). This work demonstrates that standard flammability and material tests have proven inadequate for the purpose of obtaining the “intrinsic” material properties required for pyrolysis models. A significant step has been made towards the development of a technique to obtain these material properties using test apparatuses, and to predict ignition of the tested materials under any heating scenario. This work has successfully demonstrated the ability to predict the driving force (in-depth temperature distribution) in the ignition process. The results obtained are very promising and serve to demonstrate the feasibility of the methodology. The essential outcomes are the “lessons learnt”, which themselves are of great importance to the understanding and further development of this technique.
One of these lessons is that complex modelling in conjunction with current standard flammability tests cannot provide all the required parameters. The uncertainty of the results is significantly reduced when independently determined parameters are used in the model. The intrinsic values of the material properties depend significantly on the accuracy of the model and the precision of the data.
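For the ignition-controlling properties discussed above, the classical thermally-thick estimate t_ig = (π/4)·kρc·(T_ig − T_0)²/q″² shows how the thermal inertia kρc and the ignition temperature enter an ignition-time prediction. A sketch with illustrative values for a generic polymer (assumed numbers, and far simpler than the complex pyrolysis model used in the thesis):

```python
import math

def ignition_time_thick(k, rho, c, t_ig, t_0, q_flux):
    """Classical thermally-thick ignition-time estimate (s) for a constant
    incident heat flux q'' (W/m^2): t = (pi/4)*k*rho*c*(T_ig - T_0)^2 / q''^2."""
    return (math.pi / 4.0) * k * rho * c * (t_ig - t_0) ** 2 / q_flux ** 2

# Illustrative properties: k = 0.2 W/m-K, rho = 1000 kg/m^3, c = 1500 J/kg-K,
# T_ig = 620 K, ambient 300 K, irradiance 50 kW/m^2.
t = ignition_time_thick(0.2, 1000.0, 1500.0, 620.0, 300.0, 50e3)
print(f"{t:.1f} s")  # ~9.7 s
```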
4.
Improving the understanding of fundamental mechanisms that influence ignition and burning behavior of porous wildland fuel beds
Thomas, Jan Christian, January 2017
The phenomenon of a fire occurring in nature comes with a very high level of complexity. One central obstacle is the range of scales in such fires: to understand wildfires, research has to be conducted across these scales to study the mechanisms that drive wildfire behavior. The hazard related to such fires is ever increasing as the living space of communities continues to expand and encroach on the wildland at the wildland-urban interface. To manage this hazard, a strong understanding of the possible wildfire behavior that may occur is critical. An array of factors impacts wildfire behavior, generally categorized into three groups: (1) fuel (type, moisture content, loading, structure, continuity); (2) environmental (wind, temperature, relative humidity, precipitation); and (3) topography (slope, aspect). The complexity and coupling of factors impacting various scales of wildfire behavior have been the focus of much experimental and numerical work over the past decades. More recently, the need to quantify wildland fuel flammability and use that knowledge in mitigating risks, for example by categorizing vegetation according to its flammability, has been recognized. Fuel flammability is an integral part of understanding wildfire behavior, since it can provide a quantification of the ignition and burning behavior of wildland fuel beds. Determining flammability parameters for vegetative fuels is, however, not a straightforward task, and a rigorous standardized methodology has yet to be established. It is the intent of this work to aid in finding the most suitable methodology to test vegetative fuel flammability. This is achieved by elucidating the fundamental heat and mass transfer mechanisms that drive the ignition and burning behavior of porous wildland fuel beds. The work presented herein is a continuation of vegetative fuel flammability research using bench-scale calorimetry (the FM Global Fire Propagation Apparatus).
This apparatus allows a high level of control of critical parameters. Experimental studies investigate how varying external heat flux (radiative), ventilation conditions (forced airflow rate, oxygen concentration, and temperature), and moisture content affect the ignition and burning behavior of wildland fuel. Two distinct ignition regimes were observed for radiative heating with forced convection cooling: (1) convection/radiation for low heating rates; and (2) radiation only for high heating rates. The threshold for the given convection conditions was near 45 kW.m-2. For forced convection, ignition behavior is dominated by convection cooling rather than by dilution; ignition times were constant when the oxygen flow rate was varied (constant flow magnitude). Analysis of a radiative Biot number including heat losses (convection and radiation) indicated that the pine needles tested behaved as thermally thin for the given heating rates (up to 60 kW.m-2). A simplified one-dimensional, multi-phase heat transfer model for porous media is validated with experimental results (in-depth temperature measurements, critical heat flux and ignition time). The model performance was adequate for two species only, when the convective Froude number is less than 1.0 (only one packing ratio was tested). Increasing air flow rates resulted in a higher heat of combustion due to increased pyrolysis rates. In the given experiments (ventilation-controlled environment), combustion efficiency decreased with increasing O2 flow rates. Flaming combustion of pine needles in such environments resulted in four times greater CO generation rates compared to post-flaming smoldering combustion. A link was made to live fuel flammability, which is important for understanding the occurrence of extreme fire conditions such as crowning, and to test whether live fuel flammability contributes to the occurrence of a typical fire season.
Significant seasonal variations were observed in the ignition and burning behavior of conditioned live pine needles. Variation and peak flammability in terms of ignition time and heat release rate can be associated with the growing season (physical properties and chemical composition of the needles). Seasonal trends were masked when unconditioned needles were tested, as the release of water dominated the effects. For wet fuel, ignition time increases linearly with fuel moisture content (FMC, R2 = 0.93). The peak heat release rate decreased non-linearly with FMC (R2 = 0.77). It was determined that above a threshold of 60% FMC (d.w.), seasonal variation in the heat release rate can be neglected. A novel live fuel flammability assessment to evaluate the seasonality of ignition and burning behavior is proposed. For the given case (NJ Pine Barrens, USA), the flammability assessment indicated that the live fuel is most flammable in August. Such an assessment can provide a framework for a live fuel flammability classification system that is based on rigorous experimentation in well-controlled fire environments.
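The thermally-thin conclusion above rests on a Biot number that includes both convective and (linearized) radiative losses. A sketch of that check with assumed needle properties, not the thesis' measured values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def biot_number(h_conv, emissivity, t_surf, diameter, k_solid):
    """Biot number for a long cylindrical fuel element; the radiative loss is
    linearized as h_rad = 4*eps*sigma*T^3 and the characteristic length is
    V/A = d/4."""
    h_rad = 4.0 * emissivity * SIGMA * t_surf ** 3
    return (h_conv + h_rad) * (diameter / 4.0) / k_solid

# Assumed values: h = 25 W/m^2-K, eps = 0.9, surface at 600 K,
# needle diameter 1 mm, solid conductivity 0.2 W/m-K.
bi = biot_number(25.0, 0.9, 600.0, 1e-3, 0.2)
print(f"Bi = {bi:.3f}")  # << 1, i.e. thermally thin
```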
5.
Development of a numerical and experimental framework to understand and predict the burning dynamics of porous fuel beds
El Houssami, Mohamad, January 2017
Understanding the burning behaviour of litter fuels is essential before developing a complete understanding of wildfire spread. The challenge of predicting the fire behaviour of such fuels arises from their porous nature and from the strong coupling of the physico-chemical complexities of the fuel with the surrounding environment, which controls the burning dynamics. In this work, a method is presented to accurately understand the processes which control the burning behaviour of a wildland fuel layer using numerical simulations coupled with laboratory experiments. Simulations are undertaken with ForestFireFOAM, a modification of FireFOAM that uses a Large Eddy Simulation solver to represent porous fuel by applying a multiphase formulation to the conservation equations (mass, momentum, and energy). This approach allows the fire-induced behaviour of a porous, reactive and radiative medium to be simulated. Conservation equations are solved in an averaged control volume at a scale sufficient to contain both coexisting gas and solid phases, considering strong coupling between the phases. Processes such as drying, pyrolysis, and char combustion are described through temperature-dependent interaction between the solid and gas phases. Different sub-models for heat transfer, pyrolysis, gas combustion, and smouldering have been implemented and tested to allow better representation of these combustion processes. Numerical simulations are compared with experiments undertaken in a controlled environment using the FM Global Fire Propagation Apparatus. Pine needle beds of varying densities and surface-to-volume ratios were subjected to radiative heat fluxes and flows to interrogate the ignition and combustion behaviour. After including modified descriptions of the heat transfer, degradation, and combustion models, it is shown that key flammability parameters of mass loss rates, heat release rates, gas emissions and temperature fields agree well with experimental observations.
Using this approach, we are able to provide the appropriate modifications to represent the burning behaviour of complex wildland fuels in a range of conditions representative of real fires. It is anticipated that this framework will support larger-scale model development and optimisation of fire simulations of wildland fuels.
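In a multiphase formulation of this kind, the solid-phase mass equation carries temperature-dependent source terms for drying, pyrolysis and char combustion. A minimal single-step Arrhenius sketch of such a pyrolysis term, with illustrative rate constants rather than the calibrated sub-models of ForestFireFOAM:

```python
import math

R_GAS = 8.314  # universal gas constant, J mol^-1 K^-1

def step_solid_mass(m, temp, dt, a=1.0e10, e_a=1.3e5):
    """One explicit Euler step of dm/dt = -A*m*exp(-E/(R*T)), the single-step
    Arrhenius pyrolysis law (A in 1/s, E in J/mol)."""
    k = a * math.exp(-e_a / (R_GAS * temp))
    return m - dt * k * m

m = 1.0                       # initial solid mass fraction
for _ in range(1000):         # 1 s at dt = 1 ms, solid held at 700 K
    m = step_solid_mass(m, 700.0, 1e-3)
print(f"remaining mass fraction: {m:.2f}")  # ~0.14
```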
6.
Investigation of Star Formation: Instrumentation and Methodology
January 2012
abstract: A thorough exploration of star formation necessitates observation across the electromagnetic spectrum. In particular, observations in the submillimeter and ultra-violet allow one to observe very early stage star formation and to trace the evolution from molecular cloud collapse to stellar ignition. Submillimeter observations are essential for piercing the heart of heavily obscured stellar nurseries to observe star formation in its infancy. Ultra-violet observations allow one to observe stars just after they emerge from their surrounding environment, allowing higher energy radiation to escape. To make detailed observations of early stage star formation in both spectral regimes requires state-of-the-art detector technology and instrumentation. In this dissertation, I discuss the calibration and feasibility of detectors developed by Lawrence Berkeley National Laboratory and specially processed at the Jet Propulsion Laboratory to increase their quantum efficiency at far-ultraviolet wavelengths. A cursory treatment of the delta-doping process is presented, followed by a thorough discussion of calibration procedures developed at JPL and in the Laboratory for Astronomical and Space Instrumentation at ASU. Subsequent discussion turns to a novel design for a Modular Imager Cell forming one possible basis for construction of future large focal plane arrays. I then discuss the design, fabrication, and calibration of a sounding rocket imaging system developed using the MIC and these specially processed detectors. Finally, I discuss one scientific application of sub-mm observations. I used data from the Heinrich Hertz Sub-millimeter Telescope and the Sub-Millimeter Array (SMA) to observe sub-millimeter transitions and continuum emission towards AFGL 2591. I tested the use of vibrationally excited HCN emission to probe the protostellar accretion disk structure. I measured vibrationally excited HCN line ratios in order to elucidate the appropriate excitation mechanism. 
I find collisional excitation to be dominant, showing the emission originates in extremely dense (n ∼ 10^11 cm^-3), warm (T ∼ 1000 K) gas. Furthermore, from the line profile of the v=(0, 22d, 0) transition, I find evidence for a possible accretion disk. / Dissertation/Thesis / Ph.D. Physics 2012
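The collisional-excitation argument turns on the critical density n_crit = A_ul/q_ul, above which collisions dominate the de-excitation of a transition. A sketch with order-of-magnitude inputs, assumed for illustration rather than taken from the thesis:

```python
def critical_density(a_ul, q_ul):
    """Critical density (cm^-3): Einstein A coefficient (s^-1) over the
    collisional de-excitation rate coefficient (cm^3 s^-1)."""
    return a_ul / q_ul

# A transition with A ~ 3 s^-1 and q ~ 3e-11 cm^3 s^-1:
n_crit = critical_density(3.0, 3.0e-11)
print(f"n_crit ~ {n_crit:.0e} cm^-3")  # ~1e11, the density regime inferred above
```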
7.
A Unification Model and Tool Support for Software Functional Size Measurement Methods
Efe, Pinar, 01 June 2006
Software size estimation/measurement has been the objective of much research in the software engineering community, due to the need for reliable size estimates. Functional Size Measurement (FSM) methods have become widely used in software project management to measure the functional size of software since their first publication in the late 1970s. Although all FSM methods measure the functional size by quantifying the Functional User Requirements (FURs), each method defines its own measurement process and metric. Therefore, a piece of software has several functional sizes when measured by different methods. In order to be able to compare the functional sizes of software products measured by different methods, we need to convert them to one another.
In this thesis study, the similarities and differences between four FSM methods (IFPUG FPA, Mark II FPA, COSMIC FFP and ARCHI DIM FSM) are investigated and the common core concepts are presented. Accordingly, a unification model of the measurement processes of all four methods is proposed. The main objective of this model is to measure the functional size of a software system by applying all four methods simultaneously, using a single source of data. In order to have an infrastructure for validating the unification model through empirical studies, a software tool is designed and implemented based on the unification model. Two empirical studies are conducted using the data of a real project to evaluate both the proposed unification model and the developed tool, and the measurement results are discussed.
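As an illustration of one of the four measurement processes the unification model covers, the IFPUG FPA unadjusted count weighs each identified component by type and complexity using the standard IFPUG table; the component list below is hypothetical:

```python
# Standard IFPUG complexity weights per component type.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4, "high": 6},    # external inputs
    "EO":  {"low": 4, "avg": 5, "high": 7},    # external outputs
    "EQ":  {"low": 3, "avg": 4, "high": 6},    # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "avg": 7, "high": 10},   # external interface files
}

def unadjusted_fp(components):
    """Sum the weights of (type, complexity) pairs -> unadjusted function points."""
    return sum(WEIGHTS[t][c] for t, c in components)

ufp = unadjusted_fp([("EI", "low"), ("EI", "avg"), ("EO", "avg"),
                     ("EQ", "low"), ("ILF", "low")])
print(ufp)  # 3 + 4 + 5 + 3 + 7 = 22
```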
8.
Melhoria na consistência da contagem de pontos de função com base na Árvore de pontos de função / Improvement in the consistency of function point counting based on the Function Points Tree
Freitas Junior, Marcos de, 08 December 2015
Análise de Pontos de Função (APF) é uma das medidas usadas para obter o tamanho funcional de um software. Determinou-se, no Brasil, que toda contratação pública de desenvolvimento de software deve usar APF. Entretanto, uma das principais críticas realizadas a APF diz respeito à falta de confiabilidade entre diferentes contadores em uma mesma contagem já que, segundo alguns pesquisadores, as regras de APF são subjetivas, obrigando que cada contador faça interpretações individuais a partir delas. Existem diversas propostas para que se possa aumentar a confiabilidade dos resultados gerados com APF. Em geral, as abordagens propostas realizam mapeamentos entre componentes de artefatos desenvolvidos no ciclo de vida de software e os conceitos de APF. Porém, tais propostas simplificam em mais de 50% as regras previstas em APF, comprometendo a validade dos resultados gerados pelas contagens. Como o tamanho do software é usado na derivação de outras medidas, inconsistências nos tamanhos medidos podem comprometer as medidas derivadas, o que influencia negativamente as decisões tomadas. Sem padronização dos tamanhos funcionais obtidos e, consequentemente, sem confiabilidade dos resultados obtidos, medidas derivadas a partir do tamanho funcional, como custo e esforço, podem estar comprometidas, fazendo com que a medição não ajude a influenciar positivamente tais projetos. Diante desse contexto, o objetivo deste trabalho é desenvolver e avaliar experimentalmente uma abordagem para oferecer maior padronização e sistematização na aplicação de APF. Para isso, propõe-se incorporar o artefato Árvore de pontos de função ao processo de APF. Sua inclusão possibilitaria o levantamento de dados adicionais, necessários à contagem de pontos de função, reduzindo a ocorrência de interpretações pessoais do contador e, consequentemente, a variação de tamanho reportado. A abordagem foi denominada Análise de Pontos de Função baseada em Árvore de Pontos de Função (APF-APF).
Este trabalho baseia-se no método de pesquisa Design Science, cujo objetivo é estender os limites do ser humano e as capacidades organizacionais, criando novos artefatos que solucionem problemas ainda não resolvidos ou parcialmente resolvidos; neste trabalho, trata-se da falta de confiabilidade na aplicação de APF devido à sua margem para diferentes interpretações. APF-APF foi testada com 11 Analistas de Sistemas / Requisitos que, baseados na especificação de um software de Recursos Humanos medido oficialmente pelo IFPUG com 125 pontos de função, modelaram a Árvore de pontos de função de modo manual ou automatizado, via protótipo de ferramenta desenvolvido. Os resultados obtidos indicam que os tamanhos funcionais calculados com APF-APF possuem coeficientes de variação de 10,72% em relação à confiabilidade e 17,61% em relação à validade dos resultados de medição gerados. Considera-se que a abordagem APF-APF mostrou potencial para que melhores resultados possam ser obtidos. Verifica-se que a principal causa das variações observadas estava relacionada à ausência de informações requeridas para a Árvore de pontos de função, não tendo sido identificado nenhum problema específico em relação às regras definidas para APF-APF. Por fim, verificou-se que o uso do protótipo de ferramenta desenvolvido aumenta em até 47% a eficiência na contagem de pontos de função quando comparado com a APF-APF manual / Function Point Analysis (FPA) is one of the measures used to obtain the functional size of software. In Brazil, it was determined that all public procurement of software development must use FPA. However, one of the main criticisms of FPA concerns the lack of reliability between different counters performing the same count since, according to some researchers, the FPA rules are subjective, requiring each counter to make individual interpretations of them. There are various proposals to increase the reliability of the results generated with FPA. In general, the proposed approaches map components of artifacts developed in the software life cycle onto FPA concepts. However, such proposals simplify the rules laid down in FPA by more than 50%, compromising the validity of the results generated by the counts. As the size of the software is used in the derivation of other measures, inconsistencies in the measured sizes may compromise the derived measures, which negatively influences the decisions taken. Without standardization of the functional sizes obtained, and consequently without reliable results, measures derived from functional size, such as cost and effort, may be compromised, so that measurement fails to positively influence these projects. In this context, the objective of this work is to develop and experimentally evaluate an approach that offers greater standardization and systematization in the application of FPA. For this, it is proposed to incorporate the "Function Point Tree" artifact into the FPA process. Its inclusion would allow the collection of additional data necessary for the function point count, reducing the occurrence of personal interpretations by the counter and, consequently, the variation in reported size. The approach was named Function Point Tree-based Function Point Analysis (FPT-FPA). This work is based on the Design Science research method, whose goal is to extend human and organizational capabilities by creating new artifacts that solve problems still unresolved or only partially resolved; in this work, that problem is the lack of reliability in the application of FPA due to its scope for different interpretations. FPT-FPA was tested with 11 systems/requirements analysts who, based on the specification of a human resources software system officially measured by the IFPUG at 125 function points, modeled the Function Point Tree either manually or via the developed tool prototype. The results indicate that the functional sizes calculated with FPT-FPA have coefficients of variation of 10.72% with respect to reliability and 17.61% with respect to the validity of the generated measurement results. The FPT-FPA approach showed potential for better results to be obtained. The main cause of the observed variations was the absence of information required for the Function Point Tree; no particular problem was identified regarding the rules defined for FPT-FPA. Finally, it was found that the use of the developed tool prototype increases the efficiency of function point counting by up to 47% compared with manual FPT-FPA.
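The 10.72% and 17.61% figures above are coefficients of variation; the measure itself is simply the sample standard deviation over the mean of the sizes reported by different counters. A sketch with hypothetical counter results, not the experiment's data:

```python
import statistics

def coefficient_of_variation(sizes):
    """CV = sample standard deviation / mean of the reported sizes."""
    return statistics.stdev(sizes) / statistics.mean(sizes)

# Hypothetical function point totals reported by five counters for one system:
sizes = [118, 125, 131, 122, 140]
print(f"CV = {coefficient_of_variation(sizes):.1%}")  # 6.7% for this sample
```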
9 |
Melhoria na consistência da contagem de pontos de função com base na Árvore de pontos de função / Improvement in the consistency of function point counting based on the Function Points TreeMarcos de Freitas Junior 08 December 2015 (has links)
Function Point Analysis (FPA) is one of the measures used to obtain the functional size of software. In Brazil, it has been determined that every public procurement of software development must use FPA. However, one of the main criticisms of FPA concerns the lack of reliability between different counters performing the same count: according to some researchers, the FPA rules are subjective, forcing each counter to make individual interpretations of them. Several proposals exist to increase the reliability of the results generated with FPA. In general, the proposed approaches map components of artifacts produced during the software life cycle onto FPA concepts. However, such proposals simplify more than 50% of the rules defined in FPA, compromising the validity of the results generated by the counts. Since software size is used in the derivation of other measures, inconsistencies in the measured sizes can compromise the derived measures, which in turn negatively influences the decisions based on them. Without standardization of the functional sizes obtained, and consequently without reliable results, measures derived from functional size, such as cost and effort, may be compromised, so that measurement fails to influence those projects positively. In this context, the objective of this work is to develop and experimentally evaluate an approach that offers greater standardization and systematization in the application of FPA. To this end, it is proposed to incorporate the Function Point Tree artifact into the FPA process. Its inclusion enables the collection of additional data needed for the function point count, reducing the occurrence of personal interpretations by the counter and, consequently, the variation in reported size. The approach was named Function Point Tree-based Function Point Analysis (FPT-FPA). This work is based on the Design Science research method, whose goal is to extend human and organizational capabilities by creating new artifacts that solve unresolved or only partially resolved problems; in this work, the problem is the lack of reliability in the application of FPA due to its room for different interpretations. FPT-FPA was tested with 11 systems/requirements analysts who, based on the specification of a human resources software officially measured by the IFPUG at 125 function points, modeled the Function Point Tree either manually or with a prototype tool developed for this purpose. The results indicate that the functional sizes calculated with FPT-FPA have a coefficient of variation of 10.72% with respect to reliability and 17.61% with respect to the validity of the generated measurement results. The FPT-FPA approach thus showed potential for obtaining better results. The main cause of the observed variation was the absence of information required for the Function Point Tree; no specific problem was identified with the rules defined for FPT-FPA. Finally, the use of the prototype tool increased the efficiency of function point counting by up to 47% compared with manual FPT-FPA.
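The reliability figures above (coefficients of variation of 10.72% and 17.61%) follow the standard definition, CV = standard deviation / mean, expressed as a percentage. A minimal sketch, using hypothetical analyst counts rather than the thesis's raw data:

```python
def coefficient_of_variation(sizes):
    """CV in percent: dispersion of counts relative to their mean."""
    mean = sum(sizes) / len(sizes)
    variance = sum((s - mean) ** 2 for s in sizes) / len(sizes)
    return (variance ** 0.5) / mean * 100

# Illustrative only: function point counts reported by several analysts
# for the same specification (the IFPUG baseline in the study was 125 FP).
counts = [118, 125, 131, 112, 127]
print(f"CV = {coefficient_of_variation(counts):.2f}%")
```

A lower CV across analysts indicates the approach yields more consistent (reliable) sizes for the same specification.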
|
10 |
Differenzierung von humanen Plattenepithelkarzinomen mittels IR-mikrospektroskopischem Imaging / Differentiation of human squamous cell carcinomas by IR-microspectroscopic imaging
Steller, Wolfram 24 August 2007 (has links) (PDF)
This dissertation addressed the development of a new diagnostic method for in-situ tissue examination. The starting point was the investigation of pathological changes in tissue, which are reflected biochemically in the cells and can therefore be detected with vibrational spectroscopic methods such as IR spectroscopy. The aim of the work was the IR-spectroscopic characterization and classification of benign, precancerous, and malignant tissues using chemometric algorithms based on the multivariate information in the IR spectra. To characterize complex spectral changes and to make the results statistically sound, a large number of spectra is required for each tissue type. Therefore, IR-microspectroscopic imaging with a focal plane array (FPA) detector was used to accumulate spectra. The challenge lies in the data analysis: the large data volume makes the application of multivariate algorithms necessary. Cluster algorithms were applied for spectral differentiation, and SIMCA (Soft Independent Modelling of Class Analogies) for spectral classification. The results were validated by histological examination of the tissue thin sections, which were stained after the spectroscopic measurement. The exact evaluation procedure is presented in this work using human tissue samples. The squamous cell carcinomas and adenocarcinomas examined belong to the epithelial tumors of oral or cervical origin, respectively. The transferability of the spectral models was investigated with tissue samples from several patients, both within one tumor type and between different tumor types. This is a first step toward the use of spectroscopic methods in medical research and diagnostics.
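SIMCA, as used here for spectral classification, fits one principal-component model per tissue class and assigns a new spectrum to the class whose model explains it best. The sketch below is a simplified version of that idea (classical SIMCA additionally applies a statistical test on the residual variance): per-class PCA via SVD, with classification by reconstruction residual. All data and class names are synthetic, not the spectra from this work.

```python
import numpy as np

def fit_class_model(X, n_components=2):
    """Fit a per-class PCA model: mean spectrum plus leading principal axes."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def residual(spectrum, model):
    """Norm of the part of the spectrum not explained by the class model."""
    mean, components = model
    centered = spectrum - mean
    projected = components.T @ (components @ centered)
    return np.linalg.norm(centered - projected)

def classify(spectrum, models):
    """Assign the class whose PCA model leaves the smallest residual."""
    return min(models, key=lambda label: residual(spectrum, models[label]))

# Synthetic "spectra": two classes with different baseline shapes plus noise.
rng = np.random.default_rng(0)
base_a, base_b = np.linspace(0, 1, 10), np.linspace(1, 0, 10)
A = base_a + 0.05 * rng.normal(size=(30, 10))
B = base_b + 0.05 * rng.normal(size=(30, 10))
models = {"benign": fit_class_model(A), "tumour": fit_class_model(B)}
print(classify(base_a + 0.05, models))
```

In the imaging setting, each FPA pixel yields one spectrum, so classifying every pixel this way produces a tissue-class map that can be compared against the stained section.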
|