11

Parameterized Complexity of Maximum Edge Coloring in Graphs

Goyal, Prachi January 2012 (has links) (PDF)
The classical graph edge coloring problem asks for a coloring of the edges of a given graph with the minimum number of colors such that no two adjacent edges receive the same color. In this work we look at the other end of the spectrum, where the goal is to maximize the number of colors used for coloring the edges of the graph under vertex-specific constraints. We deal with the MAXIMUM EDGE COLORING problem, defined as follows: for an integer q ≥ 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. The question is well motivated by the problem of channel assignment in wireless networks. The problem is NP-hard for q ≥ 2 and has been well studied from the point of view of approximation, but it has not previously been studied in the parameterized context. As a next step, this thesis therefore investigates the parameterized complexity of the problem, where the standard parameter is the solution size. The main focus of the work is the special case q = 2, i.e. MAXIMUM EDGE 2-COLORING, which is theoretically intricate and practically relevant in the wireless networks setting. We first show an exponential kernel for the MAXIMUM EDGE q-COLORING problem for every fixed constant q ≥ 2. We then give a more specific analysis of the kernel for MAXIMUM EDGE 2-COLORING; the kernel obtained is still exponential in size, but is smaller than the one obtained for MAXIMUM EDGE q-COLORING in the case q = 2. We also show a fixed-parameter tractable algorithm for the MAXIMUM EDGE 2-COLORING problem with a running time of O*(k^O(k)), and a fixed-parameter tractable algorithm for the MAXIMUM EDGE q-COLORING problem with a running time of O*(k^O(qk) q^O(k)).
The fixed-parameter tractability of the dual parameterization of the MAXIMUM EDGE 2-COLORING problem is established by arguing a linear vertex kernel for the problem. We also show that MAXIMUM EDGE 2-COLORING remains hard on graphs of constant maximum degree and on graphs without cycles of length four; in both cases we obtain quadratic kernels. A closely related variant is the MAX EDGE {1,2}-COLORING problem. Here the vertices of the input graph may have different values q ∈ {1, 2}, and the goal is to use at least k colors for the edge coloring of the graph such that every vertex sees at most q colors, where q is either one or two. We show that the MAX EDGE {1,2}-COLORING problem is W[1]-hard on graphs that have no cycles of length four.
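The vertex constraint at the heart of these problems is easy to check directly. The following Python sketch (an illustration, not code from the thesis) verifies whether a given edge coloring lets every vertex see at most q colors:

```python
from collections import defaultdict

def is_valid_edge_q_coloring(edges, coloring, q):
    """Check that every vertex sees at most q distinct edge colors.

    edges: list of (u, v) pairs; coloring: dict mapping each edge to a color.
    """
    seen = defaultdict(set)  # vertex -> set of colors incident to it
    for (u, v) in edges:
        c = coloring[(u, v)]
        seen[u].add(c)
        seen[v].add(c)
    return all(len(colors) <= q for colors in seen.values())

# A path a-b-c-d can use 3 colors with q = 2, since each vertex
# is incident to at most two edges.
path = [("a", "b"), ("b", "c"), ("c", "d")]
good = {("a", "b"): 1, ("b", "c"): 2, ("c", "d"): 3}
print(is_valid_edge_q_coloring(path, good, 2))  # True

# A star with three edges at center "x" cannot use 3 colors with q = 2.
star = [("x", "a"), ("x", "b"), ("x", "c")]
bad = {("x", "a"): 1, ("x", "b"): 2, ("x", "c"): 3}
print(is_valid_edge_q_coloring(star, bad, 2))  # False
```

The path example also illustrates why maximizing colors is nontrivial: the answer depends on vertex degrees, not just edge count.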
12

Survey of colostrum quality and management practices on commercial dairy farms in the Eastern Cape Province of South Africa

Schoombee, Wilhelm Sternberg 06 1900 (has links)
Bovine maternal antibodies are not transferred across the placenta during pregnancy, and newborn calves are unable to produce their own antibodies within the first weeks after birth. Because neonates are born agammaglobulinemic, they need to acquire immunoglobulins (Ig) from their dam's colostrum to obtain passive immunity. Colostrum that is fed to dairy calves too late, in inadequate quantity or of unverified quality may result in decreased neonate health. The aim of this study was to survey the management of colostrum on commercial dairy farms, to estimate the Ig titre of the colostrum fed to neonates, and finally to recommend methods and techniques critical to the successful management of colostrum. The methods included a questionnaire conducted as a one-on-one structured interview with 50 randomly selected commercial dairy farmers in the Eastern Cape Coastal Region of South Africa. The Ig titre of colostrum fed to neonates was estimated by on-farm measurement of specific gravity (SG) using a commercially available KRUUSE colostrometer (Fleenor and Stott, 1980). A colostrum sample pooled from each of the four quarters was collected from 90 randomly selected post-partum cows on a leading commercial dairy farm; this allowed comparison of colostrum samples from cows run under similar management practices. The samples were collected for analysis within 6 hrs of calving, over 3 seasons (autumn, winter and spring). Survey - The colostrum mass and the timing of the initial feed are the most important factors in achieving adequate passive immunity. The results of the survey indicated that most farmers in this region feed an inadequate mass of colostrum (volume and Ig concentration), and only 52% of the farmers surveyed feed colostrum within 6 hrs post-partum.
The majority (78%) of surveyed farmers did not follow up their initial colostrum feeding. Colostrum sampling - At the trial site only 10% (9 of 90 colostrum samples measured) were found to be of adequate SG quality. Cow age (parity), season of calving and colostrum temperature influenced the estimated colostrum SG, with season of calving having the greatest influence on SG values. These results were consistent with previous findings that SG values from the cooler months are higher than those of the hotter months. Tables 4.7 (P=0.330), 4.8 (P=0.012) and 4.9 (P=0.005) show that regression analysis confirmed that the LS means across seasons were below the 50 mg/ml Ig required for sufficient passive immunity. Tables 4.1 (P=0.164), 4.2 (P=0.011) and 4.3 (P=0.021) show that season of calving had a much greater effect on CR than did parity, Table 4.5 (P=0.177). Table 4.4 shows that colostrum temperature has a significant effect on the SG value. Recommendations for methods and techniques critical to the successful management of colostrum were made, based on the analysis of the data obtained from the questionnaire and the on-farm colostrum sampling study. Among the most important and critical management practices surveyed is the timing of cow and calf separation: only 30 of the 50 farms surveyed (60%) separate calves and dams at day 0, 19 of 50 farms (38%) separate at days 3-5, and 1 of 50 farms (2%) separates only at day 7 or later. Thus 40% of the surveyed farms allow cows to nurse their calves, which, with regard to early exposure to pathogens, is a high-risk management practice. Furthermore, only 2 of the 50 surveyed farms (4%) measure the quality of the colostrum fed to their calves, while 48 of 50 farms (96%) feed colostrum of unmeasured quality. The mass of colostrum fed to calves is an important parameter for successful transmission of Ig.
The survey found that 28 of 50 farms (56%) feed 2L - 4L of colostrum and 11 of 50 farms (22%) feed 2L of colostrum; thus 78% of farms feed approximately 50% of the amount of colostrum required for successful transmission of Ig. Finally, only 1 of 50 farms (2%) freezes excess colostrum and 1 of 50 farms (2%) pools excess colostrum; both of these farms measure colostrum quality with a colostrometer. / Agriculture Animal Health and Human Ecology / M. Sc. (Agriculture)
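The SG-based screening described above reduces to comparing an estimated Ig concentration against the 50 mg/ml adequacy threshold. The sketch below illustrates that decision; the linear SG-to-Ig mapping used here is a purely illustrative assumption (chosen so that SG 1.050 maps to 50 mg/ml), not the published Fleenor and Stott regression:

```python
def estimate_ig_mg_per_ml(specific_gravity):
    # Assumed, illustrative linear mapping only: SG 1.050 -> 50 mg/ml.
    # The actual colostrometer calibration follows Fleenor and Stott (1980).
    return (specific_gravity - 1.000) * 1000.0

def colostrum_adequate(specific_gravity, threshold_mg_per_ml=50.0):
    # A sample is screened as adequate when its estimated Ig concentration
    # meets the 50 mg/ml threshold cited for sufficient passive immunity.
    return estimate_ig_mg_per_ml(specific_gravity) >= threshold_mg_per_ml

print(colostrum_adequate(1.056))  # True  (estimated ~56 mg/ml)
print(colostrum_adequate(1.032))  # False (estimated ~32 mg/ml)
```

With this kind of on-farm check, the 96% of farms feeding colostrum of unmeasured quality could screen each batch in seconds.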
13

Melhoria na consistência da contagem de pontos de função com base na Árvore de pontos de função / Improvement in the consistency of function point counting based on the Function Points Tree

Freitas Junior, Marcos de 08 December 2015 (has links)
Function Point Analysis (FPA) is one of the measures used to obtain the functional size of software. In Brazil, it has been determined that all public procurement of software development must use FPA. However, one of the main criticisms of FPA concerns the lack of reliability between different counters performing the same count: according to some researchers, the FPA rules are subjective, forcing each counter to make individual interpretations of them. There are several proposals aimed at increasing the reliability of the results generated with FPA. In general, the proposed approaches map components of artifacts developed in the software life cycle onto FPA concepts. However, such proposals simplify more than 50% of the FPA rules, compromising the validity of the results generated by the counts. Since software size is used in the derivation of other measures, inconsistencies in the measured sizes can compromise the derived measures, which negatively influences the decisions taken. Without standardization of the functional sizes obtained, and consequently without reliability of the results, measures derived from functional size, such as cost and effort, may be compromised, so that measurement fails to influence such projects positively. In this context, the objective of this work is to develop and experimentally evaluate an approach that offers greater standardization and systematization in the application of FPA. To this end, we propose incorporating the Function Point Tree artifact into the FPA process. Its inclusion enables the collection of the additional data needed for function point counting, reducing the occurrence of personal interpretations by the counter and, consequently, the variation in reported size. The approach is called Function Point Tree-based Function Point Analysis (FPT-FPA).
This work is based on the Design Science research method, whose goal is to extend human and organizational capabilities by creating new artifacts that solve problems that are unsolved or only partially solved; here, the problem is the lack of reliability in the application of FPA due to its room for different interpretations. FPT-FPA was tested with 11 systems/requirements analysts who, based on the specification of a human resources software application officially measured by the IFPUG at 125 function points, modeled the Function Point Tree either manually or with a developed tool prototype.
The results indicate that the functional sizes calculated with FPT-FPA have a coefficient of variation of 10.72% with respect to reliability and 17.61% with respect to the validity of the generated measurement results. The FPT-FPA approach showed the potential to produce better results. The main cause of the observed variations was the absence of information required for the Function Point Tree; no specific problem was identified with the rules defined for FPT-FPA. Finally, the use of the developed tool prototype increases the efficiency of function point counting by up to 47% compared with manual FPT-FPA.
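For readers unfamiliar with function point counting, the sketch below computes an unadjusted function point total from the standard IFPUG complexity weights, together with the coefficient of variation used above to report consistency between counters. The example counts are hypothetical, not taken from the thesis:

```python
from statistics import mean, stdev

# Standard IFPUG complexity weights (low, average, high) per function type.
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts: {type: (n_low, n_avg, n_high)} -> unadjusted function points."""
    return sum(n * w
               for ftype, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[ftype]))

def coefficient_of_variation(sizes):
    """CV (%) over sizes reported by different counters: lower = more consistent."""
    return stdev(sizes) / mean(sizes) * 100.0

# Hypothetical count: 2 low + 1 average input, 1 low output, 1 low internal file.
example = {"EI": (2, 1, 0), "EO": (1, 0, 0), "ILF": (1, 0, 0)}
print(unadjusted_fp(example))                    # 2*3 + 1*4 + 1*4 + 1*7 = 21
print(coefficient_of_variation([90, 100, 110]))  # 10.0
```

The CV is the statistic behind the 10.72% figure: three counters reporting 90, 100 and 110 function points for the same software would yield a CV of 10%.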
15

Techniques combinatoires pour les algorithmes paramétrés et les noyaux, avec applications aux problèmes de multicoupe / Combinatorial techniques for parameterized algorithms and kernels, with applications to multicut problems

Daligault, Jean 05 July 2011 (has links) (PDF)
In this thesis, we address NP-hard problems using combinatorial techniques, focusing on the field of parameterized complexity. The main problems we consider are Multicut and Directed Maximum Leaf Spanning Tree. Multicut is a natural generalization of the very classical cut problem: it asks to separate a given set of vertex pairs by deleting as few edges as possible from a graph. The Directed Maximum Leaf Spanning Tree problem asks for a spanning out-tree with as many leaves as possible in a directed graph. The main results of this thesis are the following. We show that the Multicut problem parameterized by the solution size is FPT (fixed-parameter tractable), i.e. the existence of a multicut of size k in a graph with n vertices can be decided in time f(k) · poly(n). We show that Multicut in trees admits a polynomial kernel, i.e. it is reducible to instances of size polynomial in k. We give an O*(3.72^k) algorithm for the Directed Maximum Leaf Spanning Tree problem, as well as the first nontrivial exact exponential algorithm (i.e. faster than 2^n). We also provide a quadratic kernel and a constant-factor approximation. These algorithmic results are based on combinatorial results and structural properties involving, among others, tree decompositions, minors, reduction rules and s-t numberings. We also present combinatorial results outside the field of parameterized complexity: a characterization of Helly circle graphs as the circle graphs with no induced diamond, and a partial characterization of 2-well-quasi-ordered graph classes.
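The separation condition that defines a multicut reduces to reachability tests in the graph left after edge deletion. The following sketch (illustrative, not from the thesis) verifies whether a set of removed edges disconnects every terminal pair:

```python
from collections import defaultdict, deque

def separates(edges, removed, terminal_pairs):
    """Check whether removing `removed` disconnects every (s, t) pair.

    edges: list of undirected (u, v) pairs; removed: set of edges to delete.
    """
    adj = defaultdict(list)  # adjacency of the graph minus the removed edges
    for (u, v) in edges:
        if (u, v) not in removed and (v, u) not in removed:
            adj[u].append(v)
            adj[v].append(u)

    def reachable(s, t):
        seen, queue = {s}, deque([s])  # plain BFS
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    return not any(reachable(s, t) for s, t in terminal_pairs)

# Path 1-2-3-4 with terminal pair (1, 4): deleting edge (2, 3) is a
# multicut of size 1; deleting nothing is not a multicut.
edges = [(1, 2), (2, 3), (3, 4)]
print(separates(edges, {(2, 3)}, [(1, 4)]))  # True
print(separates(edges, set(), [(1, 4)]))     # False
```

The FPT result means the search over candidate edge sets can be confined to f(k) · poly(n) time; this checker only captures the (polynomial) verification step.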
