391 |
Utilizing body temperature to evaluate ovulation in mature mares
Bowman, Marissa Coral, 16 August 2006
The equine breeding industry remains somewhat inefficient, even with existing technology. On average, foaling rates are low compared with those of other livestock. One major contributor is the inability to accurately predict ovulation in mares, which ovulate before the end of estrus, leaving much variability in coordinating insemination. A more efficient, less invasive method is needed that could replace or reduce the need for constant teasing and ultrasonography to evaluate follicular activity. In both dairy cattle and women, a change in body temperature has been shown to occur immediately prior to ovulation. Research on horses has been limited, although one study reported no usable relationship between body temperature and ovulation in mares (Ammons, 1989). The current study utilized thirty-eight mature cycling American Quarter Horse mares and was conducted from March to August 2004. Each mare was implanted in the nuchal ligament with a microchip that can be used for identification but is also capable of reporting body temperature. Once an ovulatory follicle (>35 mm) was detected using ultrasonography and the mare was exhibiting signs of estrus, the mare's follicle size and temperature were recorded approximately every six hours until ovulation. In addition to the temperature collected using the microchips, the corresponding rectal temperature was recorded using a digital thermometer. A significant effect (p<0.05) on body temperature was noted in relation to the presence or absence of an ovulatory follicle (>35 mm) under different circumstances. When evaluating the rectal temperatures, no significant difference was found in relation to the presence or absence of a follicle. However, in the temperatures obtained using the microchip, temperature was higher (p<0.05) in the presence of a follicle greater than 35 mm.
This may be due to the extreme sensitivity of the microchip implant and its ability to more closely reflect minute changes in body temperature.
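As an illustration of how such serial microchip readings might be screened for a pre-ovulatory temperature change, here is a minimal sketch assuming a rolling-baseline threshold rule; the window size and threshold are arbitrary illustrative values, not parameters from the study:

```python
# Illustrative sketch (not the study's method): flag a body-temperature
# reading as a possible pre-ovulatory rise when it exceeds the mean of
# the preceding baseline readings by a chosen threshold (degrees C).

def flag_temperature_rise(readings, baseline_n=4, threshold=0.3):
    """Return indices of readings exceeding the rolling baseline mean
    by `threshold`. `baseline_n` and `threshold` are assumed values."""
    flagged = []
    for i in range(baseline_n, len(readings)):
        baseline = readings[i - baseline_n:i]
        mean = sum(baseline) / len(baseline)
        if readings[i] - mean >= threshold:
            flagged.append(i)
    return flagged
```

For example, `flag_temperature_rise([37.5, 37.6, 37.5, 37.6, 38.0])` flags the final reading, while a flat series yields no flags.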
|
392 |
Development, assessment and application of bioinformatics tools for the extraction of pathways from metabolic networks
Faust, Karoline, 12 February 2010
Genes can be associated in numerous ways, e.g. by co-expression in micro-arrays, co-regulation in operons and regulons or co-localization on the genome. Association of genes often indicates that they contribute to a common biological function, such as a pathway. The aim of this thesis is to predict metabolic pathways from associated enzyme-coding genes. The prediction approach developed in this work consists of two steps: First, the reactions are obtained that are carried out by the enzymes coded by the genes. Second, the gaps between these seed reactions are filled with intermediate compounds and reactions. In order to select these intermediates, metabolic data is needed. This work made use of metabolic data collected from the two major metabolic databases, KEGG and MetaCyc. The metabolic data is represented as a network (or graph) consisting of reaction nodes and compound nodes. Intermediate compounds and reactions are then predicted by connecting the seed reactions obtained from the query genes in this metabolic network using a graph algorithm.
In large metabolic networks, there are numerous ways to connect the seed reactions. The main problem of the graph-based prediction approach is to differentiate biochemically valid connections from others. Metabolic networks contain hub compounds, which are involved in a large number of reactions, such as ATP, NADPH, H2O or CO2. When a graph algorithm traverses the metabolic network via these hub compounds, the resulting metabolic pathway is often biochemically invalid.
In the first step of the thesis, an existing approach to predict pathways from two seeds was improved. In the previous approach, the metabolic network was weighted to penalize hub compounds, and an extensive evaluation showed that the weighted network yielded higher prediction accuracies than either a raw or a filtered network (where hub compounds are removed). In the improved approach, hub compounds are avoided using reaction-specific side/main compound annotations from KEGG RPAIR. As an evaluation showed, this approach in combination with weights increases prediction accuracy relative to the weighted, filtered and raw networks.
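The hub-penalizing weighting idea can be sketched as a shortest-path search in which the cost of traversing a node grows with its degree, so that generic hubs like ATP or H2O become expensive detours. The following is an illustration of the principle only, not the thesis implementation; the toy network and compound names are invented for the example:

```python
# Sketch: Dijkstra over a bipartite reaction/compound graph where the
# cost of entering a node equals its degree, so hub compounds are avoided.
import heapq

def lightest_path(graph, start, end):
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path))
        if d > dist.get(node, float("inf")):
            continue
        for nb in graph[node]:
            nd = d + len(graph[nb])          # node weight = its degree
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    return None

# Toy network: seed reactions R1 and R2 share the hub H2O (high degree)
# and the specific intermediate pyruvate (low degree).
net = {
    "R1": ["H2O", "pyruvate"], "R2": ["H2O", "pyruvate"],
    "R3": ["H2O"], "R4": ["H2O"], "R5": ["H2O"],
    "H2O": ["R1", "R2", "R3", "R4", "R5"],
    "pyruvate": ["R1", "R2"],
}
```

Here `lightest_path(net, "R1", "R2")` connects the seeds through pyruvate rather than through the hub H2O.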
In the second step of the thesis, path finding between two seeds was extended to pathway prediction given multiple seeds. Several multiple-seed pathway prediction approaches were evaluated, namely three Steiner tree solving heuristics and a random-walk-based algorithm called kWalks. The evaluation showed that a combination of kWalks with a Steiner tree heuristic applied to a weighted graph yielded the highest prediction accuracy.
Finally, the best-performing algorithm was applied to a microarray data set that measured gene expression in S. cerevisiae cells growing on 21 different compounds as sole nitrogen source. For 20 nitrogen sources, gene groups were obtained that were significantly over-expressed or suppressed with respect to urea as the reference nitrogen source. For each of these 40 gene groups, a metabolic pathway was predicted that represents the part of metabolism up- or down-regulated in the presence of the investigated nitrogen source.
The graph-based prediction of pathways is not restricted to metabolic networks. It may be applied to any biological network and to any data set yielding groups of associated genes, enzymes or compounds. Thus, multiple-end pathway prediction can serve to interpret various high-throughput data sets.
|
393 |
Protein Structure Prediction: Model Building and Quality Assessment
Wallner, Björn, January 2005
Proteins play a crucial role in all biological processes. The wide range of protein functions is made possible through the many different conformations that the protein chain can adopt. The structure of a protein is extremely important for its function, but determining the structure of a protein experimentally is both difficult and time consuming. In fact, with current methods it is not possible to study all the billions of proteins in the world by experiment. Hence, for the vast majority of proteins the only way to get structural information is through a method that predicts the structure of a protein based on its amino acid sequence. This thesis focuses on improving current protein structure prediction methods by combining different prediction approaches with machine-learning techniques. This work has resulted in some of the best automatic servers in the world – Pcons and Pmodeller. As part of the improvement of our automatic servers, I have also developed one of the best methods for predicting the quality of a protein model – ProQ. In addition, I have developed methods to predict the local quality of a protein model based on structure – ProQres – and based on evolutionary information – ProQprof. Finally, I have performed the first large-scale benchmark of publicly available homology modeling programs.
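The consensus idea underlying servers like Pcons can be sketched in toy form: score each candidate model by its average structural similarity to all other candidates, so models near the consensus rank highest. The similarity function below is a deliberately simplified stand-in (real methods use 3D superposition scores such as S-score or LGscore), so treat this as an illustration of the principle only:

```python
# Toy consensus scoring sketch: models that agree with the ensemble
# receive high scores; outliers receive low scores.

def similarity(a, b):
    """Toy per-position similarity in [0, 1] from 1D 'coordinates'."""
    n = len(a)
    return sum(1.0 / (1.0 + (x - y) ** 2) for x, y in zip(a, b)) / n

def consensus_scores(models):
    scores = []
    for i, m in enumerate(models):
        others = [similarity(m, o) for j, o in enumerate(models) if j != i]
        scores.append(sum(others) / len(others))
    return scores
```

With two agreeing models and one outlier, the outlier receives the lowest consensus score.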
|
394 |
Experimental and Numerical Studies of Board-level Electronic Packages Subjected to Drop and Thermal Cycling Tests
Le, Ye-sung, 07 August 2007
Both experimental and numerical analyses were adopted in this thesis. First, BGA packages with three different solder ball compositions and pads were investigated, and their strength under drop tests and thermal cycling tests was evaluated. Numerical simulation was then used for follow-up analysis, and the relationships among stress, strain, and creep strain energy density were found.
Lead-free solder balls with lower silver content showed better resistance in the drop test; conversely, higher silver content gave better performance in the thermal cycling tests. In the drop test, solder ball failures were found mainly in the packages near the four corners of the test board and were concentrated around the diagonal screw holes; for packages in the middle cross-section of the test board, failures were distributed around the periphery of the package. Comparing the 15 package positions in the drop test, the numbers of failed solder balls showed that positions U3, U8 and U13 were clearly fractured, while fracture was relatively slight at positions U1, U5, U6, U7, U9, U10, U11 and U15.
In the fatigue life prediction for the thermal cycling test, the simplified package model in the 45° direction was closest to the experimental data. Excluding solder balls with failure mode A1, the major failure mode in the drop test was mode B3, whereas mode C was the majority in the thermal cycling test. The structure and strength of the SMD play an important role in the above experiments; a better choice of SMD can reduce the rate of failure mode A1 and improve the accuracy of the experiment.
|
395 |
Vegetation distribution predicting in Laonong River basin with Indicator Kriging
Li, Yi-di, 27 August 2007
To overcome the limits of topography and manpower, vegetation prediction is an important method in vegetation mapping. It can be carried out with predictive models that incorporate environmental factors, or with data interpolation that considers only spatial distribution. In this research, indicator kriging was used to predict the spatial distribution of vegetation in the Laonong River basin. The distributions of associations were combined from the species in those associations, which had been selected by cluster analysis and TWINSPAN. Indicator kriging used presence/absence data to calculate the distribution pattern of these species, and each species' predicted raster had its own distinct distribution. The distribution patterns of associations were related directly to species distributions. The stability of the prediction patterns was evaluated by the jackknife method. All standard errors of the prediction were under 0.01, with no significant difference among the four different sampling measures.
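The jackknife evaluation mentioned above can be sketched generically: leave out one sample at a time, recompute the statistic of interest, and derive a standard error from the spread of the replicates. This is a sketch of the standard jackknife estimator, not of the thesis's specific resampling design:

```python
# Jackknife standard error of an arbitrary statistic: drop each sample
# in turn, recompute the statistic, and combine the replicates.
import math

def jackknife_se(values, statistic):
    n = len(values)
    replicates = [statistic(values[:i] + values[i + 1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)
    return math.sqrt(var)
```

For the sample mean, the jackknife standard error reduces to the familiar s/√n, which makes a convenient sanity check.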
|
396 |
Using genotypic and phenotypic methods to determine the HIV co-receptor phenotype in the clinical setting
Low, Andrew John, 05 1900
Objective: The human immunodeficiency virus type 1 (HIV-1) currently infects over 30 million people worldwide. It uses one of two main co-receptors to infect cells. The primary objective of this thesis is to evaluate genotypic and phenotypic assays for co-receptor usage in the clinical setting and investigate approaches for improvement of these assays.
Methods: The concordance of recombinant co-receptor phenotyping assays and the predictive ability of genotype-based methods including the ‘11/25’ rule, position specific scoring matrices (PSSMs), and support vector machines (SVMs) were evaluated in the clinical setting using patient-derived plasma samples. Samples and patient data were evaluated in cross-sectional analyses from a retrospective population-based cohort of HIV-infected individuals enrolled in the HIV/AIDS Drug Treatment Program in British Columbia, Canada.
Results: Current implementations of HIV V3 region-based predictors for HIV co-receptor usage tested on patient-derived samples are inadequate in the clinical setting, primarily because difficult-to-detect minority species yield low sensitivities. Recombinant phenotype assays also show discordances when tested against each other on the same set of patient-derived samples, raising doubts as to whether any of these assays can truly be considered a 'gold standard'. Significant associations between clinical progression, viral sequence-based predictors of co-receptor usage and the output of recombinant assays are observed, suggesting that sensitivity can be improved by incorporating CD4% into genotype-based predictors. This is verified with an SVM model, which showed a 17% increase in sensitivity when CD4% was incorporated into training and testing.
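The simplest genotypic predictor evaluated above, the '11/25' rule, can be sketched directly: a V3 loop carrying a basic residue (arginine or lysine) at position 11 or 25 is classified as CXCR4-using (X4), otherwise as CCR5-using (R5). The example sequence is the widely cited 35-residue subtype B V3 consensus; PSSM and SVM predictors instead score the full sequence:

```python
# Sketch of the genotypic '11/25' rule for HIV-1 co-receptor usage:
# basic residue (R/K) at V3 position 11 or 25 (1-based) predicts X4.

def rule_11_25(v3):
    """v3: 35-residue V3 loop amino acid sequence."""
    return "X4" if v3[10] in "RK" or v3[24] in "RK" else "R5"

consensus_b = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"
```

The consensus sequence is predicted R5; substituting a basic residue at position 11 flips the prediction to X4.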
Conclusion: The work in this thesis has exposed the difficulty of determining the co-receptor phenotype in the clinical setting, primarily due to minority species. Although genotypic methods of screening for HIV co-receptor usage prior to the administration of CCR5 antagonists may reduce costs and turn-around time compared with phenotypic methods, they are currently inadequate for use in the clinical setting due to low sensitivities. Although the addition of clinical parameters such as CD4 count significantly increases the predictive ability of genotypic methods, the presence of low levels of X4 virus continues to reduce the sensitivity of both genotypic and phenotypic methods.
|
397 |
Evaluation of the NRC 1996 winter feed requirements for beef cows in western Canada
Bourne, Jodi Lynn, 28 February 2007
A trial was conducted to evaluate the accuracy of the 1996 NRC beef model in predicting DMI and ADG of pregnant cows under western Canadian conditions. Over two consecutive years, 90 Angus cows (587±147 kg) assigned to 15 pens (n=6 per pen) were fed typical diets ad libitum, formulated to stage of pregnancy. Data collection included pen DMI and ADG (corrected for pregnancy), calving date, calf weight, body condition scores and ultrasound fat measurements, weekly feed samples and daily ambient temperature. DMI and ADG for each pen of cows in each trimester were predicted using the computer program Cowbytes, based on the 1996 NRC beef model. The results indicate that in the 2nd and 3rd trimesters of both years the model under-predicted (P≤0.05) ADG based on observed DMI. Ad libitum intake was over-predicted (P≤0.05) during the 2nd trimester and under-predicted (P≤0.05) during the 3rd trimester of pregnancy. A second evaluation was carried out assuming thermoneutral (TN) conditions. In this case, ADG during the 2nd and 3rd trimesters was over-predicted (P≤0.05) relative to observed values, while under these same TN conditions ad libitum intake was under-predicted (P≤0.05) for both trimesters. These results suggest that the current energy equations for modelling environmental stress over-predict maintenance requirements for wintering beef cows in western Canada. The results also suggest that the cows experienced some degree of cold stress, but not as severe as modelled by the NRC (1996) equations. Further research is required to more accurately model cold stress experienced by mature cattle and their ability to acclimatise to western Canadian winter conditions.
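The general shape of a cold-stress adjustment can be sketched as follows. The coefficients here are illustrative rule-of-thumb assumptions only (roughly 1% extra maintenance energy per °C below an assumed lower critical temperature); they are not the NRC (1996) equations being evaluated in this trial:

```python
# Illustrative sketch (assumed coefficients, NOT the NRC 1996 model):
# below the lower critical temperature (LCT), maintenance energy is
# assumed to rise by a fixed fraction per degree C of cold stress.

def maintenance_energy(nem_base, ambient_c, lct_c=-10.0, pct_per_deg=0.01):
    """nem_base: thermoneutral maintenance requirement (Mcal/day)."""
    cold_stress = max(0.0, lct_c - ambient_c)
    return nem_base * (1.0 + pct_per_deg * cold_stress)
```

Under these assumptions a cow needing 10 Mcal/day at thermoneutrality would need about 12 Mcal/day at -30 °C; the trial's finding is that real equations of this kind over-predicted the adjustment.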
|
398 |
Metareasoning about propagators for constraint satisfaction
Thompson, Craig Daniel Stewart, 11 July 2011
Given the breadth of constraint satisfaction problems (CSPs) and the wide variety of CSP solvers, it is often very difficult to determine a priori which solving method is best suited to a problem. This work explores the use of machine learning to predict which solving method will be most effective for a given problem. We use four different problem sets to determine the CSP attributes that can be used to decide which solving method should be applied. After choosing an appropriate set of attributes, we determine how well J48 decision trees can predict which solving method to apply. Furthermore, we take a cost-sensitive approach that emphasizes problem instances with a large difference in runtime between algorithms. We also attempt to use information gained on one class of problems to inform decisions about a second class of problems. Finally, we show that the additional costs of deciding which method to apply are outweighed by the time savings compared to applying the same solving method to all problem instances.
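The cost-sensitive idea can be illustrated with a toy single-attribute selector (a decision stump rather than the thesis's J48 trees): instead of minimizing misclassifications, pick the attribute threshold that minimizes the total runtime of the solvers chosen, so instances with a large runtime gap automatically carry more weight:

```python
# Toy cost-sensitive algorithm selection: learn one threshold on one
# problem attribute so that total runtime of the chosen solvers is
# minimized. attribute <= threshold -> solver A, else solver B.

def best_threshold(instances):
    """instances: list of (attribute_value, runtime_A, runtime_B).
    Returns (threshold, total_runtime)."""
    best = (None, float("inf"))
    for t in sorted({a for a, _, _ in instances}):
        cost = sum(ra if a <= t else rb for a, ra, rb in instances)
        if cost < best[1]:
            best = (t, cost)
    return best
```

On instances where solver A wins for small attribute values and solver B for large ones, the learned threshold sits between the two regimes.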
|
399 |
Bankruptcy Prediction of Companies in the Retail-apparel Industry using Data Envelopment Analysis
Kingyens, Angela Tsui-Yin Tran, 17 December 2012
Since 2008, the world has been in recession. As daily news outlets report, this crisis has prompted many small businesses and large corporations to file for bankruptcy, which has grave global social implications. Despite government intervention and incentives to stimulate the economy, which have put nations hundreds of billions of dollars in debt and reduced prime rates to almost zero, efforts to combat the rising unemployment rate and falling discretionary income have been troubled. It is a vicious cycle: consumers are apprehensive of spending due to the instability of their jobs and ensuing personal financial problems; businesses are weary from the lack of revenue and are forced to tighten their operations, which likely translates to layoffs; and so on. Cautious movement of cash flows is rooted in and influenced by the psychology of the players (stakeholders) of the game (society). Understandably, the complexity of this economic fallout is the subject of much attention. And while the markets have recovered much of the lost ground as of late, there is still great opportunity to learn about all the possible factors of this recession, in anticipation of and bracing for one more downturn before we emerge from this crisis. In fact, there is no better time than today for research in bankruptcy prediction because of its relevance, and in an age where documentation is highly encouraged and often mandated by law, the amount and accessibility of data are paramount – an academic's paradise! The main objective of this thesis was to develop a model supported by Data Envelopment Analysis (DEA) to predict the likelihood of failure of US companies in the retail-apparel industry based on information available from annual reports – specifically from financial statements and their corresponding Notes, Management's Discussion and Analysis, and Auditor's Report.
It was hypothesized that the inclusion of variables which reflect managerial decision-making and economic factors would enhance the predictive power of current mathematical models that consider financial data exclusively. With a unique and comprehensive dataset of 85 companies, new metrics based on different aspects of the annual reports were created and then combined with a slacks-based measure of efficiency DEA model and a modified layering classification technique to capture the multidimensional complexity of bankruptcy. This approach proved to be an effective prediction tool, separating companies with a high risk of bankruptcy from those that were healthy, with a reliable accuracy of 80% – an improvement over the widely used Altman bankruptcy model, which has 70%, 58% and 50% accuracy when predicting cases today, from one year back and from two years back, respectively. It also provides a probability of bankruptcy based on a second-order polynomial function in addition to targets for improvement, and was designed to be easily adapted for analysis of other industries. Finally, the contributions of this thesis benefit creditors with better risk assessment, owners with time to improve current operations so as to avoid failure altogether, and investors with information on which healthy companies to invest in and which unhealthy companies to short.
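The intuition behind DEA efficiency can be sketched in the simplest possible setting, a single input and a single output, where a firm's efficiency is its output-to-input ratio relative to the best ratio observed. This is an illustration of the idea only; the thesis uses a multi-dimensional slacks-based DEA model, not this ratio form:

```python
# Single-input, single-output DEA sketch: efficiency is each unit's
# output/input ratio scaled by the best observed ratio, so frontier
# units score 1.0 and the rest score strictly below.

def dea_efficiency(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

For example, with inputs [2, 4, 5] and outputs [4, 4, 10], the first and third units define the frontier and the second scores 0.5.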
|
400 |
Exploring Virtualization Techniques for Branch Outcome Prediction
Sadooghi-Alvandi, Maryam, 20 December 2011
Modern processors use branch prediction to predict branch outcomes, in order to fetch ahead in the instruction stream, increasing concurrency and performance. Larger predictor tables can improve prediction accuracy, but come at the cost of larger area and longer access delay.
This work introduces a new branch predictor design that increases the perceived predictor capacity without increasing its delay, by using a large virtual second-level table allocated in the second-level caches. Virtualization is applied to a state-of-the-art multi-table branch predictor. We evaluate the design using instruction count as a proxy for timing on a set of commercial workloads. For a predictor whose size is determined by access delay constraints rather than area, accuracy can be improved by 8.7%. Alternatively, the design can be used to achieve the same accuracy as a non-virtualized design while using 25% less dedicated storage.
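The predictor tables discussed above can be illustrated with a minimal gshare-style sketch: the branch PC is XOR-ed with a global history register to index a table of 2-bit saturating counters. This shows the table-based mechanism only, not the multi-table or virtualized second-level design itself:

```python
# Minimal gshare-style branch predictor sketch: PC XOR global history
# indexes a table of 2-bit saturating counters (init: weakly not-taken).

class GShare:
    def __init__(self, bits=10):
        self.mask = (1 << bits) - 1
        self.table = [1] * (1 << bits)   # 2-bit counters, weakly not-taken
        self.history = 0

    def predict(self, pc):
        return self.table[(pc ^ self.history) & self.mask] >= 2

    def update(self, pc, taken):
        idx = (pc ^ self.history) & self.mask
        c = self.table[idx]
        self.table[idx] = min(3, c + 1) if taken else max(0, c - 1)
        self.history = ((self.history << 1) | int(taken)) & self.mask
```

After a few taken outcomes of the same branch, the relevant counters saturate and the predictor correctly predicts taken; a larger (or virtualized) table reduces destructive aliasing between branches sharing an index.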
|