
Measure of robustness for complex networks

Youssef, Mina Nabil January 1900
Doctor of Philosophy / Department of Electrical and Computer Engineering / Caterina Scoglio / Critical infrastructures are repeatedly attacked by external triggers, causing tremendous amounts of damage. Any such infrastructure can be studied using the theory of complex networks. A complex network is composed of an extremely large number of different elements that exchange commodities, providing significant services. The main functions of complex networks can be damaged by different types of attacks and failures that degrade network performance. These attacks and failures are considered disturbing dynamics, such as the spread of viruses in computer networks, the spread of epidemics in social networks, and cascading failures in power grids. Depending on the network structure and the attack strength, every network suffers damage and performance degradation differently. Hence, quantifying the robustness of complex networks becomes an essential task. In this dissertation, new metrics are introduced to measure the robustness of technological and social networks with respect to the spread of epidemics, and the robustness of power grids with respect to cascading failures. First, we introduce a new metric called the viral conductance (VC_SIS) to assess the robustness of networks with respect to the spread of epidemics modeled by the susceptible/infected/susceptible (SIS) approach. In contrast to the classical robustness metric, the epidemic threshold, the new metric integrates the fraction of infected nodes at steady state over all possible effective infection strengths. Through examples, VC_SIS provides more insight into the robustness of networks than the epidemic threshold. In addition, both the paradoxical robustness of Barabási-Albert preferential attachment networks and the effect of topology on the steady-state infection are studied, to show the importance of quantifying network robustness. Second, a new metric, VC_SIR, is introduced to assess the robustness of networks with respect to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VC_SIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VC_SIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VC_SIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among individuals. We consider the spread of an epidemic as a dynamical system with the total number of infection cases as its state, while the weight reduction in the social network is the control variable used to slow or reduce the spread of the epidemic. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics that find near-optimal solutions by reducing contacts among individuals in a decentralized way. Finally, the cascading failures that can take place in power grids, and that have recently caused several blackouts, are studied. We propose a new metric to assess the robustness of the power grid with respect to cascading failures.
The power grid topology is modeled as a network whose nodes and links represent power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place. The robustness metrics are numerically evaluated on real and synthetic networks to quantify their robustness with respect to these disturbing dynamics. We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation strategies. In summary, our work advances the network science field in assessing the robustness of complex networks with respect to various disturbing dynamics.
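As a concrete illustration of the quantity a viral-conductance-style metric captures, the sketch below numerically integrates the steady-state infected fraction of an SIS epidemic over the range of infection strengths where the epidemic survives. It uses the standard N-intertwined mean-field approximation (NIMFA) of SIS dynamics as a stand-in for the dissertation's own formulation; the function names and the toy ring/star comparison are illustrative assumptions, not the dissertation's code.

```python
import numpy as np

def sis_steady_state(A, tau, tol=1e-10, max_iter=20000):
    """NIMFA fixed point: v_i = tau*(A v)_i / (1 + tau*(A v)_i),
    where tau is the effective infection rate beta/delta."""
    v = np.full(A.shape[0], 0.5)          # start from a half-infected state
    for _ in range(max_iter):
        av = A @ v
        v_new = tau * av / (1.0 + tau * av)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v

def viral_conductance(A, num_points=200):
    """Integrate the steady-state infected fraction y(s) over s = 1/tau.
    The epidemic dies out for s above the spectral radius lambda_1."""
    lam1 = np.max(np.linalg.eigvalsh(A))  # epidemic threshold is tau_c = 1/lambda_1
    s = np.linspace(1e-6, lam1, num_points)
    y = np.array([sis_steady_state(A, 1.0 / si).mean() for si in s])
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(s))   # trapezoidal rule

# Toy comparison: a ring and a star on 10 nodes differ in robustness.
n = 10
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
star = np.zeros((n, n))
star[0, 1:] = star[1:, 0] = 1.0
print("VC(ring):", viral_conductance(ring))
print("VC(star):", viral_conductance(star))
```

Under this convention a larger value indicates a less robust network, since the epidemic sustains a larger infected fraction across a wider range of infection strengths; the epidemic threshold alone would miss this.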

Effects of chlortetracycline and copper supplementation on levels of antimicrobial resistance in the feces of weaned pigs

Agga, Getahun Ejeta January 1900
Doctor of Philosophy / Department of Diagnostic Medicine and Pathobiology / Harvey Morgan Scott / The use of antibiotics in food animals is of major concern as a purported cause of antimicrobial resistance (AMR) in human pathogens; as a result, alternatives to in-feed antibiotics, such as heavy metals, have been proposed. The effect of copper and CTC supplementation in weaned pigs on AMR in the gut microbiota was evaluated. Four treatment groups were randomly allocated to 32 pens with five pigs per pen: control, copper, chlortetracycline (CTC), and copper plus CTC. Fecal samples (n = 576) were collected weekly from three pigs per pen over six weeks, and two Escherichia coli isolates per sample were tested phenotypically for antimicrobial and copper susceptibilities and genotypically for the presence of tetracycline (tet), copper (pcoD), and ceftiofur (blaCMY-2) resistance genes. CTC supplementation significantly increased tetracycline resistance and susceptibility to copper when compared with the control group. Copper supplementation decreased resistance to most of the antibiotics, including cephalosporins, over all treatment periods. However, copper supplementation did not affect minimum inhibitory concentrations of copper or detection of pcoD. While the tetA and blaCMY-2 genes were associated with higher multidrug resistance (MDR), tetB and pcoD were associated with lower MDR. Supplementation with CTC or copper alone was associated with increased tetB prevalence; however, their combination was paradoxically associated with reduced prevalence. These studies indicate that E. coli isolates from the weaned pigs studied exhibit high levels of antibiotic resistance with diverse multi-resistant phenotypic profiles. In a related study, total fecal community DNA (n = 569) was used to detect 14 tet genes and to quantify gene copies of tetA, tetB, pcoD, and blaCMY-2. CTC and copper plus CTC supplementation increased both the prevalence and gene copies of tetA, while decreasing both the prevalence and gene copies of tetB, when compared with the control group. The diversity of tet genes in the gut bacterial community was reduced over time. The roles of copper supplementation in pig production and of pco-mediated copper resistance in E. coli need to be further explored, since the strong negative association of pcoD with both tetA and blaCMY-2 suggests there exist opportunities to select for a more innocuous resistance profile.
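To make the gene-versus-MDR association concrete, here is a small, hypothetical sketch of the kind of comparison the abstract reports: testing whether isolates carrying a resistance gene (tetA in this example) tend to resist more antimicrobial classes. All data are simulated and all numbers illustrative; the study's actual statistical models are not reproduced here.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n_isolates = 200
tetA = rng.random(n_isolates) < 0.4              # hypothetical gene carriage
# MDR count: number of antimicrobial classes each isolate resists,
# simulated so that tetA carriers tend to resist more classes.
mdr = rng.poisson(lam=np.where(tetA, 4.0, 2.0))

stat, p = mannwhitneyu(mdr[tetA], mdr[~tetA], alternative="greater")
print(f"median MDR with tetA: {np.median(mdr[tetA]):.0f}, "
      f"without: {np.median(mdr[~tetA]):.0f}, p = {p:.3g}")
```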

The epidemiology of tetracycline and ceftiofur resistance in commensal Escherichia coli

McGowan, Matthew Thomas January 1900
Master of Science / Department of Biomedical Science / H. Morgan Scott / The modern phenomenon of increasing prevalence of antibiotic resistance in clinically relevant bacteria threatens humanity's ability to use antibiotics to treat infection in both humans and animals. Despite the marked complexity of bacterial evolution, there is tremendous importance in unfolding the process by which antibiotic resistance genes emerge, disperse, and persist in the natural world. This thesis investigates certain aspects of this process in two experimental studies that differ primarily by scale but also by methodology. The first study examined the long-term annual prevalence of ceftiofur and tetracycline resistance in Canadian beef cattle from 2002 to 2011 at both the phenotypic and genotypic levels. Ceftiofur resistance was present at a very low prevalence (<4%) that showed no statistically significant increase over the decade (at the p < 0.05 level). Relative proportions of the tetracycline genes tet(A), tet(B), and tet(C) also did not change significantly over the observation period. Surprisingly, however, almost 20% of isolates recovered from nonselective agar harbored tet(C), given that the current literature generally indicates that tet(C) is significantly less prevalent than tet(A) or tet(B). The use of historical samples, together with parallel selective plating on antibiotic-supplemented agar, provided insight into systematic biases present in common microbiological approaches. Long-term sample freezing significantly diminished the recoverability of E. coli over time. Additionally, selective MacConkey agar containing tetracycline biased the apparent proportions of tetracycline genes, over-representing tet(B) in commensal E. coli compared with nonselective MacConkey agar. The second study attempted to explain the short-term selection effects of antibiotic treatment on the overall ecological fitness of commensal E. coli, using bacterial growth parameters estimated from spectrophotometric growth curves as a simple surrogate for general fitness. Treating cattle with either tetracycline or ceftiofur was found not only to select in favor of tetracycline-resistant bacteria, but also to increase overall fitness within the tetracycline-resistant population. However, growth curves were unable to explain why transiently selected resistant bacteria were eventually replaced by susceptible bacteria once the selection pressure was removed.
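As an illustration of how growth parameters can be estimated from spectrophotometric growth curves, the sketch below fits a logistic model to hypothetical OD600 readings. The logistic form, the parameter names, and the simulated data are all assumptions for demonstration; the thesis's exact curve-fitting procedure is not specified here.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t_mid):
    """Logistic growth: OD(t) = K / (1 + exp(-r * (t - t_mid)))."""
    return K / (1.0 + np.exp(-r * (t - t_mid)))

# Hypothetical hourly OD600 readings for one isolate (not the study's data).
t = np.arange(0.0, 24.0, 1.0)
noise = np.random.default_rng(1).normal(0.0, 0.02, t.size)
od = logistic(t, K=1.2, r=0.6, t_mid=8.0) + noise

# Estimated carrying capacity K, maximum growth rate r, and curve midpoint
# serve as simple surrogates for ecological fitness.
(K, r, t_mid), _ = curve_fit(logistic, t, od, p0=[1.0, 0.5, 10.0])
print(f"K = {K:.2f}, r = {r:.2f}/h, midpoint = {t_mid:.1f} h")
```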

Geospatial analysis of canine leptospirosis risk factors in the central Great Plains region

Raghavan, Ram January 1900
Doctor of Philosophy / Department of Diagnostic Medicine/Pathobiology / K.R. Harkin / T.G. Nagaraja / Associations of land cover/land use, socio-economic and housing, and hydrologic and soil-hydrologic variables were evaluated retrospectively as potential risk factors for canine leptospirosis in Kansas and Nebraska using Geographic Information Systems (GIS). The sample included 94 dogs positive for leptospirosis, based on a positive polymerase chain reaction test for leptospires in urine, isolation of leptospires on urine culture, a single reciprocal serum titer of 12,800 or greater, or a four-fold rise in reciprocal serum titers over a 2- to 4-week period; and 185 dogs negative for leptospirosis, based on a negative polymerase chain reaction test and reciprocal serum titers less than 400. Publicly available geographic datasets representing land cover/land use, socio-economic and housing characteristics, and hydrologic and soil-hydrologic themes were analyzed in GIS along with geocoded addresses of case/control locations. Among the land cover/land use variables evaluated, urban areas (high- and medium-intensity urban areas and urban areas in general) and evergreen forests and forest/woodlands in general were significant risk factors. Among the socio-economic and demographic determinants evaluated, houses lacking complete plumbing facilities, poverty status by age (18-64), and living within 2,500 meters of a university/college or parks/forests were significant risk factors. Among the hydrologic and soil-hydrologic variables, proximity to water features, hydrologic density, and frequently flooded areas were identified as significant risk factors. Pet owners whose dogs live in such areas or under these circumstances should consider vaccination to prevent canine leptospirosis.
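For readers unfamiliar with this type of analysis, here is a hedged sketch of the kind of case-control model such a GIS study implies: logistic regression of leptospirosis status on geospatial covariates, reported as odds ratios. The covariate names, effect sizes, and data are all illustrative assumptions, not the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 279                                    # 94 cases + 185 controls, as reported
urban = rng.random(n)                      # e.g., fraction of urban land cover
near_water = (rng.random(n) < 0.3).astype(float)   # e.g., within buffer of water

# Simulate case/control status with hypothetical true effects.
logit_p = -1.5 + 1.2 * urban + 0.8 * near_water
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(np.column_stack([urban, near_water]))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params))                  # odds ratios: intercept, urban, near_water
```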

Colonoscopy use by Primary Care Physicians and Colorectal Cancer Incidence and Mortality

Jacob, Binu Jose 13 December 2012
We first studied factors associated with the rate of colonoscopy use by primary care physicians (PCPs) in Ontario between 1996 and 2005. Next, we conducted an instrumental variable analysis (IVA) to estimate the effect of colonoscopy on colorectal cancer (CRC) incidence and mortality in average-risk subjects aged 50-74 years. Finally, we explored two study cohorts: one including subjects who experienced the outcomes during the exposure period (the unselected cohort) and one excluding those subjects (the restricted cohort). We estimated the absolute risk reduction associated with colonoscopy in preventing CRC incidence and mortality using traditional regression analysis, propensity score analysis, and IVA. PCPs who were Canadian medical graduates and who had more years of experience were more likely to use colonoscopy. PCPs were also more likely to use colonoscopy if their patient populations were predominantly women, older, and had more illnesses, and if their patients resided in less marginalized neighborhoods (lower unemployment, fewer immigrants, higher income, higher education, and higher English/French fluency). Using the PCP rate of discretionary colonoscopy as an instrumental variable, receipt of colonoscopy was associated with a 0.60% absolute reduction in 7-year CRC incidence and a 0.17% absolute reduction in 5-year risk of death due to CRC. The unselected cohort showed an increase in CRC incidence and mortality associated with colonoscopy, whereas the restricted cohort showed a reduction in both. In the restricted cohort, depending on the statistical model, the absolute risk reduction varied from 0.52% to 0.60% for CRC incidence and from 0.08% to 0.17% for CRC mortality. There were social disparities in the use of colonoscopy by PCPs, and these disparities increased as the overall use of colonoscopy increased over time. Colonoscopy is effective in reducing incidence of and mortality due to CRC. Different methods of subject selection and statistical analysis provided different estimates of colonoscopy effectiveness.
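To illustrate the estimator behind an instrumental variable analysis like this one, the sketch below runs a plain two-stage least squares on simulated data, with the PCP's discretionary colonoscopy rate as the instrument. The data-generating process, effect sizes, and variable names are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
confounder = rng.normal(size=n)            # unobserved health-seeking behavior
pcp_rate = rng.random(n)                   # instrument: PCP colonoscopy rate

# Exposure depends on the instrument and the confounder; outcome depends on
# the exposure (true effect -0.006) and the same confounder.
colonoscopy = (rng.random(n) < 0.10 + 0.50 * pcp_rate + 0.10 * confounder).astype(float)
crc = (rng.random(n) < 0.020 - 0.006 * colonoscopy + 0.005 * confounder).astype(float)

# Stage 1: predict exposure from the instrument.
Z = np.column_stack([np.ones(n), pcp_rate])
fitted = Z @ np.linalg.lstsq(Z, colonoscopy, rcond=None)[0]
# Stage 2: regress the outcome on the predicted exposure; the 2SLS ratio
# removes bias from the unobserved confounder.
X2 = np.column_stack([np.ones(n), fitted])
beta = np.linalg.lstsq(X2, crc, rcond=None)[0]
print(f"estimated absolute risk difference: {beta[1]:.4f}")
```

A naive regression of crc on colonoscopy in this simulation would be biased toward zero or even reversed in sign by the confounder, which mirrors the contrast the abstract reports between the unselected and restricted cohorts.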

Survey of Ehrlichia and Anaplasma species in white-tailed deer and in ticks by real-time RT-PCR/PCR and DNA sequencing analysis

Katragadda, Chakravarthy January 1900
Master of Science / Department of Diagnostic Medicine/Pathobiology / Roman Reddy R. Ganta / Ehrlichia and Anaplasma species are rickettsial organisms that infect a variety of mammalian species. The organisms are transmitted by ticks and are maintained in reservoir hosts. Several pathogens have been identified in recent years as causative agents of emerging infections in people. One of the primary reservoir hosts for these pathogens is the white-tailed deer. In this study, 147 deer blood samples and 37 ticks were evaluated for the prevalence of Ehrlichia/Anaplasma species by a TaqMan-based real-time PCR assay and DNA sequence analysis. One hundred and thirteen (74%) samples tested positive with the Ehrlichia/Anaplasma genera-specific probe. Further analysis of the samples with probes specific for the human ehrlichiosis agents E. chaffeensis and E. ewingii identified 4 (2.7%) and 7 (4.7%) positives, respectively. Test positives from 24 randomly selected samples were further evaluated by sequence analysis targeting a 450 bp segment of the 16S rRNA gene. All 24 samples were confirmed as positive for the Ehrlichia GA isolate #4 (GenBank #U27104.1). DNAs from 37 pools of ticks collected from the white-tailed deer were also evaluated. The TaqMan-based real-time PCR assay with the Anaplasma/Ehrlichia common probe identified 29 (78%) tick pools as positive, whereas the E. chaffeensis- and E. ewingii-specific probes identified three (8%) and one (3%) positive pools, respectively. PCR and sequence analysis of the tick samples identified Gram-negative bacterial species, which included one endosymbiont of a Rickettsia species (one tick pool), one Alcaligenes faecalis strain (three tick pools), five different Pseudomonas species (nine tick pools), and five different uncultured bacterial organisms (seven tick pools). Although the pathogenic potential of the white-tailed deer isolates of Anaplasma and Ehrlichia remains to be established, their high prevalence and the presence of human ehrlichiosis pathogens in white-tailed deer are consistent with earlier findings. The high prevalence of the deer isolates of Anaplasma and Ehrlichia species demonstrates the need for further assessment of the pathogenic potential of these organisms to people and domestic animals.

The Role of Social Networks in the Decision to Test for HIV

Jumbe, Clement Alexander David 10 January 2012
Preventing the spread of the Human Immunodeficiency Virus (HIV), a major global concern, requires that millions of people be tested in order to identify those individuals who need treatment and care. This study's purpose was to examine the role of social networks in an individual's decision to test for HIV. The study sample included 62 participants of African and Caribbean origin in Toronto, Canada. Thirty-three females and 29 males, aged 16 to 49 years, who had previously tested positive or negative for HIV participated in interviews that lasted approximately 60 minutes. Measurement instruments adapted from Silverman, Hecht, McMillin, and Chang (2008) were used to identify and delimit the social networks of the participants. The instrument identified four social network types: immediate family, extended family, friends, and acquaintances. The study examined the role of these network types in the individuals' decisions to get tested for HIV. A mixed-methods approach (Creswell, 2008) was applied, and both qualitative and quantitative data were collected simultaneously. Participants listed their social networks and retrospectively described the role of their network members in influencing their decision to test for HIV. The participants' narratives of the influence of social networks on HIV testing were coded, a thematic analysis of the qualitative descriptions of the network members' influence was performed, and the quantitative and qualitative analysis results were then tallied. The results demonstrated that the influence of social networks was evident in the individuals' decisions to test for HIV. The most influential group was friends, followed in descending order of influence by immediate family, acquaintances, and extended family. These social network ties provided informational, material, and emotional support to individuals deciding to seek HIV testing. For policy makers and health professionals, a more complete understanding of these dynamics will enable institutional decisions and resource allocation that improve and enhance the support available within these social networks, thereby encouraging, promoting, and leading to increased testing for HIV.
