251

Investigating the Application of Opposition-Based Ideas to Ant Algorithms

Malisia, Alice Ralickas January 2007 (has links)
Opposition-based learning (OBL) was recently proposed as a way to extend different machine learning algorithms. The main idea of OBL is to consider opposite estimates, actions or states in an attempt to increase coverage of the solution space and reduce exploration time. OBL has already been applied to reinforcement learning, neural networks and genetic algorithms. This thesis explores the application of OBL to ant algorithms. Ant algorithms are based on the trail-laying and trail-following behaviour of ants and have been successfully applied to many complex optimization problems. However, like any other technique, they can benefit from performance improvements. This work was therefore motivated by the idea of developing more sophisticated pheromone and path-selection behaviour for the algorithm using the concept of opposition. It proposes opposition-based extensions to the construction and update phases of the ant algorithm. The modifications that focus on solution construction include three direct and two indirect methods. The three direct methods work by pairing the ants and synchronizing their path selection. The two indirect approaches modify the decisions of the ants by using opposite-pheromone content. The extension of the update phase led to an approach that performs additional pheromone updates using opposite decisions. Experimental validation was done using two versions of the ant algorithm: the Ant System and the Ant Colony System. The different OBL extensions were applied to the Travelling Salesman Problem (TSP) and to the Grid World Problem (GWP). Results demonstrate that the concept of opposition is not easily applied to the ant algorithm. One pheromone-based method showed statistically significant performance improvements for the TSP: solution quality increased and optimal solutions were found more often. The extension of the update phase showed some improvement for the TSP and led to accuracy improvements and a significant speed-up for the GWP. The other extensions showed no clear improvement. The proposed methods for applying opposition to the ant algorithm have potential, but more investigation is required before ant colony optimization can fully benefit from opposition. Most importantly, fundamental theoretical work on graphs is needed, specifically a clear definition of opposite paths or opposite path components. Overall, the results indicate that OBL ideas can be beneficial for ant algorithms.
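
The abstract does not specify how opposite pheromone values are formed. As a rough illustration only, the sketch below applies the usual OBL notion of an opposite number (the opposite of a value t on [t_min, t_max] is t_min + t_max - t) inside a standard Ant System edge-selection step; the parameter names and the opposition rate are assumptions made for this example, not the thesis's actual design.

    import random

    def select_next_city(current, unvisited, tau, dist,
                         alpha=1.0, beta=2.0, opposition_rate=0.1,
                         tau_min=0.01, tau_max=1.0):
        """Roulette-wheel edge selection for an Ant System tour.

        With probability `opposition_rate` the ant scores edges with the
        OBL-style opposite pheromone (tau_min + tau_max - tau) instead of
        the pheromone itself; an illustrative guess at an opposite-pheromone
        construction step, not the thesis's exact method.
        """
        use_opposite = random.random() < opposition_rate
        weights = []
        for city in unvisited:
            t = tau[current][city]
            if use_opposite:
                t = tau_min + tau_max - t           # opposite pheromone value
            visibility = 1.0 / dist[current][city]  # standard TSP heuristic
            weights.append((t ** alpha) * (visibility ** beta))
        total = sum(weights)
        r, acc = random.uniform(0, total), 0.0
        for city, w in zip(unvisited, weights):
            acc += w
            if acc >= r:
                return city
        return unvisited[-1]

    # Tiny demo on a 4-city instance with uniform pheromone.
    dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
    tau = [[0.5] * 4 for _ in range(4)]
    print(select_next_city(0, [1, 2, 3], tau, dist))
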
252

Integration of New Technologies into Existing Mature Process to Improve Efficiency and Reduce Energy Consumption

Ahmed, Sajjad 17 June 2009 (has links)
Optimal operation of plants is becoming more important due to increasing competition and small, shifting profit margins for many products. One major reason has been the realization by industry that potentially large savings can be achieved by improving processes. Growth rates and profitability are much lower now, and international competition has increased greatly. Industry is faced with a need to manufacture quality products while minimizing production costs and complying with a variety of safety and environmental regulations. As industry is confronted with the challenge of moving toward a cleaner and more sustainable path of production, new technologies are needed to meet industrial requirements. In this research, a new methodology is proposed to integrate new technologies into existing processes. Research shows that new technologies must be carefully selected and adopted to match the complex requirements of an existing process. The proposed methodology is based on four major steps. If improvement of the existing process is not sufficient to meet business needs, new technologies can be considered. Application of a new technology is always perceived as a potential threat; therefore, financial risk assessment and reliability risk analysis help mitigate the risk of investment. An industrial case study from the literature was selected to implement and validate the new methodology. The case study is a planning problem for the layout and design of a fleet of generating stations owned and operated by the electric utility company Ontario Power Generation (OPG). The impact of new technology integration on the performance of a power grid consisting of a variety of power generation plants was evaluated. The reduction in carbon emissions is projected to be accomplished through a combination of fuel switching, fuel balancing and switching to a new technology, carbon capture and sequestration. The fuel-balancing technique decreases carbon emissions by adjusting the operation of the fleet of existing electricity-generating stations; the fuel-switching technique involves switching from carbon-intensive fuels to less carbon-intensive fuels, for instance from coal to natural gas; carbon capture and sequestration are applied to meet carbon emission reduction requirements. Existing power plants with existing technologies consist of fossil fuel stations, nuclear stations, hydroelectric stations, wind power stations, pulverized coal stations and a natural gas combined cycle, while hypothesized power plants with new technologies include solar stations, wind power stations, pulverized coal stations, a natural gas combined cycle and an integrated gasification combined cycle, with and without capture and sequestration. The proposed methodology includes financial risk management within the framework of a two-stage stochastic programme for energy planning under uncertainty in demand and fuel price. A deterministic mixed-integer linear programming formulation is extended to a two-stage stochastic programming model in order to take into account random parameters that have discrete, finite probability distributions. Thus, the expected value of the total cost of power generation is minimized while the carbon emission reduction objective is achieved. Furthermore, conditional value at risk (CVaR), a widely preferred risk measure in financial risk management, is incorporated within the framework of the two-stage mixed-integer programme.
The mathematical formulation, called the mean-risk model, is applied to minimize this expected value. The problem is formulated as a mixed-integer linear programming model, implemented in GAMS (General Algebraic Modeling System) and solved using CPLEX, a commercial solver embedded in GAMS. The computational results demonstrate the effectiveness of the proposed methodology. The optimization model is applied to the existing Ontario Power Generation (OPG) fleet. The planning scenarios considered include a base-load demand case and demand growth rates of 1.0%, 5.0%, 10% and 20%. A sensitivity analysis is carried out to investigate the effect of parameter uncertainties, such as uncertainty in coal and natural gas prices. The optimization results demonstrate how to achieve the carbon emission mitigation goal, with and without new technologies, while minimizing costs, and how this affects the configuration of the OPG fleet in terms of generation mix, capacity mix and optimal configuration. The selected new technologies are assessed in order to determine the risks of investment. Electricity costs with the new technologies are lower than with the existing technologies. A 60% CO2 reduction can be achieved at 20% growth in base-load demand with new technologies. The total cost of electricity increases as CO2 reduction or electricity demand increases. However, there is no significant change in the cost of CO2 reduction as the reduction target increases with new technologies. The total cost of electricity increases when fuel prices increase. The total cost of electricity also increases when financial risk management is applied in order to lower risk; consequently, more electricity is produced so that the industry remains on the safe side.
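
For readers unfamiliar with the mean-risk formulation described above (expected cost plus a weighted CVaR term over discrete scenarios), a toy two-stage model in the open-source PuLP modeller is sketched below. All scenario data, capacities, prices and the risk weight are invented for illustration; the OPG model itself is a much larger GAMS/CPLEX formulation.

    import pulp

    # Hypothetical scenario data: probability, demand (MW), fuel price ($/MWh).
    scenarios = {
        "low":  (0.3, 900,  40.0),
        "base": (0.5, 1000, 50.0),
        "high": (0.2, 1200, 70.0),
    }
    capex = 120.0            # hypothetical annualised cost per MW of new capacity
    existing_mw = 800        # hypothetical existing fleet capacity
    alpha, lam = 0.95, 0.5   # CVaR confidence level and risk weight

    m = pulp.LpProblem("mean_risk_energy_plan", pulp.LpMinimize)

    # First stage: new capacity to build, decided before uncertainty is resolved.
    build = pulp.LpVariable("new_capacity_MW", lowBound=0)

    # Second stage: generation per scenario, plus CVaR auxiliary variables.
    gen = {s: pulp.LpVariable(f"gen_{s}", lowBound=0) for s in scenarios}
    eta = pulp.LpVariable("value_at_risk")
    excess = {s: pulp.LpVariable(f"excess_{s}", lowBound=0) for s in scenarios}

    cost = {}
    for s, (p, demand, price) in scenarios.items():
        m += gen[s] <= existing_mw + build     # capacity limit per scenario
        m += gen[s] >= demand                  # meet demand in every scenario
        cost[s] = capex * build + price * gen[s]
        m += excess[s] >= cost[s] - eta        # linearised CVaR constraint

    expected = pulp.lpSum(scenarios[s][0] * cost[s] for s in scenarios)
    cvar = eta + (1.0 / (1.0 - alpha)) * pulp.lpSum(
        scenarios[s][0] * excess[s] for s in scenarios)
    m += expected + lam * cvar                 # mean-risk objective

    m.solve()
    print("build (MW):", pulp.value(build), " objective:", pulp.value(m.objective))
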
253

Fault Detection and Identification in Computer Networks: A Soft Computing Approach

Mohamed, Abduljalil January 2009 (has links)
Governmental and private institutions rely heavily on reliable computer networks for their everyday business transactions. Downtime of their infrastructure networks may result in costs of millions of dollars. Fault management systems are used to keep today's complex networks running without significant downtime cost, using either active or passive techniques. Active techniques impose excessive management traffic, whereas passive techniques often ignore the uncertainty inherent in network alarms, leading to unreliable fault identification performance. In this research work, new algorithms are proposed for both types of techniques so as to address these shortcomings. Active techniques use probing technology so that the managed network can be tested periodically and suspected malfunctioning nodes can be effectively identified and isolated. However, the diagnostic probes introduce extra management traffic and consume storage space. To address this issue, two new CSP (Constraint Satisfaction Problem)-based algorithms are proposed to minimize management traffic while effectively maintaining the same diagnostic power as the original set of available probes. The first algorithm is based on the standard CSP formulation and aims at significantly reducing the available dependency matrix as a means of reducing the number of probes; the resulting probe set is used for both fault detection and fault identification. The second algorithm is a fuzzy CSP-based algorithm. It is adaptive in the sense that an initial reduced fault-detection probe set is used to determine the minimum set of probes needed for fault identification. Extensive experiments conducted in this research show that both algorithms have advantages over existing methods in terms of the overall management traffic needed to successfully monitor the targeted network system. Passive techniques employ alarms emitted by network entities. However, the fault evidence provided by these alarms can be ambiguous, inconsistent, incomplete, and random. To address these limitations, alarms are correlated using a distributed Dempster-Shafer Evidence Theory (DSET) framework, in which the managed network is divided into a set of disjoint management domains. Each domain is assigned an intelligent agent for collecting and analyzing the alarms generated within that domain. These agents are coordinated by a single higher-level entity, an agent manager, which combines the partial views of the agents into a global one. Each agent employs a DSET-based algorithm that utilizes the probabilistic knowledge encoded in the available fault propagation model to construct a local composite alarm. Dempster's rule of combination is then used by the agent manager to correlate these local composite alarms. Furthermore, an adaptive fuzzy DSET-based algorithm is proposed to utilize the fuzzy information provided by the observed cluster of alarms so as to accurately identify the malfunctioning network entities. In this way, inconsistency among the alarms is removed by weighing each received alarm against the others, while randomness and ambiguity of the fault evidence are addressed within a soft computing framework. The effectiveness of this framework has been investigated through extensive experiments. The proposed fault management system is able to detect malfunctioning behavior in the managed network with considerably less management traffic. Moreover, it effectively manages the uncertainty intrinsic to network alarms, thereby reducing its negative impact and significantly improving the overall performance of the fault management system.
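
Dempster's rule of combination, which the agent manager uses to fuse the agents' local composite alarms, can be stated compactly in code. The sketch below is the textbook rule over an arbitrary frame of discernment, with an invented two-node example; it is not the thesis's agent implementation.

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions given as {frozenset_of_hypotheses: mass}.

        Masses over each frame should sum to 1. The conflict K (mass assigned
        to pairs with empty intersection) is renormalized away, as in standard
        Dempster-Shafer theory.
        """
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
        if conflict >= 1.0:
            raise ValueError("total conflict: evidence cannot be combined")
        return {h: w / (1.0 - conflict) for h, w in combined.items()}

    # Toy example: two agents' beliefs about which node (n1, n2) is faulty.
    m_agent1 = {frozenset({"n1"}): 0.6, frozenset({"n1", "n2"}): 0.4}
    m_agent2 = {frozenset({"n2"}): 0.3, frozenset({"n1", "n2"}): 0.7}
    print(dempster_combine(m_agent1, m_agent2))
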
254

Distributed Document Clustering and Cluster Summarization in Peer-to-Peer Environments

Hammouda, Khaled M. January 2007 (has links)
This thesis addresses difficult challenges in distributed document clustering and cluster summarization. Mining large document collections poses many challenges, one of which is the extraction of topics or summaries from documents for the purpose of interpreting clustering results. Another important challenge, driven by new trends in distributed repositories and peer-to-peer computing, is that document data is becoming more distributed. We introduce a solution for interpreting document clusters using keyphrase extraction from multiple documents simultaneously. We also introduce two solutions for the problem of distributed document clustering in peer-to-peer environments, each satisfying a different goal: maximizing local clustering quality through collaboration, and maximizing global clustering quality through cooperation. The keyphrase extraction algorithm, called CorePhrase, efficiently extracts and scores candidate keyphrases from a document cluster. It models the document collection as a graph on which graph mining is used to extract frequent and significant phrases, which are then used to label the clusters. Results show that CorePhrase can extract keyphrases relevant to documents in a cluster with very high accuracy. Although the algorithm can be used to summarize centralized clusters, it is specifically employed within distributed clustering both to boost distributed clustering accuracy and to provide summaries for distributed clusters. The first method for distributed document clustering is called collaborative peer-to-peer document clustering; it models nodes in a peer-to-peer network as collaborative nodes with the goal of improving the quality of individual local clustering solutions. This is achieved through the exchange of local cluster summaries between peers, followed by recommendation of documents to be merged into remote clusters. Results on large sets of distributed document collections show that: (i) the collaboration technique achieves significant improvement in the final clustering of individual nodes; (ii) networks with a larger number of nodes generally achieve greater improvements in clustering after collaboration relative to the initial clustering before collaboration, but tend to achieve lower absolute clustering quality than networks with fewer nodes; and (iii) as more overlap of the data is introduced across the nodes, collaboration tends to have little effect on improving clustering quality. The second method for distributed document clustering is called hierarchically-distributed document clustering. Unlike the collaborative model, this model aims at producing one clustering solution across the whole network. It specifically addresses scalability with network size, and consequently the complexity of distributed clustering, by modeling the distributed clustering problem as a hierarchy of node neighborhoods. Summarization of the global distributed clusters is achieved through a distributed version of the CorePhrase algorithm.
Results on large document sets show that: (i) distributed clustering accuracy is not affected by increasing the number of nodes in single-level networks; (ii) decent speedup can be achieved by making the hierarchy taller, but at the expense of clustering quality, which degrades as we go up the hierarchy; (iii) in networks that grow arbitrarily, data becomes more fragmented across neighborhoods, causing poor centroid generation, which suggests that the number of nodes in the network should not be increased beyond a certain level without also increasing the data set size; and (iv) distributed cluster summarization can produce accurate summaries similar to those produced by centralized summarization. The proposed algorithms offer a high degree of flexibility, scalability, and interpretability for large distributed document collections. Achieving the same results using current methodologies requires centralizing the data first, which is sometimes not feasible.
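
CorePhrase itself mines a phrase-matching graph built over the cluster's documents, which the abstract does not detail. The sketch below only conveys the general idea of labelling a cluster with phrases shared across its members, scored here by a crude document-frequency-times-length heuristic rather than CorePhrase's actual scoring.

    from collections import Counter

    def candidate_phrases(text, max_len=3):
        """Yield all word n-grams (1..max_len) of a lower-cased document."""
        words = text.lower().split()
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                yield " ".join(words[i:i + n])

    def label_cluster(docs, top_k=5):
        """Score phrases by (number of documents containing them) * phrase length.

        A crude stand-in for CorePhrase-style cluster labelling: phrases shared
        by several documents in the cluster, with longer phrases preferred.
        """
        doc_freq = Counter()
        for doc in docs:
            doc_freq.update(set(candidate_phrases(doc)))
        scores = {p: df * len(p.split()) for p, df in doc_freq.items() if df > 1}
        return [p for p, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

    cluster = [
        "distributed document clustering in peer to peer networks",
        "peer to peer document clustering and cluster summarization",
        "summarizing document clusters with keyphrase extraction",
    ]
    print(label_cluster(cluster))
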
258

Probabilistic Characterization of Neuromuscular Disease: Effects of Class Structure and Aggregation Methods

Farkas, Charles January 2010 (has links)
Neuromuscular disorders change the underlying structure and function of motor units within a muscle and are detected using needle electromyography. Currently, inferences about the presence or absence of disease are made subjectively and are largely impression-based. Quantitative electromyography (QEMG) attempts to improve upon the status quo by providing greater precision, objectivity and reproducibility through numeric analysis; however, its results must be transparently presented and explained to be clinically viable. The probabilistic muscle characterization (PMC) model is ideally suited for a clinical decision support system (CDSS) and has many analogues to the subjective analysis currently used. To improve disease characterization performance globally, a hierarchical classification strategy is developed that accounts for the wide range of motor unit potential (MUP) feature values present at different levels of involvement (LOI) of a disorder. To improve utility, methods for detecting LOI are considered that balance the accuracy of reported LOI with its clinical utility. Finally, several aggregation methods that represent commonly used human decision-making strategies are considered and evaluated for their suitability in a CDSS. Four aggregation measures (Average, Bayes, Adjusted Bayes, and WMLO) are evaluated; they offer a compromise between two common decision-making paradigms: conservativeness (average) and extremeness (Bayes). Standard classification methods have high specificity at the cost of poor sensitivity at low levels of disease involvement, but tend to improve with disease progression. The hierarchical model provides a better balance between low-LOI sensitivity and specificity by giving the classifier more concise definitions of abnormality due to LOI. Furthermore, two discrete levels of disease involvement (low and high) can be detected with reasonable accuracy. The average aggregation method offers a conservative decision that is preferred when the quality of the evidence is poor or unknown, while more extreme aggregators such as Bayes rule perform optimally when the evidence is accurate but underperform otherwise because of incorrect outlier values. The methods developed offer several improvements to PMC: a better balance between sensitivity and specificity, a clinically useful and accurate measure of LOI, and an understanding of the conditions for which each aggregation measure is better suited. These developments will enhance the quality of decision support offered by QEMG techniques, thus improving the diagnosis, treatment and management of neuromuscular disorders.
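
The contrast between conservative (average) and extreme (Bayes) aggregation of per-MUP evidence can be made concrete with a small sketch. The probabilities below are invented and the pooling rules are the generic textbook ones, not necessarily the exact measures evaluated in the thesis.

    import math

    def average_pool(probs):
        """Conservative aggregation: mean of per-MUP disease probabilities."""
        return sum(probs) / len(probs)

    def bayes_pool(probs, prior=0.5):
        """Extreme aggregation: treat each MUP as independent evidence and
        combine likelihood ratios (naive-Bayes-style pooling of posteriors
        that were all computed under the same prior)."""
        prior_log_odds = math.log(prior / (1 - prior))
        log_odds = prior_log_odds
        for p in probs:
            p = min(max(p, 1e-6), 1 - 1e-6)        # avoid log(0)
            log_odds += math.log(p / (1 - p)) - prior_log_odds
        return 1 / (1 + math.exp(-log_odds))

    muscle_evidence = [0.55, 0.60, 0.52, 0.95, 0.58]   # invented per-MUP probabilities
    print("average:", round(average_pool(muscle_evidence), 3))
    print("bayes  :", round(bayes_pool(muscle_evidence), 3))

Running this shows the trade-off the abstract describes: the average stays near 0.64, while the Bayes-style pool is driven close to 1 by the single extreme value, which is powerful when that evidence is accurate and misleading when it is an outlier.
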
259

Integration of Nanoparticle Cell Lysis and Microchip PCR as a Portable Solution for One-Step Rapid Detection of Bacteria

Wan, Weijie January 2011 (has links)
Bacteria are the oldest, structurally simplest, and most abundant forms of life on earth. Their detection has been an important problem since the emergence of modern science and technology. There has been phenomenal growth in the field of real-time bacteria detection in recent years, with emerging applications in a wide range of disciplines, including medical analysis, food, the environment and many more. Two important analytical functions involved in bacteria detection are cell lysis and the polymerase chain reaction (PCR). Cell lysis is required to break cells open and release DNA for use in PCR. PCR is required to produce millions of copies of the target genes so that the detection limit can be reached from a low DNA concentration. Conventionally, cell lysis and PCR are performed separately using specialized equipment. These bulky machines consume more chemical reagents than needed and are very time consuming. An efficient, cost-effective and portable solution involving nanotechnology and Lab-on-a-Chip (LOC) technology was proposed. The idea was to exploit the excellent antibacterial properties of surface-functionalized nanoparticles to perform cell lysis and then to perform PCR on the same LOC system, without having to remove the nanoparticles from the solution, for rapid detection of bacteria. Nanoparticles possess outstanding properties that are not seen in their bulk form because of their extremely small size. They were introduced to provide two novel methods for LOC cell lysis and to overcome problems of current LOC cell lysis methods, such as low efficiency, high cost and complicated fabrication processes. The first method used poly(quaternary ammonium)-functionalized gold and titanium dioxide nanoparticles, which were demonstrated to lyse E. coli completely in 10 minutes. The idea originated from the excellent antibacterial property of quaternary ammonium salts, which have long been in use. The second method used titanium dioxide nanoparticles and a miniaturized UV LED array. Titanium dioxide exhibits a photocatalytic effect: upon absorbing UV light in an aqueous environment, it generates highly reactive radicals that compromise cell membranes. A considerable reduction of live E. coli was observed in 60 minutes. The thesis then evaluates the effect of nanoparticles on PCR to understand the roles nanoparticles play in PCR. It was found that gold and titanium dioxide nanoparticles induce PCR inhibition. The effect of gold nanoparticle size on PCR was also studied. Effective methods were discovered to suppress the PCR inhibition caused by gold and titanium dioxide nanoparticles. This pioneering work paves the way for the integration of nanoparticle cell lysis and LOC PCR for rapid detection of bacteria. Finally, an integrated system combining nanoparticle cell lysis and microchip PCR was demonstrated. The prototype system consisted of a physical microchip for both cell lysis and PCR, a temperature control system and the necessary interface connections between the physical device and the temperature control system. The research explored solutions to improve PCR specificity in a microchip environment with gold nanoparticles in the PCR mixture. The system was capable of providing the same performance while reducing PCR cycling time by up to 50%. It was inexpensive and easy to construct without any complicated clean-room fabrication processes. It can find numerous applications in water, food, the environment and many other areas.
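
The abstract does not describe the temperature control protocol itself; the sketch below only shows the generic PCR thermal-cycling schedule such a controller has to drive, with textbook-typical setpoints and hold times rather than the microchip's actual (reduced) cycling parameters.

    import time

    # Textbook-typical PCR setpoints (deg C) and hold times (s); the thesis's
    # microchip protocol and its faster cycling parameters are not given here.
    PROTOCOL = {
        "initial_denaturation": (95.0, 120),
        "denaturation": (95.0, 15),
        "annealing": (58.0, 15),
        "extension": (72.0, 30),
        "final_extension": (72.0, 300),
    }

    def run_pcr(set_temperature, cycles=30, dwell=time.sleep):
        """Drive a heater through the scheduled setpoints.

        `set_temperature` is a user-supplied callback (for example, a PID loop
        wrapping a microchip's thin-film heater); in the dry run below it just
        prints the requested setpoint.
        """
        temp, hold = PROTOCOL["initial_denaturation"]
        set_temperature(temp); dwell(hold)
        for _ in range(cycles):
            for step in ("denaturation", "annealing", "extension"):
                temp, hold = PROTOCOL[step]
                set_temperature(temp)
                dwell(hold)
        temp, hold = PROTOCOL["final_extension"]
        set_temperature(temp); dwell(hold)

    # Dry run: print setpoints instead of heating, and skip the real waits.
    run_pcr(lambda t: print(f"setpoint -> {t:.1f} C"), cycles=2, dwell=lambda s: None)
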
260

Evaluation of endothelial cell response to drug for intraocular lens delivery

Doody, Laura January 2011 (has links)
Cataract is one of the leading causes of vision loss worldwide, and the rate of cataract surgery has been steadily increasing. Toxic Anterior Segment Syndrome (TASS) is a sterile inflammatory response in the anterior segment of the eye that may occur following cataract surgery. When left untreated, it can lead to permanent vision loss. Corneal endothelial cells are the cells most affected by TASS. These cells are unable to reproduce in vivo; consequently, once their density drops below a certain level, vision is reduced and cannot be restored. The damage is thought to be mediated by cytokines and endotoxins, primarily through the NF-κB pathway. It is hypothesized that anti-inflammatory drug-delivering intraocular lenses may help reduce the occurrence of TASS and the consequent vision loss. In this research project, an in vitro model was developed as a tool to select the drug and delivery material to be used in an anti-TASS ophthalmic biomaterial. In an attempt to find a novel and more effective approach to TASS prevention, dexamethasone, a potent anti-inflammatory steroid drug, was compared to triptolide, a cytokine inhibitor; aprotinin, a general protease inhibitor; and PPM-18, an NF-κB inhibitor. To assess the efficacy of these drugs, an in vitro assay using human umbilical vein endothelial cells (HUVEC) and lipopolysaccharide (LPS) as a stimulant was developed. Cell response to dexamethasone (10 nM), triptolide (3 nM), aprotinin (20 μM) and PPM-18 (10 μM), with or without LPS, was characterized by cell viability and by flow cytometry analysis of cell activation. Activation was characterized using markers for cell adhesion and activation: ICAM-1, PECAM-1, VCAM-1, β1-integrin, CD44 and E-selectin. Following preliminary testing, the efficacy of polymer (PDMS) and copolymer (PDMS/pNIPAAm) interpenetrating polymer networks loaded with dexamethasone (10 nM) and PPM-18 (10 μM) was evaluated over a 4-day release period. The results from testing with soluble drug and LPS (100 ng/mL) indicated no decrease in cell viability after 24 h. Dexamethasone, triptolide, aprotinin, and PPM-18 did not reduce the significant ICAM-1 upregulation seen in HUVECs after exposure to LPS for 4 days. PPM-18 in combination with LPS significantly upregulated E-selectin and CD44 relative to unstimulated HUVEC cells. The polymer materials without drug loading did not influence the cell phenotype. However, the PPM-18-delivering polymer and copolymer materials significantly upregulated VCAM-1 and CD44 compared to all other treatments. Propidium iodide uptake in HUVEC exposed to the PPM-18 drug-delivering polymer and copolymer treatments indicated that these treatments caused cell necrosis. Neither the drugs nor the drug-delivering materials were shown to counteract the upregulation seen from LPS stimulation of HUVEC cells. Future work should focus on validating the in vitro model to more closely replicate the in vivo environment of the anterior segment, with the use of primary bovine corneal endothelial cells.
