361

Development of endogenous and synthetic CHO promoter expression systems for recombinant protein production

Binge, Alexandra January 2018 (has links)
Chinese hamster ovary (CHO) cells are the most commonly used mammalian cell expression system for the industrial production of biotherapeutic recombinant proteins. Mammalian expression systems are a popular choice for the production of complex biopharmaceuticals due to their ability to correctly fold and assemble recombinant proteins and to carry out the necessary human-like post-translational modifications. Traditionally, viral promoters are used to drive transgene expression in mammalian cells. This study reports on the investigation of endogenous CHO promoters as potential alternatives to the commonly used strong viral promoters such as SV40 and CMV. Although these promoters provide high recombinant gene expression, this is not always favourable, as constitutive transgene overexpression can ultimately lead to cellular stress. Furthermore, viral promoters have been reported to undergo silencing over long culture periods, compromising product titers. Endogenous and synthetic CHO putative promoter sequences were investigated for their ability to drive reporter and recombinant gene expression in both stable and transient systems. Initially, a panel of ten putative promoters was identified from the literature, selected because the associated gene or protein had been reported to be present at high transcript or protein amounts in CHO cells. The sequences 400 bp upstream of the transcript or translation start sites of these genes were cloned into a promoterless eGFP reporter vector and their ability to drive transient gene expression was examined. The ability of these putative promoters to drive eGFP expression was generally inferior to that of the viral SV40 promoter and always many-fold lower than that observed from the viral CMV promoter with enhancer. Regions 2 kb upstream of the transcript or translation start site of each gene were then cloned into the promoterless eGFP reporter system to assess whether promoter activity lay further upstream. For one of the 2 kb sequences, eGFP expression was enhanced above that observed from the equivalent 400 bp sequence and was greater than that from the SV40 promoter. A further nine target putative promoters were then identified from published RNAseq studies and sequences 500 bp upstream of the transcriptional start sites of the identified genes were cloned. Of the nine 500 bp sequences, three displayed promoter activity equivalent to or above that of SV40, so regions 2 kb upstream of the transcriptional start sites of these genes were investigated for further promoter activity. The use of the 2 kb segments resulted in an increase in stable, but not transient, eGFP expression compared with that observed from the 500 bp sequences alone. When placed directly downstream of the CMV enhancer, two of the 2 kb sequences showed higher eGFP expression than when the CMV enhancer was not present. The CMV enhancer had no impact when placed upstream of the two other 2 kb sequences investigated, showing this to be a sequence-dependent effect. The 2 kb sequence that exhibited the strongest promoter activity of the targets in driving transient eGFP expression, together with the two CMV enhancer-containing sequences that showed enhanced transient eGFP expression, was then investigated for the ability to drive stable expression of IgG heavy and light chains, and hence of IgG, in CHO cells in comparison with the CMV promoter with enhancer.
Upon generation of pools and mini-pools stably producing IgG under the control of the various promoters, the endogenous CHO 2 kb promoter outperformed the synthetic candidates, the CMV promoter with enhancer and an industrially relevant control promoter. Indeed, the endogenous CHO promoter sequence was shown to achieve product titers up to 3-fold greater than those from the commonly used CMV promoter with enhancer in CHO cells, demonstrating the potential for endogenous promoters to replace the typically used viral promoters for recombinant gene expression. Further, colonies emerged faster after transfection and selection when using this promoter compared with the others investigated, and a larger range of higher-expressing pools was available for investigation. In summary, this study has identified an endogenous CHO promoter sequence able to drive IgG expression beyond that achieved with currently widely used viral promoters and has generated additional promoters exhibiting a range of abilities to drive recombinant gene expression that could be applied to cell engineering approaches in the future.
362

Synthesis and characterisation of complex oxychalcogenides

Oogarah, Reeya Krishvi January 2017 (has links)
Mixed-anion compounds such as oxychalcogenides, which contain both oxide anions and chalcogen anions (S-Te), form a rather rare class of materials that is gaining increasing attention. The anions have different sizes and chemical requirements, which promotes crystallographic ordering of the anions, so that the structures can be described as 2-D structural building blocks (e.g. perovskite, fluorite or rock-salt) stacked on top of each other along a given direction, giving a layered structure. This results in interesting electrical, optical and physical properties. This thesis reports a study of oxychalcogenide compounds with the general formulae Ln2O2Fe2OQ2 (Ln = lanthanide, Q = chalcogenide) and LnMOCh2 (Ln = lanthanide, M = cation and Ch = chalcogenide). These compounds offer considerable scope for chemical substitution and doping within and between the layers, so it is possible to tune the interlayer distances and, in turn, the properties of these solids. The crystal structures and properties of all materials were investigated using several in-house techniques and techniques available at central facilities. Electron doping La2O2Fe2OSe2 with Co and Ni ions onto the Fe2+ site gave limited solid solutions and semiconducting materials. Co-doping increased the transition temperature of La2O2Fe2OSe2, indicating that the distance between the Fe2O layers is reduced. Variable-temperature neutron powder diffraction data for La2O2Fe2OS2 showed 2-D short-range order, while neutron powder diffraction data in an applied magnetic field for Pr2O2Fe2OSe2 showed a subtle orthorhombic distortion around 23 K. Additional magnetic Bragg reflections, seen in both compounds, are consistent with a 2-k magnetic structure with a 2a x 2a x 2c unit cell. Moreover, both compounds showed stacking faults in the magnetic ordering on the Fe2+ sublattice which are absent in Mn2O or Co2O systems, implying that they are exclusive to the Fe2O layers and could be sensitive to the distance between Fe2O layers. Isovalent doping of LaGaOS2 with Nd and Ce ions onto the La site gave limited solid solutions. From the diffuse reflectance spectroscopy data, Nd-doping and Ce-doping the parent compound gave no significant change in optical band gap, and the colour change observed in the Nd-doped samples is presumably due to f-f transitions. This work suggests that LaGaOS2 has rather limited compositional flexibility in its structure and that it is not easy to modify its electronic structure using chemical pressure. Electron doping BiCuOSe with Ce (onto the Bi3+ site) and F ions (onto the O2- site) gave limited solid solutions. BiCuOSe had some Cu vacancies. The Ce-doped samples contained Ce4+ ions. Electron doping BiCuOSe with F gave metallic compounds, while doping BiCuOSe with Ce gave compounds with both semiconducting and metallic behaviour. The semiconducting behaviour may be due to impurities.
363

Panspermia: the survival of micro-organisms during hypervelocity impact events

Pasini, Luna January 2017 (has links)
The possible spread of life between planetary bodies has significant implications for any future discoveries of life elsewhere in the solar system, and for the origin of life on Earth itself. Litho-panspermia proposes that life can survive the shock pressures associated with giant impacts which are sufficiently energetic to eject life into space. As well as this initial ejection, life must also survive the impact onto another planetary surface. The research presented shows that the phytoplankton Nannochloropsis oculata and the tardigrade Hypsibius dujardini can be considered viable candidates for panspermia. Using a Two-Stage Light Gas Gun, shot programmes were undertaken to impact frozen organisms at different velocities to simulate oceanic impacts from space. It is demonstrated that the organisms can survive a range of impact velocities, although survival rates decrease significantly at higher velocities. These results are explained in the context of a general model for survival after extreme shock, showing two survival regimes with increasing shock pressure. This closely follows the pattern observed in previous work on the survival of microbial life and spores exposed to extreme shock loading, where there is reasonable survival at low shock pressures but much more severe lethality above a critical threshold pressure (a few GPa). Hydrocode modelling is then used to explore a variety of impact scenarios, and the results are compared with the experimental data in a thorough analysis of potential panspermia scenarios across the universe. These results are relevant to the panspermia hypothesis, showing that the extreme shocks experienced during transfer across space are not necessarily sterilising and that life could survive impacts onto other planetary bodies, thus giving a foothold to life on another world.
364

Flexibility analysis on a supply chain contract : deterministic and stochastic settings

Longomo, Eric Enkele January 2017 (has links)
This thesis is based on the application of flexibility analysis to a supply chain contract. The work was carried out using a practical case exploring the relationship between a car manufacturer (buyer) and a parts-supplying company (supplier). Commonly, such contracts are ratified for a fixed duration - typically three years. A nominal order quantity (or initial capacity reservation) and a variation rate controlling the potential adjustments with respect to the nominal quantity for each period are imposed on the supplier when signing the contract. The supplier guarantees to meet the firm order should it fall within the agreed range, and charges a unit price for the product that is linear in the variation rate in order to protect itself from risk. The buyer in return is required to order at least the minimum quantity defined in each period by the nominal quantity and variation rate in the contract. The overall goal throughout the course of this PhD was to analyse this Quantity Flexibility (QF) contract at the strategic (or contracting) level. The prime focus - from the buyer's perspective - was to develop a policy that determines the optimal nominal order quantity (Q) and variation rate (β) underpinning the contract, ensuring that the actual order quantity satisfies the actual demand as far as possible in each period and that the total cost, including purchasing, inventory holding and backlogging costs, is minimised over the contract length. The approach taken in this study addresses the problem in two different settings. One is the deterministic setting, where the demands are treated as deterministic; the other is the stochastic setting, where the demands are stochastic and stationary. For the deterministic setting, a parametric Linear Programming (pLP) model is developed from the buyer/retailer's perspective to help analyse the optimal combination of values of β and Q. In the pLP model, the decision variables are the actual order quantities in each period, represented by the vector x, while β and Q are treated as parameters. For each combination of values of β and Q, the optimal value of the vector x can be found by solving the corresponding Linear Programming (LP) model to optimality. However, the number of possible combinations of values of β and Q is unlimited. To explore the optimal combination of values of β and Q, the convexity of the optimal value of the pLP model has been examined. Because the optimal combination of values of β and Q cannot be found analytically owing to mathematical intractability, this thesis numerically evaluates the best combination of β and Q and draws managerial insights from the findings. For the stochastic setting, this thesis analyses the long-run behaviour of the system when the signed contract is executed and calculates the mathematical expectation of the per-period total purchasing, inventory holding and backlogging costs as a function of the contracting parameters β and Q. The optimal values of these parameters are calculated through simulation of various demand patterns. For this purpose, we consider the basic case with zero lead time and a very simple order policy during the execution of the contract. These assumptions are nevertheless reasonable in the context of a car manufacturer and a supplier delivering Just-In-Time (JIT) parts under a QF contract. The evolution of the inventory position can be modelled with a Markov chain and the long-run behaviour of the system can then be analysed by considering the steady state.
Due to mathematical intractability, the steady state is estimated through simulation. Our models differ from similar previous works in the literature, in which the QF mechanism is implemented such that the nominal quantity (Q) and the flexibility parameter (β) are analysed separately, coupled with other forms of coordination mechanism, or computed using approximate methods and heuristics that cannot firmly guarantee globally optimal solutions.
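To illustrate the kind of simulation used for the stochastic setting, the sketch below estimates the long-run per-period cost of a QF contract for one candidate pair (Q, β): orders follow a simple order-up-to rule but are clipped to the contractual band [Q(1−β), Q(1+β)], and purchasing, holding and backlogging costs are accumulated. The demand distribution, cost parameters and base-stock level are illustrative assumptions, not the thesis's actual model.

```python
import random

def simulate_qf_contract(Q, beta, periods=10_000, seed=0,
                         unit_price=10.0, holding=1.0, backlog=5.0,
                         order_up_to=None, demand=lambda rng: rng.gauss(100, 20)):
    """Estimate long-run per-period cost of a QF contract (illustrative sketch only).

    Orders follow a simple order-up-to policy but are clipped to the contractual
    band [Q*(1-beta), Q*(1+beta)]. Negative inventory represents backlog.
    """
    rng = random.Random(seed)
    lo, hi = Q * (1 - beta), Q * (1 + beta)
    S = order_up_to if order_up_to is not None else Q  # assumed base-stock level
    inventory, total_cost = 0.0, 0.0
    for _ in range(periods):
        # order enough to raise the inventory position to S, within the contract band
        order = min(max(S - inventory, lo), hi)
        inventory += order
        d = max(0.0, demand(rng))
        inventory -= d
        total_cost += unit_price * order
        total_cost += holding * max(inventory, 0.0) + backlog * max(-inventory, 0.0)
    return total_cost / periods

# crude grid search over candidate contract parameters (illustrative only)
best = min(((Q, b, simulate_qf_contract(Q, b)) for Q in range(80, 121, 10)
            for b in (0.05, 0.10, 0.20)), key=lambda t: t[2])
print(best)
```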
365

Optimisation models and heuristic methods for deterministic and stochastic inventory routing problems

Moryadee, Chanicha January 2017 (has links)
The inventory routing problem (IRP) integrates two components of supply chain management, namely inventory management and vehicle routing. These two issues have traditionally been dealt with separately in the area of logistics; however, integrating them can reduce total costs and have a greater impact on overall system performance. The IRP is a well-known NP-hard problem in the optimisation literature. Direct delivery by vehicle from the supplier is considered with and without transhipments between customers (the Inventory Routing Problem with Transhipment, IRPT), in conjunction with multi-customer routes, in order to increase the flexibility of the system. The vehicle is located at a single depot and has a limited capacity for serving a number of customers. The thesis focuses on two main aspects: (1) development of optimisation models for the deterministic and stochastic demand IRP and IRPT under ML/OU replenishment policies. Under deterministic demand, the supplier delivers products to customers whose demands are known before the vehicle arrives at the customers' locations; under stochastic demand, the supplier serves customers whose actual demands are known only when the vehicle arrives at the customers' locations. (2) Development of integrated heuristic, biased-probability and simulation approaches to solve these problems. The proposed approaches are used to solve the optimisation models of these problems in order to minimise the total costs (transportation costs, transhipment costs, penalty costs and inventory holding costs). This thesis proposes five approaches: the CWS heuristic, the Randomised CWS heuristic, the Randomised CWS and IG with local search, the Sim-Randomised CWS, and the Sim-Randomised CWS and IG with local search. Specifically, when applied to the deterministic demand IRP, the proposed approaches are named the IRP-based CWS, the IRP-based Randomised CWS, and the IRP-based Randomised CWS and IG with local search; for the transhipment case they are called the IRPT-based CWS, the IRPT-based Randomised CWS, and the IRPT-based Randomised CWS and IG with local search. For stochastic demand, the proposed approaches are named the SIRP-based Sim-Randomised CWS, the SIRPT-based Sim-Randomised CWS, the SIRP-based Sim-Randomised CWS and IG with local search, and the SIRPT-based Sim-Randomised CWS and IG with local search. The aim of using the sim-heuristics is to deal with the stochastic demand IRP and IRPT, where the stochastic behaviour reflects realistic scenarios in which demand is addressed using simulation. Firstly, in the Sim-Randomised CWS approach, an initial solution is generated by the Randomised CWS heuristic, after which an MCS is incorporated to provide further improvement in the final solution of the SIRP and the SIRPT. Secondly, the integration of the Randomised CWS with MCS and IG with local search is applied to these problems; using an IG algorithm with local search improves the solutions generated by the Randomised CWS. The developed heuristic algorithms are tested on several benchmark instances. Local search has proven to be an effective technique for obtaining good solutions. In the experiments, this thesis considers the average over the five instances for each combination and compares the algorithms. The IG algorithm with local search outperformed the Sim-Randomised CWS heuristics and the best solutions in the literature.
The proposed algorithm also requires shorter computing time than that reported in the literature. To the best of the author's knowledge, this is the first study in which CWS, Randomised CWS, Sim-Randomised CWS and IG with local search algorithms are used to solve the deterministic and stochastic demand IRP and IRPT under ML/OU replenishment policies, contributing new knowledge to the supply chain and logistics domain.
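As background to the CWS-based approaches described above, the sketch below shows the classical Clarke-Wright savings construction on which they build: compute savings s(i, j) = d(0, i) + d(0, j) − d(i, j) and merge routes in decreasing order of savings. It is a simplified illustration (no capacity constraints, no route reversals, no biased randomisation) rather than the thesis's implementation, and the example distance matrix is assumed.

```python
def clarke_wright(dist, depot=0):
    """Classical Clarke-Wright savings heuristic (parallel version, simplified).

    dist: symmetric matrix of pairwise distances dist[i][j]; node `depot` is the depot.
    Returns a list of routes, each a list of customer indices (depot omitted).
    """
    customers = [i for i in range(len(dist)) if i != depot]
    routes = {c: [c] for c in customers}   # start with one dedicated route per customer
    route_of = {c: c for c in customers}   # which route each customer currently belongs to

    # savings s(i, j) = d(0, i) + d(0, j) - d(i, j), processed in decreasing order
    savings = sorted(((dist[depot][i] + dist[depot][j] - dist[i][j], i, j)
                      for i in customers for j in customers if i < j), reverse=True)

    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj or s <= 0:
            continue
        a, b = routes[ri], routes[rj]
        # merge only if i and j sit at route ends; route reversals are skipped in this sketch
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif b[-1] == j and a[0] == i:
            merged = b + a
        else:
            continue
        del routes[rj]
        routes[ri] = merged
        for c in merged:
            route_of[c] = ri
    return list(routes.values())

# assumed 3-customer example with the depot as node 0
example_dist = [[0, 4, 4, 5], [4, 0, 2, 6], [4, 2, 0, 3], [5, 6, 3, 0]]
print(clarke_wright(example_dist))
```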
366

Robust dynamic and stochastic scheduling in permutation flow shops

Al Behadili, Mohanad Riyadh Saad January 2018 (has links)
The Permutation Flow Shop Scheduling Problem (PFSP) is a fundamental problem underlying many operational challenges in the field of logistics and supply chain management. The PFSP is a well-known NP-hard problem in which the processing sequence of the jobs is the same on all machines. The dynamic and stochastic PFSP arises in practice whenever different types of disruptions or uncertainties interrupt the system. Such disruptions can cause the schedule to deviate from its initial plan. Thus, it is important to consider different solution methods, including an optimisation model that minimises objectives taking into account stability and robustness, an efficient rescheduling approach, and algorithms that can handle large and complex dynamic and stochastic PFSPs under different uncertainties. These contributions can be described as follows: 1. Develop a multi-objective optimisation model to handle different uncertainties by minimising three objectives, namely utility, instability and robustness. 2. Propose a predictive-reactive approach to accommodate unpredicted uncertainties. 3. Adapt Particle Swarm Optimisation (PSO), the Iterated Greedy (IG) algorithm and the Biased Randomised IG algorithm (BRIG) to reschedule the PFSP at the reactive stage of the predictive-reactive approach. 4. Apply a Simulation-Optimisation (Sim-Opt) approach to the Stochastic PFSP (SPFSP) under different uncertainties. This approach consists of two methods: a novel approach that hybridises Monte Carlo Simulation (MCS) with the PSO (Sim-PSO), and MCS with the BRIG (Sim-BRIG). The main aim of using the multi-objective optimisation model with different solution methods is to minimise instability and keep the solution as robust as possible, in order to handle uncertainty and to optimise against any worst-case instances that might arise due to data uncertainty. Several approaches have been proposed for the PFSP under dynamic and stochastic environments, where the PSO, IG and BRIG are developed for the PFSP under different uncertainties and the PSO and BRIG algorithms are hybridised with MCS to deal with the SPFSP under different uncertainties. In our version of the approach, the first method is a PSO algorithm step, after which an MCS is incorporated in order to improve the final solutions of the problem. The second approach hybridises the BRIG algorithm with MCS and applies it to the SPFSP under different uncertainties. The developed multi-objective model and proposed approaches are tested on the benchmark instances proposed by Katragjini et al. (2013) in order to evaluate the effectiveness of the proposed methodologies; this benchmark is based on the well-known instances of Taillard (1993). The computational results showed that the proposed methodologies are capable of finding good solutions for the PFSP under different uncertainties and that they are robust to the dynamic and stochastic nature of the problem instances. We computed the best solutions and found them to be highly promising in minimising the total completion time. The results obtained are quite competitive when compared to other models found in the literature, and some of the proposed algorithms show better performance than others.
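For context on the objective these rescheduling algorithms optimise, the sketch below implements the standard permutation flow-shop makespan recurrence together with a bare-bones iterated-greedy destruction/reconstruction loop. The processing-time instance, destruction size and acceptance rule are illustrative assumptions, not the thesis's algorithms.

```python
import random

def makespan(perm, p):
    """Makespan of job permutation `perm` with processing times p[job][machine]."""
    m = len(p[0])
    completion = [0.0] * m              # completion times of the previous job on each machine
    for job in perm:
        prev = 0.0                      # completion of the current job on the previous machine
        for k in range(m):
            prev = max(prev, completion[k]) + p[job][k]
            completion[k] = prev
    return completion[-1]

def iterated_greedy(p, iters=200, d=2, seed=0):
    """Bare-bones iterated greedy: remove d jobs, greedily reinsert them at best positions."""
    rng = random.Random(seed)
    n = len(p)
    best = list(range(n))
    best_val = makespan(best, p)
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for job in removed:             # greedy best-position reinsertion
            partial = min((partial[:i] + [job] + partial[i:] for i in range(len(partial) + 1)),
                          key=lambda s: makespan(s, p))
        val = makespan(partial, p)
        if val < best_val:              # this sketch accepts improvements only
            best, best_val = partial, val
    return best, best_val

# small assumed instance: 4 jobs x 3 machines
times = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [2, 3, 2]]
print(iterated_greedy(times))
```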
367

Efficiency improvement of private education schools in the Kingdom of Saudi Arabia using a mixed methodology approach

Alzkari, Tahani Ibrahim January 2018 (has links)
Education in Saudi Arabia has received considerable attention from the government in the hope of finding strategies to improve the performance of schools, which is essential for economic development. However, education in Saudi Arabia has been criticised over the quality of teachers, their salaries, and the curricula and resources used (Khashoggi, 2014; Lindsey, 2010). Although the international literature reviews important issues of efficiency in education, there are few research projects which study in depth the efficiency challenges and constraints pertinent to Saudi schools. The purpose of this research is to use Operational Research (OR) models focused on assessing and improving the performance of private schools in the Kingdom of Saudi Arabia, especially in the Riyadh districts. A set of models is developed, based on 57 schools for descriptive efficiency measurement and 12 schools for prescriptive performance improvement. Data collection was carried out through the Quality and Planning Department of the Saudi Ministry of Education, which liaised with the school managers. In terms of model formulation, the techniques of Data Envelopment Analysis (DEA) for measuring efficiency, the Analytical Hierarchy Process (AHP) for ranking the priority of the criteria for improving school performance, and Goal Programming (GP) for improving the efficiency of schools are used in a combined framework. A subsidiary Goal Programming model for resolving inconsistencies in the AHP preferences is also used. The results obtained from these models can be beneficial for parents, schools, and governments. Efficient schools yield better outcomes and support a higher quality of education, which increases the knowledge and skills of students. Those students will then contribute more effectively to the future development of their countries. In addition, high-performing schools delivering high levels of education to students can encourage the economic growth of a nation.
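To make the DEA component concrete, the sketch below solves the standard input-oriented CCR envelopment model for one school at a time with a generic LP solver; it is a minimal illustration with assumed input/output data, not the combined DEA-AHP-GP framework developed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.

    X: (m inputs x n units), Y: (s outputs x n units).
    Variables [theta, lambda_1..lambda_n]; minimise theta subject to
      sum_j lambda_j * X[:, j] <= theta * X[:, k]   (inputs)
      sum_j lambda_j * Y[:, j] >= Y[:, k]           (outputs), lambda_j >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A_in = np.hstack([-X[:, [k]], X])                 # X @ lam - theta * x_k <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y])         # -Y @ lam <= -y_k
    b_out = -Y[:, k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# assumed data: 2 inputs (teachers, budget in SAR millions) and 1 output (mean exam score)
X = np.array([[20.0, 30.0, 25.0, 40.0],
              [1.5,  2.0,  1.8,  3.0]])
Y = np.array([[80.0, 85.0, 90.0, 88.0]])
for k in range(X.shape[1]):
    print(f"school {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```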
368

Fairness in nurse rostering problem

Glampedakis, Antonios January 2018 (has links)
Many Operational Research (OR) problems, such as scheduling and timetabling, are associated with evaluating the distribution of resources among a set of entities. This set of entities can be defined as a society having some common traits. The evaluation of the distribution is traditionally done with a utilitarian approach, or using some statistical methods. In order to gain a more in-depth view of distributions in problem solving, new measures and models from the fields of Computer Science, Economics, and Sociology, as well as OR, are proposed. These models focus on three concepts: fairness (minimisation of inequalities), social welfare (a combination of fairness and efficiency) and poverty (starvation of resources). A Multiple Criteria Decision Making (MCDM) model combining utilitarian, fairness and poverty measures is also proposed. These measures and models are applied to the nurse rostering problem from a central decision maker's point of view. Nurses are treated as a society, with the aim of optimising nurse satisfaction. Nurse satisfaction is investigated independently of the hospital management's objectives, the two forming conflicting criteria. The results from the different measures cannot be evaluated using cardinal measures, so MCDM methods and Lorenz curves are used instead of a numerical, cardinal measure.
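As an illustration of the fairness measures discussed, the sketch below computes a Lorenz curve and the Gini coefficient for a vector of per-nurse satisfaction scores; the scores and rosters are assumed purely for illustration and this is not the thesis's MCDM model.

```python
import numpy as np

def lorenz_curve(values):
    """Cumulative share of total satisfaction held by the least-satisfied fraction of nurses."""
    v = np.sort(np.asarray(values, dtype=float))
    cum = np.cumsum(v) / v.sum()
    return np.insert(cum, 0, 0.0)        # curve starts at (0, 0)

def gini(values):
    """Gini coefficient: 0 = perfectly equal allocation, values near 1 = highly unequal."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * v) / (n * v.sum())) - (n + 1.0) / n

# assumed per-nurse satisfaction scores for two candidate rosters with similar totals
roster_a = [7, 7, 8, 8, 7, 8]      # fairly even allocation
roster_b = [3, 10, 9, 2, 10, 11]   # same total, very uneven
print("Gini A:", round(gini(roster_a), 3), "Gini B:", round(gini(roster_b), 3))
print("Lorenz A:", np.round(lorenz_curve(roster_a), 3))
```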
369

A critical evaluation of local air quality management and its contribution to meeting the EU annual mean nitrogen dioxide limit value

Barnes, J. January 2014 (has links)
Local Air Quality Management (LAQM) was initially intended to play a supplementary role in assisting the UK government in achieving its European air quality obligations (Directive 2008/50/EC) through the implementation of action plans to reduce public exposure at local air pollution hotspots. Since the inception of LAQM in 1997, however, exceedances of the health-based nitrogen dioxide objectives, primarily related to road traffic sources, have proved to be more widespread and intractable than previously anticipated. The failure of the UK government to achieve the EU annual mean limit value by the prescribed deadline of 1st January 2010 for 93% of the UK's Zones and Agglomerations has increased the emphasis on the role of LAQM. At the same time, the lack of revocations of local Air Quality Management Areas has called into question the efficacy of local authorities' Air Quality Action Plans (AQAPs). This research draws on the extensive body of evidence provided by the LAQM process since 1997 to establish whether it is possible to determine whether local AQAPs have been effective in achieving their aims and in improving air quality at a local level. By evaluating the degree of success achieved through individual AQAPs and then building an aggregate picture of progress towards their goals, it has been possible to assess the effectiveness and efficiency of the LAQM regime as a national strategy to meet national air quality objectives and to contribute to EU air quality legislative requirements. The key finding from this research is a confirmation of the thesis statement, i.e. that historically LAQM has not been a successful strategy for achieving selected EU limit values. An absence of adequate AQAP progress reporting and of representatively sited, robust monitoring data indicates that, collectively, the means to assess the effectiveness of LAQM in reducing local concentrations of nitrogen dioxide do not currently exist. The thesis offers nine recommendations for Defra and the Devolved Administrations to improve the effectiveness of LAQM in assisting with the achievement of the NO2 annual mean EU limit value. They are proposed as solutions to the limitations and obstacles observed in undertaking this research, and in essence advocate a combined and coordinated national and local approach to reducing traffic-related nitrogen dioxide concentrations in order to achieve the EU limit value. The current revision of LAQM and the recent changes to the EU AAQD reporting requirements make this an opportune moment to instigate these proposed changes.
370

Studies of the high-latitude electric potential pattern in the CTIP model with radio tomography comparisons

Whittick, Emma L. January 2010 (has links)
This thesis explores the use of high-latitude electric potential patterns obtained from the Super Dual Auroral Radar Network (SuperDARN) as input to the Coupled Thermosphere Ionosphere Plasmasphere (CTIP) model. By using a new method of high-latitude convection input it is shown that some improvements to the modelling of the spatial distribution of the electron density can be obtained. In the earlier versions of the CTIP model, the high-latitude electric potential input was selected from a restricted library of convection patterns. By introducing the SuperDARN electric potential data as the high-latitude input it is now possible to model a wide range of different convection patterns, notably patterns occurring as a result of Interplanetary Magnetic Field (IMF) Bz positive conditions. In order to begin to validate the use of this technique, images obtained from ionospheric radio tomography experiments were used to form case studies involving periods of time where the IMF Bz component was either stable and positive or stable and negative. This enabled the ion densities from the tomography images to be compared to the ion densities obtained from the CTIP model output. Initially this was done with two case studies that had mature interpretations in order to prove that the concept of using SuperDARN convection patterns in CTIP was valid. Subsequent case studies involved using the model with the new convection pattern input method to assist with the interpretation of the tomography images obtained from the Alaskan sector.
