41

Implementing video compression algorithms on reconfigurable devices

Stewart, Graeme Robert January 2010 (has links)
The increasing density offered by Field Programmable Gate Arrays (FPGAs), coupled with their short design cycle, has made them a popular choice for implementing a wide range of algorithms and complete systems. In this thesis the implementation of video compression algorithms on FPGAs is studied. Two areas are focused on specifically: the integration of a video encoder into a complete system, and the power consumption of FPGA-based video encoders. Two FPGA-based video compression systems are described, one targeting surveillance applications and one targeting video conferencing applications. The FPGA video surveillance system makes use of a novel memory format to improve the efficiency with which input video sequences can be loaded over the system bus. The power consumption of an FPGA video encoder is analyzed, with the results indicating that the motion estimation stage consumes the most power. An algorithm, which reuses the intra prediction results generated during the encoding process, is then proposed to reduce the power consumed on an FPGA video encoder's external memory bus. Finally, the power reduction algorithm is implemented within an FPGA video encoder. Results are given showing that, in addition to reducing power on the external memory bus, the algorithm also reduces power in the motion estimation stage of an FPGA-based video encoder.
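The finding that motion estimation dominates encoder power follows from the structure of block matching itself. The sketch below is a minimal software reference model, not the thesis's FPGA implementation: a full search compares the current block against every candidate in a ±8-pixel window using the sum of absolute differences (SAD), so the number of reference-frame reads grows with the square of the search range, which is exactly the kind of external memory traffic the proposed reuse algorithm targets.

```python
import numpy as np

def full_search_me(cur_block, ref_frame, bx, by, search_range=8):
    """Exhaustive block-matching motion estimation using the sum of
    absolute differences (SAD). Each candidate position re-reads a
    full block from the reference frame, which is why this stage
    dominates both computation and memory bandwidth."""
    n = cur_block.shape[0]
    h, w = ref_frame.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and y + n <= h and 0 <= x and x + n <= w:
                cand = ref_frame[y:y + n, x:x + n].astype(int)
                sad = int(np.abs(cur_block.astype(int) - cand).sum())
                if sad < best_sad:
                    best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```

For a 16x16 block and a ±8 window this reads 289 candidate blocks per macroblock, which is why hardware encoders cache the search window on-chip and why cutting redundant fetches reduces bus power.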
42

Influence of hydrodynamics on carbon steel erosion-corrosion and inhibitor efficiency in simulated oilfield brines

Zvandasara, Tendayi January 2010 (has links)
Corrosion within the oil and gas sector is an ongoing concern for operators. The challenging nature of extraction and processing fluids is an unavoidable cause of severe metallic corrosion, and with the modern emphasis on health, safety and the environment, managing corrosion has become imperative. Whilst new and more effective methods of mitigation are key, an interim solution is improving the value of current methods. A literature survey carried out within this project revealed that CO2 corrosion contributes to the majority of corrosion-related failures within the industry. Corrosion behaviour in CO2-containing environments is complex, partly due to the wide range of prevailing conditions such as temperature, CO2 concentration and flow conditions. For oil and gas transportation pipelines, internal corrosion mitigation can be achieved by the use of chemical inhibitors. Inhibitors are established as effective but are by no means a complete solution: their effectiveness in high-velocity, high-shear flow is a principal consideration. The hydrodynamic nature of the flowing fluids can affect inhibitor efficiency either by slowing the rate of formation of the inhibitive layer or by degrading well-formed inhibitive layers; a combined effect may also be active. The corrosion behaviour of carbon steel in simulated oilfield conditions is investigated in this project, with emphasis on conditions of varying velocity, impinging flow and, consequently, shear stress. Since inhibition is the main mitigation technique for fluid-related corrosion, the efficiency of a commercially used inhibitor is, in this case, assessed under the abovementioned conditions. To simulate both impingement and flow, a jet impingement apparatus is used in conjunction with a segmented-electrode specimen set-up to separately study the erosion-corrosion behaviour of different hydrodynamic zones under the jet. Corrosion rates are measured by gravimetric analysis and results are also evaluated with electrochemistry. Additionally, galvanic interactions between the different hydrodynamic zones have been investigated. Visual and light-optical microscopic examinations are also used to assess variable effects within the zones. Under such conditions, corrosion rates have been found to be significantly higher in impingement zones. Aerated conditions have shown a significant variation in corrosion behaviour between impingement and non-impingement zones. The results in CO2-saturated brines are consistent, but with evidence of a different relation between hydrodynamics and the corrosion rate. The inhibitor has been shown to be effective in CO2-saturated brines and significantly influenced by both inhibitor concentration and hydrodynamic conditions. Inhibitor efficiency has also shown a complex dependence on concentration, which establishes a need to evaluate optimum inhibitor concentrations before field application. Evaluation of the mass loss results against electrochemistry has shown a large discrepancy between the two methods. This rather surprising result suggests that solid-free flow is not entirely free of erosion and synergistic effects. This comprehensive study has not only improved current knowledge of the relation between hydrodynamics and inhibitor efficiency but also indicates a critical need to evaluate the suitability of current monitoring methods.
Electrochemical methods are increasingly the method of choice, and while they contribute significant monitoring data, they are observed to be unable, alone, to monitor erosion and synergy. An industry review of their suitability to monitor solid-free-flow corrosion is recommended.
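For readers unfamiliar with gravimetric analysis, the standard mass-loss conversion (per ASTM G31, not values specific to this thesis) and the usual definition of inhibitor efficiency are sketched below; the carbon steel density of 7.86 g/cm3 is a typical handbook figure.

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=7.86):
    """Gravimetric corrosion rate: CR = K*W/(A*t*rho), with
    K = 8.76e4 giving mm/year for W in g, A in cm^2, t in hours."""
    return 87600.0 * mass_loss_g / (area_cm2 * hours * density_g_cm3)

def inhibitor_efficiency_pct(cr_blank, cr_inhibited):
    """Efficiency relative to the uninhibited (blank) corrosion rate."""
    return 100.0 * (cr_blank - cr_inhibited) / cr_blank
```

Comparing this mass-loss rate against an electrochemically derived rate is how the discrepancy noted above, i.e. the erosion and synergy component of solid-free flow, would show up.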
43

Statistical compact model strategies for nano CMOS transistors subject to atomic scale variability

Moezi, Negin January 2012 (has links)
One of the major limiting factors in CMOS device, circuit and system simulation in sub-100 nm regimes is the statistical variability introduced by the discreteness of charge and the granularity of matter. This statistical variability cannot be eliminated by tuning the layout or by tightening fabrication process control. Since compact models are the key bridge between technology and design, it is necessary to transfer the MOSFET statistical variability information reliably into compact models to facilitate variability-aware design practice. The aim of this project is the development of a statistical extraction methodology essential to capture statistical variability with an optimum set of parameters, particularly in the industry-standard compact model BSIM. This task is accomplished through a detailed sensitivity analysis of the transistor current with respect to key compact model parameters, in combination with an error analysis of the fitted Id-Vg characteristics. The key point in the developed direct statistical compact model strategy is that the impact of statistical variability can be captured in device characteristics by tuning a limited number of parameters while keeping the remaining parameters at their default values obtained from the "uniform" MOSFET compact model extraction. Nevertheless, the statistical compact model extraction strategies accurately represent the distribution and correlation of the electrical MOSFET figures of merit. Statistical compact model parameters are generated using statistical parameter generation techniques such as uncorrelated parameter distributions, principal component analysis and the nonlinear power method. The accuracy of these methods is evaluated in comparison with the results obtained from 'atomistic' simulations. The impact of the correlations in the compact model parameters has been analyzed along with the corresponding transistor figures of merit, and the accuracy of circuit simulations with different statistical compact model libraries has been studied. Moreover, the impact of the MOSFET width/length on the statistical trend of the optimum set of statistical compact model parameters and electrical figures of merit has been analyzed, with two methods proposed to capture geometry dependencies in the statistical models.
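Of the three parameter generation techniques named above, principal component analysis is the easiest to illustrate. The sketch below is a generic PCA-based generator (assuming Gaussian statistics, which is the limitation the nonlinear power method addresses), not code from the thesis: it reproduces the mean and covariance of a set of extracted compact-model parameters.

```python
import numpy as np

def pca_parameter_generator(samples, n_generate, seed=None):
    """Generate synthetic compact-model parameter sets preserving the
    mean and covariance of extracted samples.
    samples: (n_devices, n_params) array of extracted parameter values."""
    rng = np.random.default_rng(seed)
    mean = samples.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(samples, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)   # guard tiny negative eigenvalues
    z = rng.standard_normal((n_generate, mean.size))  # uncorrelated N(0,1)
    # Scale and rotate: cov(result) = V diag(eigvals) V^T by construction.
    return mean + (z * np.sqrt(eigvals)) @ eigvecs.T
```

Whether the resulting Id-Vg distributions match the 'atomistic' reference is then the accuracy test the abstract describes.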
44

High repetition rate quantum dot mode-locked lasers operating at ~1.55 μm

Mahmood, Shahid January 2013 (has links)
This thesis is concerned with the design, fabrication and investigation of InAs/InP quantum dot mode-locked lasers operating at ~1.55 μm with multi-gigahertz repetition rates. Devices with a dual-contact configuration operating at ~35 GHz were fabricated, and mode-locking characteristics were investigated as a function of the saturable absorber length. The deposition of high-reflection (HR) and anti-reflection (AR) coatings on the two cleaved facets increased the quantum efficiency and shifted the optimum mode-locking region to a higher injection current; this simple technological step increased the peak power of the emitted pulses by nearly a factor of 2. Furthermore, the appearance of two distinct lobes in the optical spectrum, a typical feature of quantum dot material systems, was also investigated. The sonogram technique confirmed the presence of two pulse trains at moderate injection currents and stable locking of only one lobe at high injection currents. Finally, techniques for high repetition rate mode-locking such as colliding pulse, asymmetric colliding pulse and double-interval mode-locking were evaluated. Harmonic mode-locking at repetition rates of ~71 GHz, ~107 GHz and ~238 GHz was demonstrated by placing the absorbers at cavity locations corresponding to the 2nd, 3rd and 7th harmonics, respectively. A monolithically integrated coupled-cavity device was also explored, in which an FIB-milled intra-cavity reflector provided mode-locking at a repetition rate of ~107 GHz.
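The repetition rates quoted above follow from the cavity round-trip time. As a back-of-envelope check (the group index and cavity length below are illustrative assumptions, not the thesis's measured values), the fundamental rate of a Fabry-Perot cavity is f_rep = c / (2 n_g L), and placing the saturable absorber at a 1/N fraction of the cavity selects the Nth harmonic:

```python
C = 299_792_458.0  # speed of light, m/s

def fp_repetition_rate_ghz(cavity_len_mm, group_index):
    """Fundamental repetition rate of a Fabry-Perot laser cavity:
    f_rep = c / (2 * n_g * L)."""
    return C / (2.0 * group_index * cavity_len_mm * 1e-3) / 1e9

# Illustrative only: n_g ~ 3.6 and L ~ 1.2 mm give a ~35 GHz fundamental,
# whose 2nd, 3rd and 7th harmonics fall near the reported ~71, ~107 and
# ~238 GHz (the exact cavity lengths of the thesis devices differ slightly).
print(fp_repetition_rate_ghz(1.2, 3.6))  # ~34.7 GHz
```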
45

Development and use of simulation models in Operational Research : a comparison of discrete-event simulation and system dynamics

Tako, Antuela Anthi January 2008 (has links)
The thesis presents a comparison study of the two most established simulation approaches in Operational Research, Discrete-Event Simulation (DES) and System Dynamics (SD). The aim of the research is to provide an empirical view of the differences and similarities between DES and SD in terms of model building and model use. More specifically, the main objectives of this work are: 1. To determine how different the modelling process followed by DES and SD modellers is. 2. To establish the differences and similarities in the modelling approach taken by DES and SD modellers in each stage of simulation modelling. 3. To assess how different DES and SD models of an equivalent problem are from the users' point of view. In line with the three research objectives, two separate studies are implemented: a model building study, addressing the first and second research objectives, and a model use study, addressing the third. In the former, Verbal Protocol Analysis is used: expert DES and SD modellers are asked to 'think aloud' while developing simulation models. In the model use study, a questionnaire survey with managers (executive MBA students) is implemented, where participants are requested to provide opinions about two equivalent DES and SD models. The model building study suggests that DES and SD modelling differ in the model building process and the stages followed. Considering the approach taken to modelling, some similarities are found in DES and SD modellers' approach to problem structuring, data inputs, and validation & verification, while their approach to conceptual modelling, model coding, data inputs and model results is considered different. The model use study does not identify many significant differences in users' opinions of the specific DES and SD models used, implying that from the user's point of view the type of simulation approach makes little difference, if any. The work described in this thesis is the first of its kind. Its key contribution is empirical evidence on the differences and similarities between DES and SD from the model building and model use points of view. Although the study does not provide a comprehensive comparison of the two simulation approaches, its findings provide new insights about the comparison of the two approaches and contribute to the limited existing comparison literature.
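For readers outside Operational Research, the two paradigms being compared differ at the level of the simulation engine itself. The toy sketch below (not from the thesis) contrasts them: DES advances state only at discrete event instants, while SD integrates continuous stock-flow equations over small time steps.

```python
# Discrete-event view: state changes only at event instants.
def des_single_server(arrival_times, service_time):
    """Toy single-server queue: returns each job's departure time."""
    server_free, departures = 0.0, []
    for t in arrival_times:
        start = max(t, server_free)      # wait if the server is busy
        server_free = start + service_time
        departures.append(server_free)
    return departures

# System-dynamics view: a continuous stock driven by flow rates.
def sd_stock(inflow_rate, outflow_frac, t_end, dt=0.1, stock=0.0):
    """Euler-integrated stock-flow model: d(stock)/dt = in - frac*stock."""
    for _ in range(int(t_end / dt)):
        stock += dt * (inflow_rate - outflow_frac * stock)
    return stock
```

This structural difference is one plausible reason the model building study finds the conceptual modelling and model coding stages to diverge most between the two groups of modellers.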
46

Ambiguity resolution of single frequency GPS measurements

Tandy, Michael J. January 2011 (has links)
This thesis considers the design of an autonomous ride-on lawnmower, with particular attention paid to the problem of single frequency Global Navigation Satellite System (GNSS) ambiguity resolution. An overall design is proposed for the modification of an existing ride-on lawnmower for autonomous operation. Ways of sensing obstacles and the vehicle's position are compared, and the system's computer-to-vehicle interface, software architecture, path planning and control algorithms are all described. An overview of satellite navigation systems is presented, and it is shown that existing high precision single frequency GNSS receivers often require time-consuming initialisation periods to perform ambiguity resolution. The impact of prior knowledge of the topography is analysed, and a new algorithm is proposed to deal with the situation where different areas of the map have been mapped at different levels of precision. Stationary and kinematic tests with real-world data demonstrate that when the map is sufficiently precise, substantial improvements in initialisation time are possible. Another algorithm is proposed, using a noise-detecting acceptance test taking data from multiple receivers on the same vehicle (a GNSS compass configuration). This allows a more demanding threshold to be used when noise levels are high, and a less demanding threshold at other times; tests of this algorithm reveal only slight performance improvements. A final algorithm is proposed, using Monte Carlo simulation to account for time-correlated noise during ambiguity resolution. The method allows a fixed failure rate configuration with variable time, meaning no ambiguities are left floating, and substantial improvements in initialisation time are demonstrated. The overall performance of the integrated system is summarised, conclusions are drawn, further work is proposed, and limitations of the techniques and tests performed are identified.
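The acceptance tests mentioned above decide whether a float ambiguity estimate can safely be fixed to integers. The sketch below is a simplified, generic ratio test (real receivers first decorrelate the ambiguities, e.g. with the LAMBDA method, and search many candidates); the thesis's contribution is to adapt the threshold to the noise detected across the multi-receiver GNSS compass, which this sketch does not implement.

```python
import numpy as np

def ratio_test_fix(float_amb, cov, threshold=3.0):
    """Fix float carrier-phase ambiguities to integers if the nearest
    integer vector beats the runner-up by a ratio-test margin.
    cov must be the (invertible) covariance of the float solution."""
    q_inv = np.linalg.inv(cov)
    best = np.round(float_amb)
    # Build a runner-up candidate by perturbing the least-certain component.
    worst = int(np.argmax(np.diag(cov)))
    second = best.copy()
    step = np.sign(float_amb[worst] - best[worst])
    second[worst] += step if step != 0 else 1.0

    def sq_dist(cand):
        r = float_amb - cand
        return float(r @ q_inv @ r)

    ratio = sq_dist(second) / max(sq_dist(best), 1e-12)
    return (best, True) if ratio >= threshold else (None, False)
```

A higher threshold lowers the risk of a wrong fix at the cost of a longer initialisation, which is exactly the trade-off the noise-adaptive test is designed to manage.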
47

The physiology and bioenergetics of ultraendurance mountain bike racing

Metcalfe, John January 2011 (has links)
Ultraendurance mountain bike racing is a relatively new sport and has received scant research attention; the practical difficulty of field-testing during competition has contributed to this dearth of knowledge. The purpose of this thesis was to investigate the physiology and bioenergetics of cross-country marathon (XCM) and 24-hour team relay (24XCT) mountain bike racing. Study One analysed the physiological characteristics of XCM competitors and compared them to published data for Olympic-distance cross-country (XCO) competitors. The XCM participants had a lower mean peak aerobic capacity (58.4 ± 6.3 mL·kg⁻¹·min⁻¹), greater body mass (72.8 ± 6.7 kg) and greater estimated percentage body fat (10.4 ± 2.4%) than values reported for XCO competitors in the literature. Stature (1.77 ± 0.06 m) and normalised peak power output (5.5 ± 0.7 W·kg⁻¹) were comparable. These data suggest that specific physiological characteristics of XCM competitors differ from those of XCO competitors. Study Two quantified and described the exercise intensity during an XCM race by monitoring heart rate responses. The mean heart rate (150 ± 10 beats·min⁻¹) equated to 82 percent of maximum heart rate and did not differ significantly throughout the race (p = 0.33), indicating that the XCM race was of a high aerobic intensity. Prior to the competition, the relationship between heart rate and VO2 was established for each participant during an incremental laboratory test. Energy expenditure was estimated by assigning 20.2 kJ to each litre of oxygen consumed; the mean rate of energy expenditure during the race was estimated to be 59.9 kJ·min⁻¹. Furthermore, no anthropometric or physiological measures were correlated with race speed, indicating that other factors contribute to race performance. The third study was a laboratory-based investigation to determine whether physiological factors relevant to 24XCT racing change with time of day. On separate days participants cycled on an ergometer for 20 min at 82 percent of maximum heart rate at 06:00, 12:00, 18:00, and 00:00 h. Significant differences (p < 0.05) were observed for several physiological responses (heart rate, oxygen uptake, salivary cortisol concentration and intra-aural temperature) but not for performance variables (power output and self-selected cadence). It was concluded that the laboratory protocol lacked ecological validity and that it was necessary to test within a race using authentic 24XCT competitors. In order to measure in-race performance, Study Four examined the agreement between a bottom-bracket ambulatory ergometer (Ergomo®Pro) and the criterion SRM power meter in a field-based setting. Analysis of absolute limits of agreement found that the Ergomo®Pro had a systematic bias (± random error) of 4.9 W (± 6.12 W). Based on tolerances recommended in the literature, the unit was considered fit for purpose for measuring power output during 24XCT racing. Study Five was a multiple case-study design that examined the physiological and performance parameters of a team during a 24XCT race. Mean work-shift speed (18.3 ± 2.6 km·h⁻¹), power output (219 ± 50.9 W) and cadence (64.1 ± 9.3 rpm) varied between participants and between work-shifts. A commonality amongst the participants was an increase in speed during the final work-shift compared to the penultimate one, and a decline in work-shift heart rate was observed throughout the race.
For the majority of participants an increase in gross efficiency (1.7 ± 1.4%) was reported from the penultimate to the final work-shift. It was concluded that pacing strategies were employed and that the improved efficiency was caused, in part, by increased familiarity with the course during the race. Study Six examined the nutritional practices and energy expenditure of the same team during the same 24XCT race. Energy expenditure during the work-shifts was estimated in accordance with Study Two, and resting energy expenditure during the recovery periods was estimated using the Harris and Benedict (1919) formula. Food and fluid consumption were determined via food diaries, and hydration status was assessed by measuring the refractive index of urine. Energy consumption (17.3 ± 2.2 MJ) was considerably less than energy expenditure (30.4 ± 6.1 MJ), the former accounting for only 57 percent of the latter. The energy cost during the work-shifts was estimated to be 74.5 kJ·min⁻¹. Mean fluid intake (6.3 ± 0.9 L) over the 24 h was sufficient to maintain hydration status. Based on these studies, an integrated model of the factors that influence ultraendurance mountain bike performance was developed. The domains that influence race speed are physiological factors, technical and tactical factors, and nutritional strategies, with environmental factors as an underlying sub-domain. Collectively this information is of practical importance to sport scientists, coaches and athletes involved in designing nutritional and tactical preparation strategies and training programmes for this sport.
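The bioenergetic estimates in Studies Two and Six combine an individually calibrated heart-rate-to-oxygen-uptake regression with fixed energy equivalents. A minimal sketch of that arithmetic follows; the regression coefficients are per-participant values from the incremental test, and the Harris-Benedict coefficients are the published 1919 values for men (converted from kcal to kJ), not figures taken from the thesis.

```python
def exercise_ee_kj_per_min(heart_rate, slope, intercept, kj_per_litre=20.2):
    """Exercise energy expenditure from heart rate via a calibrated
    linear HR-VO2 relationship (VO2 in L/min); each litre of O2 is
    assigned 20.2 kJ, as in Study Two."""
    vo2_l_per_min = slope * heart_rate + intercept
    return vo2_l_per_min * kj_per_litre

def harris_benedict_kj_per_day(mass_kg, height_cm, age_yr):
    """Resting energy expenditure (men), Harris & Benedict (1919),
    converted from kcal/day to kJ/day."""
    kcal = 66.473 + 13.7516 * mass_kg + 5.0033 * height_cm - 6.755 * age_yr
    return kcal * 4.184
```

As a consistency check, a rider whose calibrated regression predicts VO2 of about 2.97 L·min⁻¹ at race heart rate expends 2.97 × 20.2 ≈ 60 kJ·min⁻¹, in line with the 59.9 kJ·min⁻¹ reported for the XCM race.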
48

An integrated framework for implementing technology roadmapping in industry

Mejia Pantoja de Cavin, Shirley Gisela January 2012 (has links)
Managing technological change in business is difficult, especially for organisations in technology-based sectors, which are required to rethink and redesign their strategies to ensure they remain competitive in evolving markets. These organisations are focusing their attention on managerial tools and methodologies to help generate a successful business plan. One such tool is Technology Roadmapping (TRM), whose main objective is the alignment of a company's strategies towards the fulfilment of its business objectives and goals. A better understanding of TRM has resulted in some organisations adopting the methodology in their business practices, while others perceive its implementation as a complex process requiring a vast amount of information. An adequate framework facilitating the implementation process is lacking. Therefore, in order to address these needs, and driven by the gaps identified in the literature, an integrated framework supporting organisations in the task of implementing technology roadmapping is developed in this research. It is composed of three major elements: firstly, the implementation lifecycle, which guides users through the activities for implementation and application in their organisations; secondly, an integrated data-knowledge structure composed of a set of models in which data, information and knowledge from the market, product, technology and R&D stages are identified; and finally, an integrated software tool, based on the structure and a selected roadmapping approach, which supports the execution of processes and activities during a roadmapping exercise. The framework is tested and validated in a series of case studies in the aerospace industry. The initial studies, conducted during the development of the framework, allowed refinements and improvements to be implemented prior to the second set of case studies, which followed the completion of the framework. The results from the case studies confirm the feasibility and usability of applying the developed framework in practice, and provide recommendations for future work.
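The data-knowledge structure is described only at a high level, but its layered shape can be pictured as linked records. The sketch below is a hypothetical illustration (the class and field names are invented, not the thesis's models): each layer holds dated nodes, and cross-layer links record which market driver motivates which product, technology or R&D activity.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str
    start_year: int
    end_year: int

@dataclass
class Roadmap:
    """Hypothetical four-layer roadmap mirroring the market/product/
    technology/R&D layers described above."""
    market: list = field(default_factory=list)
    product: list = field(default_factory=list)
    technology: list = field(default_factory=list)
    rnd: list = field(default_factory=list)
    links: list = field(default_factory=list)   # (from_node, to_node) pairs

    def link(self, src: Node, dst: Node) -> None:
        """Record that dst (e.g. a technology) supports src (e.g. a product)."""
        self.links.append((src, dst))
```

A roadmapping software tool of the kind the framework proposes would persist such a structure and render each layer as a swim-lane against the time axis.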
49

A multi-attribute decision making methodology for selecting new R&D projects portfolio with a case study of Saudi oil refining industry

Kabli, Mohammad Reda January 2009 (has links)
Energy is a resource of fundamental importance, and if there is one thing the world is going to need more of in the future, it is energy. Increased energy demand is a major driver for the energy industry to invest in innovative technologies, developing processes and products that deliver improved efficiency and environmental performance. With oil continuing to satisfy a major part of energy needs, it is important for oil companies to invest wisely in Research and Development (R&D) projects. The literature is full of methods that address the problem of R&D portfolio selection; despite their availability, these methods are not used widely, because they lack several features identified by researchers and practitioners. As a result, R&D portfolio selection remains an important area of concern. This research proposes a multi-attribute decision making methodology for selecting R&D portfolios, with a case study of its implementation in the Saudi oil refining industry. Driven by the research question and gaps identified in the related literature, the methodology has been modified and improved. It includes methods and techniques that aim to give decision makers insight when evaluating individual projects and selecting the R&D portfolio. The methodology is divided into three stages, with different steps in each stage, by combining and modifying two well-known multi-attribute decision making methods: the Simple Multi-Attribute Rating Technique (SMART) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The case study describes further methods, such as Integer Linear Programming (ILP) and Monte Carlo simulation, used to generate data to test the validity and operationality of the methodology. The methodology is designed in a step-by-step, easy-to-apply way and considers the type of decision making in a national oil company. It includes the preferences of the decision makers and takes into consideration the multiple monetary and non-monetary attributes that ought to be considered to satisfy not only the objectives of the Saudi national oil company (Aramco) but also the strategic goals of the Saudi government.
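TOPSIS, one of the two methods the methodology combines, is compact enough to sketch in full. The implementation below is the textbook algorithm (vector normalisation, weighting, distance to the ideal and anti-ideal solutions), not the thesis's modified variant:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix:  (n_alternatives, n_criteria) raw scores
    weights: criterion weights summing to 1
    benefit: True where larger is better, False for cost criteria."""
    m = np.asarray(matrix, dtype=float)
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)   # in [0, 1]; higher ranks better
```

In a portfolio setting these closeness scores could then feed an ILP such as a 0/1 knapsack (maximise total score subject to the R&D budget), which is consistent with the role ILP plays in the case study.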
50

Office space allocation by using mathematical programming and meta-heuristics

Ulker, Ozgur January 2013 (has links)
Office Space Allocation (OSA) is the task of making efficient use of the spatial resources of an organisation. A common goal in a typical OSA problem is to minimise the wastage of space by limiting both overuse and underuse of the facilities. The problem also contains a myriad of hard and soft constraints based on the preferences of the respective organisations. In this thesis, the OSA variant usually encountered in academic institutions is investigated; previous research in this area is rather sparse. The thesis provides a definition, extension and literature review for the problem, as well as a new parametrised data instance generator. Two main algorithmic approaches for tackling OSA are proposed. The first is integer linear programming: based on the definition of several constraints and some additional variables, two different mathematical models are proposed. These two models are not strictly alternatives to each other; while one provides more performance for the types of instances to which it is applicable, it lacks generality, whereas the other provides less performance but is easier to apply to different OSA problems. The second approach is based on meta-heuristics, developed in a three-step process. In the first step, general local search techniques (descent methods, threshold acceptance, simulated annealing, great deluge) traverse the neighbourhood via random relocate and swap moves. The second step investigates large sections of the whole neighbourhood greedily, via very fast cost calculation, cost update and best-move search procedures within an evolutionary local search framework. The final step involves refinements and hybridisation of the best-performing (in terms of solution quality) mathematical programming and meta-heuristic techniques developed in the prior steps. This thesis aims to be one of the pioneering works in the research area of OSA. The major contributions are: the analysis of the problem, a new parametrised data instance generator, mathematical programming models, and meta-heuristic approaches that extend the state of the art in this area.
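Of the local search techniques listed above, simulated annealing with the random relocate and swap moves is the simplest to sketch. The code below is a generic illustration under an assumed black-box cost function, not the thesis's implementation (which, per the second step, uses fast incremental cost updates rather than full recomputation):

```python
import math
import random

def anneal_osa(rooms, entities, cost, t0=100.0, cooling=0.995, iters=20000):
    """Simulated annealing over office-space assignments.
    rooms/entities: lists of ids; cost(assign) returns the penalty for
    space misuse plus soft-constraint violations of a full assignment."""
    assign = {e: random.choice(rooms) for e in entities}
    cur_c = cost(assign)
    best, best_c, t = dict(assign), cur_c, t0
    for _ in range(iters):
        cand = dict(assign)
        if random.random() < 0.5:                     # relocate one entity
            cand[random.choice(entities)] = random.choice(rooms)
        else:                                         # swap two entities
            a, b = random.sample(entities, 2)
            cand[a], cand[b] = cand[b], cand[a]
        delta = cost(cand) - cur_c
        if delta <= 0 or random.random() < math.exp(-delta / t):
            assign, cur_c = cand, cur_c + delta       # accept the move
            if cur_c < best_c:
                best, best_c = dict(assign), cur_c
        t *= cooling                                  # geometric cooling
    return best, best_c
```

Threshold acceptance and the great deluge differ only in the acceptance rule applied to `delta`, which is what makes them natural companions in the comparison the thesis carries out.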
