241

DETERMINATION OF ISOLATOR TRANSFER MATRIX AND INSERTION LOSS WITH APPLICATION TO SPRING MOUNTS

Sun, Shishuo 01 January 2015 (has links)
Transmissibility is the most common metric used for isolator characterization. However, engineers are becoming increasingly concerned about energy transmission through an isolator at high frequencies and how the compliance of the machine and foundation factors into the performance. In this study, the transfer matrix approach for isolator characterization is first reviewed. Two methods are detailed for determining the transfer matrix of an isolator using finite element simulation. This is accomplished by determining either the mobility or impedance matrix for the isolator and then converting it to a transfer matrix. One of the more useful metrics for characterizing the high-frequency performance of an isolator is insertion loss, defined as the difference in transmitted vibration, in decibels, between the unisolated and isolated cases. Insertion loss takes into account the compliance on the source and receiver sides. Accordingly, it has some advantages over transmissibility, which is a function of the damping and the mounted resonant frequency. A static analysis is first performed to preload the isolator so that stress stiffening is accounted for. This is followed by modal and forced response analyses to identify the transfer matrix of the isolator. The insertion loss of spring isolators is then examined as a function of several geometric parameters, including the spring diameter, wire diameter, number of active coils, and height. Results demonstrate how modifications to these parameters affect the insertion loss and the first surge frequency.
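To make the transfer matrix (four-pole) formulation concrete, the sketch below computes insertion loss for a one-dimensional wave model of a spring between a compliant source and receiver. All parameter values and the rigid-mass terminations are illustrative assumptions, not the thesis' finite element models; the insertion-loss expression is one common four-pole form.

```python
import numpy as np

# 1-D longitudinal-wave model of a helical spring (illustrative parameters).
k = 1.0e5   # total axial stiffness, N/m (assumed)
m = 0.5     # total spring mass, kg (assumed)
Zs = lambda w: 1j * w * 50.0    # source impedance: 50 kg rigid mass (assumed)
Zr = lambda w: 1j * w * 500.0   # receiver impedance: 500 kg mass (assumed)

w = 2 * np.pi * np.linspace(1.0, 2000.0, 4000)  # rad/s
gamma = w * np.sqrt(m / k)   # wave parameter; surge occurs near gamma = n*pi
Z0 = np.sqrt(k * m)          # characteristic impedance of the spring

# Four-pole (transfer matrix) entries relating [F1; v1] to [F2; v2].
a11, a12 = np.cos(gamma), 1j * Z0 * np.sin(gamma)
a21, a22 = 1j * np.sin(gamma) / Z0, np.cos(gamma)

# Insertion loss between compliant source and receiver terminations.
IL = 20 * np.log10(np.abs(
    (a11 * Zr(w) + a12 + a21 * Zs(w) * Zr(w) + a22 * Zs(w)) / (Zs(w) + Zr(w))))
```

In the low-frequency limit the entries reduce to the familiar massless-spring four-pole, while the sine and cosine terms reproduce the surge resonances whose dependence on spring geometry the thesis investigates.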
242

A dual hoist robot crane for large area sensing

Harber, John A. 27 May 2016 (has links)
Cranes are used to lift and move large objects in a wide variety of applications, such as construction sites, shipping ports, and manufacturing facilities. If the load to be moved is too long or too heavy for a single crane, then two or more cranes must work in cooperation to move the payload. In a factory setting this can be accomplished using two trolleys running along the same bridge, forming a dual hoist crane. Using two hoists not only increases lifting capacity, it also improves the stability of the payload over traditional single-hoist configurations. This research takes advantage of that increased stability and explores a novel application for dual hoist cranes: suspending a robot arm from the two trolleys. This increases the workspace of the robot to the entirety of the space covered by the crane, opening up numerous applications not possible with a stationary robot. In order to better understand and characterize the dynamics of the system, a numerical model was developed and tested against a physical system to confirm its validity. A vision system has the potential to greatly increase the usefulness of a robotic system such as the one described above. The Asus Xtion was used in this work due to its versatility and low cost. An evaluation of this sensor was performed, with various tests conducted to determine its accuracy in a range of scenarios. It was found that crane oscillations degraded the quality of the data returned. This effect is especially detrimental if the crane is moved to a specified point and sensing begins immediately. The data collection process could be delayed until the residual oscillations subside; however, the time penalty incurred by waiting is large because the oscillations are lightly damped and have a long period. To address this issue, a control method called input shaping was introduced to reduce the residual oscillations, thereby increasing the quality of the sensor data. Finally, two promising uses of the robot-arm dual-hoist crane system were introduced: painting and sandblasting. The efficiency of a factory equipped with this system can be increased at relatively low cost by automating manual tasks such as these.
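As a rough illustration of the input-shaping idea, the sketch below builds the classic two-impulse zero-vibration (ZV) shaper and convolves it with a sampled command. The modal frequency and damping ratio are assumed values for a pendulum-like mode, not identified parameters from this crane.

```python
import numpy as np

# Two-impulse zero-vibration (ZV) shaper for one lightly damped crane mode.
wn = 2 * np.pi * 0.35   # natural frequency, rad/s (assumed ~0.35 Hz pendulum)
zeta = 0.01             # damping ratio (assumed, lightly damped)

wd = wn * np.sqrt(1 - zeta**2)                    # damped frequency
K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
A = np.array([1.0, K]) / (1 + K)                  # impulse amplitudes, sum = 1
t2 = np.pi / wd                                   # second impulse: half damped period

def shape(command, dt):
    """Convolve a sampled velocity command with the two-impulse ZV shaper."""
    shaped = A[0] * command.astype(float)
    delay = int(round(t2 / dt))
    if delay > 0:
        shaped[delay:] += A[1] * command[:-delay]
    else:
        shaped += A[1] * command
    return shaped

# Example: a 1 m/s step command sampled at 100 Hz, shaped to cancel swing.
cmd = np.ones(1000)
shaped_cmd = shape(cmd, dt=0.01)
```

The shaped command excites the oscillatory mode twice, half a damped period apart, so the two induced oscillations cancel and sensing can begin almost immediately after the move.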
243

The Antecedents of Trust in Mobile Commerce: A Quantitative Analysis of What Drives Mobile Trust in the Brazilian Market

Junqueira, Eduardo January 2016 (has links)
As the adoption of mobile devices grows around the world, the use of these devices to access the internet and, consequently, to interact with mobile stores is also growing. A mobile commerce business that is trustworthy tends to have advantages over its competitors and, therefore, better results. This study focuses on understanding and measuring the influence of trust antecedents applied to mobile commerce in Brazil. The final antecedents found are Data Controls, Website Interactivity, Reputation, and Willingness to Customize. The results indicate that Data Controls, followed by Reputation and Website Interactivity, are the main influencers of Trust. If Reputation is not considered part of the model, Website Interactivity becomes an important antecedent. The results provided in this thesis are relevant, since there is a lack of research related to trust in mobile commerce. They also have practical applications, helping an online business focus on the actions that are essential to building trust through a device with such distinctive capabilities and dynamics of use.
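The kind of model such a quantitative analysis typically estimates can be sketched as a regression of reported trust on the antecedent constructs. The column names, data file, and use of plain OLS below are assumptions for illustration; the thesis' actual survey items, scales, and estimation method may differ.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per respondent, Likert-scale constructs.
df = pd.read_csv("survey_responses.csv")  # assumed file and column names

X = sm.add_constant(df[["data_controls", "reputation",
                        "website_interactivity", "willingness_to_customize"]])
model = sm.OLS(df["trust"], X).fit()

# Coefficient magnitudes and significance rank the antecedents' influence;
# refitting without 'reputation' shows how interactivity's role changes.
print(model.summary())
```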
244

Measuring the Effectiveness of China’s Capital Flow Management and Progress on Capital Account Liberalization

Yow, Xinying 01 January 2016 (has links)
China’s goal of eventually having the renminbi (RMB) be “fully convertible” necessarily requires that its capital account be fully liberalized; this paper investigates China’s ongoing changes to its capital controls and its progress on liberalizing the capital account. The first portion of the paper studies deviations from covered interest parity, a common measure of capital controls. Econometric analysis provides evidence of significant and persistent RMB/USD interest rate differentials, calculated from monthly data on 1-month yields over the sample period 1999 to 2014. At the same time, evidence of cointegration between the onshore and offshore yields suggests that capital flows are not fully restricted in the long run. The second portion of the paper analyzes de jure capital control indices constructed from the IMF’s AREAER documents, following Chen and Qian (2015), and actual capital account flows based on China’s Balance of Payments. The constructed de jure indices quantify the intensity of changes in capital controls, capturing the gradualist style that China adopts in implementing its policies. The index reveals that China has increased its pace of capital account liberalization in recent years and, in particular, has prioritized liberalizing controls on outward FDI flows and equity securities inflows. The constructed de jure indices and the respective flows for FDI and equity securities are found to be highly correlated, implying that flows have been responsive to changes in the controls. It also indicates that, prior to their removal, the capital controls had been relatively effective.
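For reference, the covered interest parity deviation that the first portion measures can be written, in log-approximate form and with notation assumed here rather than taken from the paper, as:

```latex
\delta_t = i_t - i_t^{*} - (f_t - s_t)
```

where $i_t$ is the onshore RMB yield, $i_t^{*}$ the offshore USD yield, and $f_t$, $s_t$ the log forward and spot RMB/USD exchange rates. Under free capital mobility, covered arbitrage drives $\delta_t$ toward zero, so significant and persistent nonzero deviations indicate binding capital controls.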
245

An Information Security Control Assessment Methodology for Organizations

Otero, Angel Rafael 01 January 2014 (has links)
In an era in which the use of, and dependence on, information systems is significantly high, the threat of information security incidents that could jeopardize the information held by organizations is increasingly serious. Alarming facts in the literature point to inadequacies in information security practices, particularly in the evaluation of information security controls (ISC) in organizations. Research efforts have resulted in various methodologies developed to deal with the ISC assessment problem. A closer look at these traditional methodologies highlights various weaknesses that can prevent an effective ISC assessment in organizations. This dissertation develops a methodology that addresses such weaknesses when evaluating ISC in organizations. The methodology, implemented using MATLAB's Fuzzy Logic Toolbox, is based on fuzzy set theory, which allows for a more accurate assessment of imprecise criteria than traditional methodologies. It is argued and evidenced that evaluating ISC using fuzzy set theory addresses the weaknesses found in the literature for traditional evaluation methodologies and thus leads to a more thorough and precise assessment. This, in turn, results in a more effective selection of ISC and enhanced information security in organizations. The main contribution of this research to the information security literature is the development of a fuzzy set theory-based assessment methodology that provides for a thorough evaluation of ISC in organizations, addressing the weaknesses and limitations identified in existing ISC assessment methodologies. The methodology can also be implemented in a spreadsheet or software tool, promoting usage in practical scenarios where highly complex methodologies for ISC selection are impractical. Moreover, the methodology fuses multiple evaluation criteria to provide a holistic view of the overall quality of ISC, and it is easily extended to include additional evaluation criteria not considered in this dissertation; this is one of its most meaningful contributions. Finally, the methodology provides a mechanism to evaluate the quality of ISC in various domains. Overall, the methodology presented in this dissertation proved to be a feasible technique for evaluating ISC in organizations.
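A minimal sketch of the fuzzy-set idea appears below: imprecise 0-10 criterion ratings are mapped through triangular membership functions and defuzzified into a weighted quality score. The linguistic terms, breakpoints, criteria, and weights are all illustrative assumptions, not the dissertation's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative linguistic terms over a 0-10 rating scale.
TERMS = {"low": (-1, 1, 5), "medium": (3, 5, 7), "high": (5, 9, 11)}
CENTROID = {"low": 1.0, "medium": 5.0, "high": 9.0}

def fuzzy_score(rating):
    """Defuzzify one imprecise 0-10 rating via centroid averaging."""
    mu = {t: tri(rating, *p) for t, p in TERMS.items()}
    total = sum(mu.values())
    return sum(mu[t] * CENTROID[t] for t in mu) / total

# Weighted aggregation over evaluation criteria for one security control
# (criteria names and weights are hypothetical).
ratings = {"effectiveness": 7.5, "coverage": 4.0, "maturity": 6.0}
weights = {"effectiveness": 0.5, "coverage": 0.3, "maturity": 0.2}
quality = sum(weights[c] * fuzzy_score(r) for c, r in ratings.items())
print(f"control quality: {quality:.2f} / 10")
```

The point of the fuzzy step is that a rating of, say, 4.0 is partly "low" and partly "medium" rather than forced into one bin, which is what lets the assessment handle imprecise criteria more gracefully than crisp scoring.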
246

Complexity Management to design and produce customer-specific hydraulic controls for mobile applications

Krüßmann, Martin, Tischler, Karin 03 May 2016 (has links) (PDF)
Complexity management is the key to success for mobile machinery, where the variety of customers and applications requires individual solutions. This paper presents the way Bosch Rexroth supports each OEM with hydraulic controls – from specification and conception through application and production. It gives examples of how platforms and processes are optimized according to customer needs. The demand for flexible, short-term deliveries is met by an agile production using the technologies of Industry 4.0.
247

Three Essays in International Macroeconomics

Nanovsky, Simeon Boyanov 01 January 2015 (has links)
This dissertation spans topics related to global trade, oil prices, optimum currency areas, the eurozone, monetary independence, capital controls, and the international monetary policy trilemma. It consists of four chapters and three essays. Chapter one provides a brief summary of all three essays. Chapter two investigates the impact of oil prices on global trade. It is concluded that when oil prices increase, countries start trading relatively more with their neighbors. As an application, this chapter provides a new estimate of the eurozone effect on trade. Chapter three continues to study the eurozone and asks whether it is an optimum currency area, using the member countries’ desired monetary policies. It is concluded that Greece, Spain, and Ireland have desired policies that are the least compatible with the common euro policy and are therefore the least likely to have formed an optimum currency area with the euro. Chapter four provides a new methodology for testing the international trilemma hypothesis. It is concluded that the trilemma holds in the context of the Asian countries.
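One conventional way to formalize the chapter-two finding is a gravity equation augmented with an oil-price/distance interaction; the specification below is an assumed illustration of the idea, not the dissertation's estimated model:

```latex
\ln T_{ijt} = \beta_0 + \beta_1 \ln(Y_{it} Y_{jt}) + \beta_2 \ln D_{ij}
            + \beta_3 \left( \ln P^{\mathrm{oil}}_t \times \ln D_{ij} \right)
            + \varepsilon_{ijt}
```

Here $T_{ijt}$ is bilateral trade, $Y$ national incomes, $D_{ij}$ bilateral distance, and $P^{\mathrm{oil}}_t$ the oil price; a negative $\beta_3$ captures trade tilting toward nearby partners as oil (and hence transport) costs rise.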
248

Towards a non-intrusive traffic surveillance system using digital image processing

Lorio, Berino 12 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2001. / ENGLISH ABSTRACT: With the increased focus on the use of innovative and state-of-the-art technology in Intelligent Transport Systems (ITS), the need for more accurate and more detailed road traffic flow data has become apparent. Data obtained from vehicle detector loops, which merely act as vehicle presence sensors, is no longer reliable or accurate enough. This type of sensor also has to be inserted into the road surface, temporarily obstructing traffic flow, and must be replaced after pavement reconstruction. One solution to this problem is to develop a traffic surveillance system that uses video image processing. In cities where Intelligent Transport Systems are used extensively, roadways are monitored through closed-circuit television (CCTV) cameras, mounted on posts at the roadside and closely watched by traffic control centre personnel. These cameras can serve a dual purpose, being used both for human monitoring and as inputs to video image processing systems. In this study, some of the digital image processing techniques that could be used in a traffic surveillance system were investigated. This report leads the reader through the various steps in the processing of a scene by a traffic surveillance system based on feature tracking, and discusses the pitfalls and problems that were experienced. The tracker was tested using three image sequences, and the results are presented in the final chapter of this report.
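The feature-tracking pipeline described above can be sketched with modern tools as follows. The thesis predates this API, so the library calls, file name, and parameter values are assumptions chosen to illustrate the detect-then-track structure, not the author's implementation.

```python
import cv2

cap = cv2.VideoCapture("traffic.avi")   # hypothetical input sequence
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect corner features (e.g., on vehicles entering the scene).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track features frame-to-frame with pyramidal Lucas-Kanade optical flow.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
    if len(pts) < 50:  # re-detect when too many tracks are lost
        pts = cv2.goodFeaturesToTrack(gray, 200, 0.01, 7)
    prev_gray = gray
```

Grouping the resulting point trajectories into per-vehicle tracks is what yields counts, speeds, and headways for the surveillance system.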
249

Self-organising traffic control algorithms at signalised intersections

Einhorn, Mark David 04 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: The debilitating social, economic and environmental ramifications of traffic congestion are experienced in large cities the world over. The optimisation of traffic signal timings at signalised road intersections attempts to mitigate the extent of these adverse effects of traffic congestion by reducing the delay time experienced by vehicles in a transport network. Today, traffic signal control schemes may be classified into one of two main classes, namely fixed-time traffic signal control strategies, which are typically cyclic in nature, and vehicle-actuated traffic signal control strategies, which are typically acyclic in nature. Generally, cyclic control strategies tend to lack flexibility and are unable to adapt to short-term fluctuations in traffic flow rates, resulting in green times that are either too long or too short. On the other hand, acyclic control strategies tend to lack coordination between intersections, resulting in vehicles being required to stop at the majority of signalised intersections they encounter. Self-organising traffic signal control has been proposed as an attractive alternative form of control which both exhibits flexibility and facilitates global coordination between intersections as a result of localised signal switching policies. Two examples of existing self-organising traffic signal control algorithms from the literature are an algorithm proposed by Lammer and Helbing in 2008 and an algorithm proposed by Gershenson and Rosenblueth in 2012. These algorithms have been shown to outperform both optimised fixed-time traffic signal control techniques and state-of-the-art vehicle-actuated traffic signal control techniques in terms of reducing vehicle delay time in a transport network. A drawback of both of these self-organising approaches, however, is that their effective operation relies on carefully selected parameter values; poorly selected parameter values may render these algorithms very ineffectual.

In this dissertation, three novel self-organising traffic signal control algorithms are proposed. These three algorithms assume the use of existing radar detection sensors mounted at the intersection to provide the necessary input data. The radar detection sensors are capable of detecting and tracking individual vehicles approaching an intersection, providing real-time information pertaining to their physical dimensions, velocities, and ranges from the intersection in terms of both time and distance. The three traffic signal control algorithms are free of any user-specified parameters, relying solely on the data provided by the radar detection sensors to inform their signal switching policies. The first of these traffic signal control algorithms is inspired by inventory control theory, and draws parallels between the monetary costs typically considered in inventory control models and the delay time costs associated with traffic control at signalised intersections, which the algorithm attempts to minimise. The second novel traffic control algorithm is inspired by the chemical process of osmosis, in which solvent molecules move unaided from a region where they are highly concentrated, across a semi-permeable membrane, into a region of high solute molecule concentration. The algorithm models vehicles approaching an intersection as solvent molecules and the physical space available for the vehicles to occupy once they have passed through the intersection as solute molecules. Following this analogy, the intersection is considered to be the semi-permeable membrane. The third traffic control algorithm is a hybrid of the inventory-inspired and osmosis-inspired algorithms together with an intersection utilisation maximisation technique, which prevents unnecessary or prolonged underutilisation of an intersection. The three novel traffic control algorithms, together with the algorithms of Lammer and Helbing and of Gershenson and Rosenblueth, as well as a fixed-time control algorithm, are implemented in a purpose-built microscopic traffic simulation modelling framework. Several measures are employed to evaluate the relative performances of the algorithms. These measures include the usual mean and maximum delay times incurred by vehicles and the saturation level of the roadways in the transport network, as well as three novel performance indicators: the mean number of stops made by vehicles, their mean normalised delay time, and the mean normalised number of stops made. The algorithms are compared in the context of a linear corridor road network topology as well as a grid road network topology under various traffic flow conditions. The overall performance of the novel hybrid traffic signal control algorithm is found to be superior for the corridor road network topology, while the performance of the osmosis-inspired algorithm is found to be superior for the grid road network topology.
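For intuition about what "localised signal switching policies" look like, the sketch below implements a Gershenson-style pressure-accumulation rule of the kind the thesis benchmarks against; the threshold is exactly the sort of hand-tuned parameter the three novel algorithms are designed to eliminate. All values are illustrative.

```python
THRESHOLD = 300.0  # vehicle-seconds (assumed; a hand-tuned parameter that
                   # the thesis' parameter-free algorithms avoid)

def update_signal(red_queue_count, pressure, dt):
    """One control step of a self-organising switching rule.

    red_queue_count: vehicles currently waiting on the red approach
    pressure: accumulated waiting, in vehicle-seconds
    dt: time step in seconds
    Returns (switch_now, new_pressure).
    """
    pressure += red_queue_count * dt
    if pressure >= THRESHOLD:
        return True, 0.0   # grant green to the waiting approach and reset
    return False, pressure
```

Because each intersection reacts only to locally measured queues, platoons of vehicles end up coordinating neighbouring signals indirectly, which is the global self-organisation the abstract describes.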
250

Indoor secondary organic aerosol formation: influence of particle controls, mixtures, and surfaces

Waring, Michael Shannon 22 October 2009 (has links)
Ozone (O₃) and terpenoids react to produce secondary organic aerosol (SOA). This work explored novel ways that these reactions form SOA indoors, through five investigations in two categories: (i) the impacts of particle controls on indoor SOA formation, and (ii) two fundamental aspects of indoor SOA formation. For category (i), two investigations examined ion generators, air purifiers that are ineffective at removing particles and that emit ozone during operation. With a terpenoid source present (an air freshener), ion generators acted as steady-state SOA generators, both in a 15 m³ chamber and in a 27 m³ room. The final investigation in category (i) modeled how heating, ventilating, and air-conditioning (HVAC) systems influence SOA formation. Influential HVAC parameters were flow rates, particle filtration, and indoor temperature for the residential and commercial models, as well as ozone removal by particle-laden filters for the commercial model. For category (ii), the first investigation measured SOA formation from ozone reactions with single terpenoids and terpenoid mixtures in a 90 L Teflon-film chamber, at low and high ozone concentrations. For low ozone, experiments with only d-limonene yielded the largest SOA number formation relative to other mixtures, some of which had three times the effective amount of reactive terpenoids. This trend was not observed for high-ozone experiments, and these results imply that ozone-limited reactions with d-limonene form byproducts with high nucleation potential. The second investigation in category (ii) explored SOA formation from ozone reactions with surface-adsorbed terpenoids. A model framework was developed to describe SOA formation due to ozone/terpenoid surface reactions, and experiments in a 283 L chamber determined the SOA yield for ozone/d-limonene surface reactions. The observed molar yields were 0.14–0.16 over a range of relative humidities, and lower relative humidity led to higher SOA number formation from surface reactions. Building materials on which ozone/d-limonene surface reactions are predicted to lead to substantial SOA formation are those with initially low surface reactivity, such as glass, sealed materials, or metals. The results from category (ii) suggest significant, previously unexplored mechanisms of SOA number formation indoors.
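As a rough illustration of why a steady-state SOA generator matters indoors, the sketch below balances an assumed SOA formation rate against ventilation and deposition losses in a room of the size used in the study; the formation, air-exchange, and deposition values are illustrative assumptions, not measured results from this work.

```python
# Steady-state indoor mass balance: formation vs. ventilation + deposition.
V = 27.0      # m^3, room volume (matches the room-scale test size)
ach = 0.5     # 1/h, air exchange rate (assumed)
E = 150.0     # ug/h, net SOA mass formation rate (assumed)
k_dep = 0.2   # 1/h, particle deposition rate (assumed)

# C_ss = E / (Q + k_dep * V), with Q = ach * V the ventilation flow.
Q = ach * V
C_ss = E / (Q + k_dep * V)
print(f"steady-state SOA concentration: {C_ss:.1f} ug/m^3")
```

With these assumed values the persistent source sustains roughly 8 µg/m³ of SOA indefinitely, which is the sense in which an ion generator plus an air freshener acts as a continuous aerosol source rather than a one-time emission.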
