
NH3-free PECVD silicon nitride for photonic applications

Dominguez Bucio, Thalia January 2018 (has links)
Silicon photonics has opened the possibility of developing multilayer platforms based on complementary metal-oxide-semiconductor (CMOS)-compatible materials, which have the potential to provide the density of integration required to fabricate complex photonic circuits. Amongst these materials, silicon nitride (SiN) has drawn attention due to its fabrication flexibility and advantageous intrinsic properties, which can be tailored to fulfil the requirements of different linear and non-linear photonic applications covering ultraviolet to mid-infrared wavelengths. Yet the fabrication techniques typically used to grow SiN layers rely on processing temperatures > 400 °C to obtain low propagation losses, which makes them inappropriate for multilayer integration. This thesis presents a systematic investigation providing a comprehensive understanding of a deposition method based on an NH3-free plasma-enhanced chemical vapour deposition (PECVD) recipe that allows the fabrication of low-loss silicon nitride layers at temperatures < 400 °C. The results of this study show that the properties of the studied SiN layers depend mostly on their N/Si ratio, which is in fact one of the only properties that can be directly tuned with the deposition parameters. These observations provided a framework to optimise the propagation losses and optical properties of the layers in order to develop three platforms intended for specific photonic applications. The first comprises 300 nm stoichiometric SiN layers with a refractive index (n) of 2 that enable the fabrication of photonic devices with propagation losses < 1 dB/cm at λ = 1310 nm and < 1.5 dB/cm at λ = 1550 nm, well suited to applications that require efficient routing of optical signals. The second consists of 600 nm N-rich layers (n = 1.92) that allow the fabrication of devices with propagation losses < 1 dB/cm at λ = 1310 nm, apt for polarisation-independent operation, and of coarse wavelength division multiplexing devices with cross-talk < 20 dB and low insertion losses. Finally, the last platform consists of suspended Si-rich layers (n = 2.54) that permit the demonstration of photonic crystal cavities with Q factors as high as 122,000 and photonic crystal waveguides capable of operating in the slow-light regime. The demonstration of these platforms will hopefully stimulate the development of more complex SiN devices for multilayer routing, wavelength division multiplexing applications and non-linear integrated photonics.
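As a quick sanity check on what such loss figures mean in practice, the following minimal sketch converts a propagation loss in dB/cm into the fraction of optical power surviving a waveguide of a given length (the 2 cm device length is an assumed illustrative value, not one from the thesis):

```python
# Convert a propagation loss in dB/cm into the power fraction surviving
# a waveguide of a given length. Illustrative values only.

def surviving_power_fraction(loss_db_per_cm: float, length_cm: float) -> float:
    """Return P_out / P_in for a waveguide with the given loss and length."""
    total_loss_db = loss_db_per_cm * length_cm
    return 10 ** (-total_loss_db / 10)

# Assumed example: a 2 cm waveguide on the stoichiometric SiN platform.
for loss in (1.0, 1.5):  # dB/cm at 1310 nm and 1550 nm respectively
    print(f"{loss} dB/cm over 2 cm -> {surviving_power_fraction(loss, 2.0):.1%}")
# 1.0 dB/cm -> ~63.1% of power; 1.5 dB/cm -> ~50.1%
```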

Non-rigid registration for multi-modality image fusion using prior shapes

Cui, Zheng January 2018 (has links)
Chronic obstructive pulmonary disease (COPD) is a chronic lung disease that causes breathing difficulties. One possible course of treatment for severe COPD is lung volume reduction surgery (LVRS), which involves removing, or isolating, the lobe or lobes of the lung that are most affected by the disease. A fusion of the multi-slice computed tomography (MSCT) and ventilation (V) and perfusion (Q) single photon emission computed tomography (SPECT) modalities therefore represents a powerful tool for COPD analysis and for guiding lung resection surgery. Due to reduced uptake of radioisotope at the location of a lesion, the V and Q scans of a moderate COPD patient contain photopenic regions, which are normally misrecognised as part of the background in the target SPECT scan. Non-rigid registration performed without displacement constraints therefore deforms the MSCT scans excessively. Moreover, considering the low-resolution nature of functional imaging and the highly deformable nature of lungs, very few published algorithms are able to accommodate current clinical demands. The motivation of this project is to develop a high-performance, statistical deformation model (SDM)-based non-rigid registration algorithm capable of achieving accurate alignment of lung MSCT and SPECT imaging. First, an innovative similarity registration method for volumetric shapes is proposed. The method is based on the characteristic function and is intended to strike a desirable balance between performance and efficiency. Radial moments and spherical coordinate system-based cross-correlation are exploited to obtain the optimal scaling, rotation and translation parameters within a reasonable time. An iterative method is also employed to improve the robustness of the algorithm. Groups of shapes in the presence of significant noise, as well as lung shapes extracted from a low-dose computed tomography database, are employed in the validation experiments. In order to eliminate the influence of the weighting parameter for the statistical term, a novel MSCT/SPECT registration technique based on a parameter-reduced SDM is proposed in this thesis. The SDM is trained on prior lung shapes. In addition, the multichannel technique performs V/MSCT and Q/MSCT alignments simultaneously to derive the optimal deformations. Lung MSCT and SPECT imaging data from a real medical database, as well as the 4D extended cardiac-torso phantom, were employed in the experiments. The proposed algorithm was validated to be capable of preventing excessive deformations and of achieving accurate registration between the two imaging modalities. The deformations for MSCT/SPECT registration are finally used to warp lobe masks, which are then mapped onto SPECT images for lung lobe/SPECT fusion.
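As background for the SDM component, here is a minimal sketch of how a statistical deformation model is commonly built: principal component analysis over a set of training deformation fields, so that new deformations are parameterised by a small number of mode weights. This is my illustration with toy data, not the thesis's parameter-reduced implementation:

```python
import numpy as np

# Minimal statistical deformation model (SDM) sketch: PCA over training
# deformation fields. Illustrative only; the thesis's parameter-reduced
# SDM is more involved.

def build_sdm(deformations: np.ndarray, n_modes: int):
    """deformations: (n_samples, n_voxels*3) flattened displacement fields."""
    mean = deformations.mean(axis=0)
    centred = deformations - mean
    # SVD of the centred data gives the principal deformation modes.
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes], s[:n_modes]

def synthesise(mean, modes, weights):
    """Generate a new deformation field from a small weight vector."""
    return mean + weights @ modes

# Assumed toy data: 20 training fields over a 10x10x10 grid, 3 components each.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 10 * 10 * 10 * 3))
mean, modes, svals = build_sdm(train, n_modes=5)
new_field = synthesise(mean, modes, rng.normal(size=5))
print(new_field.shape)  # (3000,)
```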

Exploring barriers to use of social media in support of non-formal learning by pupils attending secondary education in the UK : a mixed method approach

Blair, Robert David January 2018 (has links)
The problem this thesis seeks to address is that, despite substantial evidence that young people of secondary school age in the United Kingdom embrace social media, there is no established, recognised best practice for incorporating it into their learning experience. This lack of best practice matters because social media has been demonstrated to support learning very effectively (closely fitting pedagogical approaches such as constructivism and connectivism), and digital literacy around the use of the World Wide Web in general, and social media in particular, is considered a life skill. A mixed methods, explanatory sequential approach is used to improve understanding of why and how pupils use social media, what they seek to achieve in doing so, and why teachers do not appear to be promoting the use of social media to support non-formal learning. Data collection consisted of a quantitative survey undertaken by 380 pupils attending secondary schools in the counties of Hampshire, Cambridgeshire, Norfolk and Suffolk in the UK. This was followed by qualitative studies in the form of 8 focus groups with an average of 12 pupils per group and 18 individual interviews with teachers, which were thematically coded using an inductive, constant comparison approach until reaching the point of saturation. An argument is presented that, although both pupils and teachers recognise the potential of social media to contribute to the non-formal learning process, this will not take place until key barriers are removed; in particular, perceptions of risk need to be addressed, and limitations created by the technical affordances of current platforms must be overcome. This thesis suggests that a set of mitigation strategies for these barriers could be developed based upon Digital Literacy education and Participatory Design-led software development. This thesis therefore provides an original contribution to knowledge in the identification of barriers inhibiting the use of social media by pupils in compulsory education to support non-formal learning, and proposes an interdisciplinary approach to mitigate these barriers.

Modelling an agent to trade on behalf of V2G drivers

Almansour, Ibrahem Abdullah January 2018 (has links)
Due to the limited availability of fuel resources, there is an urgent need to use renewable sources effectively. To achieve this, power consumers should participate actively in the consumption and production of power. Consumers nowadays can produce power, consume a portion of it locally, and offer the rest to the grid. This new feature allows new decisions for power consumers. Specifically, vehicle-to-grid (V2G), one of the most effective sustainable solutions, could provide these opportunities thanks to its power storage capability. V2G is where an electric vehicle (EV) offers electric power to the grid when parked. Moreover, V2G could use solar and wind power and significantly decrease the amount of primary power used for transportation. Furthermore, it offers the potential to reduce consumers' power costs if used effectively. In this thesis, the specific problems that we discuss can be categorised into three levels of complexity. At the simplest level is the problem of understanding power market price behaviour in the context of V2G, where we have complete information about the vehicle usage behaviour and we assume there is one trip a day. At the next level, the problem of uncertainty in the power market price is considered, while we keep the same assumption for the vehicle usage behaviour. A real-life example of this model is a bus timetable, where there is complete information about the trip times and the uncertainty lies only on the power market price side. Lastly, in addition to the uncertainty in the power market price, uncertainty in the drivers' vehicle usage behaviour is included, with possible multiple trips in a day. The real-life example of this model is normal vehicle drivers, where there is a chance that they will use their vehicle at any time, so there are two types of uncertainty: in vehicle usage behaviour and in power market prices. For each of these subjects, we proposed a model and also conducted two surveys in order to attain our study aims. In more detail, we initially develop an agent to trade on behalf of V2G users, maximising their profits without uncertainty in the power market price. We then run the proposed model in three different scenarios using an optimal algorithm based on the backward induction concept, and compare the results of our solution to a simple benchmark. These scenarios model the user behaviour over the course of a single day, where we assume that users drive their cars for a single period per day; they differ according to when the drivers start using their cars. We show that our solution outperformed the simple strategy by 49% in the first scenario, by 51% in the second and by 10% in the third. Next, we develop a heuristic algorithm that can trade on behalf of V2G users, maximising their profits while considering price uncertainty. Our proposed algorithm combines the concepts of consensus algorithms and expected value with a backward induction algorithm. We then run the proposed algorithm with two types of consensus algorithms, using Borda and majority voting, as well as with an expected value algorithm, and compare the results of each.

The concept of consensus can be defined as follows: several samples of feasible steps are considered at each period of time; after solving each sample, the decision that appears most frequently at time t is selected. Simulations show that the expected value algorithm outperforms the other two (Borda and majority voting) under all power market price scenarios considered. Finally, we increase the complexity of the problem by considering uncertainty in vehicle usage behaviour in the context of V2G, in addition to the uncertainty in the power market price. Furthermore, we consider the battery degradation cost incurred by charging and discharging actions. To do so, we refine the second model and use the multinomial logit model to capture the vehicle usage behaviour. We then run the proposed algorithm and the benchmark algorithms and compare the results of each. Simulations show that our proposed algorithm outperforms the naive algorithm by a factor of about 15 in terms of average profits when we start the experiments with a half-full battery, and by a factor of about 5 when we start with a full battery. On the other hand, our proposed algorithm achieves 89% of the average profits of the complete-information algorithm when starting with a half-full battery, and the complete-information algorithm provides almost the same results as our proposed algorithm when starting with a full battery. Indeed, this is a good result considering that the complete-information algorithm deals with known information, whereas the proposed algorithm deals with uncertain data.
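To illustrate the backward induction idea at the heart of this work, here is a minimal sketch with assumed toy prices and battery parameters (my illustration, not the thesis's model): work backwards from the final period, computing for each battery level the best charge/discharge/idle action given the price at each step.

```python
# Backward-induction sketch for a V2G trading agent (toy example).
# Assumed: known per-period prices, a discrete battery with `levels` states,
# and one unit charged/discharged per period. Illustrative only.

def plan(prices, levels=5, start=2):
    T = len(prices)
    # value[t][b] = best profit from period t onward with battery level b
    value = [[0.0] * levels for _ in range(T + 1)]
    action = [[0] * levels for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for b in range(levels):
            best, best_a = float("-inf"), 0
            for a in (-1, 0, 1):       # discharge / idle / charge
                nb = b + a
                if not 0 <= nb < levels:
                    continue
                # selling (a = -1) earns the price; buying (a = +1) pays it
                gain = -a * prices[t] + value[t + 1][nb]
                if gain > best:
                    best, best_a = gain, a
            value[t][b], action[t][b] = best, best_a
    return value[0][start], action

profit, policy = plan([3.0, 1.0, 4.0, 2.0, 5.0])
print(profit)  # buy low, sell high: optimal profit for the toy price series
```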

The factors impacting the acceptance of E-assessment by academics in Saudi universities

Alruwais, Nuha January 2018 (has links)
As assessment is one of the important pillars of the learning process, E-assessment has been introduced to develop models of assessment and to address some of the limitations and problems of paper tests. In Saudi higher education, E-assessment has been emerging alongside E-learning systems. At present, few Saudi academics use E-assessment, and the factors that affect these academics' acceptance of E-assessment have not yet been investigated. Therefore, this study aims to identify and investigate the factors that influence academics' behavioural intention to accept E-assessment. The theories and models of user acceptance of new technology that help explain individual behavioural intention were reviewed, and a Model of Acceptance of E-assessment (MAE) was proposed based on models of user acceptance and use of technology and on previous studies in the same field. The MAE consists of: attitude (perceived ease of use, perceived usefulness, and compatibility), subjective norm (peer influence and superior influence) and perceived behavioural control (self-efficacy, resource facilitating conditions, and IT support). These three main factors were used as determinants of academics' behavioural intention to accept E-assessment. Age and gender were added to the model as moderating factors. The study followed a sequential mixed methods approach, which gathered qualitative and quantitative data in an ordered sequence using different data collection tools (interview, questionnaire, and focus group discussion). The developed model (MAE) was validated through interviews with 15 experts, who confirmed all the factors except gender. Awareness of E-assessment and the existence of a strong security system were suggested by the experts as factors that should be added to the MAE. Later, an online questionnaire was sent to academics in Saudi universities and 306 responses were received from different universities in Saudi Arabia. The model and the relationships between the factors were assessed using Structural Equation Modelling (SEM). The results showed that the MAE achieved a good fit with the collected data, and that the model's instruments were reliable and valid. Finally, the SEM results were explored through focus group discussions among ten Saudi academics. The study found that attitude is the factor with the strongest effect on academics' behavioural intention to accept E-assessment; compatibility has a high impact on academics' attitude, followed by perceived ease of use and perceived usefulness, while awareness has no effect on academics' attitude. Subjective norm was found to have a slight effect on academics' behavioural intention to accept E-assessment, with superior influence having a strong impact on subjective norm. Surprisingly, perceived behavioural control was found to have no influence on academics' intention, and only self-efficacy had an effect on perceived behavioural control. Additionally, the results showed that age has an effect on attitude and a slight effect on subjective norm. The research contributes to the body of knowledge in the fields of technology acceptance research and the use of technology to enhance education. The MAE provides an in-depth understanding of academics' beliefs regarding the acceptance of E-assessment, which can help developers and educational institutions in Saudi Arabia to be aware of the factors that encourage academics to accept E-assessment before implementing it.
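Since the study reports that the model's instruments were reliable and valid, a minimal sketch of one standard reliability check may help: Cronbach's alpha over the items of a questionnaire construct. This is my illustration with simulated responses; the thesis does not specify this exact computation:

```python
import numpy as np

# Cronbach's alpha: a standard internal-consistency check for a
# questionnaire scale. Illustrative; the toy responses are assumed.

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Assumed toy data: 306 respondents, 4 items measuring one construct.
rng = np.random.default_rng(1)
latent = rng.normal(size=(306, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(306, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # > 0.7 suggests acceptable reliability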

A review on the critical success factors of agile software development : an empirical study

Aldahmash, Abdullah M. January 2018 (has links)
Given the evolution and increasing usage of agile development practices and techniques, the successful implementation of agile development is crucial. Agile software development has become one of the most commonly used methodologies for developing software, and it promises to deliver many benefits. Nevertheless, the implementation of agile practices and techniques requires many changes that might be a challenge for organisations attempting to succeed with agile software development. The relevant literature presents a great deal of research on the critical success factors (CSFs) of agile software development. This study aims firstly to review the literature related to agile software development in order to identify its CSFs. With this in mind, one of the objectives of this study is to investigate the factors which contribute to the success of agile software development. This study also aims to explore the relations between these factors and to suggest a set of measurements which could be used to measure the success of agile software development projects. To achieve these objectives, this research employed empirical research methodologies. All of the research methods employed in this study received ethical approval from the ethics committee of the School of Electronics and Computer Science at the University of Southampton. The research involved an exploratory study to investigate the identified success factors of agile software development. A web-based survey was distributed to agile practitioners in order to obtain their beliefs regarding the importance of the identified success factors. As a result, it was possible to order the CSFs of agile development by importance; communication was found to be the most important success factor. The relations between an agile project's progress and the importance of these factors were explored. Using factor analysis, the inter-relations between the identified success factors were also investigated. The success factors were split into two components with the aim of developing a better understanding of said factors: the organisational and people component, and the technical and project component. This research, moreover, developed an instrument with which the success of agile development projects can be evaluated. The proposed instrument includes a list of questions and metrics to measure the success of agile development projects. Agile experts were interviewed to review the development of the proposed instrument, and following their feedback the instrument was amended. Once this stage had been completed, the instrument was used in three case studies, the aim being to seek a practical evaluation of whether the proposed instrument is valid; its validity was confirmed, and some suggestions on how it could be improved were obtained. To summarise, this research attempted to recognise the CSFs and to understand their importance, how this varies through the agile project, and their interrelations, in order to provide insights into these CSFs. Furthermore, this research developed and validated an instrument to measure and evaluate success in addressing these CSFs during an agile software development project.
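To illustrate the kind of analysis used to split the CSFs into two components, here is a minimal factor-analysis sketch over simulated importance ratings (toy data under assumed structure, not the study's actual survey responses):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Sketch: extract two latent components from success-factor ratings,
# in the spirit of the study's factor analysis. Toy data only.

rng = np.random.default_rng(2)
n_respondents, n_csfs = 200, 8
# Assume two latent components drive the 8 rated CSFs.
latent = rng.normal(size=(n_respondents, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [1, 0],   # organisational/people
                     [0, 1], [0, 1], [0, 1], [0, 1]])  # technical/project
ratings = latent @ loadings.T + rng.normal(scale=0.5, size=(n_respondents, n_csfs))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(ratings)
# Each row: how strongly one CSF loads on the two extracted components.
print(np.round(fa.components_.T, 2))
```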

Exploiting Linked Open Data (LoD) and Crowdsourcing-based semantic annotation & tagging in web repositories to improve and sustain relevance in search results

Khan, Arshad Ali January 2018 (has links)
Online searching of multi-disciplinary web repositories is a topic of increasing importance as the number of repositories increases and the diversity of skills and backgrounds of their users widens. Earlier term-frequency based approaches have been improved by ontology-based semantic annotation, but such approaches are predominantly driven by "domain ontologies engineering first" and lack dynamicity, whereas the information is dynamic: the meaning of things changes with time, and new concepts are constantly being introduced. Further, no sustainable framework or method has been discovered so far that could automatically enrich the content of heterogeneous online resources for information retrieval over time. Furthermore, the methods and techniques being applied are fast becoming inadequate due to increasing data volume, concept obsolescence, and the complexity and heterogeneity of content types in web repositories. In the face of such complexities, term matching alone between a query and the indexed documents will no longer fulfil complex user needs. The ever-growing gap between syntax and semantics needs to be continually bridged in order to address the above issues and to ensure accurate retrieval of search results against natural language queries, despite such challenges. This thesis investigates whether, by domain-specific expert crowd-annotation of content on top of automatic semantic annotation (using Linked Open Data sources), the contemporary value of content in scientific repositories can be continually enriched and sustained. A purpose-built annotation, indexing and searching environment has been developed and deployed to a web repository which hosts more than 3,400 heterogeneous web documents. Based on expert crowd annotations, automatic LoD-based named entity extraction and search results evaluations, this research finds that search results retrieval having the crowd-sourced element performs better than retrieval having no crowd-sourced element. This thesis also shows that a consensus can be reached between expert and non-expert crowd-sourced annotators on annotating and tagging the content of web repositories, using a controlled vocabulary (typology) and free-text terms and keywords.
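For the automatic LoD-based named entity extraction step, a minimal sketch of the general approach may be useful. This is my illustration using the public DBpedia Spotlight annotation endpoint; the thesis does not specify this exact service or client code, and the endpoint details reflect the public API as commonly documented:

```python
import requests

# Sketch: automatic LoD-based named entity annotation via DBpedia
# Spotlight. The endpoint and parameters are assumptions based on the
# public service; the thesis's own pipeline may differ.

def annotate(text: str, confidence: float = 0.5):
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    # Each resource links a surface form in the text to a DBpedia URI.
    return [(r["@surfaceForm"], r["@URI"])
            for r in resp.json().get("Resources", [])]

print(annotate("Semantic annotation links text to Linked Open Data."))
```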

An investigation into the performance of regular expressions within SPARQL query language

Aljaloud, Saud January 2019 (has links)
SPARQL has not simply been the standard querying language for the Resource Description Framework (RDF) within the Semantic Web; it has also gradually become one of the main querying languages for the graph model in general. To process SPARQL efficiently, an RDF store (as a DBMS) has to be used. However, SPARQL faces huge performance challenges for various reasons: the high flexibility of the RDF model, the fact that the SPARQL standardisation does not always focus on the performance side, and the immaturity of RDF and SPARQL in comparison to some other models such as SQL. One of SPARQL's features is the ability to search through literals/strings by using a regular expression (Regex) filter. This adds a very handy and expressive utility, which allows users to search through strings or filter certain URIs. However, Regex is computationally expensive as well as resource intensive in that, for example, data has to be loaded into memory. This thesis aims to investigate the performance of Regex within SPARQL. Firstly, we propose an analysis of the way people use Regex within SPARQL by looking at a huge log of queries made available by various RDF store providers. The analysis indicates various use cases in which performance can be made more efficient. There is very little in the literature that adequately tests the performance of Regex within SPARQL, so we also propose the first Regex-specific benchmark for SPARQL, named BSBMstr. BSBMstr shows how various Regex features affect the overall performance of SPARQL queries, and its results are reported on seven known RDF stores. SPARQL benchmarks in general have been a major field attracting much research in the area of the Semantic Web. Nevertheless, many have argued that there are still issues in their design or simulation of real-world scenarios. This thesis therefore also proposes a generic SPARQL benchmark, named CBSBench, which introduces a new benchmark design. Unlike other benchmarks, CBSBench measures the performance of clusters rather than fixed queries. The usage of clusters also provides a stress test on RDF stores, because of the diversity of queries within each cluster. The CBSBench results are also reported on very different RDF stores. Finally, the thesis introduces (RegjInd)ex, a Regex index data structure based on a tri-gram inverted index. This index aims to reduce the result sets that must be scanned to match a Regex filter within SPARQL. The proposal has been evaluated with two different Regex-specific benchmarks and implemented on top of two RDF stores. (RegjInd)ex produces a smaller index than previous work, while still producing results faster than the original implementations by up to an order of magnitude. In general, the thesis provides general guidelines that developers can follow to investigate similar features within a given DBMS. The investigation mainly relies on real-world usage, analysing how people use these features. From that analysis, developers can construct queries and features, alongside our proposed benchmarks, to run tests on their chosen subject. The thesis also discusses various ideas and techniques that can be used to enhance the performance of DBMSs.
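To illustrate the tri-gram inverted index idea behind such a Regex index, here is a generic sketch (my illustration, not the thesis's implementation): trigrams of a literal substring required by the regex are used to narrow the candidate set before the full regex is run.

```python
import re
from collections import defaultdict

# Sketch of a trigram inverted index used to pre-filter strings before
# applying a full regex, in the spirit of the thesis's Regex index.

def trigrams(s: str):
    return {s[i:i + 3] for i in range(len(s) - 2)}

def build_index(literals):
    index = defaultdict(set)
    for doc_id, text in enumerate(literals):
        for g in trigrams(text):
            index[g].add(doc_id)
    return index

def search(required_literal: str, regex: str, literals, index):
    """required_literal: a literal substring the regex must contain."""
    required = trigrams(required_literal)
    # Only strings containing every required trigram can possibly match.
    candidates = (set.intersection(*(index[g] for g in required))
                  if required else set(range(len(literals))))
    return [literals[i] for i in candidates if re.search(regex, literals[i])]

docs = ["semantic web", "sparql engine", "web of data"]
idx = build_index(docs)
print(search("web", r"web", docs, idx))  # full regex runs on only 2 of 3 docs
```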

Enhanced air-interfaces for fifth generation mobile broadband communication

Harbi, Yahya January 2017 (has links)
In broadband wireless multicarrier communication systems, intersymbol interference (ISI) and intercarrier interference (ICI) should be reduced. In orthogonal frequency division multiplexing (OFDM), the cyclic prefix (CP) guarantees a reduction in ISI; however, the CP reduces spectral and power efficiency. In this thesis, iterative interference cancellation (IIC) with iterative decoding is used to reduce ISI and ICI in the received signal in multicarrier modulation (MCM) systems. Alternative schemes as well as OFDM with insufficient CP are considered: filter bank multicarrier (FBMC/Offset-QAM) and discrete wavelet transform based multicarrier modulation (DWT-MCM). IIC is applied in these different schemes. The required components are calculated from either the hard decision of the demapper output or the estimated decoded signal, and are used to improve the received signal. Channel estimation and data detection are very important parts of receiver design for wireless communication systems. Iterative channel estimation, using Wiener filter channel estimation with known pilots and IIC, is used to estimate the channel and improve data detection. Scattered pilots and interference approximation method (IAM) preamble pilots are used to calculate the estimated values of the channel coefficients. The estimated soft decoded symbols, together with the pilots, are used to reduce the ICI and ISI and improve the channel estimation. The combination of multiple-input multiple-output (MIMO) and OFDM enhances the air-interface for wireless communication systems. In a MIMO-MCM scheme, IIC and MIMO-IIC-based successive interference cancellation (SIC) are proposed to reduce the ICI/ISI and the cross interference at a given antenna caused by the signals transmitted from the target antenna and the other antennas, respectively. The number of iterations required can be determined by analysing the convergence of the IIC with the help of EXtrinsic Information Transfer (EXIT) charts. A new EXIT approach is proposed to provide a means of defining performance for a given outage probability on quasi-static channels.
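As a reminder of the mechanism the thesis works around, here is a minimal sketch of OFDM transmission with a cyclic prefix (a generic textbook construction with assumed parameters, not the thesis's simulation code). When the channel is shorter than the CP, each subcarrier sees a simple one-tap channel; the "insufficient CP" case the thesis targets with IIC arises when the channel is longer and ISI/ICI leak through.

```python
import numpy as np

# Minimal OFDM transmit/receive sketch: IFFT per symbol plus a cyclic
# prefix. Generic textbook construction; parameters are assumed values.

N, CP = 64, 16                      # subcarriers and CP length
rng = np.random.default_rng(3)
bits = rng.integers(0, 2, size=2 * N)
qpsk = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

time_symbol = np.fft.ifft(qpsk) * np.sqrt(N)           # one OFDM symbol
tx = np.concatenate([time_symbol[-CP:], time_symbol])  # prepend CP

# A 3-tap channel (shorter than the CP): the CP absorbs all ISI, so
# per-subcarrier one-tap equalisation recovers the symbols exactly.
channel = np.array([1.0, 0.4, 0.2])
rx = np.convolve(tx, channel)[CP:CP + N]               # strip CP, keep N samples
demod = np.fft.fft(rx) / np.sqrt(N)
H = np.fft.fft(channel, N)                             # per-subcarrier channel
print(np.allclose(demod / H, qpsk))                    # True
```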

Diphthong synthesis using the three-dimensional dynamic digital waveguide mesh

Gully, Amelia J. January 2017 (has links)
The human voice is a complex and nuanced instrument, and despite many years of research, no system is yet capable of producing natural-sounding synthetic speech. This affects intelligibility for some groups of listeners, in applications such as automated announcements and screen readers. Furthermore, those who require a computer to speak - due to surgery or a degenerative disease - are limited to unnatural-sounding voices that lack expressive control and may not match the user's gender, age or accent. It is evident that natural, personalised and controllable synthetic speech systems are required. A three-dimensional digital waveguide model of the vocal tract, based on magnetic resonance imaging data, is proposed here in order to address these issues. The model uses a heterogeneous digital waveguide mesh method to represent the vocal tract airway and surrounding tissues, facilitating dynamic movement and hence speech output. The accuracy of the method is validated by comparison with audio recordings of natural speech, and perceptual tests are performed which confirm that the proposed model sounds significantly more natural than simpler digital waveguide mesh vocal tract models. Control of such a model is also considered, and a proof-of-concept study is presented using a deep neural network to control the parameters of a two-dimensional vocal tract model, resulting in intelligible speech output and paving the way for extension of the control system to the proposed three-dimensional vocal tract model. Future improvements to the system are also discussed in detail. This project considers both the naturalness and control issues associated with synthetic speech and therefore represents a significant step towards improved synthetic speech for use across society.
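For readers unfamiliar with the method, here is a minimal sketch of the rectilinear digital waveguide mesh update in two dimensions, using the standard finite-difference-equivalent scheme. This is a generic textbook construction; the thesis's three-dimensional heterogeneous mesh, which models tissue impedances and dynamic geometry, is considerably more elaborate.

```python
import numpy as np

# Minimal 2-D rectilinear digital waveguide mesh (DWM) sketch using the
# standard finite-difference-equivalent update for a lossless,
# homogeneous medium. Grid size and step count are assumed toy values.

NX, NY, STEPS = 40, 40, 100
p_prev = np.zeros((NX, NY))
p_curr = np.zeros((NX, NY))
p_curr[NX // 2, NY // 2] = 1.0          # impulse excitation at the centre

for _ in range(STEPS):
    p_next = np.zeros_like(p_curr)
    # Interior update: pressure is half the sum of the four neighbours
    # minus the pressure two time steps ago.
    p_next[1:-1, 1:-1] = 0.5 * (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
        p_curr[1:-1, 2:] + p_curr[1:-1, :-2]
    ) - p_prev[1:-1, 1:-1]
    p_prev, p_curr = p_curr, p_next

print(p_curr[NX // 2, NY // 2])          # pressure back at the source point
```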
