131

Routeplanner: a model for the visualization of warehouse data

Gouws, Patricia Mae 31 December 2008
This study considers the details of the development and use of a model of the visualization process to transform data in a warehouse into required insight. In the context of this study, 'visualization process' refers to a step-wise methodology for developing enhanced insight by using visualization techniques. The model, named RoutePlanner, was developed by the researcher from a theoretical perspective and was then used and evaluated practically in the domain of insurance brokerage. The study highlights the proposed model, which comprises stages for the identification of the relevant data, the selection of visualization methods and the evaluation of the visualizations, undergirded by a set of practical guidelines. To determine the effect of using RoutePlanner, a theory-testing experiment was conducted, and the practical utility of RoutePlanner was assessed in an evaluation-of-use study. The goal of this study is to present the RoutePlanner model and the effect of its use. / Theoretical Computing / M.Sc. (Information Systems)
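To make the three RoutePlanner stages concrete, here is a minimal Python sketch of such a pipeline; the stage names follow the abstract, but the function bodies, data and thresholds are invented placeholders, not the model's actual procedures.

```python
# Skeleton of the three RoutePlanner stages named in the abstract; the
# function bodies are illustrative placeholders only.
def identify_relevant_data(warehouse_rows, question):
    """Stage 1: pick out the warehouse records relevant to the question."""
    return [r for r in warehouse_rows if question in str(r)]

def select_visualization(data):
    """Stage 2: choose a visualization method suited to the data volume."""
    return "bar chart" if len(data) <= 20 else "heat map"

def evaluate_visualization(method, data):
    """Stage 3: check the rendering against the required insight."""
    return len(data) > 0 and method in {"bar chart", "heat map"}

rows = [{"broker": "A", "premium": 120}, {"broker": "B", "premium": 95}]
data = identify_relevant_data(rows, "premium")
print(evaluate_visualization(select_visualization(data), data))  # True
```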
132

Efficient high-order time domain finite element methods in electromagnetics

Marais, Neilen 2009
Thesis (DEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2009. / The Finite Element Method (FEM), as applied to Computational Electromagnetics (CEM), can be used to solve a large class of electromagnetics problems with high accuracy and good computational efficiency. For solving wide-band problems, time domain solutions are often preferred; while time domain FEM methods are feasible, the Finite Difference Time Domain (FDTD) method is more commonly applied. The FDTD is popular both for its efficiency and its simplicity. Its efficiency stems from the fact that it is both explicit (i.e. no matrices need to be solved) and second order accurate in both time and space. The FDTD has limitations when dealing with certain geometrical shapes and when electrically large structures are analysed. The former limitation is caused by stair-casing in the geometrical modelling, the latter by dispersion error accumulating throughout the mesh. The FEM can be seen as a general mathematical framework describing families of concrete numerical method implementations; in fact, the FDTD can be described as a particular FETD (Finite Element Time Domain) method. To date the most commonly described FETD CEM methods make use of unstructured, conforming meshes and implicit time stepping schemes. Such meshes deal well with complex geometries, while implicit time stepping is required for practical numerical stability. Compared to the FDTD, these methods have the advantages of computational efficiency when dealing with complex geometries and a conceptually straightforward extension to higher orders of accuracy. On the downside, they are much more complicated to implement and less computationally efficient when dealing with regular geometries. The FDTD and implicit FETD have been combined in an implicit/explicit hybrid: by using the implicit FETD in regions of complex geometry and the FDTD elsewhere, the advantages of both are combined. However, previous work only addressed mixed first order (i.e. second order accurate) methods. For electrically large problems, or when very accurate solutions are required, higher order methods are attractive. In this thesis a novel higher order implicit/explicit FETD method of arbitrary order in space is presented. A higher order explicit FETD method is implemented using Gauss-Lobatto lumping on regular Cartesian hexahedra, with central differencing in time applied to a coupled Maxwell's equations FEM formulation; this can be seen as a spatially higher order generalisation of the FDTD. A convolution-free perfectly matched layer (PML) method is adapted from the FDTD literature to provide mesh termination. A curl-conforming hybrid mesh allowing the interconnection of arbitrary order tetrahedra and hexahedra, without using intermediate pyramidal or prismatic elements, is presented. An unconditionally stable implicit FETD method is implemented using Newmark-Beta time integration and the standard curl-curl FEM formulation. The implicit/explicit hybrid is constructed on the hybrid hexahedral/tetrahedral mesh, using the equivalence between the coupled Maxwell's formulation with central differences and the Newmark-Beta method with Beta = 0, together with the element-wise implicitness method. The accuracy and efficiency of this hybrid are numerically demonstrated using several test problems.
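To illustrate why the FDTD's explicit leapfrog update is so cheap, here is a minimal 1D sketch in normalized units; the grid size, Courant number and source are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update: explicit central differences in space and
# time, so no matrix needs to be solved. Normalized units (c = 1, dx = 1).
nx, nt = 200, 400
courant = 0.5                     # dt/dx; must be <= 1 for 1D stability
ez = np.zeros(nx)                 # electric field at integer grid points
hy = np.zeros(nx - 1)             # magnetic field, staggered half a cell

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])              # H update (leapfrog)
    ez[1:-1] += courant * (hy[1:] - hy[:-1])        # E update (leapfrog)
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```

The second-order accuracy in both space and time comes directly from these centred differences; the higher-order FETD methods in the thesis generalise the spatial part.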
133

A comparative analysis of the performance and deployment overhead of parallelized Finite Difference Time Domain (FDTD) algorithms on a selection of high performance multiprocessor computing systems

Ilgner, Robert Georg 2013
Thesis (PhD)--Stellenbosch University, 2013. / The parallel FDTD method as used in computational electromagnetics is implemented on a variety of different high performance computing platforms. These parallel FDTD implementations have regularly been compared in terms of performance or purchase cost, but very little systematic consideration has been given to how much effort is needed to create the parallel FDTD for a specific computing architecture. The deployment effort for these platforms has changed dramatically with time: in the 1980s creating an FDTD implementation could take months, whereas today parallel FDTD methods can be implemented on a supercomputer in a matter of hours. This thesis compares the effort required to deploy the parallel FDTD on selected computing platforms in terms of the constituents that make up the deployment effort, such as coding complexity and coding time. It uses the deployment and performance of the serial FDTD method on a single personal computer as a benchmark and examines deployments of the parallel FDTD using different parallelisation techniques. These FDTD deployments are then analysed and compared against one another in order to determine the common characteristics of FDTD implementations on various computing platforms with differing parallelisation techniques. Although subjective in some instances, these characteristics are quantified and compared in tabular form, using the research information produced by the parallel FDTD implementations. The deployment effort is of interest to scientists and engineers considering the creation or purchase of an FDTD-like solution on a high performance computing platform. Although the FDTD method has in the past been considered a brute-force approach to solving computational electromagnetic problems, this was very probably a consequence of the relatively weak computing platforms of the time, which took very long periods to process small model sizes. This thesis describes the current implementations of the parallel FDTD method, made up of a combination of several techniques. These techniques can be deployed easily and in a relatively short time frame on computing architectures ranging from IBM's Blue Gene/P to the amalgamation of a multicore processor and a graphics processing unit, known as an accelerated processing unit.
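As a rough illustration of one common parallelisation technique the thesis compares, the sketch below shows a 1D domain decomposition with halo exchange between MPI ranks; mpi4py and the placeholder stencil are assumptions, not the thesis's actual implementations.

```python
import numpy as np
from mpi4py import MPI  # assumed available; one of several possible APIs

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100                            # grid cells owned by this rank
ez = np.zeros(n_local + 2)               # +2 halo cells at the edges
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(50):
    # exchange halo cells so each rank can apply the stencil locally
    comm.Sendrecv(ez[1:2], dest=left, recvbuf=ez[-1:], source=right)
    comm.Sendrecv(ez[-2:-1], dest=right, recvbuf=ez[0:1], source=left)
    ez[1:-1] += 0.5 * (ez[2:] - ez[:-2])  # placeholder update, not real FDTD
```

Run with e.g. `mpiexec -n 4 python fdtd_mpi.py`; the coding complexity of exactly this kind of boilerplate is part of the deployment effort the thesis quantifies.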
134

Förändringar inom kunskapsorganisation vid ett humanistiskt specialbibliotek : En domänanalytisk studie över Vitterhetsakademiens biblioteks verksamhet och samling / Changes in Knowledge Organization at a Humanities Special Library : A Domain-Analytic Study of the Activities and Collection of Vitterhetsakademiens bibliotek

Wallin, Emma January 2017
The purpose of this study is to carry out a contextual analysis based on Birger Hjørland's socio-epistemological theory and thereby create a coherent picture of the activities, collection and knowledge organization of Vitterhetsakademiens bibliotek (KVHAAB) from a historical perspective. The questions analyzed in this paper are how the knowledge organization, classification and search capabilities correspond to the goals that KVHAAB has set for the institution. To answer these questions, transcriptions and notes from interviews, field trips and documents about the section were used as primary sources. Sources were collected and analyzed through qualitative interviews, based on the interview theory and methods of Steinar Kvale and Svend Brinkmann, together with document analysis and the domain-analytic method. The analysis is focused on library and information science aspects. The results show that KVHAAB's knowledge-organizing tools emerged historically and pragmatically and were adapted to institutional and material practice. KVHAAB has been a pioneer, participating in various digitization projects early on. At the same time, historical preferences and focuses from different times are reflected in the catalogues used at KVHAAB. Based on an awareness of the complexity of the descriptive mission, several initiatives have been taken to develop new routines and standards for cataloging and indexing the material. In accordance with Riksantikvarieämbetet's (RAÄ) goal of preserving, developing and utilizing the cultural heritage, there is still a future need for KVHAAB to make its collections visible in digital form, but also to expand the use of the reading room, physical lending and outreach activities. This paper is published as a two-year master's thesis in Library and Information Science at Uppsala University in Sweden.
135

Using Work Domain Analysis to Evaluate the Design of a Data Warehouse System

Iveroth, Axel January 2019
Being able to perform good data analysis is a fundamental part of running any business or organization. One way of enabling data analysis is with a data warehouse system, a type of database that gathers and transforms data from multiple sources and structures it with the goal of simplifying analysis; it is commonly used to support decision-making. Although a data warehouse enables data analysis, it is also relevant to consider how well the system supports that analysis. This thesis is a qualitative study that aims to investigate how work domain analysis (WDA) can be used to evaluate the design of a data warehouse system. To do so, a case study at the IT company Norconsult Astando was performed: a data warehouse system was designed for an issue management system and evaluated using the abstraction hierarchy (AH) model. The research done in this thesis showed that analysis was enabled by adopting Kimball's bottom-up approach and a star schema design with an accumulating snapshot fact table. Evaluation of the design showed that most of the design choices made for the data warehouse were captured in the AH. It was concluded that, with sufficient data collection methods, WDA can be used to a large extent when evaluating a data warehouse system.
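As a hedged sketch of what the abstract's design could look like in practice — a star schema with an accumulating snapshot fact table for an issue management system — the following uses SQLite from Python; all table and column names are invented for illustration, not taken from the thesis.

```python
import sqlite3

# Hypothetical star schema: dimensions plus one accumulating snapshot fact
# row per issue, whose milestone columns are updated as the issue progresses.
ddl = """
CREATE TABLE dim_date   (date_key INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_status (status_key INTEGER PRIMARY KEY, status_name TEXT);
CREATE TABLE fact_issue_lifecycle (
    issue_id           INTEGER PRIMARY KEY,
    reported_date_key  INTEGER REFERENCES dim_date(date_key),
    assigned_date_key  INTEGER REFERENCES dim_date(date_key),
    resolved_date_key  INTEGER REFERENCES dim_date(date_key),
    status_key         INTEGER REFERENCES dim_status(status_key),
    days_to_resolution INTEGER            -- lag measure, updated in place
);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```

The accumulating snapshot pattern is what lets analysts ask lifecycle questions ("how long do issues sit unassigned?") with a single row per issue.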
136

Análise probabilística de durabilidade aplicada a veículos de carga rodoviária. / Probabilistic analysis of durability applied to semi-trailer tank vehicle.

Hougaz, Augusto Borella 17 August 2005
In structural design, correctly forecasting a part's lifetime is vital to reducing costs and to setting warranty and maintenance periods. On the other hand, in many situations this forecast is difficult, because many parameters are not under the engineer's precise control; among the most relevant are the loads acting on the part and the fatigue-life properties of the material. Therefore, it is possible to apply finite element analysis together with a probabilistic treatment of the fatigue-life calculation, resulting in reliability values that can be used as a design criterion and give broader practical meaning to the analyses and their results. Hence, this work describes and relates the main aspects of a probabilistic approach to fatigue-life calculation, defining a complete procedure divided into the following steps: 1. Modeling of the loads on the vehicle for spectral FEM analysis in the frequency domain; 2. Calculation of the probability density of the rainflow cycle amplitudes of the stresses acting on the structure, from the results of the frequency-domain spectral FEM analysis; 3. Probabilistic treatment of the material's fatigue-life properties; 4. Calculation of the structural fatigue-life reliability, combining the previous results of the probabilistic treatments of stresses and of the material's fatigue life; 5. Demonstration of the feasibility and practical application of the procedure by implementing it for a self-supporting semi-trailer tank. The conclusions are consistent with there being more structural failures in vehicles in Brazil than in first-world countries, since the results show that the poorer quality of Brazilian roads increases the probability of fatigue failure. Finally, when implemented in a finite element post-processing program that automatically transforms the stress standard deviations obtained from spectral analysis into failure probabilities, the methodology proposed in this thesis can more adequately support a design criterion founded on the probabilistic evaluation of a vehicle's durability through well-defined structural reliability concepts.
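A hedged sketch of steps 2-4 under the common narrow-band assumption: the rainflow stress-amplitude density is then a Rayleigh distribution whose scale comes from the zeroth spectral moment of the stress PSD. The PSD shape and S-N curve constants below are invented for illustration.

```python
import numpy as np

# Rainflow amplitude PDF from spectral moments (narrow-band approximation),
# then an expected damage rate via Miner's rule. All numbers are assumed.
freqs = np.linspace(0.1, 50.0, 500)              # Hz
df = freqs[1] - freqs[0]
psd = 1e4 / (1.0 + (freqs / 5.0) ** 4)           # stress PSD, MPa^2/Hz

m0 = np.sum(psd) * df                            # zeroth spectral moment
m2 = np.sum(psd * freqs**2) * df                 # second spectral moment
nu0 = np.sqrt(m2 / m0)                           # mean up-crossing rate, Hz

s = np.linspace(0.0, 6.0 * np.sqrt(m0), 1000)    # stress amplitudes, MPa
ds = s[1] - s[0]
pdf = (s / m0) * np.exp(-(s**2) / (2.0 * m0))    # Rayleigh amplitude PDF

C, k = 1e12, 3.0                                 # assumed S-N curve N = C*S^-k
damage_rate = nu0 * np.sum(pdf * s**k / C) * ds  # Miner's-rule damage per second
print(f"expected fatigue damage per second: {damage_rate:.3e}")
```

A failure probability then follows by treating the accumulated damage and the material's fatigue strength as random variables, which is the role of the reliability step in the procedure above.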
137

Effects of Adaptive Antenna Array Beamforming and Power Management with Antenna Element Selection

Unknown Date
This research applies array processing to wireless communication techniques to increase signal accuracy. Array processing plays an important part in prevalent applications such as wireless communication systems, radar, and sonar. Beamforming is an array processing method that spatially filters signals based on their arrival times at each element of an antenna array. Numerous adaptive array processing studies have been proposed over the last several decades; they divide into two groups, one related to non-adaptive beamforming techniques and the other to digitally adaptive beamforming methods, distinguished by their trade-off between computational complexity and performance. In this thesis, we concentrate on extending both non-adaptive and adaptive array processing algorithms, with applications of beamforming in 4G mobile antennas and radar systems. Conventional and generalized side-lobe canceller (GSC) beamforming algorithms were employed with a phased array antenna, which shifts the phase of arriving signals, using common phased array antenna structures. An eight-element uniform linear array (ULA) consisting of dipole antennas served as the antenna array, and the performance of the beamforming algorithms was measured in an anechoic chamber. An extended modified Kaiser weighting function is proposed to create a semi-adaptive structure in phased array beamforming; this technique is extended to low-complexity functions such as hyperbolic cosine and exponential functions, and these algorithms are also used in GSC beamforming. The resulting side-lobe levels were lower than those of other conventional beamforming algorithms by around 10 dB. In addition, a uniform linear array for smart antenna purposes was designed for implementing and testing the proposed algorithms. Experimental investigations were carried out on a smart antenna with a rectangular aperture-coupled microstrip linear array to obtain X-band operation of the rectangular microstrip antenna using the aperture-coupled feeding technique. Over a frequency range of approximately 8.6 to 10.9 GHz, the antenna resonates in a single wide band with an impedance bandwidth of 23%; the enhancement of impedance bandwidth and gain does not affect the nature of the broadside radiation characteristics. The design, operation, and realization of beamforming functions such as side-lobe level (SLL) control and null forming are examined with the prototype: the beam maximum of the antenna radiation pattern can be placed towards the intended user or signal of interest (SOI) while, ideally, nulls are positioned towards the directions of interfering signals or signals not of interest (SNOIs). Finally, we focus on digitally adaptive algorithms for compact antennas, which face mutual coupling. The variable step-size normalized least mean square (VS-NLMS) algorithm is implemented for beamforming. This algorithm uses continuous adaptation: the weights are adjusted iteratively from the available data, via the estimated gradient vector, so that the final weight vector converges to the most satisfactory result. The algorithm is compared with the LMS, NLMS, and VSS-NLMS algorithms, and the VSS-NLMS algorithm is determined to perform better than the other algorithms. Finally, we introduce a novel adaptive IP-NNLMS beamformer. This beamformer achieves faster convergence and a lower error floor than the previous adaptive beamformers, even at low SNRs and in the presence of mutual coupling. Experimental results verified the simulation results, showing that the proposed technique performs better than the other algorithms in various situations. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2016. / FAU Electronic Theses and Dissertations Collection
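As a minimal sketch of the adaptive weight updates discussed above, here is an NLMS beamformer for an eight-element ULA; the training signal, interferer scenario and step size are invented for illustration and are not the thesis's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elem, n_snap = 8, 2000
mu, eps = 0.5, 1e-6                        # NLMS step size and regularizer

def steer(theta_deg):
    """Steering vector of a half-wavelength-spaced ULA, angle from broadside."""
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(n_elem))

d = np.sign(rng.standard_normal(n_snap))                 # known training symbols
x = np.outer(steer(20.0), d)                             # desired user at +20 deg
x += 0.5 * np.outer(steer(-40.0), rng.standard_normal(n_snap))   # interferer
x += 0.1 * (rng.standard_normal((n_elem, n_snap))
            + 1j * rng.standard_normal((n_elem, n_snap)))        # receiver noise

w = np.zeros(n_elem, dtype=complex)
for n in range(n_snap):
    xn = x[:, n]
    e = d[n] - np.vdot(w, xn)              # error against the training symbol
    w += mu * e.conj() * xn / (eps + np.vdot(xn, xn).real)  # normalized update
```

A variable step-size variant replaces the fixed `mu` with one adapted from the error power, which is the idea behind the VS-NLMS algorithm the thesis implements.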
138

Centrum för lättläst : Ur ett domänanalytiskt perspektiv / Centrum för Lättläst : From a Domain Analytical Perspective

Andersson, Annika January 2010
The aim of this two-year master's thesis in Library and Information Science is to study the foundation Centrum för lättläst and its work regarding information use, with focus on information structures and information needs. The aim is also to examine how Centrum för lättläst cooperates with Swedish libraries to promote work concerning people with reading disabilities. This master's thesis applies a domain-analytic theory and method inspired by Birger Hjørland. The domain analysis is confined to examining ontology, epistemology, sociology, empirical user studies, document and genre studies, and studies of structures and institutions in scientific communication. The main source material consists of information from the homepage of Centrum för lättläst. The result of the domain analysis describes the ontology, which consists in producing easy-to-read texts. The epistemology is constituted by regulations from the Swedish government regarding the work with easy-to-read literacy. The sociology is represented by several organizations and authorities connected to easy-to-read literacy. The domain analysis has also identified Centrum för lättläst's target groups and their varying information needs, and the study presents numerous strategies composed by Centrum för lättläst to satisfy those needs. Furthermore, the domain analysis has distinguished the information resources Centrum för lättläst uses to distribute and make information accessible, which are represented by documents within the domain. The results also show that Centrum för lättläst's work with information use involves important actors in the domain who contribute to meeting the information needs of the target groups. Finally, the domain analysis has examined the collaboration between Centrum för lättläst and libraries in Sweden, where the results illustrate procedures undertaken by Centrum för lättläst in order to assist libraries in their work with easy-to-read literacy and the target groups.
139

A Knowledge Based Product Line For Semantic Modeling Of Web Service Families

Orhan, Umut 01 January 2009
Some mechanisms to enable an effective transition from domain models to web service descriptions are developed. The introduced domain modeling support provides verification and correction on the customization part. An automated mapping mechanism from the domain model to web service ontologies is also developed. The proposed approach is based on Feature-Oriented Domain Analysis (FODA), Semantic Web technologies and the ebXML Business Process Specification Schema (ebBP). Major contributions of this work are the conceptualization of a feature model for web services and a novel approach for knowledge-based elicitation of domain-specific outcomes, in order to allow designing and deploying services better aligned with dynamically changing business goals, stakeholders' concerns and end-users' viewpoints. The main idea behind enabling a knowledge-based approach is to pursue automation and intelligence in reflecting business requirements into service descriptions via model transformations and automated reasoning. The proposed reference variability model encloses the domain-specific knowledge and is formalized using the Web Ontology Language (OWL). Adding formal semantics to feature models allows automated analysis over them, such as the verification of model customizations by exploiting rule-based automated reasoners. This research was motivated by the need to achieve productivity gains, maintainability and better alignment of business requirements with technical capabilities in engineering service-oriented applications and systems.
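A hedged, much-simplified sketch of the kind of automated customization check the abstract describes, in plain Python rather than OWL plus a rule-based reasoner; the feature names and constraints are invented examples.

```python
# Toy feature-model verification for a web service family: checks a selected
# customization against mandatory, requires, and excludes constraints.
MANDATORY = {"service_interface", "message_format"}
REQUIRES = {"reliable_messaging": {"soap_binding"}}
EXCLUDES = {("rest_binding", "soap_binding")}

def verify(selection):
    errors = [f"missing mandatory feature: {f}" for f in MANDATORY - selection]
    for feat, needed in REQUIRES.items():
        if feat in selection and not needed <= selection:
            errors.append(f"{feat} requires {sorted(needed - selection)}")
    for a, b in EXCLUDES:
        if a in selection and b in selection:
            errors.append(f"{a} excludes {b}")
    return errors

print(verify({"service_interface", "message_format",
              "reliable_messaging", "soap_binding"}))   # -> []
```

In the thesis this role is played by OWL semantics and an automated reasoner, which also lets inconsistent customizations be explained and corrected.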
140

Modeling Of Software As A Service Architectures And Investigation On Their Design Alternatives

Ozturk, Karahan 01 July 2010
In general, a common reference architecture can be derived for Software as a Service (SaaS). However, while designing particular applications one may derive various different application design alternatives from the same reference SaaS architecture specification. To meet the functional and nonfunctional requirements of different enterprise applications, it is important to model the possible designs so that a feasible alternative can be identified. In this thesis, we propose a systematic approach, and the corresponding tool support, for guiding the design of SaaS application architectures. The approach defines a SaaS reference architecture, a family feature model and a set of reference design rules. Based on the business requirements, an application feature model is defined using the family feature model. Selected features are related to design decisions, and a SaaS application architecture design is derived. By defining multiple application architectures based on different application feature models, we can compare multiple alternatives and select the most feasible one.
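To make the feature-to-design mapping tangible, here is a small sketch of reference design rules driving architecture derivation; the features and design decisions are invented examples, not the thesis's actual rule set.

```python
# Illustrative reference design rules: selected features trigger SaaS design
# decisions, and different feature selections yield comparable alternatives.
DESIGN_RULES = {
    "multi_tenancy":    "shared database, shared schema keyed by tenant_id",
    "tenant_isolation": "separate database per tenant",
    "elastic_scaling":  "stateless service layer behind a load balancer",
    "custom_branding":  "tenant-scoped theming in the presentation layer",
}

def derive_architecture(features):
    """Return the design decisions triggered by an application feature model."""
    return {f: DESIGN_RULES[f] for f in features if f in DESIGN_RULES}

alt_a = derive_architecture({"multi_tenancy", "elastic_scaling"})
alt_b = derive_architecture({"tenant_isolation", "elastic_scaling"})
print(alt_a, alt_b, sep="\n")    # two alternatives to compare for feasibility
```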
