561 |
Analytical investigation of internally resonant second harmonic Lamb waves in nonlinear elastic isotropic plates. Mueller, Martin Fritz, 24 August 2009.
This research deals with the second harmonic generation of Lamb waves in nonlinear elastic, homogeneous, isotropic plates. These waves currently find application in ultrasonic nondestructive testing and evaluation of materials. The generation of second harmonic Lamb waves is investigated analytically in order to identify excitation modes that maximize the second harmonic amplitude.
Starting from an existing solution to the problem of second harmonic generation in waveguides, the solution is specialized to the plate and examined with respect to the symmetry properties of the second harmonic wave, since published results are contradictory. It is shown that cross-modal generation of a symmetric secondary mode by an antisymmetric primary mode is possible.
Modes exhibiting internal resonance are shown to be most useful for measurements; the conditions for internal resonance are a nonzero power flux from the primary wave and phase velocity matching between the primary and secondary modes. In addition, group velocity matching is required. A material-independent analysis of linear Lamb mode theory identifies the mode types satisfying all three requirements.
Using the example of an aluminum plate, the internally resonant modes found are evaluated with regard to the rate of second harmonic generation and to practical issues such as excitability and ease of measurement. The pros and cons of each mode type are presented.
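The phase velocity matching condition can be explored numerically from the linear theory. As an illustration not taken from the thesis, the sketch below evaluates the classical Rayleigh-Lamb frequency equations for a free isotropic plate; the bulk wave speeds are typical textbook values for aluminium, and the rewritten residual forms are one common convention. A zero of a residual at (f, c_p) marks a point on the corresponding dispersion curve.

```python
import numpy as np

# Bulk wave speeds for aluminium (typical textbook values, m/s) -- assumption.
C_L, C_T = 6320.0, 3130.0

def lamb_residuals(f, c_p, h, c_l=C_L, c_t=C_T):
    """Residuals of the Rayleigh-Lamb frequency equations for a free plate.

    f : frequency (Hz), c_p : trial phase velocity (m/s), h : half-thickness (m).
    A zero of the first (second) residual marks a symmetric (antisymmetric)
    Lamb mode at (f, c_p).
    """
    w = 2.0 * np.pi * f
    k = w / c_p
    # p and q become imaginary above/below the bulk speeds: use complex sqrt.
    p = np.sqrt(complex((w / c_l) ** 2 - k ** 2))
    q = np.sqrt(complex((w / c_t) ** 2 - k ** 2))
    sym = np.tan(q * h) / q + 4.0 * k**2 * p * np.tan(p * h) / (q**2 - k**2) ** 2
    antisym = q * np.tan(q * h) + (q**2 - k**2) ** 2 * np.tan(p * h) / (4.0 * k**2 * p)
    return sym, antisym
```

Scanning c_p for sign changes of the real parts at a fixed f, and again at 2f, locates points on the dispersion curves; a primary mode at f and a secondary mode at 2f sharing the same root c_p are the phase-velocity-matched candidates for internal resonance discussed above.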
|
562 |
Modeller för restidsuppskattning baserat på Floating Car Data / Models for travel time estimation based on Floating Car Data. Westman, Freddie, January 2002.
In the metropolitan regions, the traffic situation grows more strained with each passing year. Migration into the cities continues at an unchanged pace, and more people must share the same space. The situation on the roads is becoming untenable, and these problems must be solved soon if development in the regions is not to stagnate. The scope for further road construction is limited, however, and other solutions must be considered. The field of intelligent transportation systems (ITS) offers many new applications in which new technology is used to address today's traffic problems. One such effort is to collect and distribute information about travel times on the roads, in order to spread traffic more evenly across the road network. There are several methods for gathering this type of information, but this report focuses on describing systems based on Floating Car Data (FCD).

The work described in this report has mainly analysed four different travel time estimation models and compared them with one another. The models base their calculations on observations from unidentified vehicles, i.e. the observations carry no identity stamp that can be linked to a specific vehicle. Two of the models treat each link as a whole and perform the calculations on that basis, while the other two divide each link into smaller segments, which allows greater accuracy. The models were initially tested on simulated data based on traffic measurements in the Gothenburg area. All calculations were restricted to the link level rather than entire road networks, because it was initially too complicated to create the map-matching method that would be required to perform calculations on several links simultaneously.

After the tests on simulated data, the models were also tried on a real data set taken from the Probe project in the Stockholm area. The results of the tests show that the travel time estimates do not differ appreciably between the models. The stretch of road chosen for analysis in the simulated cases was not affected by any major disturbances or flow variations, so all models generated equivalent travel times. Even for the real data set, which contained larger flow variations over time, the estimates of the different models could not be distinguished to any significant degree.

The conclusion is that traffic flow in general does not vary so strongly over time that particularly advanced models are needed to calculate travel times at the link level, at least if incidents are disregarded. The computed travel times, and the information they provide, should primarily be used for direct traffic management to achieve the desired effect. People rely more on their own experience in familiar areas, so information of this type is better suited as an aid for the individual traveller in unfamiliar traffic.
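The whole-link versus segment-based distinction can be sketched in a few lines. This is an illustrative simplification, not one of the four models evaluated in the thesis; the function names and the use of a harmonic (space-mean) speed for the whole-link case are assumptions.

```python
def whole_link_travel_time(link_length_m, spot_speeds_mps):
    """Whole-link model: one travel time from the space-mean (harmonic mean)
    speed of all probe observations on the link."""
    n = len(spot_speeds_mps)
    space_mean_speed = n / sum(1.0 / v for v in spot_speeds_mps)
    return link_length_m / space_mean_speed

def segmented_travel_time(segment_lengths_m, segment_speeds_mps):
    """Segment-based model: the link is split into segments, each with its own
    representative speed, so speed variation along the link is captured."""
    return sum(L / v for L, v in zip(segment_lengths_m, segment_speeds_mps))
```

With uniform speeds the two models agree; the segment-based variant only pays off when probe speeds vary along the link, which matches the report's finding that the models produce similar estimates on links without major disturbances.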
|
563 |
Fingerprint Growth Prediction, Image Preprocessing and Multi-level Judgment Aggregation / Fingerabdruckswachstumvorhersage, Bildvorverarbeitung und Multi-level Judgment Aggregation. Gottschlich, Carsten, 26 April 2010.
No description available.
|
564 |
Migration and Regional Sorting of Skills. Tano, Sofia, January 2014.
This thesis consists of an introductory part and four papers.

Paper [I] jointly estimates the choice of whether to enroll in higher education and the choice of location among young people. Being a particularly mobile group, young individuals' location choices shape much of the regional distribution of human capital, growth, and local public sector budgets. Applying Swedish register data on nest leavers, we seek to determine the factors deciding the education and location choices of young people. The results indicate a systematic selection into higher education based on school grades, and preferences for locations with higher per capita tax bases and lower shares of elderly people. The importance of family networks for the choice of location is confirmed.

Paper [II] examines how individual ability, reflected by the grade point average (GPA) from comprehensive school, affects the probability of migration among university graduates. The econometric analysis applies detailed micro-data on two entire cohorts of young individuals retrieved from the Swedish population registers. The results indicate that individual abilities strongly influence both the completion of a university degree and the migration decision. In addition, we find a positive relationship between the GPA and migrating from regions with lower per capita tax bases and/or a relatively small share of highly educated individuals. Analogously, individuals with a high GPA tend to stay in more densely populated regions, suggesting a clustering of human capital vis-à-vis school grades.

Paper [III] estimates the relationship between migration across labour market regions and subsequent changes in earnings, using the GPA from the final year of comprehensive school as a proxy for ability. This measure aims to capture heterogeneity in the returns to migration for individuals conditional on educational attainment. Using Swedish register data on young adults, a difference-in-differences propensity score matching estimator is applied to estimate income differences measured up to seven years after migration. The results show variation between ability groups in the returns to regional migration. There are indications of larger gains for individuals holding top grades, while the bottom half seems to benefit less, or to face slightly negative effects.

Paper [IV] examines whether power couple formation and the location choice of such couples are driven by factors already inherent in young people during their formative school years. The paper also extends the analysis by modeling location choice among labor market areas of different sizes, given the power status of the couples. Based on analysis of Swedish register data, we produce evidence that power spouses evolve from the population of high-achieving school-age individuals, the latter identified by high academic performance during the years of compulsory school. Regarding location choice, the results indicate that power couples display a relatively high tendency to migrate from their regions of origin to large cities.
|
565 |
Offline Approximate String Matching for Information Retrieval: An experiment on technical documentation. Dubois, Simon, January 2013.
Approximate string matching consists in identifying strings as similar even if there are a number of mismatches between them. This technique is one way of relaxing the strictness of exact matching in data comparison. In many cases it is useful for identifying stream variation (e.g. audio) or word declension (e.g. prefix, suffix, plural).

Approximate string matching can be used to score terms in Information Retrieval (IR) systems. The benefit is that results are returned even if query terms do not exactly match indexed terms. However, as approximate string matching algorithms only consider characters (neither context nor meaning), there is no guarantee that additional matches are relevant matches.

This paper presents the effects of several approximate string matching algorithms on search results in IR systems. An experimental research design was conducted to evaluate these effects from two perspectives. First, result relevance is analysed with precision and recall. Second, performance is measured by the execution time required to compute matches.

Six approximate string matching algorithms are studied. Levenshtein and Damerau-Levenshtein compute the edit distance between two terms. Soundex and Metaphone index terms based on their pronunciation. Jaccard similarity calculates the overlap coefficient between two strings. Tests are performed through IR scenarios with different contexts, information needs and search queries, designed to query technical documentation related to software development (man pages from Ubuntu). A purposive sample is selected to assess document relevance to the IR scenarios and to compute IR metrics (precision, recall, F-measure).

The experiments reveal that all tested approximate matching methods increase recall on average but, except for Metaphone, they also decrease precision. Soundex and Jaccard similarity are not advised because they fail on too many IR scenarios. The highest recall is obtained by the edit distance algorithms, which are also the most time-consuming. Because Damerau-Levenshtein yields no significant improvement over Levenshtein but costs much more time, the latter is recommended for use with specialised documentation. Finally, some further recommendations are given to practitioners implementing IR systems on technical documentation.
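Two of the studied similarity measures are compact enough to sketch. Below is a standard dynamic-programming Levenshtein distance and a Jaccard similarity over character bigrams; the n-gram granularity is an assumption, since the abstract does not specify how the Jaccard overlap is tokenized.

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions) by dynamic
    programming. Runs in O(|a|*|b|) time and O(|b|) space."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def jaccard(a, b, n=2):
    """Jaccard similarity over character n-grams (bigram tokenization assumed)."""
    grams_a = {a[i:i + n] for i in range(len(a) - n + 1)}
    grams_b = {b[i:i + n] for i in range(len(b) - n + 1)}
    union = grams_a | grams_b
    return len(grams_a & grams_b) / len(union) if union else 1.0
```

The quadratic cost of the edit-distance computation, visible in the nested loops, is consistent with the finding that the edit distance algorithms are the most time-consuming of the six.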
|
566 |
Concepts for compact solid-state lasers in the visible and UV. Johansson, Sandra, January 2006.
In many fields, scientific or industrial, optical devices whose spectral qualities and output power can be tailored to the application in question are attractive. Nonlinear optics in combination with powerful laser sources provides a tool for reaching essentially any wavelength in the electromagnetic spectrum, and advances in material technology during the last decade have opened up new possibilities for realising such devices.

The main part of the thesis deals with the development of compact functional lasers based on nonlinear interaction, utilising diode-pumped solid-state lasers and also laser diodes. Efficient frequency conversion into the visible and ultraviolet parts of the electromagnetic spectrum has been achieved, using Nd:YAG and Nd:YVO4 lasers as well as a semiconductor laser as the fundamental light sources. For the nonlinear conversion, periodically poled potassium titanyl phosphate (PPKTP), bismuth triborate (BiBO) and beta barium borate (BBO) have been employed.

In the search for compact and reliable light sources emitting in the visible part of the spectrum, two different approaches have been explored. The first is a scheme based on sum-frequency mixing of a diode-pumped solid-state laser and a laser diode of good beam quality. The idea is to exploit the individual strength of each device: the wavelength flexibility of the laser diode and the high output power attainable from the diode-pumped solid-state laser. The second approach mixes two different solid-state lasers, which generates substantially more output power, albeit at the cost of less freedom in the choice of spectral output. As these two light sources had their central wavelengths at 492 nm and 593 nm, respectively, they are highly interesting for biomedical applications, since these wavelengths correspond to the peak absorption of several popular fluorophores.

In applications such as lithography, material synthesis and fibre grating fabrication, laser sources emitting in the deep UV are desired. An all-solid-state 236 nm laser source with 20 mW of average power has been designed and constructed by frequency-quadrupling a passively Q-switched Nd:YAG laser lasing on a quasi-three-level transition. Also, a novel concept for miniaturising solid-state lasers has been examined: using a heat-conductive polymer carrier, a generic approach especially suited for mass production of functional laser devices is presented. Finally, it has been shown that GRIN lenses can provide a very compact beam-shaping solution for standard laser diodes based on the beam-twisting approach. This method offers several advantages, such as the compactness of the beam-shaping system, the suitability of these lenses for automated assembly in solid-state laser manufacturing, and preservation of the polarisation of the laser diode output.
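The wavelength arithmetic behind these schemes follows from energy conservation in the nonlinear interaction. The sketch below is illustrative; the specific fundamental lines (the 946 nm quasi-three-level Nd:YAG transition for the UV source, and 1064 nm / 1342 nm or 1064 nm / 915 nm pairings for the 593 nm and 492 nm outputs) are plausible assumptions consistent with the quoted output wavelengths, not values stated in the abstract.

```python
def harmonic_wavelength(fundamental_nm, order):
    """Wavelength of the n-th harmonic of a fundamental beam (frequency n-folding)."""
    return fundamental_nm / order

def sum_frequency_nm(lambda1_nm, lambda2_nm):
    """Sum-frequency generation: photon energies add, so 1/l3 = 1/l1 + 1/l2."""
    return 1.0 / (1.0 / lambda1_nm + 1.0 / lambda2_nm)
```

Frequency-quadrupling 946 nm (two cascaded second-harmonic stages) gives 236.5 nm, matching the deep-UV source; sum-frequency mixing 1064 nm with 1342 nm gives roughly 593 nm, and with a 915 nm diode roughly 492 nm.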
|
567 |
Computations on Massive Data Sets: Streaming Algorithms and Two-party Communication. Konrad, Christian, 5 July 2013.
In this PhD thesis, we consider two computational models that address problems arising when processing massive data sets.

The first model is the data streaming model. When processing massive data sets, random access to the input data is very costly. Streaming algorithms therefore have only restricted access to the input data: they scan the input sequentially, once or only a few times. In addition, streaming algorithms use a random access memory of size sublinear in the length of the input. Sequential input access and sublinear memory are drastic limitations when designing algorithms, and a major goal of this thesis is to explore the limitations and the strengths of the streaming model.

The second model is the communication model. When data is processed by multiple computational units at different locations, the message exchange needed to synchronize their calculations is often a bottleneck, so the amount of communication should be as small as possible. A particular setting is one-way two-party communication: two parties collectively compute a function of input data that is split between them, and the whole message exchange reduces to a single message from one party to the other.

We study the following four problems in the context of streaming algorithms and one-way two-party communication:

(1) Matchings in the streaming model. We are given a stream of edges of a graph G=(V,E) with n=|V|, and the goal is to design a streaming algorithm that computes a matching using a random access memory of size O(n polylog n). The Greedy matching algorithm fits into this setting and computes a matching of size at least 1/2 times the size of a maximum matching. A long-standing open question is whether the Greedy algorithm is optimal if no assumption is made about the order of the input stream. We show that it is possible to improve on the Greedy algorithm if the input stream is in uniform random order. Furthermore, we show that with two passes an approximation ratio strictly larger than 1/2 can be obtained without any assumption on the order of the input stream.

(2) Semi-matchings in streaming and in two-party communication. A semi-matching in a bipartite graph G=(A,B,E) is a subset of edges that matches all A vertices exactly once to B vertices, not necessarily injectively. The goal is to minimize the maximal number of A vertices matched to the same B vertex. We show that for any 0 <= ε <= 1, there is a one-pass streaming algorithm that computes an O(n^((1-ε)/2))-approximation using Ô(n^(1+ε)) space. Furthermore, we provide upper and lower bounds on the two-party communication complexity of this problem, as well as new results on the structure of semi-matchings.

(3) Validity of XML documents in the streaming model. An XML document of length n is a sequence of opening and closing tags. A DTD is a set of local validity constraints on an XML document. We study streaming algorithms for checking whether an XML document fulfills the validity constraints of a given DTD. Our main result is an O(log n)-pass streaming algorithm with 3 auxiliary streams and O(log^2 n) space for this problem. Furthermore, we present one-pass and two-pass sublinear-space streaming algorithms for checking the validity of XML documents that encode binary trees.

(4) Budget-error-correcting under the Earth-Mover-Distance. We study the following one-way two-party communication problem. Alice and Bob each have a set of n points on a d-dimensional grid [Δ]^d for an integer Δ. Alice sends a small sketch of her points to Bob, and Bob adjusts his point set towards Alice's so that the Earth-Mover-Distance between the two point sets decreases. For any k > 0, we show that there is an almost tight randomized protocol with communication cost Ô(kd) such that Bob's adjustments lead to an O(d)-approximation compared to the k best possible adjustments Bob could make.
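The Greedy matching algorithm referred to in problem (1) is simple enough to state exactly; a minimal sketch:

```python
def greedy_stream_matching(edge_stream):
    """One-pass Greedy matching over an edge stream.

    Keeps an edge iff both endpoints are still unmatched. Memory is O(n)
    (the set of matched vertices), and the result is always at least half
    the size of a maximum matching.
    """
    matched = set()    # vertices already covered by the matching
    matching = []      # accepted edges
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the edge stream (1,2),(2,3),(3,4) Greedy returns a maximum matching of two edges, but on the reordering (2,3),(1,2),(3,4) it returns a single edge, half the optimum. This order sensitivity is exactly the 1/2 barrier the thesis studies under random-order and two-pass relaxations.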
|
568 |
Federated Product Information Search and Semantic Product Comparisons on the Web / Föderierte Produktinformationssuche und semantischer Produktvergleich im Web. Walther, Maximilian Thilo, 20 September 2011.
Product information search has become one of the most important application areas of the Web. Especially considering pricey technical products, consumers tend to carry out intensive research activities previous to the actual acquisition for creating an all-embracing view on the product of interest. Federated search backed by ontology-based product information representation shows great promise for easing this research process.
The topic of this thesis is to develop a comprehensive technique for locating, extracting, and integrating information on arbitrary technical products in a widely unsupervised manner. The resulting homogeneous information sets allow a potential consumer to effectively compare technical products through an appropriate federated product information system.
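The integration step can be illustrated with a toy sketch: source-specific attribute names are normalized against a shared vocabulary before the records are merged. The attribute map, the first-source-wins conflict rule, and all names below are hypothetical; the thesis uses ontology-based representations rather than a flat synonym table.

```python
# Hypothetical synonym table mapping source-specific attribute names
# to a shared vocabulary (a stand-in for an ontology-based mapping).
ATTRIBUTE_MAP = {
    "display size": "screen_size",
    "screen": "screen_size",
    "ram": "memory",
    "main memory": "memory",
}

def integrate(offers):
    """Merge per-source attribute dicts into one homogeneous record.
    The first source wins on conflicts; a real system would weigh source trust."""
    merged = {}
    for offer in offers:
        for raw_name, value in offer.items():
            name = ATTRIBUTE_MAP.get(raw_name.lower(), raw_name.lower())
            merged.setdefault(name, value)
    return merged
```

The resulting homogeneous records are what makes side-by-side product comparison possible across heterogeneous sources.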
|
569 |
Uma abordagem em paralelo para matching de grandes ontologias com balanceamento de carga / A parallel approach for matching large ontologies with load balancing. Araújo, Tiago Brasileiro, 1 August 2018.
Previous issue date: 2016-03-07.
Currently, the use of large ontologies in various areas of knowledge is increasing. Since these ontologies may overlap in content, the identification of correspondences among their concepts becomes necessary. This process is called Ontology Matching (OM). One of the major challenges of matching large ontologies is the high execution time and the consumption of computational resources. Therefore, to improve efficiency, partitioning and parallel techniques can be employed in the OM process. This work presents a Partition-Parallel-based Ontology Matching (PPOM) approach which partitions the input ontologies into subontologies and executes the comparisons between concepts in parallel, using the MapReduce framework as a programmable solution. Although parallel techniques can improve the efficiency of the OM process, they suffer from load imbalancing. For that reason, this work also proposes two load balancing techniques, a basic and a fine-grained one, to be applied together with the PPOM approach in order to guide the uniform distribution of the comparisons (the workload) across the nodes of a computing infrastructure. The performance of the proposed approach is assessed in different settings (different ontology sizes and degrees of load imbalance) using a computing infrastructure and both real and synthetic ontologies. The experimental results indicate that the PPOM approach is scalable and able to reduce the execution time of the OM process. Regarding the load balancing techniques, the results show that the PPOM approach is robust with the fine-grained load balancing technique, even in settings with a high degree of load imbalance.
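The core idea, distributing individual concept comparisons rather than whole subontologies so that no node is overloaded, can be sketched independently of MapReduce. The round-robin rule below is a hypothetical stand-in for the fine-grained balancing technique, not the dissertation's actual algorithm.

```python
from itertools import product

def partition_comparisons(concepts_a, concepts_b, n_workers):
    """Split the cross-product of concept comparisons across workers.

    Round-robin assignment keeps per-worker load within one pair of even,
    mimicking the fine-grained idea of distributing individual comparisons
    (the workload) instead of whole subontologies."""
    buckets = [[] for _ in range(n_workers)]
    for i, pair in enumerate(product(concepts_a, concepts_b)):
        buckets[i % n_workers].append(pair)
    return buckets
```

Partitioning by whole subontology, by contrast, can leave one node with a far larger subontology pair than the others, which is precisely the imbalance the fine-grained technique targets.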
|
570 |
Uma Abordagem Semi-automática para Geração Incremental de Correspondências entre Ontologias / A semi-automatic approach for generating incremental correspondences between ontologies. Hortêncio Filho, Fernando Wagner Brito, January 2011.
HORTÊNCIO FILHO, Fernando Wagner Brito. Uma Abordagem Semi-automática para Geração Incremental de Correspondências entre Ontologias. 2011. 81 f. Dissertação (mestrado), Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2011.
Previous issue date: 2011.
The discovery of semantic correspondences between schemas is an important task in several application fields, such as data integration, data warehousing and data mashup. In most cases, the data sources involved are heterogeneous and dynamic, which makes the task even harder. Ontologies have been used to define the common vocabularies that describe the elements of the schemas involved in a particular application. The ontology matching problem consists of discovering correspondences between the terms of the vocabularies (represented by ontologies) used by the various applications. The solutions proposed in the literature, despite being fully automatic, are heuristic in nature and may produce unsatisfactory results. The problem intensifies when dealing with large data sources. The purpose of this work is to propose a method for the generation and incremental refinement of correspondences between ontologies. The proposed approach makes use of ontology filtering techniques, as well as user feedback, to support the generation and refinement of these correspondences. For validation purposes, a tool was developed and experiments were conducted.
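A minimal sketch of the automatic step that such user feedback then refines: propose candidate correspondences whenever the label similarity of two concepts exceeds a threshold. The use of difflib's ratio and the 0.8 threshold are assumptions for illustration, not the heuristics used in the dissertation.

```python
from difflib import SequenceMatcher

def candidate_matches(terms_a, terms_b, threshold=0.8):
    """Propose concept correspondences whose string similarity exceeds a
    threshold; borderline pairs would be routed to the user for confirmation,
    and confirmed/rejected pairs feed the incremental refinement."""
    pairs = []
    for a in terms_a:
        for b in terms_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                pairs.append((a, b, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])
```

Because such purely lexical heuristics can produce unsatisfactory matches, ranking candidates by score and asking the user about the uncertain middle band is one natural way to realize the semi-automatic refinement described above.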
|