  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Bit-interleaved coded modulation for hybrid rf/fso systems

He, Xiaohui 05 1900 (has links)
In this thesis, we propose a novel architecture for hybrid radio frequency (RF)/free-space optics (FSO) wireless systems. Hybrid RF/FSO systems are attractive since the RF and FSO sub-systems are affected differently by weather and fading phenomena. We give a thorough introduction to RF and FSO technology, and review the state of the art of hybrid RF/FSO systems. We show that a hybrid system robust to different weather conditions is obtained by joint bit-interleaved coded modulation (BICM) of the bit streams transmitted over the RF and FSO sub-channels. An asymptotic performance analysis reveals that a properly designed convolutional code can exploit the diversity offered by the independent sub-channels. Furthermore, we develop code design and power assignment criteria and provide an efficient code search procedure. The cut-off rate of the proposed hybrid system is also derived and compared to that of hybrid systems with perfect channel state information at the transmitter. Simulation results show that hybrid RF/FSO systems with BICM outperform previously proposed hybrid systems employing a simple repetition code and selection diversity.
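The diversity mechanism described above, spreading coded bits across the RF and FSO sub-channels so that a deep fade on one link erases only non-adjacent code bits, can be illustrated with a toy round-robin interleaver. This is a sketch of the general BICM interleaving idea only, not the thesis's actual encoder, interleaver design, or power assignment; the function names are hypothetical.

```python
import itertools

def interleave_across_subchannels(coded_bits, n_subchannels=2):
    """Round-robin interleaver: consecutive coded bits go to alternating
    sub-channels, so a deep fade on one sub-channel erases only
    non-adjacent code bits (the diversity a decoder can exploit)."""
    streams = [[] for _ in range(n_subchannels)]
    for i, bit in enumerate(coded_bits):
        streams[i % n_subchannels].append(bit)
    return streams

def deinterleave(streams):
    """Merge the sub-channel streams back into the original bit order."""
    return [bit for group in itertools.zip_longest(*streams)
            for bit in group if bit is not None]

bits = [1, 0, 1, 1, 0, 0, 1]
rf_stream, fso_stream = interleave_across_subchannels(bits)
assert deinterleave([rf_stream, fso_stream]) == bits
```

With two sub-channels, a burst erasure on the FSO stream corresponds to isolated, evenly spaced erasures in the deinterleaved code sequence, which is what lets a convolutional code recover them.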
72

Pairwise Balanced Designs of Dimension Three

Niezen, Joanna 20 December 2013 (has links)
A linear space is a set of points and lines such that any pair of points lies on exactly one line. This is equivalent to a pairwise balanced design PBD(v, K), where there are v points, lines are regarded as blocks, and K ⊆ Z≥2 denotes the set of allowed block sizes. The dimension of a linear space is the maximum integer d such that any set of d points is contained in a proper subspace. Specifically for K = {3, 4, 5}, we determine which values of v admit a PBD(v, K) of dimension at least three, for all but a short list of possible exceptions under 50. We also observe that dimension can be reduced via a substitution argument.
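The defining property stated above, every pair of points lying together in exactly one block with all block sizes drawn from K, is easy to verify computationally. The sketch below is a hypothetical helper, not code from the thesis; it checks the property for the Fano plane, a PBD(7, {3}).

```python
from itertools import combinations

def is_pbd(v, K, blocks):
    """Check the defining property of a PBD(v, K): every pair of the
    v points appears together in exactly one block, and every block
    size lies in K."""
    if any(len(block) not in K for block in blocks):
        return False
    pair_count = {}
    for block in blocks:
        for pair in combinations(sorted(block), 2):
            pair_count[pair] = pair_count.get(pair, 0) + 1
    return all(pair_count.get(pair, 0) == 1
               for pair in combinations(range(v), 2))

# The Fano plane: every pair of its 7 points lies on exactly one
# of its 7 lines, so it is a PBD(7, {3}).
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
assert is_pbd(7, {3}, fano)
```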
73

The assessment of the quality of science education textbooks : conceptual framework and instruments for analysis

Swanepoel, Sarita 04 1900 (has links)
Science and technology are constantly transforming our day-to-day living. Science education has become of vital importance to prepare learners for this ever-changing world. Unfortunately, science education in South Africa is hampered by under-qualified and inexperienced teachers. Textbooks of good quality can assist teachers and learners and facilitate the development of science teachers. For this reason, thorough assessment of textbooks is needed to inform the selection of good textbooks. An investigation revealed that the available textbook evaluation instruments are not suitable for evaluating physical science textbooks in the South African context. An instrument is needed that focuses on science education textbooks and prescribes the criteria, weights, evaluation procedure and rating scheme that can ensure justifiable, transparent, reliable and valid evaluation results. This study utilised elements of the Analytic Hierarchy Process (AHP) to develop such an instrument and verified the reliability and validity of the instrument's evaluation results. Development of the Instrument for the Evaluation of Science Education Textbooks started with the formulation of criteria. Characteristics that influence the quality of textbooks were identified from the literature, existing evaluation instruments and stakeholders' concerns. In accordance with the AHP, these characteristics or criteria were divided into categories or branches to give a hierarchical structure. Subject experts verified the content validity of the hierarchy, and expert science teachers compared the importance of the different criteria. These data were used to derive weights for the criteria with the Expert Choice computer application. A rubric was formulated to act as a rating scheme and score sheet. During the textbook evaluation process the ratings were transferred to a spreadsheet that computed quality scores for a textbook as a whole as well as for the different categories. 
The instrument was tested on a small scale, adjusted, and then applied on a larger scale. The results of different analysts were compared to verify the reliability of the instrument. Triangulation with the opinions of teachers who have used the textbooks confirmed the validity of the evaluation results obtained with the instrument. Future investigations of the evaluation instrument can include the use of different rating scales and the limiting of criteria. / Thesis (M. Ed. (Didactics))
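The AHP step described above, turning the teachers' pairwise importance judgements into criterion weights, can be approximated in a few lines with the geometric-mean (row) method. The thesis used the Expert Choice application for this, so the code below is only an illustrative stand-in with a made-up comparison matrix.

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison
    matrix M, where M[i][j] says how much more important criterion i
    is than criterion j, using the geometric-mean (row) method."""
    n = len(M)
    geo_means = [math.prod(row) ** (1 / n) for row in M]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical 3-criterion matrix on Saaty's 1-9 scale:
# criterion 0 is moderately more important than 1, strongly more than 2.
M = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
weights = ahp_weights(M)
assert abs(sum(weights) - 1.0) < 1e-9
```

For a perfectly consistent matrix the geometric-mean weights coincide with the principal-eigenvector weights that AHP software computes.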
74

Análise do fatores para o compartilhamento do conhecimento operário em indústrias do setor automotivo no Brasil / Analysis of factors for knowledge sharing among workers in automotive industries in Brazil

Petrini, Stefano [UNESP] 15 March 2016 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / Recent research on identifying factors that contribute to better knowledge sharing, and on ways to evaluate it, highlights the importance of knowledge management for the organization. Exploring opportunities in this scenario, the present study examines the sharing of workers' knowledge in the automotive industry through factors related to Production Organization, Work Organization and Knowledge Management, with attention to the influence of the organizational and interpersonal context on the knowledge sharing process. This delimitation to the industrial sector, viewed from the production area, is justified by that area's dependence on the workers' tacit knowledge. The research uses a qualitative and quantitative approach in survey format, employing a questionnaire with managers (coordinators and supervisors) to assess the importance of the leveraging factors of knowledge management from the leadership's point of view. It employs the Incomplete Pairwise Comparison method proposed by Harker (1986), based on the Analytic Hierarchy Process of Saaty (1977). 
The survey shows an integration between the factors and highlights the importance of systemic and technical conversation among the workers for improving their knowledge sharing, as well as the role of communication, training and work instruction in the knowledge conversion processes. This research expands the conceptual limits of the knowledge management theme found in the literature and contributes mainly in a managerial direction: the qualification and learning of new employees, the continuous process of recycling knowledge, and the mitigation of knowledge waste. It thereby contributes to promoting an environment that enables the creation and sharing of knowledge by the people on the shop floor.
75

Sobre o número máximo de retas duas a duas disjuntas em superfícies não singulares em P3 / On the maximum number of pairwise disjoint lines on non-singular surfaces in P3

Lira, Dayane Santos de 24 February 2017 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This work aims to determine the maximum number of pairwise disjoint lines that a non-singular surface of degree d in P3 can contain. In the cases of degrees d = 1 and d = 2 we find that these values are zero and infinite, respectively. Furthermore, in the case of degree d = 3 we show that the maximum number of pairwise disjoint lines is 6; these configurations were studied in 1863 by the Swiss mathematician Ludwig Schläfli (1814-1895) in [15]. For the case d = 4, in 1975 the Russian Viacheslav Nikulin showed in [10] that non-singular quartic surfaces contain at most 16 pairwise disjoint lines. In our work, we show that Schur's famous quartic achieves this bound and that Fermat's quartic has at most 12 pairwise disjoint lines. We also determine lower bounds for the maximum number of pairwise disjoint lines on non-singular surfaces of degree d ≥ 5. For example, the family of Rams in [11] yields one of these lower bounds.
76

Modelagem estatística de extremos espaciais com base em processos max-stable aplicados a dados meteorológicos no estado do Paraná / Statistical modelling of spatial extremes based on max-stable processes applied to environmental data in the Parana State

Ricardo Alves de Olinda 09 August 2012 (has links)
Most mathematical models developed for rare events are based on probabilistic models for extremes. Although the tools for statistical modelling of univariate and multivariate extremes are well developed, extending these tools to model spatial extremes is currently a very active area of research. Modelling maxima over a spatial domain, applied to meteorological data, is important for the proper management of risks and environmental disasters in countries whose economies depend heavily on agribusiness. A natural approach to such modelling is the theory of spatial extremes and the max-stable process, an infinite-dimensional extension of multivariate extreme value theory, into which the correlation functions of geostatistics can be incorporated; extremal dependence can then be checked through the extremal coefficient and the madogram. This thesis describes the application of such processes to modelling the dependence of spatial maxima of monthly maximum rainfall in Paraná State, based on historical series observed at meteorological stations. The proposed models consider both Euclidean space and a transformation called climate space, which makes it possible to explain the presence of directional effects resulting from synoptic weather patterns. The methodology is based on the theorem proposed by De Haan (1984) and on the models of Smith (1990) and Schlather (2002); the isotropic and anisotropic behaviour of these models is checked via Monte Carlo simulation. Estimates are obtained by maximum pairwise likelihood and the models are compared using the Takeuchi information criterion. 
The fitting algorithm is fast and robust, giving good statistical and computational efficiency in modelling monthly maximum rainfall and allowing the modelling of directional effects resulting from environmental phenomena.
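The madogram-based dependence check mentioned in the abstract can be sketched as follows: the empirical F-madogram between two stations' maxima yields a pairwise extremal coefficient between 1 (complete dependence) and 2 (independence). This is a textbook estimator offered for illustration, not the thesis's code; the function names are hypothetical.

```python
import numpy as np

def f_madogram(z1, z2):
    """Empirical F-madogram between two stations' block maxima:
    nu_F = 0.5 * E|F(z1) - F(z2)|, with marginal distribution values
    F(z) estimated by empirical ranks scaled to [0, 1]."""
    n = len(z1)
    f1 = np.argsort(np.argsort(z1)) / (n - 1)
    f2 = np.argsort(np.argsort(z2)) / (n - 1)
    return 0.5 * np.mean(np.abs(f1 - f2))

def extremal_coefficient(z1, z2):
    """Pairwise extremal coefficient theta in [1, 2]: theta = 1 means
    complete extremal dependence, theta = 2 means independence."""
    nu = f_madogram(z1, z2)
    return (1 + 2 * nu) / (1 - 2 * nu)
```

Because it works on ranks, the estimator is invariant to monotone transformations of each station's series, which is why two identical (or rank-identical) series give theta = 1.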
78

Visualisation Studio for the analysis of massive datasets

Tucker, Roy Colin January 2016 (has links)
This thesis describes the research underpinning, and the development of, a cross-platform application for the analysis of simultaneously recorded multi-dimensional spike trains. These spike trains are believed to carry the neural code that encodes information in a biological brain. A number of statistical methods already exist to analyse the temporal relationships between the spike trains. Historically, hundreds of spike trains have been simultaneously recorded; as a result of technological advances, recording capability has increased, and the analysis of thousands of simultaneously recorded spike trains is now a requirement. Effective analysis of large data sets requires software tools that fully exploit the capabilities of modern research computers and effectively manage and present large quantities of data. To be effective, such software tools must: be targeted at the field under study; be engineered to exploit the full compute power of research computers; and prevent information overload of the researcher despite presenting a large and complex data set. The Visualisation Studio application produced in this thesis brings together the fields of neuroscience, software engineering and information visualisation to produce a software tool that meets these criteria. A visual programming language for neuroscience is produced that allows for extensive pre-processing of spike train data prior to visualisation. The computational challenges of analysing thousands of spike trains are addressed using parallel processing to fully exploit the modern researcher's computer hardware. In the case of the computationally intensive pairwise cross-correlation analysis, the option to use a high-performance compute cluster (HPC) is seamlessly provided. Finally, the principles of information visualisation are applied to key visualisations in neuroscience so that the researcher can effectively manage and visually explore the resulting data sets. 
The final visualisations can typically represent data sets 10 times larger than previously while remaining highly interactive.
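The pairwise cross-correlation analysis that the application parallelises can be sketched, for a single pair of spike trains, as a histogram of inter-train spike-time lags (a cross-correlogram). This is a minimal illustrative version, not the Visualisation Studio implementation; the names and default parameters are assumptions.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_width=0.001, window=0.05):
    """Histogram of time lags (b - a), in seconds, between every spike
    pair falling within +/- window. A peak away from zero lag suggests
    a consistent temporal relationship between the two trains.
    spikes_a, spikes_b: sorted 1-D arrays of spike times in seconds."""
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1, dtype=int)
    for t in spikes_a:
        lags = spikes_b - t
        lags = lags[(lags >= -window) & (lags <= window)]
        counts += np.histogram(lags, bins=edges)[0]
    return edges, counts
```

For thousands of trains this computation is repeated over every pair, which is why the thesis offloads it to parallel hardware or an HPC cluster.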
79

An Introduction and Evaluation of a Lossless Fuzzy Binary AND/OR Compressor / En introduktion och utvärdering av ett Lossless Fuzzy binär och / eller kompressor

Alipour, Philip Baback, Ali, Muhammad January 2010 (has links)
We report a new lossless data compression (LDC) algorithm for implementing predictably fixed compression values. The fuzzy binary AND-OR algorithm (FBAR) primarily aims to introduce a new model for regular and superdense coding in classical and quantum information theory. Classical coding on x86 machines does not suffice for maximum LDCs generating fixed values of Cr >= 2:1. The current model, however, is evaluated to serve multidimensional LDCs with fixed-value generation, in contrast to the popular methods used in probabilistic LDCs, such as Shannon entropy. The entropy introduced here is 'fuzzy binary', in a 4D hypercube bit-flag model, with a product value of at least 50% compression. We have implemented the compression phase and simulated the decompression phase for lossless versions of the FBAR logic, and compared our algorithm with the results obtained by other compressors. Our statistical test shows that the presented algorithm competes significantly with other LDC algorithms on both temporal and spatial factors of compression. The current algorithm is a stepping stone to quantum information models solving complex negative entropies, giving double-efficient LDCs with > 87.5% space savings.
80

Porovnání časově závislých metod pro převod barevného obrazu na šedotónový / Comparison of Time-Dependent Color-to-Gray Conversions

Vlkovič, Vladimír January 2018 (has links)
This master's thesis focuses on the comparison of time-dependent video grayscale conversion methods based on a user experiment. The test methodology is based on the pairwise comparison method 2AFC (two-alternative forced choice) and comprises two test variants: a test with a reference video and a test without one. The coefficients of agreement, consistency and correlation are used in the result analysis. The testing was done on 60 subjects, who performed 7200 pairwise comparisons. The results show that the time-dependent method Hu14 is the most universal, while the time-dependent method Kim09 was bested by some non-time-dependent methods. The results also indicate some correlation between the two test variants, and that the choice of input video can affect a method's performance. The main contribution of this thesis is the finding that non-time-dependent methods can, under certain circumstances, rival the performance of time-dependent methods, and that the addition of a reference video did not have a meaningful impact on the test subjects' judgements.
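The coefficient of agreement used in the analysis can be computed, for a 2AFC experiment, with the classic Kendall and Babington Smith formula: u = 1 when every subject prefers the same video in every pair, and u falls as subjects disagree. The sketch below is a generic implementation of that formula, not the thesis's code; the input format is a hypothetical convention.

```python
from itertools import combinations
from math import comb

def coefficient_of_agreement(prefs, n_objects):
    """Kendall / Babington Smith coefficient of agreement u for m
    judges and n_objects items compared pairwise (2AFC). prefs[k] is
    judge k's choices: a dict mapping each unordered pair (i, j),
    i < j, to the preferred item. u = 1 means unanimous agreement."""
    m = len(prefs)
    sigma = 0
    for i, j in combinations(range(n_objects), 2):
        a = sum(1 for p in prefs if p[(i, j)] == i)  # judges preferring i
        sigma += comb(a, 2) + comb(m - a, 2)  # agreeing judge pairs
    return 2 * sigma / (comb(m, 2) * comb(n_objects, 2)) - 1
```

For an even number of judges, the minimum value of u is -1/(m - 1), reached when every pair of items splits the judges exactly in half.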
