241

Přenos televizních formátů do českého prostředí od vzniku duálního systému v České republice / Television format transmission into the Czech environment since the creation of the dual system in the Czech Republic

Kantorová, Katarzyna January 2014 (has links)
The diploma thesis "Television format transmission into the Czech environment since the creation of the dual system in the Czech Republic" focuses on television formats that have been adapted in the Czech television industry since 1994. TV formats, the primary analytical object of this thesis, are among the strongest aspects of media globalization. Because a given television format spreads the same structure, idea and concept, formats increasingly influence practice in television markets and the process of globalization all over the world. In this context, the thesis describes the theory of globalization, focusing on cultural and media imperialism, followed by newer theories of globalization and their subsequent transformations. On this theoretical basis, the thesis describes which TV formats have been adapted in the Czech Republic since 1994. The analysis is based on in-depth interviews with Czech television professionals who have experience with the purchase and subsequent adaptation of television formats. The thesis provides an overview of all TV formats adapted in the Czech Republic and describes the reasons for, difficulties in, and overall process of adapting licensed television formats.
242

Talentová show Got Talent jako televizní formát a jeho kulturní odlišnosti: komparativní studie čtyř světových verzí / The talent show Got Talent as a television format and its cultural differences: a comparative study of four global versions

Vondroušová, Jana January 2014 (has links)
This thesis deals with the talent show Got Talent and uses this television format to compare four culturally different adaptations. The comparison is based on an analysis of four shows: Česko Slovensko má talent, America's Got Talent, Britain's Got Talent and China's Got Talent. The content of the shows differs in many respects owing to the distinct television histories and diverse cultures of each territory. The theoretical part focuses on culture and the development of television broadcasting in the selected countries. A further chapter describes the market for television format licences and the differences among local adaptations of shows and series. The content analysis itself was based on a comparison of selected seasons of Got Talent and of particular episodes. It was found that worldwide television formats differ in many respects: the rules of the show, the roles of the judges, the dramaturgical structure of an episode and the casting of contestants are adapted to local culture and lifestyle. On the other hand, the shows share some common features: an emphasis on the personal life stories of the contestants and an effort to create an integrated story for the whole episode through its specific structure. We can say that television shows influence culture, and culture affects the content of the programme.
243

Conception et optimisation de circuits électroniques communicants pour une intégration au format carte bancaire : application à une serrure de vélo à assistance électrique / Design and optimization of communicating electronic circuits for integration in a credit-card format: application to an electrically assisted bicycle lock

Lahmani, Fatine 12 February 2014 (has links)
Depuis leur apparition dans les années 70, les cartes à puce ont envahi le marché mondial ; leur utilisation n'a cessé d'augmenter et de se diversifier. Sans forcément nous en rendre compte, chacun de nous en a plusieurs dans son portefeuille, son sac, son attaché-case… Toutes ces cartes ont pour point commun le fait de contenir des informations sur leur titulaire, qui servent à son identification dans les différentes actions qu'il souhaite effectuer. Ces informations sont présentes sur la piste magnétique et/ou la puce embarquée dans la carte. Avec les progrès technologiques actuels et plus précisément la miniaturisation des composants électroniques, nous sommes de plus en plus amenés à voir des composants complexes embarqués dans des cartes à puce pour satisfaire des besoins en ressources plus grands pour des applications de plus en plus sophistiquées. L'utilisation croissante du nombre des systèmes embarqués sur une carte à puce amène à prendre en compte différentes contraintes lors de la conception. Tout d'abord, il y a celles liées aux systèmes embarqués standards, telles que la surface, la consommation et la rapidité d'exécution. Ensuite viennent celles liées à la carte à puce en elle-même : des spécificités liées à l'épaisseur et aux contraintes mécaniques. On retrouve également des contraintes de consommation et de surface. L'apparition du sans-contact a révolutionné le domaine de la carte à puce. Plus besoin d'introduire la carte dans un lecteur pour lire les informations : les données ne transitent plus par la puce mais via l'air grâce à une antenne intégrée. Il suffit de se trouver à proximité du lecteur sans forcément sortir la carte de sa poche ou de son sac. Ces cartes sont connues sous le nom de cartes RFID, pour Radio Frequency Identification ou identification par radiofréquence.
D'autres contraintes de conception sont alors apparues : le choix de la fréquence à laquelle vont se faire la communication et l'échange des données, la géométrie de l'antenne, le choix du tag… Tous les composants ont besoin d'une source d'alimentation. Les circuits RFID basiques, dits passifs, puisent leur énergie dans le champ magnétique produit à proximité du lecteur, mais la complexité de certains circuits nécessite la présence d'une source d'alimentation intégrée dans la carte ; dans ce cas, les circuits sont dits actifs. En général, ce sont des batteries fines et flexibles qui sont utilisées. Là aussi, la technologie a fait d'immenses progrès et des batteries plus fines et de plus grandes capacités voient le jour. Ce sont ces batteries qui viennent alimenter les composants de la carte. Tous ces éléments constituent un véritable circuit électronique. Cette thèse industrielle a pour but, dans un premier temps, de concevoir un circuit électronique embarqué dans une carte au format bancaire répondant à un cahier des charges bien défini, tout en prenant en compte les différentes contraintes imposées par ce format. Ce circuit se devra d'être flexible, autonome et de consommer le moins d'énergie possible. Dans un deuxième temps, une fois le produit réalisé et validé, le but est de l'optimiser en proposant des solutions afin de faire gagner du temps en amont de la conception, par exemple, ou en proposant des modèles simples mais qui prennent en compte toutes les contraintes liées à ce type d'applications. / Since their emergence in the 1970s, smart cards have invaded the world market; their use has been steadily increasing and diversifying. Without necessarily realizing it, each of us carries several in a wallet, bag or briefcase. All these cards have in common that they contain information about the holder, used for identification in the various actions the holder wants to perform. This information is present on the magnetic stripe and/or on the chip embedded in the card. With current technology, and more specifically the miniaturization of electronic components, we increasingly see complex components embedded in smart cards to meet the greater resource needs of ever more sophisticated applications. The growing number of systems embedded on a smart card requires various constraints to be taken into account during design. First, there are those related to standard embedded systems, such as area, power consumption and speed of execution. Then come those related to the smart card itself: specificities linked to thickness and mechanical stress, along with further area and power constraints. The appearance of contactless technology has revolutionized the smart-card field. There is no longer any need to insert the card into a reader to read the information: data no longer transit through the chip contacts but through the air via an integrated antenna. It suffices to be near the reader, without necessarily taking the card out of a pocket or bag. These cards are known as RFID cards, for Radio Frequency Identification. Other design constraints then appeared: the choice of the communication frequency for data exchange, the geometry of the antenna, the choice of the tag, and so on. All components require a power source. Basic, so-called passive RFID circuits draw their energy from the magnetic field produced near the reader, but the complexity of some circuits requires a power supply integrated into the card; such circuits are called active. In general, thin, flexible batteries are used. Here too, technology has made tremendous progress, and thinner batteries with larger capacities are emerging; these batteries power the card's components. All these elements constitute a true electronic circuit. This industrial thesis aims, first, to design an electronic circuit embedded in a credit-card-format card that meets a well-defined specification while taking into account the various constraints imposed by this format. The circuit must be flexible, autonomous and consume as little energy as possible. Second, once the product is realized and validated, the goal is to optimize it by proposing solutions that save time upstream of the design, for example, or by proposing simple models that nevertheless take into account all the constraints associated with this type of application.
244

Avaliação de produtos aerofotogramétricos alternativos com câmaras digitais não métricas de pequeno formato em voo apoiado. / Evaluation of alternative aerophotogrammetric products from small-format non-metric digital cameras in supported flight.

Diniz, Émerson Andrade 20 June 2016 (has links)
Os produtos cartográficos gerados pelo processo de Aerolevantamento são uma importante ferramenta de análise e tomada de decisões na engenharia moderna. Por outro lado a crescente demanda de projetos está levando pesquisadores a buscarem meios mais rápidos, econômicos e eficientes para obter bons produtos. Dessa maneira, vem surgindo novos equipamentos e produtos nessa área. Paralelamente, com o advento do posicionamento por satélite é possível um maior controle da qualidade cartográfica e a verificação da eficácia desses novos produtos. O presente estudo analisa diferentes tecnologias associadas ao uso da câmara digital não métrica de pequeno formato, como a HASSELBLAD H4D-31, com a utilização do Posicionamento por Ponto Preciso associado a um sistema inercial com e sem a utilização de uma base de referência (voo não apoiado), para a elaboração de ortofotos e cartas. Com relação à precisão geométrica e atendimento do padrão brasileiro de qualidade foram testadas comparativamente essas e outras alternativas, contando com dados da RBMC e apoio suplementar de campo. O produto resultante foi analisado também quanto à eficácia no que se diz respeito à boa qualidade da imagem com a identificação dos objetos em campo para utilização em projetos de engenharia. Ao final pode-se dizer que o produto foi validado, quanto à qualidade informativa e quanto à precisão necessária, atingindo o PEC classe A, para a escala 1:2.000. Ou seja, esse produto é uma alternativa viável tecnicamente e de menor custo, para aplicações como as apresentadas no presente trabalho. / The cartographic products generated by the aerial survey process are an important tool for analysis and decision-making in modern engineering. On the other hand, the growing demand for projects is leading researchers to seek faster, more economical and more efficient ways to obtain good products, and new equipment and products are emerging in this area. At the same time, with the advent of satellite positioning, greater control of cartographic quality and verification of the effectiveness of these new products are possible. This study analyzes different technologies associated with the use of a small-format non-metric digital camera, the HASSELBLAD H4D-31, using Precise Point Positioning combined with an inertial system, with and without the use of a reference base (unsupported flight), for the production of orthophotos and maps. Regarding geometric precision and compliance with the Brazilian quality standard, these and other alternatives were tested comparatively, using RBMC data and supplementary field support. The resulting product was also analyzed for effectiveness with respect to image quality and the identification of objects in the field for use in engineering projects. In the end, the product was validated with respect to informative quality and the required precision, reaching PEC class A at the 1:2,000 scale. That is, this product is a technically feasible, lower-cost alternative for applications such as those presented in this work.
245

Hudební dramaturgie českých alternativních rozhlasových stanic / The music programming of Czech alternative radio stations

Pyatkina, Maria January 2019 (has links)
This master's thesis studies and compares the methods that two Czech alternative radio stations, Radio 1 and Radio Wave, use to create the music component of their broadcasts. Their alternative format defines their specific approach to music, which makes them appealing for this type of research. The theoretical part of the thesis provides insight into radio music programming, describes the alternative radio format and introduces the two stations, including their musical output. Music directors and DJs go through decision-making processes in a way similar to how news editors select and write news articles; the second part of the thesis therefore applies the gatekeeping research framework to music programming. Based on interviews with the stations' music directors and DJs, the thesis identifies a wide range of factors that affect the musical broadcasts of alternative radio stations, and the levels of gatekeeping research are used to compare these influences across the two stations.
246

Associative CAD References in the Neutral Parametric Canonical Form

Staves, Daniel Robert 01 March 2016 (has links)
Due to the multiplicity of computer-aided engineering applications present in industry today, interoperability between programs has become increasingly important. A survey conducted among top engineering companies found that 82% of respondents reported using 3 or more CAD formats during the design process. A 1999 study by the National Institute of Standards and Technology (NIST) estimated that inadequate interoperability between OEMs and their suppliers cost the US automotive industry over $1 billion per year, with the majority spent fixing data after translation. The Neutral Parametric Canonical Form (NPCF) prototype standard developed by the NSF Center for e-Design, BYU Site offers a solution to the translation problem by storing feature data in a CAD-neutral format, providing higher-fidelity parametric transfer between CAD systems. This research has focused on expanding the definitions of the NPCF to enforce data integrity and to support associativity between features, preserving design intent through the neutralization process. The NPCF data structure schema was defined to support associativity while maintaining data integrity. Neutral definitions of new features were added, including multiple types of coordinate systems, planes and axes. Previously defined neutral features were expanded to support new functionality, and the software architecture was redefined to support new CAD systems. Complex models have been successfully created and exchanged by multiple people in real time to validate the approach of preserving associativity, and support for a new CAD system, PTC Creo, was added.
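The abstract above describes a neutral feature store that maintains both associativity (features referencing other features) and data integrity. As a minimal illustrative sketch only, and not the actual NPCF schema, such a store can be modeled as features that record the IDs of their parent features, with insertion refusing any unresolved reference. All class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class NeutralFeature:
    """A hypothetical CAD-neutral feature record (not the real NPCF schema)."""
    feature_id: str
    feature_type: str                 # e.g. "coordinate_system", "datum_plane"
    parameters: dict
    references: list = field(default_factory=list)  # ids of parent features

class NeutralModel:
    """Holds neutral features and enforces referential integrity on insert."""
    def __init__(self):
        self.features = {}

    def add(self, feature):
        # Data integrity: every referenced parent must already exist,
        # so associativity (and thus design intent) is never dangling.
        for ref in feature.references:
            if ref not in self.features:
                raise ValueError(f"unresolved reference: {ref}")
        self.features[feature.feature_id] = feature

model = NeutralModel()
model.add(NeutralFeature("csys0", "coordinate_system", {"origin": (0, 0, 0)}))
model.add(NeutralFeature("plane1", "datum_plane",
                         {"offset": 5.0}, references=["csys0"]))
```

In a scheme like this, a translator exporting to a target CAD system can rebuild each feature in dependency order by following the `references` lists.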
247

Mixed-format test score equating: effect of item-type multidimensionality, length and composition of common-item set, and group ability difference

Wang, Wei 01 December 2013 (has links)
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests are often considered superior to tests containing only MC items, although the use of multiple item formats leads to measurement challenges in the context of equating conducted under the common-item nonequivalent groups design (CINEG). The purpose of this dissertation was to investigate how various test characteristics and examinee characteristics influence CINEG mixed-format test score equating results. Simulated data were used. Simulees' item responses were generated using items selected from one MC item pool and one CR item pool, constructed based on College Board Advanced Placement examinations from various subject areas. Five main factors were investigated: item-type dimensionality, group ability difference, within-group ability difference, length and composition of the common-item set, and format representativeness of the common-item set. In addition, the performance of two equating methods, the presmoothed frequency estimation method (PreSm_FE) and the presmoothed chained equipercentile equating method (PreSm_CE), was compared under various conditions. To evaluate equating results, both conditional statistics and overall summary statistics were considered: absolute bias, standard error of equating, and root mean squared error. The difference that matters (DTM) was also used as a criterion for evaluating whether adequate equating results were obtained. The main findings based on the simulation studies are as follows: (1) For most situations, item-type multidimensionality did not have a substantial impact on random error, regardless of the common-item set.
However, its influence on bias depended on the composition of the common-item set; (2) neither the group ability difference factor nor the within-group ability difference factor had a substantial influence on random error. When group ability differences were simulated, common-item sets with more items or more total score points had less equating error. When a within-group ability difference existed, conditions with a balance of item formats in the common-item set gave more accurate equating results than unbalanced common-item sets; (3) the relative performance of common-item sets of various lengths and compositions depended on the levels of group ability difference, within-group ability difference, and test dimensionality; (4) a common-item set containing only MC items performed similarly to a common-item set with both item formats when the test forms were unidimensional and no within-group ability difference existed, or when groups of examinees did not differ in proficiency; (5) the PreSm_FE method was more sensitive to group ability difference than the PreSm_CE method. When the within-group ability difference was non-zero, the relative performance of the two methods depended on the length and composition of the common-item set; the two methods performed almost the same in terms of random error. The studies conducted in this dissertation suggest that when equating multidimensional mixed-format test forms in practice, if groups of examinees differ substantially in overall proficiency, inclusion of both item formats should be considered for the common-item set. When within-group ability differences are likely to exist, balancing item formats in the common-item set appears to be even more important than using a larger number of common items for obtaining accurate equating results.
Because only simulation studies were conducted in this dissertation, caution should be exercised when generalizing the conclusions to practical situations.
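The evaluation criteria named above (absolute bias, standard error of equating, and root mean squared error) have a simple relationship that a hedged sketch can make concrete: over replications of an equating at a single score point, RMSE decomposes into bias and standard error. The replication values below are invented for illustration; they are not results from the dissertation.

```python
import numpy as np

# Invented illustration of the evaluation statistics, not the study's data:
# 500 simulated equating replications at one raw-score point, compared
# against a known criterion equated score.
rng = np.random.default_rng(0)
true_equivalent = 25.0                                        # criterion value
estimates = true_equivalent + rng.normal(0.3, 0.8, size=500)  # replications

bias = estimates.mean() - true_equivalent   # systematic error of the method
see = estimates.std(ddof=0)                 # standard error of equating (random error)
rmse = np.sqrt(bias**2 + see**2)            # total error: RMSE^2 = bias^2 + SEE^2

print(abs(bias), see, rmse)
```

A criterion such as the difference that matters (DTM) would then be compared against these quantities to judge whether the equating is adequate.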
248

Ontological lockdown assessment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Technology at Massey University, Palmerston North, New Zealand

Steele, Aaron January 2008 (has links)
To keep shared access computers secure and stable, system administrators resort to locking down the computing environment to prevent intentional and unintentional damage by users. Skilled attackers, however, are often able to break out of locked down computing environments and intentionally misuse shared access computers. This misuse has resulted in cases of mass identity theft and fraud, some with estimated costs running into the millions. Determining whether it is possible to break out of a locked down computing environment requires an assessment method. Although a number of vulnerability assessment techniques exist, none is sufficient for assessing locked down shared access computers, because existing techniques focus on traditional, application-specific software vulnerabilities. Break out path vulnerabilities (which attackers exploit to break out of locked down environments) differ substantially from traditional vulnerabilities and, as a consequence, are not easily discovered using existing techniques. Ontologies can be thought of as a modelling technique for capturing expert knowledge about a domain of interest, and the method for discovering break out paths in locked down computers can be considered expert knowledge in the domain of shared access computer security. This research therefore proposes an ontology-based assessment process, called the ontological lockdown assessment process, for discovering break out path vulnerabilities in locked down shared access computers. The process is implemented against a real-world system and successfully identifies numerous break out path vulnerabilities.
249

Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand

Zhu, Jihai January 2007 (has links)
Image coding plays a key role in multimedia signal processing and communications. JPEG2000 is the latest image coding standard; it uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining performance similar to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding; the SPIHT algorithm achieves performance similar to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and implementation issues for wavelet transforms are discussed. The four coding methods mentioned above are studied in detail and, more importantly, the factors that affect coding efficiency are identified. The main contribution of this research is the introduction of a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though a narrow gap remains in lossy coding situations.
250

Wide Field Aperture Synthesis Radio Astronomy

Bock, Douglas Carl-Johan January 1998 (has links)
This thesis is focussed on the Molonglo Observatory Synthesis Telescope (MOST), reporting on two primary areas of investigation. Firstly, it describes the recent upgrade of the MOST to perform an imaging survey of the southern sky. Secondly, it presents a MOST survey of the Vela supernova remnant and follow-up multiwavelength studies. The MOST Wide Field upgrade is the most significant instrumental upgrade of the telescope since observations began in 1981. It has made possible the nightly observation of fields with area ~5 square degrees, while retaining the operating frequency of 843 MHz and the pre-existing sensitivity to point sources and extended structure. The MOST will now be used to make a sensitive (rms approximately 1 mJy/beam) imaging survey of the sky south of declination -30°. This survey consists of two components: an extragalactic survey, which will begin in the south polar region, and a Galactic survey of latitudes |b| < 10°. These are expected to take about ten years. The upgrade has necessitated the installation of 352 new preamplifiers and phasing circuits which are controlled by 88 distributed microcontrollers, networked using optic fibre. The thesis documents the upgrade and describes the new systems, including associated testing, installation and commissioning. The thesis continues by presenting a new high-resolution radio continuum survey of the Vela supernova remnant (SNR), made with the MOST before the completion of the Wide Field upgrade. This remnant is the closest and one of the brightest SNRs. The contrast between the structures in the central pulsar-powered nebula and the synchrotron radiation shell allows the remnant to be identified morphologically as a member of the composite class. The data are the first of a composite remnant at spatial scales comparable with those available for the Cygnus Loop and the Crab Nebula, and make possible a comparison of radio, optical and soft X-ray emission from the resolved shell filaments.
The survey covers an area of 50 square degrees at a resolution of 43″ × 60″, while imaging structures on scales up to 30′. It has been used for comparison with Wide Field observations to evaluate the performance of the upgraded MOST. The central plerion of the Vela SNR (Vela X) contains a network of complex filamentary structures. The validity of the imaging of these filaments has been confirmed with Very Large Array (VLA) observations at 1.4 GHz. Unlike the situation in the Crab Nebula, the filaments are not well correlated with H-alpha emission. Within a few parsec of the Vela pulsar the emission is much more complex than previously seen: both very sharp edges and more diffuse emission are present. It has been postulated that one of the brightest filaments in Vela X is associated with the X-ray feature (called a 'jet') which appears to be emanating from the region of the pulsar. However, an analysis of the MOST and VLA data shows that this radio filament has a flat spectral index similar to another more distant filament within the plerion, indicating that it is probably unrelated to the X-ray feature.
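The spectral-index comparison described above rests on a standard formula: for synchrotron sources with flux density S ∝ ν^α, the index α follows from flux densities measured at two frequencies, here the MOST (843 MHz) and VLA (1.4 GHz) bands. As a hedged sketch, the flux-density values below are invented for illustration and are not measurements from the survey.

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha such that S ∝ nu**alpha, from two
    flux-density measurements (s1 at frequency nu1, s2 at nu2)."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# A flat-spectrum filament keeps nearly the same flux density at both
# observing frequencies, so alpha comes out close to zero. The flux
# densities (in arbitrary units) are hypothetical:
alpha = spectral_index(1.00, 843e6, 0.98, 1.4e9)
print(round(alpha, 3))
```

Comparing α for two filaments in this way is what lets the thesis argue that the bright Vela X filament and the more distant one share a flat spectrum, unlike steep-spectrum shell emission.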
