  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
121

Adaptive Search Range for Full-Search Motion Estimation

Chu, Kung-Hsien 17 August 2004 (has links)
Due to the progress of Internet technology, multimedia products and services such as Multimedia Message Service (MMS), Multimedia on Demand (MoD), video conferencing, and digital TV are growing rapidly. All of these services depend on effective video and audio compression standards, since transmitting raw multimedia data over networks is impractical. Motion estimation accounts for most of the computational complexity in video compression. In our research, we focus on reducing the number of candidate blocks while preserving video quality. We study several fast motion estimation algorithms and architectures, and design a fast motion estimation architecture, based on a hierarchical motion estimation algorithm, that supports 1280x720 resolution at a 30 fps frame rate per the HDTV specification. Within the limits of hardware resources and compressed video quality, the architecture improves inter-coding performance. We observe that two adjacent macroblocks tend to have similar motion vectors, so we arrange a 16x8 processing element array to process two adjacent macroblocks together. This design saves many clock cycles in the hierarchical motion estimation architecture while maintaining high video quality. Furthermore, we propose a search range prediction method, called ASR, which maps the motion behavior of video sequences onto the search range on a macroblock-by-macroblock basis. Implemented in the reference software (Joint Model) of the new H.264/AVC video compression standard, ASR eliminates unnecessary candidate-block operations while keeping video quality very close to that of the full search block matching algorithm.
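The full search with an adaptive range described above can be sketched in a few lines. This is an illustrative pure-Python toy (tiny blocks, SAD cost), not the thesis's hardware architecture; the function names and the particular range-prediction rule are assumptions for demonstration.

```python
def sad(cur, ref, bx, by, dx, dy, n=4):
    """Sum of absolute differences between an n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(n):
        for x in range(n):
            ry, rx = by + y + dy, bx + x + dx
            if 0 <= ry < h and 0 <= rx < w:
                total += abs(cur[by + y][bx + x] - ref[ry][rx])
            else:
                total += 255  # penalise out-of-frame candidates
    return total

def full_search(cur, ref, bx, by, search_range, n=4):
    """Exhaustively test every displacement within +/- search_range
    and return (best_motion_vector, best_cost)."""
    best = (0, 0)
    best_cost = sad(cur, ref, bx, by, 0, 0, n)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost

def adaptive_range(neighbour_mvs, minimum=2, maximum=8):
    """Toy ASR-style rule: predict the search range from neighbouring
    motion vectors -- larger neighbour motion widens the window."""
    if not neighbour_mvs:
        return maximum
    peak = max(max(abs(dx), abs(dy)) for dx, dy in neighbour_mvs)
    return max(minimum, min(maximum, peak + 1))
```

Shrinking `search_range` per macroblock is where the savings come from: full search cost grows quadratically with the range, so halving it cuts candidate evaluations roughly fourfold.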
122

Hierarchical Transmission of Huffman Code Using Multi-Code/Multi-Rate DS/SS Modulation with Appropriate Power Control

Makido, Satoshi, Yamazato, Takaya, Katayama, Masaaki, Ogawa, Akira 12 1900 (has links)
No description available.
123

none

Chang, Guo-Chou 23 June 2000 (has links)
none
124

An Exploration of Key Factors of Attracting Investments in Kaohsiung Export Processing Zone

Huang, Ti-fen 11 July 2008 (has links)
Since the Export Processing Zone (EPZ) was established in 1966, it has attracted foreign investment, introduced technology, opened up foreign trade, and provided employment opportunities. When the EPZ was first set up, its investment-rewarding regulations and "one-stop window" administrative measures attracted substantial foreign and overseas Chinese capital. Within two years, however, the 68-hectare area was no longer sufficient. The Export Processing Zone Administration (EPZA) has since expanded to nine zones, totaling about 576.81 hectares. The EPZs are export-oriented, with all products exported, and their professions have grown step by step to include trade, logistics and warehousing, software consulting, and related industries. Capital originally came mainly from Europe, the U.S., Japan, and Hong Kong, but is now mostly domestic, at 47.79%, the highest ratio. The zones' industrial structure has likewise shifted from garments, plastics, and leather, which once exceeded 55%, to electronics and electric machinery, now the largest share at 80%. The EPZA has kept pace with changing times by innovating its policies and functions, including zone transformation, deregulation, trade liberalization, and industrial clustering. With the accumulation of Taiwan's capital and technology and enormous changes in global politics and economics, in-zone enterprises have also restructured their industries and production processes, stepping out of the OEM shadow and into ODM so as to produce, market, and expand into markets themselves. They have transformed from labor-intensive industries of garments, plastics, and leather into capital-intensive high-tech industries such as ICs, LCDs, optics, precision tools, information software, and digital content.
As Taiwan's production costs gradually rise, enterprises from all over the world are moving to mainland China and the booming Southeast Asian countries to reduce costs and capture markets. The question is how to shape a more attractive investment environment that lures Taiwanese enterprises back home and turns foreign investment toward Taiwan. This study conducted in-depth interviews and collected over 30 questionnaires from scholars, government agencies, and industry, and analyzed the key factors of attracting investment using the Analytic Hierarchy Process (AHP). The results revealed that, first, interaction among enterprises and proximity to customers, the benefits of industrial clustering, and a complete upstream and downstream supply chain are vital to attracting investment. Second, complete in-zone infrastructure is the most important key factor. Third, the EPZA's favorable measures and administrative efficiency are also prime considerations. The study suggests that the administrative units should streamline in-zone administrative procedures, build modern infrastructure, attend to living functions, strengthen industry-government-academia cooperation, and publicize the government's investment-rewarding measures to enterprises, so that good policies truly benefit in-zone enterprises, thereby building a better investment environment and attracting investment.
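The AHP analysis this study relies on works from pairwise-comparison matrices. A minimal sketch of the standard priority-weight and consistency calculation follows; the comparison values in the test are invented for illustration and are not the study's data.

```python
import math

def ahp_weights(matrix):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix with the row geometric-mean method, normalised to sum to 1."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

def consistency_ratio(matrix, weights):
    """Saaty's CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    n = len(matrix)
    # lambda_max approximated from the mean of (A w)_i / w_i
    lam = sum(sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # excerpt of Saaty's random-index table
    return ci / ri
```

In practice each interviewee fills one matrix per criterion group on the 1-9 scale, weights are computed per matrix, and matrices with CR above 0.1 are sent back for revision.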
125

Characterization of neuron models

Boatin, William 14 July 2005 (has links)
Modern neuron models are large, complex entities. The ability to simulate these complex models has been driven by the development of ever more powerful and cheaper computers. The capacity to manage and understand the models has lagged behind improvements in simulation ability almost from the inception of neuron modeling. Despite the computing power currently available, more powerful simulation platforms and strategies are needed to cope with current and next-generation neuron models. This thesis develops methodologies aimed at better characterizing motoneuron models. The hypothesis presented is that relationships among model outputs, in addition to the relationships between model inputs (parameters) and outputs (behaviors), provide a characteristic description of the model that is more useful than model behaviors alone. This description can be used to compare a model with different implementations of the same motoneuron and with experimental data. Data mining and data reduction techniques were used to process the data. Principal component analysis indicated a significant, consistent reduction in dimensionality in an intermediate, mechanistic layer between model inputs and outputs. This layer represents the non-linear relationships between input and output, implying that if the non-linear relationships of a model were better understood and accessible, the model could be manipulated by varying the mechanism-layer members, or rather the model parameters that primarily affect each mechanism-layer member. Hierarchical cluster analysis showed similarity between sensitivity analysis data from models with random parameter sets. A main cluster represented the main region of model behavior, with outlying clusters representing non-physiological behavior. This indicates that sensitivity analysis data is a good candidate for a model signature.
The results demonstrate the usefulness of cluster analysis in identifying the similarities between data used as a model characterization metric or model signature. Its application is also valuable in identifying the main region of useful activity of a model, thus helping to identify a potential 'average' parameter set. Furthermore, factor analysis also proves functional in identifying members of the mechanism layer as well as the degree to which model outputs are affected by these members.
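As a concrete illustration of the hierarchical cluster analysis described above, a minimal single-linkage agglomerative clusterer over "model signature" vectors might look like the following. This is a generic sketch, not the thesis's actual pipeline; in the thesis the points would be sensitivity-analysis vectors from models with random parameter sets.

```python
import math

def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(cluster_a, cluster_b, points):
    """Distance between two clusters = distance of their closest members."""
    return min(euclid(points[i], points[j])
               for i in cluster_a for j in cluster_b)

def agglomerate(points, n_clusters):
    """Repeatedly merge the two closest clusters until n_clusters remain.
    Returns clusters as sorted lists of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = single_linkage(clusters[a], clusters[b], points)
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return [sorted(c) for c in clusters]
```

Run on a few synthetic signature vectors, two tight groups plus a nearby outlier fall out as a main cluster and a small one, mirroring the "main region of behavior vs. outlying clusters" observation in the abstract.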
126

Development of a hierarchical k-selecting clustering algorithm – application to allergy.

Malm, Patrik January 2007 (has links)
<p>The objective of this Master’s thesis was to develop, implement, and evaluate an iterative procedure for hierarchical clustering with good overall performance that also merges features of certain previously described algorithms into a single integrated package. The resulting tool was then applied to an allergen IgE-reactivity data set. The implemented algorithm uses a hierarchical approach that illustrates the emergence of patterns in the data. At each level of the hierarchical tree, a partitional clustering method divides the data into k groups, where k is chosen through cluster validation techniques. The cross-reactivity analysis performed with the new algorithm largely arrives at the anticipated cluster formations in the allergen data, strengthening results obtained in previous studies on the subject. Notably, though, certain unexpected findings presented in the former analysis were aggregated differently, and more in line with phylogenetic and protein family relationships, by the novel clustering package.</p>
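The per-level "partition into k groups, with k chosen by validation" step can be sketched as follows. This is a speculative pure-Python toy on 1-D data using k-means and a silhouette index; it is not the thesis's implementation, and every function name is invented.

```python
def kmeans_1d(xs, k, iters=50):
    """Lloyd's algorithm on scalars, seeds spread across the sorted data."""
    xs = sorted(xs)
    centers = [xs[int(i * (len(xs) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda c: abs(x - centers[c]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [g for g in groups if g]  # drop empty clusters

def silhouette(groups):
    """Mean silhouette score; singleton clusters contribute 0."""
    total, n = 0.0, 0
    for gi, g in enumerate(groups):
        for x in g:
            n += 1
            if len(g) == 1:
                continue
            a = sum(abs(x - y) for y in g) / (len(g) - 1)   # within-cluster
            b = min(sum(abs(x - y) for y in h) / len(h)     # nearest other
                    for hi, h in enumerate(groups) if hi != gi)
            total += (b - a) / max(a, b)
    return total / n

def k_selecting_split(xs, kmax=4):
    """Partition xs into k groups, k chosen by the best silhouette score."""
    candidates = [kmeans_1d(xs, k)
                  for k in range(2, min(kmax, len(xs) - 1) + 1)]
    return max(candidates, key=silhouette)
```

A hierarchy emerges by applying `k_selecting_split` recursively to each resulting group until a stopping criterion (for example, too few points or too low a validation score) is met.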
127

Visual hierarchical dimension reduction

Yang, Jing. January 2002 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: hierarchy; sunburst; dimension reduction; high dimensional data set; multidimensional visualization; parallel coordinates; scatterplot matrices; star glyphs. Includes bibliographical references (p. 86-91).
128

Examining the invariance of item and person parameters estimated from multilevel measurement models when distribution of person abilities are non-normal

Moyer, Eric 24 September 2013 (has links)
Multilevel measurement models (MMM), an application of hierarchical generalized linear models (HGLM), model the relationship between ability level estimates and item difficulty parameters based on examinee responses to items. A benefit of using MMM is the ability to include additional levels in the model to represent a nested data structure, which is common in educational contexts. Previous research has demonstrated the ability of the one-parameter MMM to accurately recover both item difficulty parameters and examinee ability levels, using both 2- and 3-level models, under various sample size and test length conditions (Kamata, 1999; Brune, 2011). Parameter invariance of measurement models, meaning that parameter estimates are equivalent regardless of the distribution of ability levels, is important when the typical assumption of a normal distribution of ability levels in the population may not hold. MMM assume that the distribution of examinee abilities, represented by the level-2 residuals in the HGLM, is normal. If the distribution of abilities in the population is not normal, as suggested by Micceri (1989), this assumption is violated, which has been shown to affect estimation of the level-2 residuals. The current study investigated the parameter invariance of the 2-level 1P-MMM by examining the accuracy of item difficulty parameter estimates and examinee ability level estimates. Study conditions included the standard normal distribution as a baseline and three non-normal distributions with various degrees of skew, in addition to various test lengths and sample sizes, to simulate a range of testing conditions. The study's results provide evidence for overall parameter invariance of the 2-level 1P-MMM, when accounting for scale indeterminacy from the estimation process, for the conditions included.
Although the errors in the item difficulty parameter and examinee ability level estimates were not of practical importance, there was some evidence that the ability distribution may affect the accuracy of parameter estimates for items with difficulties greater than those represented in this study. Also, the accuracy of ability estimates for non-normal distributions appeared lower for conditions with greater test lengths and sample sizes, indicating possible increased difficulty in estimating abilities from non-normal distributions.
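For readers unfamiliar with the 1P-MMM, its response model is equivalent to a Rasch model: the probability of a correct response is a logistic function of ability minus item difficulty. The sketch below simulates responses under the normally distributed abilities the model assumes; it is purely illustrative (real MMM estimation uses HGLM software), and the item difficulties are invented.

```python
import math
import random

def p_correct(theta, b):
    """Rasch item characteristic curve: P(correct | ability, difficulty)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate(thetas, difficulties, rng):
    """0/1 response matrix: rows are examinees, columns are items."""
    return [[1 if rng.random() < p_correct(t, b) else 0 for b in difficulties]
            for t in thetas]

rng = random.Random(42)
# Baseline condition: abilities drawn from the standard normal distribution
# (a skewed draw could be substituted here to mimic the non-normal conditions).
thetas = [rng.gauss(0.0, 1.0) for _ in range(2000)]
difficulties = [-1.0, 0.0, 1.0]  # easy, medium, hard (illustrative values)
resp = simulate(thetas, difficulties, rng)
p_obs = [sum(row[i] for row in resp) / len(resp) for i in range(3)]
```

Under the normal baseline, observed proportions correct decrease monotonically with item difficulty, which is the pattern the recovery studies above check against their known generating parameters.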
129

A Bayesian network classifier for quantifying design and performance flexibility with application to a hierarchical metamaterial design problem

Matthews, Jordan Lauren 18 March 2014 (has links)
Design problems in engineering are typically complex, and are therefore decomposed by design management into a hierarchy of smaller, simpler design problems. It is often the case in a hierarchical design problem that an upstream design team's achievable performance space becomes the design space for a downstream design team. A Bayesian network classifier is proposed in this research to map and classify a design team's attainable performance space. The classifier allows for enhanced collaboration between design teams, letting an upstream design team efficiently identify and share its attainable performance space with a downstream design team. The goal is that design teams can work concurrently, rather than sequentially, thereby reducing lead time and design costs. In converging to a design solution, intelligently narrowing the design space allows resources to be focused in the most beneficial regions. However, the process of narrowing the design space is non-trivial, as each design team must make performance trade-offs that may unknowingly affect other design teams. The performance space mapping provided by the Bayesian network classifier allows designers to better understand the consequences of narrowing the design space. This knowledge allows design decisions to be made at the system level and propagated down to the subsystem level, leading to higher-quality designs. The proposed methods of mapping the performance space are then applied to a hierarchical, multi-level metamaterial design problem. The design problem explores the possibility of designing and fabricating composite materials that have desirable macro-scale mechanical properties as a result of embedded micro-scale inclusions. The designed metamaterial is found to have stiffness and loss properties that surpass those of conventional composite materials.
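The dissertation's Bayesian network classifier is not reproduced here, but the flavor of "learn which regions of a space are attainable from labelled samples" can be shown with a simple Gaussian naive Bayes stand-in. The toy feasibility model, the class structure, and all names are assumptions for illustration.

```python
import math
import random

def feasible(x1, x2):
    """Toy performance model: the attainable region is a disc."""
    return (x1 - 0.5) ** 2 + (x2 - 0.5) ** 2 < 0.16

class GaussianNB:
    """Per-class independent Gaussians; enough to map a 2-D region."""

    def fit(self, X, y):
        self.classes = {}
        for label in set(y):
            pts = [x for x, lab in zip(X, y) if lab == label]
            means = [sum(p[d] for p in pts) / len(pts)
                     for d in range(len(X[0]))]
            varis = [max(1e-6, sum((p[d] - means[d]) ** 2 for p in pts) / len(pts))
                     for d in range(len(X[0]))]
            self.classes[label] = (len(pts) / len(X), means, varis)
        return self

    def predict(self, x):
        def log_post(prior, means, varis):
            # log prior + sum of per-dimension Gaussian log-likelihoods
            return math.log(prior) - 0.5 * sum(
                math.log(2 * math.pi * v) + (xi - m) ** 2 / v
                for xi, m, v in zip(x, means, varis))
        return max(self.classes, key=lambda c: log_post(*self.classes[c]))

# Sample candidate designs, label each by evaluating the performance model,
# then train the classifier to map the attainable region.
rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(1000)]
y = [feasible(x1, x2) for x1, x2 in X]
clf = GaussianNB().fit(X, y)
```

Once trained, a downstream team can query `clf.predict` on candidate performance points without re-running the upstream team's expensive analyses, which is the collaboration benefit the abstract describes.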
130

A Semi-Automated Approach for Structuring Multi Criteria Decision Problems

Maier, Konradin, Stix, Volker 03 1900 (has links) (PDF)
This article seeks to enhance multi criteria decision making by providing a scientific approach for decomposing and structuring decision problems. We propose a process, based on concept mapping, which integrates group creativity techniques, card sorting procedures, quantitative data analysis, and algorithmic automation to construct meaningful and complete hierarchies of criteria. The algorithmic aspect is covered by a newly proposed recursive cluster algorithm, which automatically generates hierarchies from card sorting data. Based on a comparison with another basic algorithm, using both engineered and real-case test data, we validate that our process efficiently produces reasonable hierarchies of descriptive elements such as goal or problem criteria. (authors' abstract)
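One way the hierarchy-from-card-sorts idea might be sketched: items that many participants sorted into the same pile are joined first, and lowering the agreement threshold merges groups into coarser levels of the tree. This is a speculative co-occurrence approach, not the authors' published algorithm, and the sample data in the test is invented.

```python
from itertools import combinations

def cooccurrence(sorts, items):
    """Fraction of participants who placed each item pair in the same pile.
    `sorts` is a list of card sorts; each sort is a list of piles."""
    counts = {pair: 0 for pair in combinations(items, 2)}
    for piles in sorts:
        for pile in piles:
            for pair in combinations(sorted(pile), 2):
                if pair in counts:
                    counts[pair] += 1
    return {p: c / len(sorts) for p, c in counts.items()}

def merge_at(groups, sim, threshold):
    """Union any groups containing a pair whose co-occurrence meets the
    threshold; repeated with decreasing thresholds this yields a hierarchy."""
    groups = [set(g) for g in groups]
    changed = True
    while changed:
        changed = False
        for a, b in combinations(range(len(groups)), 2):
            linked = any(sim.get(tuple(sorted((i, j))), 0) >= threshold
                         for i in groups[a] for j in groups[b])
            if linked:
                groups[a] |= groups[b]
                del groups[b]
                changed = True
                break
    return [sorted(g) for g in groups]
```

Calling `merge_at` with a descending sequence of thresholds produces one tree level per threshold, with widely agreed-upon pairs fused near the leaves and weakly related groups joined only near the root.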
