81 |
A Comparison Study on a Set of Space Syntax based Methods: Applying metric, topological and angular analysis to natural streets, axial lines and axial segments
Xia, Xiaolin January 2013 (has links)
Recently, there has been increasing interest in viewing the urban environment as a complex system, and more and more researchers are studying the configuration of urban space as well as the human social activities within it. Correlations have been found between the morphological properties of urban street networks and observed patterns of human movement. This implies that the influence of urban configuration on human movement is revealed not only through metric distance but also from topological and geometrical perspectives; metric distances, topological relationships and angular changes between streets should therefore all be considered when applying space syntax analysis to an urban street network. This thesis focuses on a comparison of metric, topological and angular analyses based on three urban street representation models: natural streets, axial lines and axial segments. Four study areas (London, Paris, Manhattan and San Francisco) were selected for the empirical study. Space syntax measures were calculated for different combinations of analytical methods and street models, and these theoretical accessibility measures (connectivity, integration and choice) were correlated with the corresponding observed human movement. The correlation results were then compared across analytical methods and street representation models. The comparison shows that (1) the natural-street based model is the best street model for space syntax analysis, followed by axial lines and axial segments; (2) angular and topological analyses outperform metric analysis; and (3) connectivity, integration and local integration (two-step) are the measures best suited to predicting human movement. Furthermore, it can be hypothesized that topological analysis combined with the natural-street based model is the best combination for predicting human movement in space syntax, because it integrates topological and geometrical thinking.
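The correlation step summarised above can be illustrated with a small sketch. In the toy example below (not the thesis's data or exact measure definitions), nodes represent natural streets and edges join intersecting streets; connectivity, integration and choice are approximated with degree, closeness centrality and betweenness centrality, and the street names and movement counts are invented for illustration.

```python
import networkx as nx
from scipy.stats import pearsonr

# Toy street-connectivity graph: nodes are natural streets, edges join
# streets that intersect. Names are invented for illustration.
G = nx.Graph()
G.add_edges_from([
    ("high_st", "market_st"), ("high_st", "river_rd"),
    ("market_st", "river_rd"), ("market_st", "park_ave"),
    ("park_ave", "river_rd"), ("park_ave", "mill_ln"),
])

# Space-syntax-style accessibility measures, approximated with standard
# graph centralities (not the thesis's exact definitions):
connectivity = dict(G.degree())              # directly connected streets
integration = nx.closeness_centrality(G)     # proxy for integration
choice = nx.betweenness_centrality(G)        # proxy for choice

# Hypothetical observed movement counts per street (illustrative only).
movement = {"high_st": 120, "market_st": 310, "river_rd": 180,
            "park_ave": 260, "mill_ln": 40}

streets = sorted(G.nodes())
for name, measure in [("connectivity", connectivity),
                      ("integration", integration),
                      ("choice", choice)]:
    r, p = pearsonr([measure[s] for s in streets],
                    [movement[s] for s in streets])
    print(f"{name:12s} vs movement: r = {r:.2f} (p = {p:.2f})")
```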
|
82 |
Essays on demand enhancement by food industry participants
Schulz, Lee Leslie January 1900 (has links)
Doctor of Philosophy / Department of Agricultural Economics / Ted Schroeder / This dissertation empirically examines how demand-enhancing activities conducted by food industry participants affect retail beef steak pricing, consumer demand for ground beef, and industry concentration. It follows the journal article style and includes three self-contained chapters. Chapter 1 uses a two-step hedonic model with retail scanner data of consumer beef steak purchases to determine whether there are incentives to identify certain attributes, which types of attributes command price premiums, and at what levels these premiums exist. Results indicate that most branded beef steak products garnered premiums, as did organic claims, religious processing claims, and premium steak cuts. Factors influencing brand equity include new brands targeting emerging consumer trends, brands with regional prominence, and brands positioned as special-label, program/breed-specific, and store brands. (A rough sketch of such a first-stage hedonic regression appears after this abstract.)
Chapter 2 reports tests of aggregation over elementary ground beef products and estimates composite demand elasticities. Results suggest consumers differentiate ground beef by lean percentage (70-77%, 78-84%, 85-89%, 90-95%, and 96-100%) and brand type (local/regional, national, store, and unbranded). The range of composite elasticity estimates shows the value of analyzing demand elasticity on the basis of this differentiation rather than treating ground beef as homogeneous, and the estimates provide an improved understanding of how consumers make ground beef purchase decisions.
Chapter 3 examines industry concentration for the U.S. food manufacturing sector. This study is the first to examine whether particular subsectors within the food manufacturing industry, which operate in the presence of industry-funded check-off programs such as marketing orders, are more or less concentrated than industries without such research and marketing programs. Results provide evidence to support the hypothesis that industries with demand-enhancing check-off programs have lower concentration relative to industries without these programs.
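As a rough illustration of the first stage of the hedonic price model described for Chapter 1, the sketch below regresses log unit price on a few attribute dummies. The tiny data frame, attribute names and prices are invented; the dissertation's actual two-step estimator and scanner data are far richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented mini data set of steak prices and attribute dummies (illustration
# only; the dissertation uses retail scanner data).
data = pd.DataFrame({
    "price_per_lb": [8.99, 12.49, 7.49, 15.99, 9.49, 13.99],
    "branded":      [1, 1, 0, 1, 0, 1],
    "organic":      [0, 1, 0, 1, 0, 0],
    "premium_cut":  [0, 0, 0, 1, 1, 1],
})
data["log_price"] = np.log(data["price_per_lb"])

# First-stage hedonic regression: log price on product attributes.
model = smf.ols("log_price ~ branded + organic + premium_cut", data=data).fit()
print(model.params)   # coefficients approximate proportional price premiums
```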
|
83 |
LOW DENSITY PARITY CHECK CODES FOR TELEMETRY APPLICATIONS
Hayes, Bob 10 1900 (has links)
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Next-generation satellite communication systems require efficient coding schemes that enable high data rates, add little overhead, and have excellent bit error rate performance. A newly rediscovered class of block codes, Low Density Parity Check (LDPC) codes, has the potential to revolutionize forward error correction (FEC) because of the very high coding rates. This paper presents a brief overview of LDPC coding and decoding. An LDPC algorithm developed by Goddard Space Flight Center is discussed, and an overview of an accompanying VHDL development by L-3 Communications Cincinnati Electronics is presented.
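To make the parity-check idea concrete, the sketch below shows hard-decision (bit-flipping) decoding against a toy parity-check matrix. The matrix is the small Hamming(7,4) parity-check matrix rather than a genuinely low-density one, and practical LDPC decoders typically use soft-decision belief propagation; this illustrates the general principle only, not the GSFC algorithm or the VHDL implementation discussed in the paper.

```python
import numpy as np

# Toy parity-check matrix (Hamming(7,4)); real LDPC matrices are much larger
# and sparse, and are usually decoded with soft-decision belief propagation.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iters=10):
    """Flip the bit(s) involved in the most unsatisfied parity checks."""
    r = r.copy()
    for _ in range(max_iters):
        syndrome = H.dot(r) % 2
        if not syndrome.any():                 # all checks satisfied
            return r
        fail_counts = H[syndrome == 1].sum(axis=0)
        r[fail_counts == fail_counts.max()] ^= 1
    return r

codeword = np.array([1, 0, 1, 1, 0, 1, 0])     # satisfies H @ c = 0 (mod 2)
received = codeword.copy()
received[2] ^= 1                               # inject a single bit error
print("corrected:", np.array_equal(bit_flip_decode(received, H), codeword))
```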
|
84 |
我迷我打卡？台灣K-POP迷妹的打卡實踐 / I Fan Therefore I Check In? Taiwan K-pop Fans' Check-in Practices
蘇湘棻, Su, Hsiang Fen Unknown Date (has links)
Fan culture has been a favourite subject of cultural studies since the 1990s, and new technologies and transnational cultures have enriched fan practices and opened new research directions. This study explores how female K-POP fans in Taiwan use Facebook's check-in function to display their fan identity, looks closely at the context and mindset behind each check-in, and further examines how fans' various forms of capital are presented in their check-in posts.
The study combines observational analysis of check-in data with in-depth interviews. For the seven participants, a check-in crawler was used to extract their Facebook check-in posts, which were reviewed in advance to ensure the completeness and authenticity of the data; in-depth interviews then gave a fuller picture of the fans' check-in motivations and processes. Cross-referencing the two methods yields richer research material.
The findings show that fans check in across a wide range of settings: beyond locations tied to fan activities, traces of fan practice also appear in everyday check-ins. The check-ins take many forms, and the capital they display is complex; some of it is visible at a glance, but much of it can only be recognised from the situation and context of the check-in. Moreover, these constantly multiplying, converting and accumulating forms of capital, performed through check-ins, become the means by which fans acquire symbolic capital. Displaying capital also exposes a fan's class position, but online, capital can take on different appearances, class can be concealed, and holdings of low capital can even be reversed. / Fan cultures have been studied since the 1990s. Influenced by new technologies and cultural differences, fans engage in varied media use that leads to cultural consumption. This study investigates that media use through Taiwanese K-POP fans' check-in practices.
To keep the data rich and accurate, the study uses two methods: seven participants shared their experiences of Facebook check-in behaviour in interviews, and data crawling was then used to collect all of their check-in records.
The findings indicate that fans use Facebook check-ins to perform their fan identity both at fan activities and in daily life. The forms of capital displayed in check-ins are multifarious: some are easy to see, while other, hidden capital only emerges once the context behind a check-in is understood. In addition, fans generate, transform, accumulate and display different forms of capital through Facebook check-ins, and this displayed capital in turn becomes a means of obtaining symbolic capital. Showing the capital they possess on Facebook also exposes fans' social class; on the internet, however, fans can consciously hide the clues to their class and present more capital than they actually hold.
|
85 |
SIMULATED PERFORMANCE OF SERIAL CONCATENATED LDPC CODES
Panagos, Adam G. 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / With the discovery of Turbo Codes in 1993, interest in developing error control coding schemes that approach channel capacity has intensified. Some of this interest has focused on low-density parity-check (LDPC) codes because of their high performance and reasonable decoding complexity. A great deal of literature has examined the performance of regular and irregular LDPC codes of various rates over a variety of channels. This paper presents the simulated performance results of a serial concatenated LDPC coding system on an AWGN channel, and makes performance and complexity comparisons between this serial LDPC system and typical LDPC systems.
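The basic structure of such a Monte Carlo performance simulation can be sketched as follows for uncoded BPSK over an AWGN channel. The paper's system additionally inserts serial concatenated LDPC encoding and decoding between the source bits and the modulator, which this illustrative sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn(ebno_db, n_bits=200_000):
    """Estimate the bit error rate of uncoded BPSK on an AWGN channel."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                     # map 0 -> +1, 1 -> -1
    ebno = 10 ** (ebno_db / 10)
    noise_std = np.sqrt(1 / (2 * ebno))        # Es = Eb for uncoded BPSK
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decisions = (received < 0).astype(int)     # hard decisions
    return np.mean(decisions != bits)

for ebno_db in range(0, 9, 2):
    print(f"Eb/N0 = {ebno_db} dB -> BER ~ {ber_bpsk_awgn(ebno_db):.2e}")
```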
|
86 |
UNEQUAL ERROR PROTECTION FOR JOINT SOURCE-CHANNEL CODING SCHEMES
Sankaranarayanan, Sundararajan, Cvetković, Aleksandar, Vasić, Bane 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / A joint source-channel coding scheme (JSCCS), used in applications such as sending images, voice and music over internet or wireless networks, involves source coding to compress the information and channel coding to detect and correct errors introduced by the channel. In this paper, we investigate the unequal error protection (UEP) capability of a class of low-density parity-check (LDPC) codes in a JSCCS. This class of irregular LDPC codes is constructed from cyclic difference families (CDFs).
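The effect that unequal error protection aims for can be illustrated with the toy sketch below, in which more important bits receive a simple 3x repetition code while the rest are sent uncoded over the same binary symmetric channel. The paper instead obtains UEP from the structure of irregular LDPC codes built from cyclic difference families, so this is a conceptual illustration only, with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.05, 100_000                          # crossover probability, bit count

def through_bsc(bits):
    """Pass bits through a binary symmetric channel with crossover p."""
    return bits ^ (rng.random(bits.shape) < p)

important = rng.integers(0, 2, n)             # e.g. source-coder headers
ordinary = rng.integers(0, 2, n)              # less important payload bits

# Important bits: 3x repetition with majority-vote decoding.
rx = through_bsc(np.repeat(important, 3).reshape(n, 3))
decoded_important = (rx.sum(axis=1) >= 2).astype(int)

# Ordinary bits: transmitted without protection.
decoded_ordinary = through_bsc(ordinary)

print("protected BER:  ", np.mean(decoded_important != important))   # ~ 3p^2
print("unprotected BER:", np.mean(decoded_ordinary != ordinary))      # ~ p
```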
|
87 |
Facebookanvändares attityder gentemot företag aktiva på Facebook (Facebook users' attitudes toward companies active on Facebook)
Andersson, Tedh, Jinnemo, Marie, Nyberg, Andreas January 2010 (has links)
No description available.
|
88 |
Characterization and Advanced Communication Techniques for Free-Space Optical Channels
Anguita, Jaime A January 2007 (has links)
Free-Space Optical (FSO) communication through the terrestrial atmospheric channel offers many benefits in the wireless communications arena, such as power efficiency, suitability for secure communications, absence of electromagnetic interference, and potentially very high bandwidth. An optical beam propagating through the atmosphere is subject to optical turbulence, a random process that distorts the intensity and phase structure of the beam and induces a fluctuating signal at the receiver of an FSO link. This phenomenon, usually referred to as scintillation, degrades the performance of the FSO link by increasing the probability of error. In this dissertation we characterize the effects of scintillation-induced power fluctuations by determining the channel capacity of the optical link using numerical methods. We find that capacity decreases monotonically with increasing turbulence strength under weak turbulence, but is non-monotonic under strong turbulence. We show that low-density parity-check (LDPC) codes provide strong error control in this channel if a perfect interleaver is used. Multiple transmit optical beams can be used to reduce scintillation; we characterize the spatial correlation of the atmospheric optical channel and derive a scintillation model for the multiple-beam scheme, with which the effective reduction in scintillation can be predicted as a function of the system design parameters. A multi-channel FSO communication system based on orbital angular momentum (OAM)-carrying beams is also studied. We analyze the effects of turbulence on this system and find that turbulence induces attenuation and crosstalk among OAM channels. Based on a model in which the constituent channels are binary symmetric and crosstalk acts as a Gaussian noise source, we find optimal sets of OAM states for each turbulence condition studied and determine the aggregate capacity of the multi-channel system under those conditions. At very high data rates the FSO channel exhibits inter-symbol interference (ISI). We address the problem of joint sequence detection in ISI channels and decoding of LDPC codes, and derive the belief propagation equations that allow simultaneous detection and decoding of an LDPC codeword in an ISI channel.
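A minimal sketch of evaluating one constituent channel under the binary-symmetric assumption mentioned above: on-off signalling is corrupted by zero-mean Gaussian noise standing in for aggregated crosstalk plus receiver noise, the resulting crossover probability p is computed, and the per-channel capacity follows as 1 - Hb(p). The decision threshold and noise strengths are illustrative values, not results from the dissertation.

```python
import numpy as np
from scipy.stats import norm

def bsc_capacity(p):
    """Capacity in bits/use of a binary symmetric channel with crossover p."""
    if p <= 0.0 or p >= 1.0:
        return 1.0
    hb = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - hb

threshold = 0.5                     # decision threshold for unit on-off signalling
for sigma in (0.1, 0.2, 0.3, 0.5):
    # Probability that the Gaussian "crosstalk plus noise" term pushes a symbol
    # across the threshold; by symmetry it is the same for a '0' and a '1'.
    p = norm.sf(threshold / sigma)
    print(f"sigma = {sigma:.1f}: p = {p:.3f}, C = {bsc_capacity(p):.3f} bit/use")
```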
|
89 |
Congestion and medium access control in 6LoWPAN WSN
Michopoulos, Vasilis January 2012 (has links)
In computer networks, congestion is a condition in which one or more egress interfaces are offered more packets than they can forward at any given instant [1]. In wireless sensor networks, congestion can cause a number of problems, including packet loss, lower throughput and poor energy efficiency, which can in turn result in reduced deployment lifetime and underperforming applications. Moreover, idle radio listening is a major source of energy consumption, so low-power wireless devices must keep their radio transceivers off as much as possible to maximise battery lifetime. To minimise energy consumption and thus maximise the lifetime of wireless sensor networks, the research community has made significant efforts towards power-saving medium access control protocols with radio duty cycling. However, careful study of previous work reveals that radio duty cycling schemes are often neglected during the design and evaluation of congestion control algorithms. This thesis argues that the presence (or lack) of radio duty cycling can drastically influence the performance of congestion control mechanisms. To investigate whether previous findings on congestion control still apply in duty-cycled IPv6 over low-power wireless personal area networks (6LoWPAN), some of the most commonly used congestion detection algorithms are evaluated through simulation. The research aims to develop duty-cycle-aware congestion control schemes for 6LoWPAN that maximise network goodput while minimising packet loss, energy consumption and packet delay. Two congestion control schemes, namely DCCC6 (Duty Cycle-Aware Congestion Control for 6LoWPAN Networks) and CADC (Congestion Aware Duty Cycle MAC), are proposed to realise this aim. DCCC6 performs congestion detection based on a dynamic buffer: when congestion occurs, parent nodes inform the nodes contributing to the congestion, and rates are readjusted using a new rate adaptation scheme aiming for local fairness (a sketch of this buffer-based idea appears after the abstract). The child notification procedure depends on the radio state: when the network is duty cycling, notifications are sent in unicast frames; otherwise broadcast frames are used. Simulation and test-bed experiments show that DCCC6 achieves higher goodput and lower packet loss than previous work, and simulations show that it maintains low energy consumption and average delay times while achieving a high degree of fairness. CADC uses a new mechanism for duty cycle adaptation that reacts quickly to changing traffic loads and patterns. CADC is the first dynamic duty cycle protocol implemented in the Contiki operating system (OS), as well as one of the first schemes designed around the arbitrary traffic characteristics of IPv6 wireless sensor networks. Furthermore, CADC is designed as a stand-alone medium access control scheme and can therefore easily be transferred to any wireless sensor network architecture. CADC requires no time synchronisation algorithm at the nodes and exchanges no additional packets between nodes (i.e., it introduces no packet overhead). In this research, 10,000 simulation experiments and 700 test-bed experiments were conducted to evaluate CADC.

These experiments demonstrate that CADC successfully adapts its duty cycle to the traffic pattern in every traffic scenario. Moreover, CADC consistently achieved the lowest energy consumption and very low packet delay and packet loss, while its goodput was better than that of other dynamic duty cycle protocols and comparable to the highest goodput observed among static duty cycle configurations.
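The buffer-based congestion detection and child-rate adaptation idea referred to above can be sketched as follows. The thresholds, halving rule and notification mechanics are illustrative assumptions only, not the actual DCCC6 algorithm.

```python
class ParentNode:
    """Toy parent node: detects congestion from buffer occupancy and tells
    its children to slow down (illustrative thresholds, not DCCC6 itself)."""

    def __init__(self, buffer_capacity=16, high_watermark=0.75):
        self.buffer_capacity = buffer_capacity
        self.high_watermark = high_watermark
        self.queue = []
        self.children = {}                     # child id -> allowed packets/s

    def register_child(self, child_id, initial_rate=4.0):
        self.children[child_id] = initial_rate

    def enqueue(self, packet, duty_cycling=True):
        if len(self.queue) >= self.buffer_capacity:
            return False                       # drop: buffer overflow
        self.queue.append(packet)
        if len(self.queue) >= self.high_watermark * self.buffer_capacity:
            self.notify_children(duty_cycling)
        return True

    def notify_children(self, duty_cycling):
        # Under duty cycling each child is told individually (unicast); with
        # the radio always on, a single broadcast reaches all children.
        mode = "unicast" if duty_cycling else "broadcast"
        for child_id in self.children:
            # Halve every child's rate (same factor for all, for local
            # fairness), but never below a minimum share.
            self.children[child_id] = max(0.5, self.children[child_id] / 2)
            print(f"[{mode}] {child_id}: rate -> {self.children[child_id]:.1f} pkt/s")

parent = ParentNode()
parent.register_child("node-7")
parent.register_child("node-9")
for seq in range(13):
    parent.enqueue(("node-7", seq))
```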
|
90 |
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
Zou, Ding, Djordjevic, Ivan B. 13 February 2016
In state-of-the-art fiber-optic communication systems, a fixed forward error correction (FEC) code and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD), rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and, through FPGA-based emulation, demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code-rate range, making it a viable solution for next-generation high-speed intelligent aggregation networks.
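The rate-adaptation idea can be sketched as a simple SNR-margin lookup: pick the highest code rate whose required SNR, plus a margin, the measured link SNR can support. The rate/threshold table and margin below are invented for illustration and are not taken from the paper or its FPGA emulation.

```python
# Illustrative (invented) table of code rates and the SNR each needs.
RATE_TABLE = [
    (0.937, 11.0),
    (0.875, 9.5),
    (0.800, 8.0),
    (0.750, 7.0),
    (0.667, 5.5),
]

def select_code_rate(link_snr_db, margin_db=1.0):
    """Return the highest code rate the measured link SNR can support."""
    for rate, required_snr_db in RATE_TABLE:   # sorted from highest rate down
        if link_snr_db >= required_snr_db + margin_db:
            return rate
    return RATE_TABLE[-1][0]                   # fall back to the lowest rate

for snr in (12.5, 9.0, 6.0, 4.0):
    print(f"link SNR {snr:4.1f} dB -> code rate {select_code_rate(snr):.3f}")
```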
|