81 |
Power, Social Identity and Fashion Consumption : A thesis on how female executives use power-coded dressing as a tool to accentuate power as a part of their social identity. Ordonez Asenjo, Carolina January 2014 (has links)
The aim of the thesis is to contribute to the CCT research field on social identity by placing a focus on power from a customer perspective and studying how power can be accentuated within social identity. Theory from CCT with a focus on social identity has been used in combination with extensive literature on power and authority from a sociological perspective, and literature from fashion studies focusing on power-dressing, conspicuous consumption and luxury. The research question is: How are power-dressing and the consumption of high-end luxury fashion brands used by female executives/senior managers in an attempt to accentuate power as a part of their social identity? In-depth semi-structured interviews were used as the main data collection method, with five female senior managers/executives working in Stockholm; the fashion consumption of these women serves as the empirical sample. The main conclusion of this thesis is the creation of the concept of power-coded dressing. The theoretical implication is that the thesis develops the CCT field slightly by adding a consumer-power perspective to the theoretical discourse. Its practical and social implications are that it can help women accentuate their power through power-coded dressing.
|
82 |
An Exploratory Comparison of B-RAAM and RAAM Architectures. Kjellberg, Andreas January 2003 (has links)
Artificial intelligence is a broad research area, and there are many different reasons why it is interesting to study artificial intelligence. One of the main reasons is to understand how information might be represented in the human brain. The Recursive Auto Associative Memory (RAAM) is a connectionist architecture that has been used for that purpose with some success, since it develops compact distributed representations for compositional structures. Many extensions to the RAAM architecture have been developed through the years in order to improve the performance of RAAM; Bi-coded RAAM (B-RAAM) is one of those extensions. In this work a modified B-RAAM architecture is tested and compared to RAAM regarding training speed, the ability to learn with smaller internal representations, and generalization ability. The internal representations of the two network models are also analyzed and compared. This dissertation also includes a discussion of some theoretical aspects of B-RAAM. It is found here that the training speed for B-RAAM is considerably lower than for RAAM; on the other hand, RAAM learns better with smaller internal representations and is better at generalizing than B-RAAM. It is also shown that the extracted internal representations of RAAM reveal more structural information than those of B-RAAM. This has been shown by hierarchically clustering the internal representations and analysing the tree structure. In addition, a discussion is included about the justifiability of labelling B-RAAM as an extension of RAAM.
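To illustrate the kind of analysis described above, here is a minimal sketch (not the dissertation's code) of hierarchically clustering a set of internal representations and inspecting the resulting tree; the representation vectors and labels are made-up stand-ins for RAAM/B-RAAM hidden states.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
# Hypothetical 10-dimensional hidden representations for six compositional structures.
labels = ["(A B)", "(A (B C))", "((A B) C)", "(C D)", "(C (D E))", "((C D) E)"]
reps = rng.normal(size=(len(labels), 10))

# Agglomerative (hierarchical) clustering of the representation vectors.
tree = linkage(reps, method="average", metric="euclidean")

# Build the dendrogram without plotting and inspect the leaf ordering of the tree.
grouping = dendrogram(tree, labels=labels, no_plot=True)
print(grouping["ivl"])
```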
|
83 |
"Det som man oftast ser" : En studie om elevers könskodning av musikinstrument på gymnasiet. / ”What you often see” : A study concerning gender coding of musical instruments amongst pupils in upper secondary school.Lundberg, Lisa January 2017 (has links)
The focus of this study lies within gender-coded musical instruments. This concept concerns the unconscious thought of musical instruments as either feminine or masculine. The aim of this study is to acknowledge the existence of this concept amongst pupils in Swedish upper secondary schools, and to demonstrate how it can affect pupils' choice of main instrument. The study presents earlier studies that show signs of the existence of gender-coded instruments. It also explores the concept of sex and gender in different situations. These include the use of gender in the Swedish language, in music, and in an ensemble situation. Fifteen pupils from two different schools were interviewed, and the conclusion is that gender coding exists. When the pupils were asked to categorise different instruments as feminine or masculine, the results revealed that singing and piano were considered feminine. Some of the masculine instruments were electric guitar, electric bass, and drums. According to the pupils, the reasons behind their choice of main instrument can be put into five different categories: role models, parents' influence, the accessibility of musical instruments, norms, and biological stereotypes. Three of these categories (role models, norms, and biological stereotypes) are also what they believe lie behind gender-coded musical instruments. The aim of this study is to raise awareness of this situation, as the Swedish school values state that schools and their teachers are supposed to work towards equality between the genders, a task which cannot be done if this continues.
|
84 |
Ternary coding and triangular modulation. Abdelaziz, Mahmoud Karem Mahmoud 16 August 2017 (has links)
Adaptive modulation is widely employed to improve spectral efficiency. To date, square signal constellations have been used for adaptive modulation. In this dissertation, triangular constellations are considered for this purpose. Triangle quadrature amplitude modulation (TQAM) for both power-of-two and non-power-of-two modulation orders is examined. A technique for TQAM mapping is presented which is better than existing approaches. A new type of TQAM called semi-regular TQAM (S-TQAM) is introduced. Bit error rate expressions for TQAM are derived, and the detection complexity of S-TQAM is compared with that of regular TQAM (R-TQAM) and irregular TQAM (I-TQAM). The performance of S-TQAM over additive white Gaussian noise and Rayleigh fading channels is compared with that of R-TQAM and I-TQAM.
The construction of ternary convolutional codes (TCCs) for ternary phase shift keying (TPSK) modulation is considered. Tables of non-recursive non-systematic TCCs with maximum free distance are given for rates 1/2, 1/3 and 1/4. The conversion from binary data to ternary symbols is investigated. The performance of TCCs with binary-to-ternary conversion using TPSK is compared with that of the best binary convolutional codes (BCCs) using binary phase shift keying (BPSK). / Graduate
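As a rough illustration of the binary-to-ternary conversion step mentioned above, the sketch below maps 3-bit blocks to pairs of ternary symbols and modulates them onto TPSK phases. This is a generic, textbook-style block conversion under assumed block sizes, not necessarily the scheme studied in the dissertation.

```python
import numpy as np

def bits_to_ternary(bits):
    """Map each 3-bit block (values 0..7) to two ternary symbols (base-3 digits)."""
    assert len(bits) % 3 == 0
    out = []
    for i in range(0, len(bits), 3):
        v = 4 * bits[i] + 2 * bits[i + 1] + bits[i + 2]  # 0..7; base-3 value 8 stays unused
        out.extend([v // 3, v % 3])
    return np.array(out)

def tpsk_modulate(symbols):
    """Ternary phase shift keying: three equally spaced phases on the unit circle."""
    return np.exp(2j * np.pi * symbols / 3)

bits = np.random.randint(0, 2, 12)           # 12 information bits
tx = tpsk_modulate(bits_to_ternary(bits))    # 8 TPSK symbols
print(tx)
```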
|
85 |
Distance-preserving mappings and trellis codes with permutation sequences. Swart, Theo G. 27 June 2008 (has links)
Our research is focused on mapping binary sequences to permutation sequences. It is established that an upper bound on the sum of Hamming distances exists for all such mappings, and this sum is used as a criterion to ascertain how good previously known mappings are. We further make use of permutation trellis codes to investigate the performance of certain permutation mappings in a power-line communications system, where background noise, narrow-band noise and wide-band noise are present. A new multilevel construction is presented next that maps binary sequences to permutation sequences, creating new mappings for which the sum of Hamming distances is greater than for previously known mappings. It is also proved that for certain sequence lengths the new construction can attain our new upper bound on the sum of Hamming distances. We further extend the multilevel construction by showing how it can be applied to other mappings, such as permutations with repeating symbols and mappings with non-binary inputs. We also show that a subset of the new construction yields permutation sequences that are able to correct insertion and deletion errors as well. Finally, we show that long binary sequences, formed by concatenating the columns of binary permutation matrices, are subsets of the Levenshtein insertion/deletion correcting codes. / Prof. H. C. Ferreira
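For readers unfamiliar with the criterion mentioned above, the following is a small hedged sketch of how the sum of pairwise Hamming distances of a binary-to-permutation mapping can be computed; the toy mapping is an arbitrary illustration, not one of the constructions from the thesis.

```python
from itertools import product

def hamming(a, b):
    """Number of positions in which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical mapping from 2-bit words to permutations of (0, 1, 2); purely illustrative.
toy_map = {(0, 0): (0, 1, 2), (0, 1): (0, 2, 1), (1, 0): (1, 0, 2), (1, 1): (2, 1, 0)}

perms = [toy_map[word] for word in product((0, 1), repeat=2)]
total = sum(hamming(p, q) for i, p in enumerate(perms) for q in perms[i + 1:])
print("sum of pairwise Hamming distances:", total)  # compared against an upper bound
```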
|
86 |
Art Juridified: Legality in Contemporary Art Workings. January 2018 (has links)
abstract: Art and law have a troubled relationship that is defined by steep hierarchies placing art subject to law. But beyond the interplay of transgressions and regulations, manifest in a number of high-profile cases, there are more intricate connections between the two disciplines. By expanding the notion of law into the concept of a hybrid collectif of legality, the hierarchies flatten and unfamiliar forms of possible interactions emerge. Legality, the quality of something being legal, serves as a model to show the capricious workings of law outside of its own profession. New juridical actors—such as algorithms—already challenge traditional regulatory powers and art could assume a similar role. This thesis offers a point of departure for the involvement of art in shaping emergent legalities that transcend existent jurisdictions through computer code. / Dissertation/Thesis / Masters Thesis Art History 2018
|
87 |
Development of a portable gamma camera for accurate 3-D localization of radioactive hotspots / Développement d'une caméra gamma portable pour la localisation précise en trois dimensions de points chauds radioactifs. Paradiso, Vincenzo 31 March 2017 (has links)
This work aims to develop a coded-aperture gamma camera for estimating the three-dimensional (3D) position of radioactive sources. This is of considerable interest for a wide range of applications, from the reconstruction of the 3D shape of radioactive objects to augmented reality systems applied to radiation protection. Current portable gamma cameras only provide the relative angular position of the gamma sources to be located; that is, no metric information about the sources is available, such as their distance from the camera. In this thesis, we mainly propose two approaches for estimating the 3D position of the sources. The first approach consists in calibrating the gamma camera with a structured-light depth sensor. The second approach estimates the source-detector distance by means of stereoscopic gamma imaging. To geometrically align the images obtained by the gamma camera, the depth sensor, and the optical camera, a calibration procedure using only one radioactive point source was designed and implemented. Experimental results demonstrate that the proposed approaches achieve sub-pixel accuracy, both in the re-projection error and in the overlay of gamma and optical images. This work also presents a quantitative analysis of the accuracy and resolution of the estimated source-detector distance. Moreover, the results obtained validated the choice of the pinhole-model geometry for coded-aperture gamma cameras. / A coded aperture gamma camera for retrieving the three-dimensional (3-D) position of radioactive sources is presented. This is of considerable interest for a wide number of applications, ranging from the reconstruction of the 3-D shape of radioactive objects to augmented reality systems. Current portable γ-cameras only provide the relative angular position of the hotspots within their field of view; that is, they do not provide any metric information concerning the located sources. In this study, we propose two approaches to estimate the distance of the surrounding hotspots, and to autonomously determine whether they are occluded by an object. The first consists in combining and accurately calibrating the gamma camera with a structured-light depth sensor. The second approach allows the estimation of the source-detector distance by means of stereo gamma imaging. To geometrically align the images obtained by the gamma, depth, and optical cameras used, a versatile calibration procedure has been designed and carried out. This procedure uses a calibration phantom that is intentionally easy to build and inexpensive, allowing the procedure to be performed with only one radioactive point source. Experimental results showed that our calibration procedure yields sub-pixel accuracy both in the re-projection error and in the overlay of radiation and optical images. A quantitative analysis concerning the accuracy and resolution of the retrieved source-detector distance is also provided, along with an insight into the respective most influential factors. Moreover, the results obtained validated the choice of the geometry of the pinhole model for a coded aperture gamma camera.
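As a generic illustration of the pinhole-model geometry and the re-projection error referred to above (not the thesis's calibration code), the sketch below projects a hypothetical 3-D hotspot position through assumed camera intrinsics and compares it with a hypothetical detected pixel.

```python
import numpy as np

# Assumed intrinsic parameters (focal lengths in pixels, principal point) and an
# identity extrinsic pose; all values here are hypothetical.
K = np.array([[400.0, 0.0, 64.0],
              [0.0, 400.0, 64.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

def project(X):
    """Project a 3-D world point to pixel coordinates with the pinhole model."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

X_source = np.array([0.1, -0.05, 2.0])      # hypothetical hotspot position (metres)
detected = np.array([84.2, 53.9])           # hypothetical detection in the gamma image
error = np.linalg.norm(project(X_source) - detected)
print(f"re-projection error: {error:.2f} pixels")
```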
|
88 |
Optimal Network Coding Under Some Less-Restrictive Network Models. Chih-Hua Chang (10214267) 12 March 2021 (has links)
Network coding is a critical technique when designing next-generation network systems, since the use of network coding can significantly improve the throughput and performance (delay/reliability) of the system. In the traditional design paradigm without network coding, different information flows are transported like commodity flows: the flows are kept separate while being forwarded in the network. Network coding, however, allows nodes in the network not only to forward packets but also to process the incoming information messages with the goal of improving the throughput, reducing delay, or increasing reliability. Specifically, network coding is a critical tool when designing absolute Shannon-capacity-achieving schemes for various broadcasting and multicasting applications. In this thesis, we study optimal network coding schemes for some applications with less restrictive network models. A common component of the models/approaches is how to use network coding to take advantage of a broadcast communication channel.
In the first part of the thesis, we consider the system of one server transmitting K information flows, one for each of K users (destinations), through a broadcast packet erasure channel with ACK/NACK. The capacity region of 1-to-K broadcast packet erasure channels with ACK/NACK is known for some scenarios, e.g., K <= 3. However, existing achievability schemes with network coding either require knowing the target rate in advance, and/or have a complicated description of the achievable rate region for which it is difficult to prove whether it matches the capacity or not. In this part, we propose a new network coding protocol with the following features: (i) its achievable rate region is identical to the capacity region for all the scenarios in which the capacity is known; (ii) its achievable rate region is much more tractable and has been used to derive new capacity rate vectors; (iii) it employs sequential encoding that naturally handles dynamic packet arrivals; (iv) it automatically adapts to unknown packet arrival rates; (v) it is based on GF(q) with q >= K. Numerically, for K = 4, it admits an average control overhead of 1.1% (assuming each packet has 1000 bytes), an average encoding memory usage of 48.5 packets, and an average per-packet delay of 513.6 time slots when operating at 95% of the capacity.
In the second part, we focus on the coded caching system of one server and K users, where each user k has cache memory of size M_k and demands one file among the N files currently stored at the server. The coded caching system consists of two phases. Phase 1, the placement phase: each user accesses the N files and fills its cache memory during off-peak hours. Phase 2, the delivery phase: during peak hours, each user submits his/her own file request and the server broadcasts a set of packets simultaneously to the K users with the goal of successfully delivering the desired packets to each user. Due to the high complexity of the coded caching problem with heterogeneous file sizes and heterogeneous cache memory sizes for arbitrary N and K, prior works focus on solving the optimal worst-case rate with homogeneous file sizes, and mostly on designing order-optimal coded caching schemes with user-homogeneous file popularity that attain the lower bound within a constant factor. In this part, we derive the average rate capacity for the microscopic 2-user/2-file (N = K = 2) coded caching problem with heterogeneous file sizes, cache memory sizes, and user-dependent heterogeneous file popularity. The study will shed some further insight on the complexity and optimal scheme design of the general coded caching problem with full heterogeneity.
In the third part, we further study the coded caching system of one server, K = 2 users, and N >= 2 files, and focus on the user-dependent file popularity of the two users. In order to approach the exactly optimal uniform average rate of the system, we simplify the file demand popularity to binary outputs, i.e., each user either has no interest (with probability 0) or positive uniform interest (with a constant probability) in each of the N files. Under this model, the file popularity of each user is characterized by his/her file demand set of positive interest in the N files. Specifically, we analyze the case of two users (K = 2). We show exact capacity results for one overlapped file between the two file demand sets for arbitrary N, and for two overlapped files for N = 3. To investigate the performance with a larger number of overlapped files, we also present the average rate capacity under the constraint of selfish and uncoded prefetching, with explicit prefetching schemes that achieve those capacities. All the results allow for arbitrary (and not necessarily identical) users' cache capacities and numbers of files in each file demand set.
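The following toy sketch illustrates the basic coded-delivery idea in the N = K = 2 setting discussed above, using the well-known XOR of the two halves the users are missing. It is a generic illustration under uncoded, symmetric placement, not the heterogeneous capacity-achieving scheme derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two files A = (A1, A2) and B = (B1, B2), each split into two equal halves of 4 bytes.
A1, A2 = rng.integers(0, 256, 4, dtype=np.uint8), rng.integers(0, 256, 4, dtype=np.uint8)
B1, B2 = rng.integers(0, 256, 4, dtype=np.uint8), rng.integers(0, 256, 4, dtype=np.uint8)

# Placement phase (uncoded, symmetric): user 1 caches (A1, B1), user 2 caches (A2, B2).
# Delivery phase: user 1 requests file A, user 2 requests file B.
broadcast = A2 ^ B1                # one coded transmission serves both users at once

A2_at_user1 = broadcast ^ B1       # user 1 cancels its cached B1 to recover A2
B1_at_user2 = broadcast ^ A2       # user 2 cancels its cached A2 to recover B1
assert np.array_equal(A2_at_user1, A2) and np.array_equal(B1_at_user2, B1)
```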
|
89 |
Reduced and coded sensing methods for x-ray based security. Sun, Zachary Z. 05 November 2016 (has links)
Current x-ray technologies provide security personnel with non-invasive sub-surface imaging and contraband detection in various portal screening applications, such as the screening of checked and carry-on baggage as well as cargo. Computed tomography (CT) scanners generate detailed 3D imagery of checked bags; however, these scanners often require significant power, cost, and space. These tomography machines are impractical for many applications where space and power are limited, such as checkpoint areas. Reducing the amount of data acquired would help reduce the physical demands of these systems. Unfortunately, this leads to the formation of artifacts in various applications, thus presenting significant challenges in reconstruction and classification. As a result, the goal is to maintain a certain level of image quality while reducing the amount of data gathered. For the security domain, this would allow for faster and cheaper screening in existing systems, or allow for screening options that were previously infeasible due to other operational constraints. While our focus is predominantly on security applications, many of the techniques can be extended to other fields such as the medical domain, where a reduction of dose can allow for safer and more frequent examinations.
This dissertation aims to advance data reduction algorithms for security-motivated x-ray imaging in three main areas: (i) development of a sensing-aware dimensionality reduction framework, (ii) creation of a linear motion tomographic method of object scanning and associated reconstruction algorithms for carry-on baggage screening, and (iii) application of coded aperture techniques to improve and extend the imaging performance of nuclear resonance fluorescence in cargo screening. The sensing-aware dimensionality reduction framework extends existing dimensionality reduction methods to include knowledge of the underlying sensing mechanism of a latent variable. This method provides an improved classification rate over classical methods on both a synthetic case and a popular face classification dataset. The linear tomographic method is based on non-rotational scanning of baggage moved by a conveyor belt, and can thus be simpler, smaller, and more reliable than existing rotational tomography systems, at the expense of more challenging image formation problems that require special model-based methods. The reconstructions from this approach are comparable to those of existing tomographic systems. Finally, our coded aperture extension of existing nuclear resonance fluorescence cargo scanning provides improved observation signal-to-noise ratios. We analyze, discuss, and demonstrate the strengths and challenges of using coded aperture techniques in this application and provide guidance on regimes where these methods can yield gains over conventional methods.
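As a heavily simplified, hypothetical sketch of the coded/reduced measurement idea touched on above, the code below observes a signal through a random binary coding matrix with fewer measurements than unknowns and recovers it by regularized least squares. The matrix, dimensions, and noise level are illustrative assumptions, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 32, 24                              # unknowns vs. (reduced) number of measurements
x_true = rng.random(n)                     # stand-in for attenuation/fluorescence values

# Random open/closed coding pattern playing the role of an idealized mask or view set.
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x_true + 0.01 * rng.normal(size=m)  # noisy coded observations

lam = 0.1                                   # Tikhonov regularization weight (assumption)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```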
|
90 |
Generalization of Signal Point Target Code. Billah, Md Munibun 01 August 2019 (has links)
Detecting and correcting errors that occur when data is transmitted through a channel is a task of great importance in digital communication. In Error Correction Coding (ECC), some redundant data is added to the original data before transmission. By exploiting the properties of the redundant data, the errors introduced during transmission can be detected and corrected. In this thesis, a new coding algorithm named Signal Point Target Code has been studied and various properties of the proposed code have been extended.
Signal Point Target Code (SPTC) uses a predefined shape within a given signal constellation to generate a parity symbol. In this thesis, the relation between the employed shape and the performance of the proposed code has been studied, and an extension of the SPTC is presented.
This research presents simulation results to compare the performances of the proposed codes. The results have been simulated using different programming languages, and a comparison between those programming languages is provided. The performance of the codes is analyzed and possible future research areas are indicated.
|