1 |
Model-based coding of images. Welsh, Bill, January 1991 (has links)
No description available.
|
2 |
Asymmetric encryption for wiretap channels. Al-Hassan, Salah Yousif Radhi, January 2015 (has links)
Since Wyner defined the wiretap channel in 1975, there has been much research into the communication security of this channel. This thesis presents further investigations into the wiretap channel which improve the reliability of communication security. The main results include the construction of best known equivocation codes, which increase the ambiguity of the wiretap channel using different techniques based on syndrome coding. Best known codes (BKCs) have been investigated, and two new design models, each comprising an inner code and an outer code, have been implemented. It is shown that the best results are obtained when the outer code employs a syndrome coding scheme based on the (23, 12, 7) binary Golay code and the inner code employs the McEliece cryptosystem technique based on BKCs. Three techniques for constructing best known equivocation codes (BEqCs) for the syndrome coding scheme are presented. Firstly, a code design technique is presented that produces new BEqCs with better secrecy than the best error-correcting codes. Code examples (some 50 codes) are given for the case where the number of parity bits of the code is equal to 15. Secondly, a new code design technique is presented, based on producing a new BEqC by adding two best columns to the parity-check matrix (H) of a good [n, k] BEqC. The highest minimum Hamming distance of a linear code is an important parameter, indicating the code's capability to detect and correct errors. In general, BEqCs have a respectable minimum Hamming distance, but are sometimes not as good as the best known codes with the same code parameters. This observation led to a third code design technique, which produces a BEqC with the highest minimum Hamming distance for syndrome coding and better secrecy than the corresponding BKC.
As many as 207 new best known equivocation codes with the highest minimum distance have been found so far using this design technique.
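The syndrome coding construction at the heart of these schemes can be sketched in a few lines. The sketch below is a simplification that uses the small (7, 4) Hamming code in place of the (23, 12, 7) Golay code from the thesis: the secret bits are embedded as a syndrome, and the transmitted word is drawn uniformly at random from the matching coset. That randomness is what produces equivocation at the eavesdropper.

```python
import random

# Parity-check matrix of the (7, 4) Hamming code (an illustrative stand-in
# for the (23, 12, 7) Golay code used in the thesis).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    """Syndrome s = H * word^T over GF(2)."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

def encode_secret(secret_bits):
    """Syndrome coding: draw a uniformly random word from the coset whose
    syndrome equals the secret (rejection sampling for clarity)."""
    while True:
        word = [random.randint(0, 1) for _ in range(7)]
        if syndrome(word) == tuple(secret_bits):
            return word

def decode_secret(word):
    """The legitimate receiver recovers the secret by recomputing the syndrome."""
    return syndrome(word)

secret = (1, 0, 1)
tx = encode_secret(secret)
assert decode_secret(tx) == secret
```

An eavesdropper observing the word through a noisy channel remains uncertain about which coset was transmitted; the equivocation of a given code quantifies exactly that residual uncertainty.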
|
3 |
Internalized Weight Bias and its Association with Short-Term Weight Loss Outcomes in Adults Utilizing an Online Weight Loss Platform. January 2015 (has links)
abstract: Multiple factors not only play a role in an individual's ability to lose weight, but may also create barriers to success. One such factor is internalized weight bias (IWB), which is inversely associated with weight loss outcomes and body satisfaction, and directly associated with psychosocial maladjustments such as depression and binge eating. This study examined the relationship between internalized weight bias and weight loss outcomes using a coding scheme developed for an online weight loss forum, to see whether the results would be consistent with self-administered surveys that measure IWB. The coding scheme was developed using an exploratory factor analysis of a survey composed of existing measures of IWB. Participants' posts within an online weight loss forum were coded, and each participant was given a weekly IWB score that was compared to weekly weight loss using mixed model analysis. No significant association was found between IWB and weight loss outcomes in this study; however, the coding scheme developed is a novel approach to measuring IWB, and the categories identified from latent constructs of IWB may be used in the future to determine the dimensions that exist within it. Ultimately, a better understanding of IWB could lead to the development of targeted weight loss interventions that address the beliefs and attitudes held by individuals who experience it. / Dissertation/Thesis / Masters Thesis Nutrition 2015
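The scoring step of such a coding scheme reduces to a simple aggregation: average the coded scores of a participant's posts within each week. A hypothetical sketch (participant IDs, weeks, and scores are invented for illustration; the thesis's actual categories come from its factor analysis):

```python
from collections import defaultdict

# Hypothetical coded posts: (participant_id, week, post IWB score).
coded_posts = [
    ("p1", 1, 3.0), ("p1", 1, 4.0), ("p1", 2, 2.0),
    ("p2", 1, 1.0), ("p2", 2, 5.0), ("p2", 2, 3.0),
]

# Group post scores by (participant, week).
weekly = defaultdict(list)
for pid, week, score in coded_posts:
    weekly[(pid, week)].append(score)

# Weekly IWB score = mean of that week's coded post scores.
weekly_iwb = {key: sum(v) / len(v) for key, v in weekly.items()}
print(weekly_iwb[("p1", 1)])  # -> 3.5
```

The resulting per-week scores are what a mixed model would then relate to weekly weight change.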
|
4 |
Role of Students’ Participation on Learning Physics in Active Learning Classes. Nainabasti, Binod, 10 October 2016 (has links)
Students’ interactions can be an influential component of their success in an interactive learning environment. From a participation perspective, learning is viewed in terms of how students transform their participation. However, many of the seminal papers discussing the participationist framework are vague on what student participation really looks like at a fine-grained scale. As part of a large project to understand the role of student participation in learning, this study gathered data that quantified students’ participation in three broad areas of two student-centered introductory calculus-based physics classes structured around the Investigative Science Learning Environment (ISLE) philosophy. These three areas were in-class learning activities, class review sessions held at the beginning of every class, and the informal learning community that formed outside of class time.
Using video data, classroom observations, and students’ self-reported social network data, this study quantified students’ participation in these three aspects of the class throughout two semesters. The relationships between students’ engagement behaviors in various settings of the active learning environment and (a) their conceptual understanding (measured by gains on the Force Concept Inventory, FCI) and (b) their academic success in the courses (measured by exam scores and scores on out-of-class assignments) were investigated. The results from the analysis of student interaction in the learning process show that the three class components, viz. the review sessions, the learning activities, and the informal learning community, play distinct roles in learning. Students who come into class with better content knowledge do not necessarily participate more in the learning activities of active learning classrooms. Learning communities serve as a “support network” that helps students finish assignments and pass the course. Group discussions, which are facilitated by students themselves, are more helpful for gaining conceptual understanding. Since patterns of students’ participation do not change significantly over time, instructors should try to ensure greater participation by incorporating different learning activities into the active learning classroom.
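The abstract does not spell out how “FCI gain” is computed; the standard measure in physics education research is Hake’s normalized gain, sketched below. The 30-item FCI maximum is the usual convention, assumed here rather than stated in the thesis.

```python
def normalized_gain(pre, post, max_score=30):
    """Hake's normalized gain g = (post - pre) / (max - pre): the fraction of
    the available improvement a student actually achieved. The FCI has 30
    items, hence the default maximum (an assumption, not from the thesis)."""
    return (post - pre) / (max_score - pre)

# A student moving from 10/30 to 20/30 captures half the possible gain.
print(normalized_gain(10, 20))  # -> 0.5
```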
|
5 |
A Methodology for Developing a Nursing Education Minimum Dataset. Rajab, Aziza A, 10 November 2005 (has links)
Globally, health care professionals, administrators, educators, researchers, and informatics experts have found that minimum datasets and taxonomies can solve the problem of data standardization required in building an information system to advance a discipline's body of knowledge. Disciplines continuously gather complex data, but data collected without an organizational context do not increase the knowledge base. Therefore, a demand exists for developing minimum datasets, controlled vocabularies, taxonomies, and classification systems. To fulfill nursing's needs for standardized, comparable data, two minimum datasets are already used in nursing for organizing, classifying, processing, and managing information for decision-making and for advancing clinical nursing knowledge.
No minimum dataset in nursing education currently exists. With common definitions and a taxonomy of nomenclature related to nursing education, research findings on similar topics could be aggregated across studies and settings to observe overall patterns. Understanding patterns will allow educators, researchers, and administrators to interpret and compare findings, facilitate evidence-based changes, and draw significant conclusions about nursing education programs, schools, and educational experiences.
This study proposes a generic methodology for building a Nursing Education Minimum Dataset (NEMDS) by exploring experiences of developing various minimum datasets. The study adapted the systems model as the conceptual framework for building the taxonomy and classification system of nursing education's essential data elements, to guide the analysis of structure, process, and outcome in nursing education. The study suggested using focus groups, an online Delphi survey, and the statistical techniques of multidimensional scaling and kappa. The study presented these steps: identifying educational concepts and data elements; defining data elements as nursing education terminologies; building the taxonomy; conducting an empirical and theoretical validation; and disseminating and aggregating the data in a national dataset.
The proposed methodology for building an NEMDS meets the criteria of a nursing education dataset that is mutually exclusive, exhaustive, and consistent with the concepts that help nursing educators and researchers describe, explain, and predict outcomes in the discipline of nursing education. It can help transform simple information into meaningful knowledge that can be used and compared by school, state, or country to advance nursing education research and practice nationally and internationally.
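Of the statistical techniques the study suggests, kappa (inter-rater agreement corrected for chance) is easy to state concretely. A minimal two-rater sketch of Cohen's kappa, with invented ratings:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's label rates."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two raters agree on 3 of 4 items.
a = ["yes", "yes", "no", "no"]
b = ["yes", "no", "no", "no"]
print(cohens_kappa(a, b))  # -> 0.5
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance, which is the check a taxonomy validation step would rely on.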
|
6 |
Beyond the turning point of activation: Describing the characteristics and changes of Active Labour Market Policy in Sweden between 1991 and 2017. Assadi, Sam, January 2018 (has links)
This paper contributes to the collective knowledge on Bonoli’s (2010) concept of “the Activation Turn”, both as a phenomenon and as a turning point for active labour market policy (ALMP) in Sweden. It is argued that the Activation Turn was legitimized in four phases in Sweden (the Identification, First organisation, Second organisation, and Stabilisation phases, between 1991 and 2017). This thesis argues that the blueprint for understanding and exploring the Activation Turn as a phenomenon is to capture and compare the discourse within the state and how it developed during these four phases. Two research questions guide the study: how can we describe the characteristics of ALMP during each phase, and how has ALMP changed in Sweden since the beginning of the 1990s? The paper answers these questions through a content analysis that captures the dominant characteristics of ALMP during each phase and how they have changed. The analysis was carried out with the help of a coding scheme derived from a theoretical framework on the three elements of institutional legitimacy: the regulative, normative, and cognitive elements. After counting the number of coded references from 38 state documents, and then analysing and discussing the results, we reached two overall conclusions. First, there has been no Activation Turn, shift, or transformation of ALMP within the state discourse since the beginning of the 1990s. Second, the development of ALMP in Sweden can be characterized as fairly stable and resilient to change.
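The counting step of such a content analysis is straightforward to reproduce. A hypothetical sketch (the phase and element labels follow the abstract; the coded references themselves are invented):

```python
from collections import Counter

# Hypothetical coded references: (phase, legitimacy element) pairs extracted
# from state documents. The real study coded 38 documents against
# regulative/normative/cognitive codes across four phases.
coded = [
    ("Identification", "regulative"), ("Identification", "normative"),
    ("First organisation", "regulative"), ("First organisation", "regulative"),
    ("Second organisation", "normative"), ("Stabilisation", "cognitive"),
]

counts = Counter(coded)
print(counts[("First organisation", "regulative")])  # -> 2
```

Comparing such counts across phases is what supports claims about stability or change in the dominant characteristics of ALMP.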
|
7 |
Counterfactual and Causal Analysis for AI-based Modulation and Coding Scheme Selection. Hao, Kun, January 2023 (links)
Artificial Intelligence (AI) has emerged as a transformative force in wireless communications, driving innovation to address the complex challenges faced by communication systems. In this context, the optimization of limited radio resources plays a crucial role, and one important aspect is Modulation and Coding Scheme (MCS) selection. AI solutions for MCS selection have predominantly been black-box models, which offer limited explainability and consequently hinder trust in these algorithms. Moreover, the majority of existing research emphasizes enhancing explainability without concurrently improving the model’s performance, making performance and explainability a trade-off. This work aims to address these issues by employing eXplainable AI (XAI), particularly counterfactual and causal analysis, to increase the explainability and trustworthiness of black-box models. We propose CounterFactual Retrain (CF-Retrain), the first method that utilizes counterfactual explanations to improve model performance and make the process of performance enhancement more explainable. Additionally, we conduct a causal analysis and compare the results with those obtained from an analysis based on SHapley Additive exPlanations (SHAP) feature importance. This comparison leads to novel hypotheses and insights for model optimization in future research. Our results show that employing CF-Retrain can reduce the Mean Absolute Error (MAE) of the black-box model by 4% while utilizing only 14% of the training data. Moreover, increasing the amount of training data yields even more pronounced improvements in MAE, while providing a certain level of explainability. This performance enhancement is comparable or even superior to using a more complex model. Furthermore, by introducing causal analysis alongside the mainstream SHAP feature importance, we provide a novel hypothesis and explanation of feature importance based on causal analysis.
This approach can serve as an evaluation criterion for assessing the model’s performance.
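The headline metric here, MAE, is simple to state. A minimal sketch (the numbers are illustrative, not the thesis’s data):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: the mean of the absolute prediction errors |y_i - yhat_i|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

baseline_mae = mean_absolute_error([1.0, 2.0, 3.0], [1.5, 2.5, 2.0])
# A 4% relative reduction, as reported for CF-Retrain, would correspond to:
improved_mae = baseline_mae * 0.96
print(baseline_mae, improved_mae)
```

Reading the reported result this way, CF-Retrain's contribution is that the 4% reduction is achieved while retraining on only 14% of the data, selected via counterfactual explanations.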
|
8 |
M2M/IoT satellite system for the hybridization of an NB-IoT network via a LEO constellation. Cluzel, Sylvain, 07 March 2019 (has links)
The aim of this thesis is to study the implementation of Internet of Things (IoT) and Machine to Machine (M2M) services over a satellite link. This type of system must deal with two issues. First, at the physical layer, the constraints related to the terminal (limited in power, energy, and antenna size), the channel (potentially with masking and multipath), and the space segment require the implementation of various techniques (interleaving, interference cancellation, etc.) to ensure a link budget adequate for the service. Second, the need to give a large number of low-throughput terminals access to the resource calls for optimized contention-based access techniques, as well as consideration of energy-saving issues at the access layer. This access layer must also be able to interface with larger network architectures: for example, Internet architectures supporting IP services for the IoT, with intermittent services such as those found in DTN networks, or 4G/5G architectures for mobile services. This thesis investigates two innovative system approaches, together with physical-layer and access-layer techniques (potentially coupled) enabling their implementation.
The first scenario involves the use of a very low throughput satellite relay terminal (unlike the conventional case in the literature, which relies on broadband terminals), interfacing with sensors using terrestrial access technologies. Innovative resource-management and energy-saving techniques, implemented through a dedicated (non-DVB) access layer, could support the very large number of terminals in this type of system. The second scenario is based on direct communication with sensors/objects via a satellite constellation. This approach raises the questions of waveform efficiency for extremely sporadic services and of communication reliability. There is substantial work at DLR on this type of waveform, notably the definition of S-MIM; however, this solution appears complex, and many optimizations could be made. On the access side, E-SSA (asynchronous spread-spectrum communication with SIC), defined by ESA, is also an interesting direction, even if its system-level implementation and its complexity need to be consolidated.
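The link-budget constraint mentioned above can be made concrete with a free-space path-loss sketch. The altitude, frequency, and gain values below are illustrative, not values from the thesis:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(eirp_dbm, rx_gain_dbi, distance_m, freq_hz, margin_db=0.0):
    """Toy link budget: Prx = EIRP + Grx - FSPL - margin."""
    return eirp_dbm + rx_gain_dbi - fspl_db(distance_m, freq_hz) - margin_db

# Illustrative LEO pass: 600 km slant range at 2 GHz.
print(round(fspl_db(600e3, 2e9), 1))  # -> 154.0
```

The ~154 dB of path loss illustrates why a power-limited IoT terminal needs the interleaving, interference-cancellation, and access-layer techniques the thesis studies.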
|
9 |
Lattice Codes for Secure Communication and Secret Key Generation. Vatedka, Shashank, January 2017 (has links) (PDF)
In this work, we study two problems in information-theoretic security. Firstly, we study a wireless network where two nodes want to securely exchange messages via an honest-but-curious bidirectional relay. There is no direct link between the user nodes, and all communication must take place through the relay. The relay behaves like a passive eavesdropper, but otherwise follows the protocol it is assigned. Our objective is to design a scheme where the user nodes can reliably exchange messages such that the relay gets no information about the individual messages. We first describe a perfectly secure scheme using nested lattices, and show that our scheme achieves secrecy regardless of the distribution of the additive noise, and even if this distribution is unknown to the user nodes. Our scheme is explicit, in the sense that for any pair of nested lattices, we give the distribution used for randomization at the encoders to guarantee security. We then give a strongly secure lattice coding scheme, and we characterize the performance of both these schemes in the presence of Gaussian noise. We then extend our perfectly-secure and strongly-secure schemes to obtain a protocol that guarantees end-to-end secrecy in a multihop line network. We also briefly study the robustness of our bidirectional relaying schemes to channel imperfections.
In the second problem, we consider the scenario where multiple terminals have access to private correlated Gaussian sources and a public noiseless communication channel. The objective is to generate a group secret key using their sources and public communication in a way that an eavesdropper having access to the public communication can obtain no information about the key. We give a nested lattice-based protocol for generating strongly secure secret keys from independent and identically distributed copies of the correlated random variables. Under certain assumptions on the joint distribution of the sources, we derive achievable secret key rates.
The tools used in designing protocols for both these problems are nested lattice codes, which have been widely used in several problems of communication and security. In this thesis, we also study lattice constructions that permit polynomial-time encoding and decoding. In this regard, we first look at a class of lattices obtained from low-density parity-check (LDPC) codes, called Low-density Construction-A (LDA) lattices. We show that high-dimensional LDA lattices have several “goodness” properties that are desirable in many problems of communication and security. We also present a new class of low-complexity lattice coding schemes that achieve the capacity of the AWGN channel. Codes in this class are obtained by concatenating an inner Construction-A lattice code with an outer Reed-Solomon code or an expander code. We show that this class of codes can achieve the capacity of the AWGN channel with polynomial encoding and decoding complexities. Furthermore, the probability of error decays exponentially in the block length for a fixed transmission rate R that is strictly less than the capacity. To the best of our knowledge, this is the first capacity-achieving coding scheme for the AWGN channel which has an exponentially decaying probability of error and polynomial encoding/decoding complexities.
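The secrecy mechanism of the bidirectional relay scheme can be illustrated with a toy modular-arithmetic analogue. This is a deliberate simplification: the thesis works with nested lattices over Gaussian channels, not integers mod q, but the one-time-pad argument at the core carries over.

```python
q = 16  # toy modulus standing in for a nested-lattice codebook

# Users A and B each hold a message; there is no direct link between them.
m_a, m_b = 5, 9

# The relay decodes only the modular sum of the two transmissions
# (the compute-and-forward abstraction of the Gaussian multiple-access
# channel) and broadcasts it back.
relay_broadcast = (m_a + m_b) % q

# Each user subtracts its own message from the broadcast to recover the other's.
recovered_by_a = (relay_broadcast - m_a) % q
recovered_by_b = (relay_broadcast - m_b) % q
assert recovered_by_a == m_b and recovered_by_b == m_a

# Secrecy intuition: if m_b is uniform over Z_q, then (m_a + m_b) mod q is
# itself uniform and independent of m_a -- the one-time-pad argument that
# the nested-lattice scheme lifts to noisy Gaussian channels.
```

In the actual scheme, randomized dithering at the encoders plays the role of the uniform message above, which is why secrecy holds regardless of the noise distribution.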
|
10 |
Performance Analysis of Opportunistic Selection and Rate Adaptation in Time Varying Channels. Kona, Rupesh Kumar, January 2016 (has links) (PDF)
Opportunistic selection and rate adaptation play a vital role in improving the spectral and power efficiency of current multi-node wireless systems. However, time-variations in wireless channels affect the performance of opportunistic selection and rate adaptation in the following ways. Firstly, the selected node can become sub-optimal by the time data transmission commences. Secondly, the choice of transmission parameters such as rate and power for the selected node becomes sub-optimal. Lastly, the channel changes during data transmission.
In this thesis, we develop a comprehensive and tractable analytical framework that accurately accounts for these effects. It differs from the extensive existing literature that primarily focuses on time-variations until the data transmission starts. Firstly, we develop a novel concept of a time-invariant effective signal-to-noise ratio (TIESNR), which tractably and accurately captures the time-variations during the data transmission phase with partial channel state information available at the receiver. Secondly, we model the joint distribution of the signal-to-noise ratio at the time of selection and TIESNR during the data transmission using generalized bivariate gamma distribution.
The above analytical steps facilitate the analysis of the outage probability and average packet error rate (PER) for a given modulation and coding scheme, and of the average throughput with rate adaptation. We also present extensive numerical results to verify the accuracy of each step of our approach, and show that ignoring the correlated time variations during the data transmission phase can significantly underestimate the outage probability and average PER, while overestimating the average throughput, even for packet durations as low as 1 msec.
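The first effect, selection going stale between the selection instant and data transmission, can be illustrated with a small Monte Carlo sketch under a first-order Gauss-Markov fading model. This is a toy simulation, not the thesis's analytical TIESNR framework; the node count, correlation values, and threshold are illustrative.

```python
import math
import random

random.seed(0)

def outage_prob(n_nodes, rho, snr_th, trials=20000):
    """Select the best of n_nodes Rayleigh channels at time 0, let the
    selected node's channel evolve as h' = rho*h + sqrt(1-rho^2)*w before
    data transmission, and count how often its gain drops below snr_th."""
    outages = 0
    for _ in range(trials):
        # unit-variance complex Gaussian channels at selection time
        h = [complex(random.gauss(0, math.sqrt(0.5)),
                     random.gauss(0, math.sqrt(0.5))) for _ in range(n_nodes)]
        best = max(range(n_nodes), key=lambda i: abs(h[i]) ** 2)
        # channel of the selected node at data-transmission time
        w = complex(random.gauss(0, math.sqrt(0.5)),
                    random.gauss(0, math.sqrt(0.5)))
        h_later = rho * h[best] + math.sqrt(1 - rho ** 2) * w
        if abs(h_later) ** 2 < snr_th:
            outages += 1
    return outages / trials

# Perfectly correlated (rho=1) vs. heavily decorrelated channels: the stale
# selection suffers far more outage.
print(outage_prob(4, 1.0, 0.5), outage_prob(4, 0.5, 0.5))
```

Even this crude model reproduces the paper's qualitative message: analyses that freeze the channel at the selection instant (rho = 1) substantially underestimate the outage probability seen under realistic time variation.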
|