941. Students' Experiences During Democratic Activities at a Canadian Free School: A Case Study
Prud'homme, Marc-Alexandre, 09 February 2011
While the challenge of improving young North Americans' civic engagement seems to lie in the hands of schools, studying alternative ways of teaching citizenship education could benefit the current educational system. In this context, free schools (i.e., schools run democratically by students and teachers), guided by a philosophy that aims at engaging students civically through the democratic activities they support, offer relatively unexplored ground for research. The present inquiry is a case study using tools of ethnography and drawing upon principles of complexity thinking. It aims at understanding students' citizenship-education experiences during democratic activities in a Canadian free school, and it describes many experiences that can arise from these activities. These experiences occurred within a school that operated democratically on a consensus model. More precisely, they took place during two kinds of democratic activities: class meetings, which regulated the social life of the school, and judicial committees, whose function was to resolve conflicts at the school. During these activities, students mostly experienced a combination of feelings of appreciation, concern, and empowerment. While experiencing these feelings, they predominantly engaged in decision-making and conflict-resolution processes, through which they modified their conflict-resolution skills, various conceptions, and their participation in democratic activities and in the school. Based on these findings, the study concludes that students can develop certain skills and attitudes associated with citizenship education during these activities and become active from a citizenship perspective. Hence, these democratic activities represent alternative strategies that can assist educators in teaching citizenship.
942. On Peer Networks and Group Formation
Ballester Pla, Coralio, 23 June 2005
The aim of this thesis is to contribute to the analysis of the interaction of agents in social networks and groups.

In the chapter "NP-completeness in Hedonic Games", we identify some significant limitations of standard models of cooperative games: it is often impossible to achieve a stable organization of a society in a reasonable amount of time. The main implications of these results are the following. First, from a positive point of view, societies are bound to evolve permanently rather than reach a steady-state configuration in the short run. Second, from a normative perspective, a planner should take practical time limitations into account when implementing a stable social order. To obtain our results we use the notion of NP-completeness, a well-established model of time complexity in computer science. In particular, we concentrate on group stability and individual stability in hedonic games, a simple class of cooperative games in which each individual's utility is entirely determined by the group she belongs to. Our complexity results, phrased in terms of NP-completeness, cover a wide spectrum of preference domains, including strict preferences, indifferences in preferences, and undemanding preferences over group sizes. They also hold if the maximum size of groups is restricted to be very small (two or three players).

The remaining two chapters deal with the interaction of agents in a social setting: games in which each player's actions generate consequences that spread to all other players through a complex pattern of bilateral influences. In "Who is Who in Networks. Wanted: The Key Player" (joint with Antoni Calvó-Armengol and Yves Zenou), we analyze a model of peer effects in which agents interact in a game of bilateral influences. Finite-population non-cooperative games with linear-quadratic utilities, in which each player decides how much effort to exert, can be interpreted as network games with local payoff complementarities, together with a globally uniform payoff-substitutability component and an own-concavity effect. For these games, the Nash equilibrium action of each player is proportional to her Bonacich centrality in the network of local complementarities, thus establishing a bridge to the sociology literature on social networks. This Bonacich-Nash linkage implies that aggregate equilibrium activity increases with network size and density. We then analyze a policy that consists in targeting the key player: the player who, once removed from the game, induces the optimal change in aggregate activity. We provide a geometric characterization of the key player, identified with an inter-centrality measure that takes into account both a player's centrality and her contribution to the centrality of the others.

Finally, in the last chapter, "Optimal Targets in Peer Networks" (joint with Antoni Calvó-Armengol and Yves Zenou), we analyze the previous model in depth and study the properties and practical applicability of network-design policies. In particular, we extend the notion of the key player to that of the key group: the set of players that a budget-constrained planner should optimally remove or neutralize in order to maximally reduce aggregate activity. We also treat the key-player problem in dynamic networks, where individuals first decide whether to become active in the network (for instance, as criminals) or to earn a fixed market wage, so that removing one or several players triggers chain effects that the policy must anticipate. We show that the key-group problem is computationally hard, and we exploit the submodularity of group inter-centrality to bound the relative error of a simple greedy approximation algorithm. The greedy heuristic extends to other related problems, such as the removal or addition of links in the network.
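For intuition, the Bonacich-Nash machinery above can be sketched numerically. The inter-centrality formula d_i = b_i^2 / M_ii used below follows the closed form in the published key-player characterization (Ballester, Calvó-Armengol and Zenou, 2006); the toy star network is purely illustrative:

```python
import numpy as np

def bonacich(G, phi):
    """Bonacich centrality b = (I - phi*G)^(-1) 1; needs phi < 1/lambda_max(G)."""
    M = np.linalg.inv(np.eye(len(G)) - phi * G)
    return M @ np.ones(len(G)), M

def key_player(G, phi):
    """Inter-centrality d_i = b_i^2 / M_ii; the key player maximizes d."""
    b, M = bonacich(G, phi)
    d = b ** 2 / np.diag(M)
    return int(np.argmax(d)), d

# Toy star network: node 0 is a hub linked to four peripheral nodes.
G = np.zeros((5, 5))
G[0, 1:] = G[1:, 0] = 1
idx, d = key_player(G, phi=0.2)
print(idx, np.round(d, 3))  # the hub is the key player
```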
943. HW/SW mechanisms for instruction fusion, issue and commit in modern u-processors
Deb, Abhishek, 03 May 2012
In this thesis we have explored the co-designed paradigm to show alternative processor design points. Specifically, we have provided HW/SW mechanisms for instruction fusion, issue and commit in modern processors. We have implemented a co-designed virtual machine monitor that binary-translates x86 instructions into RISC-like micro-ops. The translations are stored as superblocks, each a trace of basic blocks, and these superblocks are further optimized using speculative and non-speculative optimizations. Hardware mechanisms exist to take corrective action in case of misspeculation. During the course of this PhD we have made the following contributions.
Firstly, we have provided a novel Programmable Functional Unit (PFU) to speed up general-purpose applications. The PFU consists of a grid of functional units, similar to the CCA, and a distributed internal register file. The inputs of a macro-op are brought from the physical register file to the internal register file using a set of moves and a set of loads. A macro-op fusion algorithm fuses micro-ops at runtime; it is based on a scheduling step that indicates whether the current fused instruction is beneficial or not. The micro-ops corresponding to a macro-op are stored as control signals in a configuration, and each macro-op carries a configuration ID that helps locate its configuration. A small configuration cache inside the PFU holds these configurations; on a miss, configurations are loaded from the I-cache. Moreover, to support bulk commit of atomic superblocks that are larger than the ROB, we have proposed a speculative commit mechanism: a speculative commit register map table holds the mappings of the speculatively committed instructions, and when all the instructions of the superblock have committed, the speculative state is copied to the backend register rename table.
Secondly, we have proposed a co-designed in-order processor with two kinds of FU-based accelerators, each of which runs a pair of fused instructions. We have considered two kinds of instruction fusion. First, we fuse pairs of independent loads into vector loads and execute them on vector load units. Second, we fuse pairs of dependent simple ALU instructions and execute them on Interlock Collapsing ALUs (ICALUs). Moreover, we have evaluated the performance of various code optimizations such as list scheduling, load-store telescoping and load hoisting, among others, and we have compared our co-designed processor with small-instruction-window out-of-order processors.
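To make the second kind of fusion concrete, here is a minimal sketch of pairing a simple ALU micro-op with its first dependent consumer so the pair could issue to an ICALU. This is an illustrative greedy pass, not the thesis's actual fusion algorithm, and the micro-op encoding is hypothetical:

```python
from dataclasses import dataclass

SIMPLE_ALU = {"add", "sub", "and", "or", "xor"}

@dataclass
class MicroOp:
    op: str
    dst: str
    srcs: tuple

def fuse_dependent_pairs(trace):
    """Greedily pair each simple ALU op with the first micro-op that reads its
    result; fuse only if that consumer is also a simple ALU op. (A real pass
    must still write the producer's result if it has later readers.)"""
    fused, used = [], set()
    for i, prod in enumerate(trace):
        if i in used or prod.op not in SIMPLE_ALU:
            continue
        for j in range(i + 1, len(trace)):
            cons = trace[j]
            if prod.dst in cons.srcs:            # first reader of prod's result
                if j not in used and cons.op in SIMPLE_ALU:
                    fused.append((prod, cons))   # candidate ICALU pair
                    used.update((i, j))
                break
            if cons.dst == prod.dst:
                break                            # result overwritten before use
    return fused

trace = [MicroOp("add", "r1", ("r2", "r3")),
         MicroOp("sub", "r4", ("r1", "r5")),     # depends on the add: fused
         MicroOp("xor", "r6", ("r7", "r8"))]
print(fuse_dependent_pairs(trace))
```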
Thirdly, we have proposed a co-designed out-of-order processor in which we reduce complexity in two areas. First, we have co-designed the commit mechanism to enable bulk commit of atomic superblocks. In this solution we get rid of the conventional ROB and instead introduce the Superblock Ordering Buffer (SOB), which ensures that program order is maintained at the granularity of the superblock by bulk-committing the program state. The program state consists of the register state and the memory state: the register state is held in a per-superblock register map table, whereas the memory state is held in a gated store buffer and updated in bulk. Second, we have tackled the complexity of the out-of-order issue logic by using FIFOs. We have proposed an enhanced steering heuristic that fixes the inefficiencies of the existing dependence-based heuristic, together with a mechanism that releases FIFO entries earlier and further improves the performance of the steering heuristic.
944. A Parameterized Algorithm for Upward Planarity Testing of Biconnected Graphs
Chan, Hubert, January 2003
We can visualize a graph by producing a geometric representation of the graph in which each node is represented by a single point on the plane, and each edge is represented by a curve that connects its two endpoints.
Directed graphs are often used to model hierarchical structures; in order to visualize the hierarchy represented by such a graph, it is desirable that a drawing of the graph reflects this hierarchy. This can be achieved by drawing all the edges in the graph so that they point in an upward direction. A graph that has a drawing in which all edges point in an upward direction and in which no edges cross is known as an upward planar graph. Unfortunately, testing whether a graph is upward planar is NP-complete.
Parameterized complexity is a technique used to find efficient algorithms for hard problems, and in particular, NP-complete problems. The main idea is that the complexity of an algorithm can be constrained, for the most part, to a parameter that describes some aspect of the problem. If the parameter is fixed, the algorithm will run in polynomial time.
In this thesis, we investigate contracting an edge in an upward planar graph that has a specified embedding, and show that we can determine whether or not the resulting embedding is upward planar given the orientation of the clockwise and counterclockwise neighbours of the given edge. Using this result, we then show that under certain conditions, we can join two upward planar graphs at a vertex and obtain a new upward planar graph. These two results expand on work done by Hutton and Lubiw.
Finally, we show that a biconnected graph has at most k!·8^(k-1) planar embeddings, where k is the number of triconnected components. By using an algorithm by Bertolazzi et al. that tests whether a given embedding is upward planar, we obtain a parameterized algorithm, where the parameter is the number of triconnected components, for testing the upward planarity of a biconnected graph. This algorithm runs in O(k!·8^k·n^3) time.
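To get a concrete sense of the parameterized running time, the snippet below simply evaluates the stated k!·8^k·n^3 bound for small parameter values, showing that the exponential cost is confined to k while the dependence on n stays cubic:

```python
from math import factorial

def upward_planarity_bound(k, n):
    """Evaluate the k! * 8^k * n^3 operation-count bound."""
    return factorial(k) * 8 ** k * n ** 3

for k in (1, 2, 3, 5):
    for n in (100, 1000):
        print(f"k={k:>2}, n={n:>5}: ~{upward_planarity_bound(k, n):.1e} steps")
```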
945. Complexity Reduced Behavioral Models for Radio Frequency Power Amplifiers' Modeling and Linearization
Fares, Marie-Claude, January 2009
Radio frequency (RF) communications are limited to a number of frequency bands scattered over the radio spectrum. Applications over such bands increasingly require more versatile, data-intensive wireless communications, which leads to the need for highly bandwidth-efficient interfaces operating over wideband frequency ranges. Whether for a base station or a mobile device, the regulations governing such schemes and the need for their adequate transmission place stringent requirements on the design of transmitter front-ends. Increasingly strenuous and challenging hardware design criteria must be met, especially in the design of power amplifiers (PAs), the bottleneck of the transmitter's design tradeoff between linearity and power efficiency. The power amplifier exhibits nonideal behavior, characterized by both nonlinearity and memory effects, that heavily affects this tradeoff and therefore requires an effective linearization technique, namely digital predistortion (DPD). The effectiveness of DPD is highly dependent on the modeling scheme used to compensate for the PA's nonideal behavior; indeed, its viability is determined by the scheme's accuracy and implementation complexity. Generic behavioral models for nonlinear systems with memory have been used, treating the PA as a black box and requiring RF designers to perform extensive testing to determine the minimal-complexity structure that achieves satisfactory results. This thesis first proposes a direct systematic approach, based on the parallel Hammerstein structure, to determine the exact number of coefficients needed in a DPD. It then details a physical explanation of memory effects, which leads to a closed-form expression for the characteristic behavior of the PA based entirely on circuit properties. This physical expression is implemented and tested as a modeling scheme. Moreover, a link between this formulation and proven behavioral models, namely the Volterra series and the memory polynomial, is explored. The formulation shows the correlation between parameters of generic behavioral modeling schemes when applied to RF PAs and demonstrates redundancy based on the physical existence or absence of modeling terms, detailed for the proven memory polynomial modeling and linearization scheme.
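For reference, the memory polynomial named above is commonly written as y(n) = sum_{k=1..K} sum_{q=0..Q} a_kq * x(n-q) * |x(n-q)|^(k-1), and its coefficients can be identified by least squares. A minimal sketch with a toy complex baseband signal (illustrative only, not the thesis's measurement setup):

```python
import numpy as np

def memory_polynomial_matrix(x, K, Q):
    """Columns x(n-q)*|x(n-q)|^(k-1) for k = 1..K, q = 0..Q."""
    cols = []
    for q in range(Q + 1):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:len(x) - q]])
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

# Toy PA: mild cubic compression plus a one-sample memory term.
rng = np.random.default_rng(0)
x = (rng.standard_normal(2000) + 1j * rng.standard_normal(2000)) / np.sqrt(2)
y = x - 0.05 * x * np.abs(x) ** 2
y[1:] += 0.02 * x[:-1]

X = memory_polynomial_matrix(x, K=3, Q=1)
a, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares coefficient fit
nmse = np.sum(np.abs(y - X @ a) ** 2) / np.sum(np.abs(y) ** 2)
print(f"model fit NMSE: {10 * np.log10(nmse):.1f} dB")
```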
946. How incentive contracts and task complexity influence and facilitate long-term performance
Berger, Leslie, 10 July 2009
The purpose of this study is to investigate how different incentive contracts that include forward-looking and contemporaneous goals motivate managers to make decisions consistent with the organization’s long-term objectives, in tasks of varying complexity. Two research questions are addressed. First, in a long-term horizon setting, how do incentive contracts based on various combinations of forward-looking and contemporaneous measures influence decisions? Second, how does task complexity influence the expected effect of various incentive contracts on management decisions?
I address my research questions using a multi-period experiment where I compare the effects of three different incentive structure types and two different levels of task complexity. Results show that in a low complexity task, individuals perform better when only contemporaneous goal attainment is rewarded in the incentive contract than when both forward-looking and contemporaneous goal attainment is rewarded. In a high complexity task, individuals perform better when both contemporaneous and forward-looking goal attainment is rewarded, but only when the contemporaneous goal attainment is weighted more heavily in the incentive contract.
My research contributes to the existing literature in two ways. First, this is the first study of which I am aware that compares the performance effects of long-term incentive contracts that reward forward-looking and contemporaneous goal attainment. Second, this study is the first of which I am aware to experimentally test incentive contracts, for employees with a long-term horizon, that incorporate various weightings of forward-looking measures. In addition, this study is amongst the first to examine the impact of task complexity on incentive contract effectiveness.
947. Quantum Strategies and Local Operations
Gutoski, Gustav, January 2009
This thesis is divided into two parts.
In Part I we introduce a new formalism for quantum strategies, which specify the actions of one party in any multi-party interaction involving the exchange of multiple quantum messages among the parties.
This formalism associates with each strategy a single positive semidefinite operator acting only upon the tensor product of the input and output message spaces for the strategy.
We establish three fundamental properties of this new representation for quantum strategies and we list several applications, including a quantum version of von Neumann's celebrated 1928 Min-Max Theorem for zero-sum games and an efficient algorithm for computing the value of such a game.
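As a classical point of comparison (not the semidefinite-programming methods of the thesis), the value guaranteed by von Neumann's Min-Max Theorem for a finite zero-sum game can be computed by linear programming. A minimal sketch, with matching pennies as the illustrative payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the zero-sum game max_x min_y x^T A y.
    Shift payoffs positive, then solve: min sum(u) s.t. A^T u >= 1, u >= 0."""
    shift = 1.0 - A.min()
    B = A + shift                                   # strictly positive payoffs
    m, n = B.shape
    res = linprog(np.ones(m), A_ub=-B.T, b_ub=-np.ones(n), method="highs")
    u = res.x
    return 1.0 / u.sum() - shift, u / u.sum()       # (value, optimal row strategy)

A = np.array([[1.0, -1.0], [-1.0, 1.0]])            # matching pennies
v, x = game_value(A)
print(round(v, 6), np.round(x, 3))                  # 0.0, [0.5 0.5]
```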
In Part II we establish several properties of a class of quantum operations that can be implemented locally with shared quantum entanglement or classical randomness.
In particular, we establish the existence of a ball of local operations with shared randomness lying within the space spanned by the no-signaling operations and centred at the completely noisy channel.
The existence of this ball is employed to prove that the weak membership problem for local operations with shared entanglement is strongly NP-hard.
We also provide characterizations of local operations in terms of linear functionals that are positive and "completely" positive on a certain cone of Hermitian operators, under a natural notion of complete positivity appropriate to that cone.
We end the thesis with a discussion of the properties of no-signaling quantum operations.
948. Language Profile and Performances on Math Assessments for Children with Mild Intellectual Disabilities
Rhodes, Katherine T., 02 May 2012
It has been assumed that mathematics testing indicates the development of mathematics concepts, but the linguistic demands of assessment have not been evaluated, especially for children with mild intellectual disabilities. A total of 244 children (grades 2-5) were recruited from a larger reading intervention study. Using a multilevel longitudinal SEM model, baseline and post-intervention time points were examined for the contribution of item linguistic complexity, child language skills, and their potential interaction in predicting item-level mathematics assessment performance. Item linguistic complexity was an important, stable, and negative predictor of mathematics achievement, while children's language skills significantly and positively predicted mathematics achievement. The interaction between item linguistic complexity and language skills was significant, though not stable across time: following intervention, children with higher language skills performed better on linguistically complex mathematics items. Mathematics achievement may therefore be related to an interaction between children's language skills and the linguistic demands of the tests themselves.
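The reported cross-level interaction can be illustrated with a much-simplified item-level mixed model; a linear mixed model with a random intercept per child stands in here for the authors' multilevel SEM, and all data and variable names are simulated stand-ins:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_children, n_items = 120, 20
child = np.repeat(np.arange(n_children), n_items)
language = np.repeat(rng.standard_normal(n_children), n_items)    # child skill
complexity = np.tile(rng.standard_normal(n_items), n_children)    # item demand
child_re = np.repeat(0.5 * rng.standard_normal(n_children), n_items)
# Negative complexity effect, positive language effect, positive interaction,
# mirroring the direction of the reported findings.
score = (0.4 * language - 0.5 * complexity + 0.2 * language * complexity
         + child_re + rng.standard_normal(n_children * n_items))

df = pd.DataFrame({"child": child, "language": language,
                   "complexity": complexity, "score": score})
fit = smf.mixedlm("score ~ language * complexity", df, groups="child").fit()
print(fit.summary())
```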
949. The Effect of ALEKS on Students' Mathematics Achievement in an Online Learning Environment and the Cognitive Complexity of the Initial and Final Assessments
Nwaogu, Eze, 11 May 2012
For many courses, mathematics included, there is an associated interactive e-learning system that provides assessment and tutoring; some of these systems are classified as intelligent tutoring systems. MyMathLab, MathZone, and Assessment and LEarning in Knowledge Spaces (ALEKS) are just a few of the interactive e-learning systems in mathematics. In ALEKS, assessment and tutoring are based on Knowledge Space Theory. Previous studies in traditional learning environments have shown ALEKS users to perform as well as or better in mathematics achievement than groups that did not use ALEKS.
The purpose of this research was to investigate the effect of ALEKS on students' achievement in mathematics in an online learning environment and to determine the cognitive complexity of mathematical tasks enacted by ALEKS's initial (pretest) and final (posttest) assessments. The targeted population was undergraduate students in College Mathematics I, an online course at a private university in the southwestern United States. The study used a quasi-experimental, non-randomized one-group pretest-posttest design.
Five methods of analysis and one model were used in analyzing the data: a t-test, correlational analysis, simple and multiple regression analysis, Cronbach's alpha reliability test, and Webb's depth-of-knowledge model. The t-test showed a difference between the pretest and posttest reports, meaning ALEKS had a significant effect on students' mathematics achievement. The correlational analysis showed a significant positive linear relationship between the concept mastery reports and the formative and summative assessment reports, meaning there is a direct relationship between ALEKS concept mastery and the assessments. The regression analysis yielded a better model for predicting mathematics achievement with ALEKS when time spent learning in ALEKS and concept mastery scores were included as predictors.
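In outline, the core pretest/posttest comparison and the correlational step look like this (toy numbers, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pretest = rng.normal(60, 10, size=40)              # illustrative scores
posttest = pretest + rng.normal(8, 6, size=40)     # simulated gain

t, p = stats.ttest_rel(posttest, pretest)          # paired-samples t-test
r, _ = stats.pearsonr(pretest, posttest)           # correlational analysis
print(f"t = {t:.2f}, p = {p:.4f}, r = {r:.2f}")
```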
According to Webb's depth-of-knowledge model, the cognitive complexity of the pretest and posttest question items used by ALEKS was as follows: 50.5% required application of skills and concepts, 37.1% required recall of information, and 12.4% required strategic thinking. None of the question items required extended thinking or complex reasoning, implying that ALEKS is appropriate for building skills and concepts at this level of mathematics.
950. Protein functional features extracted from primary sequences. A focus on primary sequences.
Pietrosemoli, Natalia, 16 September 2013
In this thesis we implement an ensemble of sequence analysis strategies aimed at identifying functional and structural protein features. The first part of this work was dedicated to two case studies of specific proteins, analyzed to provide candidate functional positions for experimental validation: the protein alpha-synuclein (αsyn) and the alanine racemase protein family. In the case of αsyn, the objective was to predict its aggregation-prone regions; for the alanine racemase family, the aim was to predict the sites responsible for substrate specificity. In these two studies, computational prediction allowed us to systematically explore potentially functionally relevant protein sites in an efficient manner that would not be feasible with traditional experimental approaches, providing a powerful forecasting tool for selecting candidate sites to be verified experimentally.
In the second part, we analyze the role of intrinsic disorder (ID) as a modulator of protein function in different organisms and cellular processes, a role that remains largely unexplored. As key components of diverse cellular pathways, disordered proteins are often involved in many diseases, including cancer and neurodegenerative diseases, so there is a pressing need to unveil the general principles underlying the role of ID in proteins. We provide a multi-scale analysis of the involvement of ID in protein function, starting with a genome-level analysis of the role of ID in Arabidopsis, zooming in on the specific processes of vesicular trafficking in human and yeast, and finally focusing on specific proteins of diverse organisms.
The results of this thesis provide a better understanding of the functional roles mediated by ID in different organisms and biological processes, such as acting as flexible linkers connecting structured domains, mediating protein-protein interactions, and assisting the quick assembly of large macromolecular complexes. In addition, we present evidence of the use of ID as a mechanism to increase the complexity of proteins and biological networks, and as a means to increase the adaptability of proteins in specific processes. Our results thus contribute to elucidating the relationship between ID and network and organismal complexity, while also providing evidence of the evolutionary advantages ID offers.