11 |
Joint source-channel-network coding in wireless mesh networks with temporal reuse
Luus, Francois Pierre Sarel, 21 October 2011 (has links)
This dissertation contributes technological innovation that empowers tiny low-cost transceivers to operate with a high degree of utilisation efficiency in multihop wireless mesh networks. Transmission scheduling and joint source-channel-network coding are two of the main aspects that are addressed. This work focuses on integrating recent enhancements such as wireless network coding and temporal reuse into a cross-layer optimisation framework, and on designing a joint coding scheme that allows for space-optimal transceiver implementations. Link-assigned transmission schedules with timeslot reuse by multiple links in both the space and time domains are investigated for quasi-stationary multihop wireless mesh networks with both rate and power adaptivity. Specifically, predefined cross-layer optimised schedules with proportionally fair end-to-end flow rates and network coding capability are constructed for networks operating under the physical interference model with single-path minimum hop routing. Extending transmission rights in a link-assigned schedule allows for network coding and temporal reuse, which increases timeslot usage efficiency when a scheduled link experiences packet depletion. The schedules that suffer from packet depletion are characterised and a generic temporal reuse-aware achievable rate region is derived. Extensive computational experiments show improved schedule capacity, quality of service and power efficiency, as well as the benefit accrued from opportunistic bidirectional network coding, for schedules optimised in the proposed temporal reuse-aware convex capacity region. The application of joint source-channel coding, based on fountain codes, in the broadcast timeslot of wireless two-way network coding is also investigated. A computationally efficient subroutine is contributed to the implementation of the fountain compressor, and an error analysis is performed.
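The broadcast timeslot of wireless two-way network coding mentioned above can be illustrated with a minimal sketch. In the classic scheme, a relay XORs the two packets it has received and broadcasts the combination, and each end node cancels its own packet to recover the other's; the packet contents and framing below are illustrative assumptions, not details from the dissertation.

```python
def xor_combine(packet_a: bytes, packet_b: bytes) -> bytes:
    """Relay step: combine two equal-length packets into one broadcast packet."""
    assert len(packet_a) == len(packet_b), "pad packets to a common length first"
    return bytes(x ^ y for x, y in zip(packet_a, packet_b))

def xor_recover(broadcast: bytes, own_packet: bytes) -> bytes:
    """End-node step: recover the other node's packet using the local copy."""
    return xor_combine(broadcast, own_packet)

# Three timeslots instead of four for the exchange A <-> relay <-> B:
# slot 1: A -> relay (pkt_a); slot 2: B -> relay (pkt_b);
# slot 3: relay broadcasts pkt_a XOR pkt_b, and both ends decode locally.
pkt_a = b"hello from A"
pkt_b = b"hello from B"
coded = xor_combine(pkt_a, pkt_b)
assert xor_recover(coded, pkt_a) == pkt_b
assert xor_recover(coded, pkt_b) == pkt_a
```

The timeslot saving is what makes the broadcast slot attractive for the joint source-channel coding studied in the dissertation.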
Motivated to develop a true joint source-channel-network code that compresses, adds robustness against channel noise and network codes two packets on a single bipartite graph and iteratively decodes the intended packet on the same Tanner graph, an adaptation of the fountain compressor is presented. The proposed code is shown to outperform a separated joint source-channel and network code in high source entropy and high channel noise regions, in anticipated support of dense networks that employ intelligent signalling. / Dissertation (MEng)--University of Pretoria, 2011. / Electrical, Electronic and Computer Engineering / unrestricted
|
12 |
Comparing generalized additive neural networks with multilayer perceptrons / Johannes Christiaan Goosen
Goosen, Johannes Christiaan, January 2011 (has links)
In this dissertation, generalized additive neural networks (GANNs) and multilayer perceptrons (MLPs) are studied
and compared as prediction techniques. MLPs are the most widely used type of artificial neural network
(ANN), but are considered black boxes with regard to interpretability. There is currently no simple a priori
method to determine the number of hidden neurons in each of the hidden layers of ANNs. Guidelines exist that
are either heuristic or based on simulations that are derived from limited experiments. A modified version of
the neural network construction with cross-validation samples (N2C2S) algorithm is therefore implemented and
utilized to construct good MLP models. This algorithm enables the comparison with GANN models. GANNs
are a relatively new type of ANN, based on the generalized additive model. The architecture of a GANN is less
complex compared to MLPs and results can be interpreted with a graphical method, called the partial residual
plot. A GANN consists of an input layer where each of the input nodes has its own MLP with one hidden layer.
Originally, GANNs were constructed by interpreting partial residual plots. This method is time consuming and
subjective, which may lead to the creation of suboptimal models. Consequently, an automated construction
algorithm for GANNs was created and implemented in the SAS® statistical language. This system was called AutoGANN and is used to create good GANN models.
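The GANN architecture described above (each input node driving its own single-hidden-layer MLP, with the univariate outputs summed as in a generalized additive model) can be sketched as follows. The activation function, layer sizes and random weights are illustrative assumptions; this is not the AutoGANN implementation.

```python
import numpy as np

def univariate_mlp(x, w1, b1, w2, b2):
    """One input's sub-MLP: scalar input -> hidden layer (tanh) -> scalar output."""
    h = np.tanh(np.outer(x, w1) + b1)   # (n_samples, n_hidden)
    return h @ w2 + b2                  # (n_samples,)

def gann_forward(X, sub_mlps, bias=0.0):
    """Generalized additive model: prediction = bias + sum_j f_j(x_j)."""
    partial_effects = [univariate_mlp(X[:, j], *params)
                       for j, params in enumerate(sub_mlps)]
    return bias + np.sum(partial_effects, axis=0)

rng = np.random.default_rng(0)
n_hidden = 3
X = rng.normal(size=(5, 2))             # 5 samples, 2 inputs
sub_mlps = [(rng.normal(size=n_hidden), rng.normal(size=n_hidden),
             rng.normal(size=n_hidden), 0.0) for _ in range(X.shape[1])]
y_hat = gann_forward(X, sub_mlps)
print(y_hat.shape)   # (5,)
```

Because each sub-MLP depends on a single input, each fitted effect f_j can be plotted against x_j, which is what makes the partial residual plot interpretation possible.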
A number of experiments are conducted on five publicly available data sets to gain insight into the similarities
and differences between GANN and MLP models. The data sets include regression and classification tasks.
In-sample model selection with the SBC model selection criterion and out-of-sample model selection with the
average validation error as model selection criterion are performed. The models created are compared in terms
of predictive accuracy, model complexity, comprehensibility, ease of construction and utility.
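The SBC used for in-sample model selection is the Schwarz Bayesian Criterion (also known as BIC); for a least-squares model it can be computed as sketched below. The candidate models and error values are made up for illustration.

```python
import math

def sbc(sse: float, n: int, k: int) -> float:
    """Schwarz Bayesian Criterion (BIC) for a least-squares model:
    n*ln(SSE/n) + k*ln(n). Lower is better; k*ln(n) penalises model size."""
    return n * math.log(sse / n) + k * math.log(n)

# Two candidate models on n = 100 observations: the second fits slightly
# better (smaller SSE) but uses many more parameters, so SBC prefers the first.
print(sbc(sse=50.0, n=100, k=3) < sbc(sse=48.0, n=100, k=12))   # True
```

This trade-off between fit and complexity is why in-sample SBC selection can pick a different model than out-of-sample validation error.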
The results show that the choice of model is highly dependent on the problem, as no single model always
outperforms the other in terms of predictive accuracy. GANNs may be suggested for problems where interpretability
of the results is important. The time taken to construct good MLP models by the modified N2C2S
algorithm may be shorter than the time to build good GANN models by the automated construction algorithm. / Thesis (M.Sc. (Computer Science))--North-West University, Potchefstroom Campus, 2011.
|
14 |
The file fragment classification problem : a combined neural network and linear programming discriminant model approach / Erich Feodor Wilgenbus
Wilgenbus, Erich Feodor, January 2013 (has links)
The increased use of digital media to store legal, as well as illegal data, has created the need for specialized tools that can monitor, control and even recover this data. An important task in computer forensics and security is to identify the true file type to which a computer file or computer file fragment belongs. File type identification is traditionally done by means of metadata, such as file extensions and file header and footer signatures. As a result, traditional metadata-based file object type identification techniques work well in cases where the required metadata is available and unaltered. However, traditional approaches are not reliable when the integrity of metadata is not guaranteed or metadata is unavailable. As an alternative, any pattern in the content of a file object can be used to determine the associated file type. This is called content-based file object type identification.
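Content-based identification of this kind typically starts from statistics computed over a fragment's raw bytes. A common choice of feature vector, assumed here for illustration rather than taken from the study, is the normalised byte-frequency histogram:

```python
from collections import Counter

def byte_histogram(fragment: bytes) -> list[float]:
    """Normalised 256-bin byte-frequency feature vector for a file fragment."""
    counts = Counter(fragment)
    n = len(fragment)
    return [counts.get(b, 0) / n for b in range(256)]

features = byte_histogram(b"\x00\x00\xff\xff")
assert features[0x00] == 0.5 and features[0xff] == 0.5
assert abs(sum(features) - 1.0) < 1e-9
```

Feature vectors like this can then be fed to any supervised classifier, since different file types (compressed, text, executable) tend to have distinctive byte distributions.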
Supervised learning techniques can be used to infer a file object type classifier by exploiting some unique pattern that underlies a file type's common file structure. This study builds on existing literature regarding the use of supervised learning techniques for content-based file object type identification, and explores the combined use of multilayer perceptron neural network classifiers and linear programming-based discriminant classifiers as a solution to the multiple class file fragment type identification problem.
The purpose of this study was to investigate and compare the use of a single multilayer perceptron neural network classifier, a single linear programming-based discriminant classifier and a combined ensemble of these classifiers in the field of file type identification. The ability of each individual classifier and the ensemble of these classifiers to accurately predict the file type to which a file fragment belongs was tested empirically.
The study found that both a multilayer perceptron neural network and a linear programming-based discriminant classifier (used in a round robin) seemed to perform well in solving the multiple class file fragment type identification problem. The results of combining multilayer perceptron neural network classifiers and linear programming-based discriminant classifiers in an ensemble were not better than those of the single optimized classifiers. / MSc (Computer Science), North-West University, Potchefstroom Campus, 2013
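The round-robin use of linear discriminant classifiers mentioned above corresponds to a one-versus-one scheme: one binary separator per class pair, with majority voting over the pairwise decisions. A minimal sketch follows; the class names and hand-picked weight vectors are placeholders, whereas the study derives its discriminants by linear programming.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def round_robin_predict(x, classes, pairwise_discriminants):
    """Vote over one-vs-one linear discriminants: sign(w.x + b) picks a class."""
    votes = Counter()
    for (a, b) in combinations(classes, 2):
        w, bias = pairwise_discriminants[(a, b)]
        votes[a if np.dot(w, x) + bias > 0 else b] += 1
    return votes.most_common(1)[0][0]

# Toy 2-D example with three "file types"; weights chosen by hand, not by LP.
discriminants = {
    ("jpg", "pdf"): (np.array([1.0, 0.0]), 0.0),   # x0 > 0 -> jpg
    ("jpg", "txt"): (np.array([0.0, 1.0]), 0.0),   # x1 > 0 -> jpg
    ("pdf", "txt"): (np.array([-1.0, 1.0]), 0.0),  # x1 > x0 -> pdf
}
print(round_robin_predict(np.array([2.0, 3.0]), ["jpg", "pdf", "txt"], discriminants))  # jpg
```

A one-versus-one decomposition keeps each binary subproblem small, which suits linear programming formulations that scale with the number of training points per pair.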
|
16 |
Non-binary LDPC coded STF-MIMO-OFDM with an iterative joint receiver structure
Louw, Daniel Johannes, 20 September 2010 (has links)
The aim of the dissertation was to design a realistic, low-complexity non-binary (NB) low density parity check (LDPC) coded space-time-frequency (STF) coded multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system with an iterative joint decoder and detector structure at the receiver. The goal of the first part of the dissertation was to compare the performance of different design procedures for NB-LDPC codes on an additive white Gaussian noise (AWGN) channel, taking into account the constraint on the code length. The effect of quantisation on the performance of the code was also analysed. Different methods for choosing the NB elements in the parity check matrix were compared. For the STF coding, a class of universal STF codes was used. These codes use linear pre-coding and a layering approach based on Diophantine numbers to achieve full diversity and a transmission rate (in symbols per channel use per frequency) equal to the number of transmitter antennas. The study of the system considers a comparative performance analysis of different ST, SF and STF codes. The simulations of the system were performed on a triply selective block fading channel. Thus, there was selectivity in the fading over time, space and frequency. The effect of quantisation at the receiver on the achievable diversity of linearly pre-coded systems (such as the STF codes used) was mathematically derived and verified with simulations. A sphere decoder (SD) was used as a MIMO detector. The standard method used to create a soft-input soft-output (SISO) SD uses a hard-to-soft process and the max-log-MAP approximation. A new approach was developed which combines a Hopfield network with the SD. This SD-Hopfield detector was connected with the fast Fourier transform belief propagation (FFT-BP) algorithm in an iterative structure. This iterative system was able to achieve the same bit error rate (BER) performance as the original SISO-SD at a reduced complexity.
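The hard-to-soft process mentioned for the SISO sphere decoder is commonly realised with the max-log-MAP approximation: each bit's log-likelihood ratio is estimated from the best (smallest-metric) candidate vectors having that bit equal to 0 and to 1. A hedged sketch of that step follows; the candidate list and metrics are invented, and noise-variance scaling is omitted.

```python
def max_log_map_llrs(candidates, n_bits):
    """candidates: list of (bit_tuple, euclidean_metric) from a sphere search.
    LLR_i ~ min metric over candidates with bit i = 1
            minus min metric over candidates with bit i = 0,
    so a positive LLR means bit 0 is more likely."""
    llrs = []
    for i in range(n_bits):
        m0 = min((m for bits, m in candidates if bits[i] == 0), default=float("inf"))
        m1 = min((m for bits, m in candidates if bits[i] == 1), default=float("inf"))
        llrs.append(m1 - m0)
    return llrs

# Three surviving candidates from a (hypothetical) sphere search over 2 bits:
cands = [((0, 1), 1.2), ((1, 1), 0.7), ((1, 0), 2.5)]
print(max_log_map_llrs(cands, 2))
```

When one of the two minima is taken over an empty set (no candidate with that bit value survived the search), the LLR saturates, which is one reason candidate-list quality matters for soft-output quality.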
The use of the iterative Hopfield-SD and FFT-BP decoder system also allows performance to be traded off against complexity by varying the number of decoding iterations. The complete system employs an NB-LDPC code concatenated with an STF code at the transmitter, with a SISO-SD and FFT-BP decoder connected in an iterative structure at the receiver. The system was analysed in varying channel conditions, taking into account the effects of correlation and quantisation. The performance of different SF and STF codes was compared and analysed in the system. An analysis comparing different numbers of FFT-BP and outer iterations was also done. Copyright / Dissertation (MEng)--University of Pretoria, 2010. / Electrical, Electronic and Computer Engineering / unrestricted
|
17 |
A local network neighbourhood artificial immune system
Graaff, A.J. (Alexander Jakobus), 17 October 2011 (has links)
As information is becoming more available online and will forevermore be part of any business, the true value of the large amounts of stored data is in the discovery of hidden and unknown relations and connections or traits in the data. The acquisition of these hidden relations can influence strategic decisions which have an impact on the success of a business. Data clustering is one of many methods to partition data into different groups in such a way that data patterns within the same group share some common trait compared to patterns across different groups. This thesis proposes a new artificial immune model for the problem of data clustering. The new model is inspired by the network theory of immunology and differs from its network based predecessor models in its formation of artificial lymphocyte networks. The proposed model is first applied to data clustering problems in stationary environments. Two different techniques are then proposed which enhance the proposed artificial immune model to dynamically determine the number of clusters in a data set with minimal to no user interference. A technique to generate synthetic data sets for data clustering of non-stationary environments is then proposed. Lastly, the original proposed artificial immune model and the enhanced version to dynamically determine the number of clusters are applied to the generated synthetic non-stationary data clustering problems. The influence of the parameters on the clustering performance is investigated for all versions of the proposed artificial immune model and supported by empirical results and statistical hypothesis tests. / Thesis (PhD)--University of Pretoria, 2011. / Computer Science / unrestricted
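The synthetic non-stationary clustering data mentioned above can be illustrated with a simple generator in which cluster centres drift as a random walk between time steps. The drift model and all parameters are illustrative assumptions, not the technique proposed in the thesis.

```python
import numpy as np

def drifting_clusters(n_steps, n_clusters=3, n_points=50, dim=2, drift=0.2, seed=1):
    """Yield (step, points, labels) with cluster centres drifting each step."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(-5, 5, size=(n_clusters, dim))
    for step in range(n_steps):
        centres = centres + rng.normal(scale=drift, size=centres.shape)  # random walk
        labels = rng.integers(0, n_clusters, size=n_points)
        points = centres[labels] + rng.normal(scale=0.5, size=(n_points, dim))
        yield step, points, labels

for step, points, labels in drifting_clusters(n_steps=3):
    print(step, points.shape)   # 0 (50, 2) ... 2 (50, 2)
```

Each yielded frame can be fed to a clustering model in sequence, so that its ability to track moving clusters (and a changing number of clusters, with a suitable generator variant) can be measured against the known labels.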
|
18 |
Web 3L: Informationssuche und -verteilung mittels sozialer, semantischer Netze
Langen, Manfred, Kammergruber, Walter C., Ehms, Karsten, 30 May 2014 (has links) (PDF)
No description available.
|
19 |
Aspekte van regsbeheer in die konteks van die Internet / Aspects of legal regulation in the context of the Internet
Gordon, Barrie James, 06 1900 (has links)
The world is currently structured
on the International law concept of sovereignty. States have the capacity
to structure their own affairs, but the development of the Internet as a
globally distributed network has violated this principle. It would seem that
the development of the Internet would mean the end of sovereignty and
statehood.
A historical overview shows that regulators were initially unsure of how
this new medium should be dealt with. It appeared that new technologies capable of fragmenting the Internet could be used to enforce state-bound law. Several states have applied divergent methodologies in trying to regulate the Internet at state level, and this has led to the haphazard way in which the Internet is currently regulated.
This study examines various aspects of legal regulation in the context
of the Internet, and determines how the Internet is currently regulated.
Relevant legislation of several states is discussed throughout the
study. Four prominent states, which made several important interventions
regarding the regulation of the Internet, are highlighted further: the United States, the People’s Republic of China, the European Union as representative of the European countries, and South Africa. Aspects that need to be addressed at the level of international law, such as international organizations
and international legal theories regarding the regulation of the Internet, are
also discussed.
The findings that follow from this study are used to make several
recommendations, which in turn are used to construct a new model for a
more meaningful way in which the Internet could be regulated.
Since the present study is undertaken in the context of the International
law, the study is concluded with a discussion of cyber sovereignty, which
is a discussion of how sovereignty should be applied with regards to the
Internet. The conclusion is enlightening: the development of the Internet does not mark the end of sovereignty, but rather confirms it. / Criminal and Procedural Law / LLD
|
20 |
Web 3L: Informationssuche und -verteilung mittels sozialer, semantischer Netze
Langen, Manfred, Kammergruber, Walter C., Ehms, Karsten, January 2011 (has links)
No description available.
|