  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

On-demand virtual laboratory environments for Internetworking e-learning : A first step using docker containers

Kokkalis, Andreas January 2018 (has links)
Learning Management Systems (LMSs) are widely used in higher education to improve learning, teaching, and administrative tasks for both students and instructors. Such systems enrich the educational experience by integrating a wide range of services, such as on-demand course material and training, thus empowering students to achieve their learning outcomes at their own pace. Courses in various subfields of Computer Science that seek to provide a rich electronic learning (e-learning) experience depend on exercise material offered in the form of quizzes, programming exercises, laboratories, simulations, etc. Providing hands-on experience in courses such as Internetworking could be facilitated by laboratory exercises based on virtual machine environments in which the student studies the performance of different Internet protocols under different conditions (such as different throughput bounds, error rates, and patterns of change in these conditions). Unfortunately, the integration of such exercises and their tailored virtual environments is not yet common in LMSs. This thesis project investigates the generation of on-demand virtual exercise environments using cloud infrastructures and their integration with an LMS to provide a rich e-learning experience in an Internetworking course. The software deliverable of this project enables instructors to dynamically instantiate virtual laboratories without incurring the overhead of running and maintaining their own physical infrastructure. This lays the foundation for a virtual classroom that can scale in response to higher system utilization during specific periods of the academic calendar.
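The on-demand laboratory environments described in this abstract could, in a much-simplified form, be instantiated per student with plain Docker. The sketch below is an editorial illustration, not the thesis's software: the image name, container naming scheme, and resource limits are assumptions.

```python
import shlex

def build_lab_command(student_id: str, lab_image: str,
                      cpu_limit: str = "1.0", mem_limit: str = "512m") -> list:
    """Build a `docker run` command for one student's isolated lab container.

    The image name, naming scheme, and resource limits here are illustrative
    assumptions, not the thesis's actual configuration.
    """
    container_name = f"lab-{student_id}"
    return [
        "docker", "run",
        "-d",                      # run detached so the LMS can return immediately
        "--rm",                    # remove the container when the lab session ends
        "--name", container_name,  # one container per student
        "--cpus", cpu_limit,       # cap CPU so one student cannot starve others
        "--memory", mem_limit,     # cap RAM for the same reason
        lab_image,
    ]

cmd = build_lab_command("student42", "internetworking-lab:latest")
print(shlex.join(cmd))
```

An LMS backend could run such a command per enrolled student, with the `--rm` flag keeping cleanup automatic when a lab session ends.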
112

ENHANCING SECURITY IN DOCKER WEB SERVERS USING APPARMOR AND BPFTRACE

Avigyan Mukherjee (15306883) 19 April 2023 (has links)
Dockerizing web servers has gained significant popularity due to its lightweight containerization approach, enabling rapid and efficient deployment of web services. However, the security of web server containers remains a critical concern. This study proposes a novel approach to enhancing the security of Docker-based web servers, using bpftrace to trace Nginx and Apache containers under attack and to distinguish abnormal syscalls, connections, shared-library calls, and file accesses from normal ones. The gathered metrics are used to generate tailored AppArmor profiles for improved mandatory access control policies and enhanced container security. bpftrace is a high-level tracing language that allows real-time analysis of system events. This research introduces a method for generating AppArmor profiles by using bpftrace to monitor system events, creating customized security policies tailored to the specific needs of Docker-based web servers. Once the profiles are generated, the web server container is redeployed with the enhanced security measures in place. This approach increases security by providing granular control and adaptability to address potential threats. The proposed method is evaluated using CVEs from the open-source literature that affect the Nginx and Apache web servers and correspond to the classification system that was created. The Apache and Nginx containers were attacked with Metasploit, and benchmark tests, including an ltrace evaluation in accordance with the existing literature, were conducted. The results demonstrate the effectiveness of the proposed approach in mitigating security risks and strengthening the overall security posture of Docker-based web servers. This is achieved by limiting the memcpy and memset shared-library calls identified using bpftrace, applying rlimits in AppArmor to limit their rate to normal levels (as gauged during testing), and denying other harmful file accesses and syscalls. The study's findings contribute to the growing body of knowledge on container security and offer valuable insights for practitioners aiming to build more secure web server deployments using Docker.
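As a rough, editorial illustration of the profile-generation step described above (not the thesis's actual tool), the sketch below turns a set of file accesses observed during tracing into a minimal AppArmor-style profile. Real AppArmor profiles also include capability, network, and rlimit rules; all paths and names here are assumptions.

```python
def generate_profile(name: str, observed_reads, observed_writes, deny_paths):
    """Render a minimal AppArmor-style profile from traced file accesses.

    A simplified illustration of the idea: allow only the accesses seen
    during normal operation and explicitly deny known-bad paths.
    """
    lines = [f"profile {name} flags=(attach_disconnected) {{"]
    for path in sorted(observed_reads):
        lines.append(f"  {path} r,")          # read-only access seen in tracing
    for path in sorted(observed_writes):
        lines.append(f"  {path} rw,")         # read-write access seen in tracing
    for path in sorted(deny_paths):
        lines.append(f"  deny {path} rwx,")   # never allow these
    lines.append("}")
    return "\n".join(lines)

profile = generate_profile(
    "docker-nginx",
    observed_reads={"/etc/nginx/nginx.conf", "/usr/share/nginx/html/**"},
    observed_writes={"/var/log/nginx/**"},
    deny_paths={"/etc/shadow"},
)
print(profile)
```

In the thesis's workflow, a profile of this general shape would be loaded before the container is redeployed, so the mandatory access control policy matches the traced baseline.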
113

PXI Communication in a virtual environment : Using containers and VMs for communication with a PXI

Dahlberg, Carl January 2022 (has links)
This thesis investigates the possibility of communicating with a PCI eXtensions for Instrumentation (PXI) system from inside a container or a virtual machine (VM). While the use of virtual environments with PCI is well established, it was unknown whether an application running inside a virtual environment could communicate with a PXI system outside that environment. If such communication were possible, a virtual environment could be prepared with all the necessary software for the PXI system and then transferred to and installed on other computers without any changes to the software. The investigation was carried out by creating several different test environments to better understand how both the PXI drivers and the virtual environment work, and how they interact with each other. While it turned out not to be possible to realize such a virtual environment with the equipment described in this thesis, it was found to be theoretically possible to use a VM for communication with a PXI system, although doing so in practice depends on the specific PXI modules involved.
114

Predictive vertical CPU autoscaling in Kubernetes based on time-series forecasting with Holt-Winters exponential smoothing and long short-term memory

Wang, Thomas January 2021 (has links)
Private and public clouds require users to specify requests for resources such as CPU and memory (RAM) to be provisioned for their applications. The values of these requests do not necessarily relate to the application's run-time requirements; they only help the cloud infrastructure's resource manager map requested virtual resources to physical resources. If an application exceeds these values, it might be throttled or even terminated. Consequently, requested values are often overestimated, resulting in poor resource utilization in the cloud infrastructure. Autoscaling is a technique used to overcome these problems. In this research, we formulated two new predictive CPU autoscaling strategies for Kubernetes containerized applications, using time-series analysis based on Holt-Winters exponential smoothing and long short-term memory (LSTM) artificial recurrent neural networks. The two approaches were analyzed, and their performance was compared to that of the default Kubernetes Vertical Pod Autoscaler (VPA). Efficiency was evaluated in terms of CPU resource wastage and CPU insufficiency (both percentage and amount) for container workloads from Alibaba Cluster Trace 2018, among others. In our experiments, we observed that the VPA tended to perform poorly on workloads that change periodically. Our results showed that, compared to the VPA, predictive methods based on Holt-Winters exponential smoothing (HW) and LSTM can decrease CPU wastage by over 40% while avoiding CPU insufficiency for various CPU workloads. Furthermore, LSTM was shown to generate more stable predictions than HW, which allowed for more robust scaling decisions.
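The Holt-Winters method mentioned in this abstract can be sketched in a few lines. The implementation below is a minimal additive (triple) exponential smoothing forecaster with illustrative default parameters; the thesis's actual strategy and tuning are not reproduced here.

```python
def holt_winters_forecast(series, season_len, horizon,
                          alpha=0.5, beta=0.1, gamma=0.3):
    """Additive Holt-Winters: smooth level, trend, and seasonality, then
    extrapolate `horizon` steps ahead. Parameters are illustrative defaults."""
    m = season_len
    # Initialize level, trend, and seasonal components from the first two seasons.
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / (m * m)
    seasonals = [series[i] - level for i in range(m)]

    for i, x in enumerate(series):
        last_level = level
        level = alpha * (x - seasonals[i % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonals[i % m] = gamma * (x - level) + (1 - gamma) * seasonals[i % m]

    n = len(series)
    return [level + (h + 1) * trend + seasonals[(n + h) % m]
            for h in range(horizon)]

# A perfectly periodic "CPU usage" trace: the forecast should reproduce
# the next season closely.
cpu = [10, 30, 20, 15] * 6
pred = holt_winters_forecast(cpu, season_len=4, horizon=4)
```

In a predictive autoscaler, forecasts of this kind would be translated into CPU request recommendations for the next scheduling interval, rather than reacting only to current usage as the default VPA does.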
115

Cloud application platform - Virtualization vs Containerization : A comparison between application containers and virtual machines

Vestman, Simon January 2017 (has links)
Context. As the number of organizations using cloud application platforms to host their applications increases, so does the priority of distributing physical resources within those platforms. The goal is to host a higher quantity of applications per physical server while retaining satisfactory performance and scalability. The modern needs of customers occasionally also imply an assurance of a certain degree of privacy for their applications. Objectives. In this study, two types of instances for hosting applications in cloud application platforms, virtual machines and application containers, are comparatively analyzed. The goal of this investigation is to expose the advantages and disadvantages of each instance type in order to determine which is more appropriate for use in cloud application platforms in terms of performance, scalability, and user isolation. Methods. The comparison is done on a server running Linux Ubuntu 16.04. The virtual machine is created using DevStack, a development environment for OpenStack, while the application container is hosted by Docker. Each instance runs an Apache web server for handling HTTP requests. The comparison is done by using different benchmark tools for different key usage scenarios while simultaneously observing the resource usage of the respective instance. Results. The results are produced by investigating the user isolation and resource occupation of each instance, by examining the file system, active process handling, and resource allocation after creation. Benchmark tools are executed locally on each instance for a performance comparison of the usage of physical resources. The number of CPU operations executed within a given time is measured in order to determine processor performance, while the speed of read and write operations to main memory is measured in order to determine RAM performance. A file is also transmitted between the host server and the application in order to compare network performance between the instances, by examining the transfer speed of the file. Lastly, a set of benchmark tools is executed on the host server to measure the HTTP request handling performance and scalability of each instance. The number of requests handled per second is observed, as well as the resource usage for request handling at an increasing rate of served requests and clients. Conclusions. The virtual machine is a better choice for applications where privacy is a higher priority, due to its complete isolation and abstraction from the rest of the physical server. Virtual machines perform better in handling a higher quantity of requests per second, while application containers are faster at transferring files over the network. The container requires significantly fewer resources than the virtual machine to run and execute tasks, such as responding to HTTP requests. When it comes to scalability, the preferred type of instance depends on the priority of the key usage scenarios. Virtual machines have quicker response times for HTTP requests, but application containers occupy fewer physical resources, which makes it possible to run a higher quantity of containers than virtual machines simultaneously on the same physical server.
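A minimal example of the kind of CPU micro-benchmark described above: count operations completed in a fixed wall-clock window inside each instance type and compare the scores. The workload and window length are editorial assumptions, not the benchmark tools used in the study.

```python
import time

def cpu_ops_per_second(duration=0.2):
    """Count simple integer operations completed in a fixed wall-clock
    window. Run inside each instance type, the ratio of the two scores
    approximates relative CPU performance. The workload is illustrative."""
    deadline = time.perf_counter() + duration
    ops = 0
    x = 0
    while time.perf_counter() < deadline:
        x = (x * 31 + 7) % 1_000_003   # cheap arithmetic kernel
        ops += 1
    return ops

score = cpu_ops_per_second()
```

Running the same function in a VM and in a container and comparing the returned scores gives a crude version of the processor-performance measurement the abstract describes.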
116

Container Hosts as Virtual Machines : A performance study

Aspernäs, Andreas, Nensén, Mattias January 2016 (has links)
Virtualization is a technique used to abstract the operating system from the hardware. The primary gains of virtualization are increased server consolidation, leading to greater hardware utilization, and improved infrastructure manageability. Another technology that can be used to achieve similar goals is containerization. Containerization is an operating-system-level virtualization technique which allows applications to run in partial isolation on the same hardware. Containerized applications share the same Linux kernel but run in packaged containers which include just enough binaries and libraries for the application to function. In recent years it has become more common to see hardware virtualization beneath the container host operating systems. An upcoming technology furthering this development is VMware's vSphere Integrated Containers, which aims to integrate the management of Linux containers with the management interface of vSphere (a hardware virtualization platform by VMware). With these technologies as background, we set out to measure the impact of hardware virtualization on Linux container performance by running a suite of macro-benchmarks on a LAMP application stack. We performed the macro-benchmarks on three different operating systems (CentOS, CoreOS, and Photon OS) in order to see whether the choice of container host affects performance. Our results show a decrease in performance when comparing a hardware-virtualized container host to a container host running directly on the hardware. However, the impact on containerized application performance can vary depending on the actual application, the choice of operating system, and even the type of operation performed. It is therefore important to consider these three factors before implementing container hosts as virtual machines.
117

More tools for Canvas : Realizing a Digital Form with Dynamically Presented Questions and Alternatives

Sarwar, Reshad, Manzi, Nathan January 2019 (has links)
At KTH, students who want to start their degree project must complete a paper form called "UT-EXAR: Ansökan om examensarbete/application for degree project". The form is used to determine students' eligibility to start a degree project, as well as potential examiners for the project. After the form is filled in and signed by multiple parties, a student can initiate his or her degree project. However, due to the excessively time-consuming process of completing the form, an alternative solution was proposed: a survey in the Canvas Learning Management System (LMS) that replaces the UT-EXAR form. Although the survey reduces the time required by students to provide information and find examiners, it is by no means the most efficient solution. The survey suffers from multiple flaws, such as asking students to answer unnecessary questions and, for certain questions, presenting students with more alternatives than necessary. The survey also fails to automatically organize the data collected from the students' answers; hence, administrators must manually enter the data into a spreadsheet or other record. This thesis proposes an optimized solution to the problem by introducing a dynamic survey. This dynamic survey uses the Canvas Representational State Transfer (REST) API to access students' program-specific data. Additionally, the survey can use data provided by students when answering the survey questions to dynamically construct questions for each individual student, as well as use information from other KTH systems to dynamically construct customized alternatives for each individual student. This solution effectively prevents the survey from presenting students with questions and choices that are irrelevant to their individual case. Furthermore, the proposed solution directly inserts the collected data into a Canvas Gradebook. In order to implement and test the proposed solution, a version of the Canvas LMS was created by virtualizing each Canvas-based microservice inside a Docker container and allowing the containers to communicate over a network. Furthermore, the survey itself used the Learning Tools Interoperability (LTI) standard. When testing the solution, the survey not only successfully filtered the questions and alternative answers based on the user's data, but also showed great potential to be more efficient than a survey with statically presented data. The survey effectively automates the insertion of the data into the gradebook.
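The per-student question filtering described in this abstract can be sketched as a simple predicate filter over program-specific data (of the kind the Canvas REST API can supply). All field names and rules below are hypothetical illustrations, not the thesis's schema.

```python
def applicable_questions(student, questions):
    """Return only the survey questions relevant to this student.

    `student` is a dict of program-specific data (as might be fetched from
    the Canvas REST API); each question may declare an `applies_to`
    predicate. Field names and rules here are hypothetical.
    """
    return [q for q in questions
            if q.get("applies_to") is None or q["applies_to"](student)]

questions = [
    {"id": "examiner", "text": "Preferred examiner?"},             # shown to everyone
    {"id": "exchange", "text": "Host university?",
     "applies_to": lambda s: s["is_exchange_student"]},
    {"id": "track",    "text": "Which master's track?",
     "applies_to": lambda s: s["programme"].startswith("TIV")},
]

student = {"is_exchange_student": False, "programme": "TCOMM"}
shown = applicable_questions(student, questions)
# Only the universally applicable question remains for this student.
```

The same pattern extends to answer alternatives: each alternative carries a predicate over the student's data, so irrelevant choices are never rendered.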
118

An Empirical Study on AI Workflow Automation for Positioning

Jämtner, Hannes, Brynielsson, Stefan January 2022 (has links)
The maturing capabilities of Artificial Intelligence (AI) and Machine Learning (ML) have resulted in increased attention in research and development on adopting AI and ML in 5G and future networks. With this increased maturity, the use of AI/ML models in production is becoming more widespread, and maintaining these systems is more complex and more likely to incur technical debt than standard software, since they inherit all the complexities of traditional software in addition to ML-specific ones. To handle these complexities, the field of ML Operations (MLOps) has emerged. The goal of MLOps is to extend DevOps to AI/ML and thereby speed up development and ease maintenance of AI/ML-based software, for example by supporting automatic deployment, monitoring, and continuous re-training of models. This thesis investigates how to construct an MLOps workflow by selecting a number of tools and using them to implement a workflow. Additionally, different approaches for triggering re-training are implemented and evaluated, resulting in a comparison of the triggers with regard to execution time, memory and CPU consumption, and the average performance of the machine learning model.
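One possible re-training trigger of the kind compared in this study is a drift trigger that fires when the deployed model's rolling prediction error exceeds a threshold. The sketch below is an editorial illustration; the window size and threshold are assumptions, not the thesis's configuration.

```python
from collections import deque

class ErrorDriftTrigger:
    """Fire a re-training request when the rolling mean absolute error of
    the deployed model exceeds a threshold. Window size and threshold are
    illustrative; the study compares several trigger strategies."""

    def __init__(self, window=50, threshold=5.0):
        self.errors = deque(maxlen=window)   # keep only the most recent errors
        self.threshold = threshold

    def observe(self, predicted: float, actual: float) -> bool:
        """Record one prediction/outcome pair; return True if re-training
        should be triggered now."""
        self.errors.append(abs(predicted - actual))
        mean_err = sum(self.errors) / len(self.errors)
        return mean_err > self.threshold

trigger = ErrorDriftTrigger(window=10, threshold=2.0)
fired = [trigger.observe(p, p + drift)        # prediction error grows over time
         for p, drift in zip(range(20), [0.5] * 10 + [4.0] * 10)]
```

In an MLOps pipeline, the `True` signal would kick off an automated re-training and redeployment job; alternative triggers (fixed schedule, data-distribution drift) trade off compute cost against model freshness.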
119

Comparison of cache services: WSUS and LanCache

Shammaa, Mohammad Hamdi, Aldrea, Sumaia January 2023 (has links)
In the field of network technology and data communication, there is a current belief in network caching technology, which can store data so that it can later be retrieved more quickly. Over the years, this technology has proven its ability to deliver the desired data to its clients efficiently. Several caching services use this technology for Windows updates; among them are Windows Server Update Services (WSUS) and LanCache. On behalf of the company TNS Gaming AB, these services are compared to each other in this thesis. Network caching is an interesting area of research for future communication systems and networks due to its benefits. Likewise, the task of comparing the caching services WSUS and LanCache is interesting, as it provides insight into which service is better suited for the company or other stakeholders. Both the research area and the task are important and intriguing when users seek to streamline the use of their internet connection and conserve network resources; the technology can thus reduce download times. This work answers questions about the network performance, resource usage, and administration time of each caching service, as well as which caching service is better suited to the company's needs. The work involves conducting an experiment, comprising three main measurements, followed by a single-case study. The purpose of the work is to compare WSUS and LanCache using the measurements from the experiment. The outcome of the work then forms a basis for a future choice of solution. The results consist of two parts. The first shows that both caching services contribute to shorter download times. The second is that LanCache outperforms WSUS in terms of network performance and resource usage, and also requires less administration time. Given these results, the conclusion is drawn that LanCache is the most suitable caching service in this case.
120

Commitment and militancy in the Black Docker (1956), God's bits of wood (1960) and Xala (1973) by Sembène Ousmane

Babatunde, Samuel Olufemi 04 1900 (has links)
Text in French / A member of the union of black workers in the port of Marseille, France, and an eyewitness to the misery of black workers in the European environment, Sembène Ousmane wrote his first book, The Black Docker, in 1956, drawing on his personal experiences. In this novel, he describes the sufferings of the working class and the struggle between colonisers and colonised. In 1960, he used the 1937 strike of the Senegalese railway workers as the pretext for a book entitled God's Bits of Wood. In this story, two forces clash: on one hand, the colonised, who struggle against the colonial system and want, at all costs, to improve their living conditions; on the other, the colonisers, who uphold their colonialist ideals and refuse change. The author tells the epic story of the strikers in Senegal and their relentless struggle against the colonisers to change their living conditions for the better. In 1973, as an eyewitness to the daily realities of his native country, Senegal, after it gained national sovereignty, Sembène Ousmane wrote and published a book entitled Xala. In this book, he describes the evils of neo-colonialism and criticises the new African middle class born after independence. Reading these novels, one notes that Sembène Ousmane, a defender of freedom, denounces the injustices done to black people both in the colonial era and in the postcolonial period. This is why, from one book to the next, he tirelessly continues his struggle against colonialism and neo-colonialism, evoking the sufferings and tragedies endured by Africans. He returns constantly, in his imaginative creations, to a theme, or better still a dialectic: commitment and militancy. What does he mean by « commitment » and « militancy »? How do these two concepts manifest themselves in the works of this Senegalese writer? What strategy does he propose to the oppressed in their struggle against the oppressors? What means does he place at the disposal of the disinherited struggling to break the yoke of oppression and exploitation in order to achieve freedom and equality? / Linguistics and Modern Languages / D. Litt. et Phil. (French)
