131 |
Data Driven Video Source Camera Identification
Hopkins, Nicholas Christian 15 May 2023 (has links)
No description available.
|
132 |
Влияние активности компании на рынке слияний и поглощений на глубину ее цифровизации : магистерская диссертация / The impact of a company's M&A activity on the depth of its digitalization
Попов, В. В., Popov, V. V. January 2022 (has links)
Over the past decade, a significant share of M&A deals has been driven by rapid advances in advanced technology. Advanced technologies - digital technologies among them - have become the driving force behind inorganic growth opportunities for companies. However, how M&A deals in general (not only those in high-tech industries, or deals targeting firms with digital technology) affect the digitalization of companies and their level of digital transformation has so far remained unexplored. The purpose of this paper is to assess the impact of a company's activity in the M&A market on the depth of its digitalization. A sample of 1,242 Russian industrial companies was used as the object of research; these companies were chosen because each of them possesses some digital competencies (technologies), which form the basis of this study. The subject of the study is the mechanism of the relationship between a company's activity in the M&A market (captured as the fact of participation in an M&A transaction) and the depth of that company's digital transformation. The novelty of the study is that it is the first of its kind conducted on large Russian companies operating in 20 industries, using an econometric model that simultaneously considers many factors affecting the depth of digital transformation. The practical significance of the study is that its results and recommendations may be useful to company owners planning M&A deals in the era of digitalization, as well as to government authorities that actively support and develop digitalization in Russia.
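As a rough illustration of the kind of specification the abstract describes, the sketch below regresses a synthetic "digitalization depth" measure on an M&A participation dummy plus one control. The data, variable names, the plain OLS form, and the statsmodels dependency are illustrative assumptions, not the thesis's actual model or dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1242                                          # sample size mentioned in the abstract
mna = rng.integers(0, 2, n)                       # 1 if the firm took part in an M&A deal (synthetic)
size = rng.normal(size=n)                         # stand-in control, e.g. log assets (synthetic)
depth = 0.3 * mna + 0.2 * size + rng.normal(scale=1.0, size=n)  # synthetic digitalization depth

# OLS with a constant: depth ~ 1 + M&A dummy + control
X = sm.add_constant(np.column_stack([mna, size]))
print(sm.OLS(depth, X).fit().summary())
```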
|
133 |
Candidate generation for relocation of black box applications in mobile edge computing environments / Kandidat generering för omlokalisering av applikationer i mobile edge computing-miljöer
Walden, Love January 2022 (has links)
Applications today are generally deployed in public cloud environments such as Azure, AWS, etc. Mobile edge computing (MEC) enables these applications to be relocated to edge nodes located in close proximity to the end user, allowing the application to serve the user at lower latency. However, these edge nodes have limited capacity, and hence a problem arises of when to relocate an application to an edge node. This thesis project tackles the problem of detecting when an application's quality of experience is degraded, and how to use this information to generate candidates for relocation to edge nodes. The assumption for this thesis project is that there is no insight into the application itself, meaning the applications are treated as black boxes. To detect quality of experience degradation we chose to capture network packets and inspect protocol-level information. We chose WebRTC and HTTP as communication protocols because they were the most common protocols used by the target environment. We developed two application prototypes: the first was a rudimentary server based on HTTP and the second was a video streaming application based on WebRTC. The prototypes were used to study the possibility of breaking down latency components and obtaining quality of service parameters. We then developed a recommendation engine that uses this information to generate relocation candidates. The recommendation engine was evaluated by placing the WebRTC prototype under scenarios that affect quality of experience and measuring the time taken to generate a relocation candidate for the application. The results of this project show that it is possible in some cases to break down latency components for HTTP-based applications. However, for WebRTC-based applications our approach was not sufficient to break down latency components; instead, we had to rely on quality of service parameters to generate relocation candidates. Based on the outcomes of the project, we draw three general conclusions about detecting quality of experience degradation for black-box applications. Firstly, the underlying transport and communication protocol has an impact on the available approaches and obtainable information. Secondly, the implementation of the communication protocol also has an impact on the obtainable information. Lastly, the underlying infrastructure can matter for the approaches used in this project.
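To make the idea of generating relocation candidates from protocol-level quality-of-service measurements concrete, a minimal sketch follows. The thresholds, the QosSample fields, and the majority-vote rule are illustrative assumptions, not the recommendation engine actually built in the project.

```python
from dataclasses import dataclass

@dataclass
class QosSample:
    rtt_ms: float        # round-trip time observed for the media/HTTP flow
    loss_rate: float     # fraction of packets lost
    jitter_ms: float     # inter-arrival jitter

def is_relocation_candidate(samples, rtt_ms=150.0, loss=0.02, jitter_ms=30.0,
                            min_bad_fraction=0.8):
    """Flag a black-box session for edge relocation when most recent QoS
    samples breach any threshold (thresholds here are illustrative only)."""
    bad = [s for s in samples
           if s.rtt_ms > rtt_ms or s.loss_rate > loss or s.jitter_ms > jitter_ms]
    return len(samples) > 0 and len(bad) / len(samples) >= min_bad_fraction

# Example: a window of mostly degraded samples triggers a candidate.
window = [QosSample(210.0, 0.05, 45.0)] * 9 + [QosSample(40.0, 0.0, 5.0)]
print(is_relocation_candidate(window))  # True
```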
|
134 |
Characterization of Structure-Borne Tire Noise Using Virtual Sensing
Nouri, Arash 27 January 2021 (has links)
Various improvements made to the vehicle (reduced engine noise, reduced aerodynamics-related NVH) have resulted in tire-road noise becoming the dominant source of vehicle interior noise. Generally, vehicle interior noise has two main sources: 1) low-frequency excitation below 800 Hz travelling from the road surface through a structure-borne path, and 2) high-frequency (above 800 Hz) air-borne noise caused by the air-pumping of the tread pattern. The structure-borne waves around the circumference of the tire are generated by excitation at the contact patch due to the road surface texture and characteristics. These vibrations are transferred from the sidewalls of the tire to the rim and then transmitted through the spindle-wheel interface, resulting in high-frequency vibration of vehicle body panels and windows. The focus of this study is to develop several statistics-based models for analyzing the road surface and to use them to predict the structure-borne component of tire-road noise. To do this, a new methodology for sensing road characteristics, such as asperities and road surface condition, was developed using virtual sensing and intelligent tire technology. In addition, the spindle forces were used as an indicator of the structure-borne noise of the vehicle. Several data mining and multivariate-analysis-based methods were developed to extract features and to build an empirical model that predicts the power of structure-borne noise under different operational and road conditions. Finally, multiple data-driven models were developed to classify road types and conditions and to use them for predicting the noise frequency spectrum. / Doctor of Philosophy / Multiple data-driven models were developed in this study that use the vibration of the tire contact patch as an input to sense characteristics of the road, such as asperity, surface type, and surface condition, and use them to predict the structure-borne noise power. Also, instead of measuring the noise with microphones, forces at the wheel spindle were measured as a metric for the noise power. In other words, a statistical model was developed such that, by sensing the road and using that data along with other inputs, one can predict forces at the wheel spindle.
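A minimal sketch of the empirical-modelling step described above, using a random-forest regressor on synthetic features as a stand-in. The feature set, the target (a proxy for spindle-force power), the model choice, and the scikit-learn dependency are assumptions for illustration, not the thesis's actual models or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic per-segment road/contact-patch descriptors
# (e.g. RMS acceleration, spectral centroid, vehicle speed).
X = rng.normal(size=(200, 3))
# Synthetic stand-in for structure-borne noise power at the spindle.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```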
|
135 |
TASK-AWARE VIDEO COMPRESSION AND QUALITY ESTIMATION IN PRACTICAL VIDEO ANALYTICS SYSTEMS
Praneet Singh (20797433) 28 February 2025 (has links)
<p dir="ltr">Practical video analytics systems that perform computer vision tasks are widely used in critical real-world scenarios such as autonomous driving and public safety. These end-to-end systems sequentially perform tasks like object detection, segmentation, and recognition such that the performance of each analytics task depends on how well the previous tasks are performed. Typically, these systems are deployed in resources and bandwidth-constrained environments, so video compression algorithms like HEVC are necessary to minimize transmission bandwidth at the expense of input quality. Furthermore, to optimize resource utilization of these systems, the analytics tasks should be executed solely on inputs that may provide valuable insights on task performance. Hence, it is essential to understand the impact of compression and input data quality on the overall performance of end-to-end video analytics systems, using meaningfully curated datasets and interpretable evaluation procedures. This information is crucial for the overall improvement of system performance. Thus, in this thesis we focus on:</p><ol><li>Understanding the effects of compression on the performance of video analytics systems that perform tasks such as pedestrian detection, face detection, and face recognition. With this, we develop a task-aware video encoding strategy for HEVC that improves system performance under compression.</li><li>Designing methodologies to perform a meaningful and interpretable evaluation of an end-to-end system that sequentially performs face detection, alignment, and recognition. This involves balancing datasets, creating consistent ground truths, and capturing the performance interdependence between the various tasks of the system.</li><li>Estimating how image quality is linked to task performance in end-to-end face analytics systems. Here, we design novel task-aware image Quality Estimators (QEs) that determine the suitability of images for face detection. We also propose systematic evaluation protocols to showcase the efficacy of our novel face detection QEs and existing face recognition QEs. </li></ol><p dir="ltr"><br></p>
|
136 |
Spatial information and end-to-end learning for visual recognition / Informations spatiales et apprentissage bout-en-bout pour la reconnaissance visuelle
Jiu, Mingyuan 03 April 2014 (has links)
In this thesis, we present our research on visual recognition and machine learning. Two types of visual recognition problems are investigated: action recognition and human body part segmentation. Our objective is to incorporate spatial information, such as label configuration in feature space or the spatial layout of labels, into an end-to-end framework to improve recognition performance. For human action recognition, we apply the bag-of-words model and reformulate it as a neural network for end-to-end learning. We propose two algorithms that make use of label configuration in feature space to optimize the codebook. One is based on classical error backpropagation: the codewords are adjusted using a gradient descent algorithm. The other is based on cluster reassignment, where the cluster labels are reassigned for all the feature vectors in a Voronoi diagram. As a result, the codebook is learned in a supervised way. We demonstrate the effectiveness of the proposed algorithms on the standard KTH human action dataset.
For human body part segmentation, we treat segmentation as a classification problem, where a classifier acts on each pixel. Two machine learning frameworks are adopted: randomized decision forests and convolutional neural networks. We integrate a priori information on the spatial part layout, in terms of pairs of labels or pairs of pixels, into both frameworks during training to make the classifier more discriminative, while pixelwise classification is still performed in the testing stage. Three algorithms are proposed: (i) spatial part layout is integrated into the randomized decision forest training procedure; (ii) spatial pre-training is proposed for feature learning in the ConvNets; (iii) spatial learning is proposed in the logistic regression (LR) or multilayer perceptron (MLP) used for classification.
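A minimal sketch of the soft-assignment bag-of-words encoding that makes a codebook differentiable, which is the property the backpropagation-based codebook learning above relies on. The encoding form and the beta parameter are illustrative assumptions rather than the exact formulation in the thesis.

```python
import numpy as np

def soft_bow(descriptors: np.ndarray, codebook: np.ndarray, beta: float = 10.0) -> np.ndarray:
    """Soft-assignment bag-of-words histogram.
    descriptors: (n, d) local features; codebook: (k, d) codewords.
    Because the output is differentiable in the codewords, a classification
    loss can be backpropagated to update the codebook (supervised learning)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (n, k) squared distances
    d2 -= d2.min(axis=1, keepdims=True)                                    # stabilize the exponentials
    w = np.exp(-beta * d2)
    w /= w.sum(axis=1, keepdims=True)                                      # soft assignment per descriptor
    return w.mean(axis=0)                                                  # (k,) image-level histogram

# Toy usage with random descriptors and a random 128-word codebook.
rng = np.random.default_rng(0)
hist = soft_bow(rng.normal(size=(500, 64)), rng.normal(size=(128, 64)))
print(hist.shape, round(float(hist.sum()), 6))
```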
|
137 |
在預算限制下分配隨機數位網路最佳頻寬之研究 / Analysis of bandwidth allocation on End-to-End QoS networks under budget control
王嘉宏, Wang, Chia Hung Unknown Date (has links)
This thesis considers the problem of bandwidth allocation on communication networks with multiple traffic classes, where bandwidth is determined under the budget constraint.
Due to the limited budget, there exists a risk that the network service providers can not assert a 100% guaranteed availability for the stochastic traffic demand at all times.
We derive the blocking probabilities of connections as a function of bandwidth, traffic demand and the available number of virtual end-to-end paths for all service classes.
Under general assumptions, we prove that the blocking probability is directionally (i) decreasing in bandwidth, (ii) convex in bandwidth for specific regions, (iii) increasing in traffic demand, and (iv) decreasing in the number of virtual paths. We also demonstrate the monotone and convex relations among those model parameters and the expected path occupancy. As the number of virtual paths is huge, we derive a heavy-traffic queueing model, and provide a diffusion approximation and its asymptotic analysis for the blocking probability, where the traffic intensity increases to one from below.
Taking the blocking probability into account, two revenue management schemes are introduced to allocate bandwidth under budget control. The revenue/profit functions are studied in this thesis through the monotonicity and convexity of the blocking probability and expected path occupancy. Optimality conditions are derived to obtain an optimal bandwidth allocation for two revenue management schemes, and a solution algorithm is developed to allocate limited budget among competing traffic classes. In addition, we present three elasticities of the blocking probability to study the effect of changing model parameters on the average revenue in analysis of economic models. The sensitivity analysis and economic elasticity notions are proposed to investigate the marginal revenue for a given traffic class by changing bandwidth, traffic demand and the number of virtual paths, respectively.
The main contribution of the present work is to prove the relationship between the blocking probability and allocated bandwidth under the budget constraint. Those results are also verified with numerical examples interpreting the blocking probability, utilization level, average revenue, etc. The relationship between blocking probability and bandwidth allocation can be applied in the design and provision of broadband communication networks by optimally choosing model parameters under budget control for sharing bandwidth in terms of blocking/congestion costs.
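As a rough numerical illustration of the monotonicity claims above, the classical Erlang-B loss formula (used here only as a stand-in for the blocking-probability expression derived in the thesis) shows blocking decreasing in the number of bandwidth units and increasing in offered traffic.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang-B),
    computed with the standard numerically stable recursion."""
    b = 1.0
    for c in range(1, servers + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

# Blocking falls as capacity (servers) grows and rises with traffic demand.
for demand in (8.0, 12.0):
    print(demand, [round(erlang_b(c, demand), 4) for c in (8, 12, 16, 20)])
```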
|
138 |
Application of Information Theory and Learning to Network and Biological Tomography
Narasimha, Rajesh 08 November 2007 (has links)
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because of the inaccessibility of the entire network. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states using path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested/faulty network elements and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model, and infer congestion using the belief propagation algorithm.
In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference of large amounts of data is to segment relevant 3D features in cellular tomograms. Procedures for segmentation must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and tomograms of Liposomal Doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures.
A machine learning approach is formulated that exploits texture features, and joint image block-wise classification and segmentation is performed by histogram matching using a nearest neighbor classifier and chi-squared statistic as a distance measure.
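A minimal sketch of the block-wise classification step described above: a 1-nearest-neighbour decision over texture histograms using the chi-squared statistic as the distance. The histogram inputs and function names are illustrative assumptions.

```python
import numpy as np

def chi2(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-10) -> float:
    """Chi-squared distance between two (non-negative) histograms."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def nn_classify(query_hist, train_hists, train_labels):
    """1-nearest-neighbour classification of an image block's texture
    histogram against labelled training histograms."""
    d = np.array([chi2(query_hist, h) for h in train_hists])
    return train_labels[int(np.argmin(d))]

# Toy usage: two labelled reference histograms, one query block.
train = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.3, 0.6])]
labels = ["mitochondrion", "background"]
print(nn_classify(np.array([0.65, 0.25, 0.1]), train, labels))
```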
|
139 |
Towards an end-to-end multiband OFDM system analysis
Saleem, Rashid January 2012 (has links)
Ultra Wideband (UWB) communication has recently drawn considerable attention from academia and industry. This is mainly owing to the ultra-high speeds and cognitive features it could offer. The employability of UWB in numerous areas, including but not limited to Wireless Personal Area Networks (WPANs), Body Area Networks (BANs), radar and medical imaging, has opened several avenues of research and development. However, there is still disagreement on the standardization of UWB. Two contesting radios for UWB are Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) and DS-UWB (Direct Sequence Ultra Wideband). As nearly all of the reported research on UWB has been about a very narrow/specific area of the communication system, this thesis looks at the end-to-end performance of an MB-OFDM approach. The overall aim of this project has been to first focus individually on three aspects of an MB-OFDM system, i.e. interference, antenna and propagation aspects, and then present a holistic, end-to-end system analysis. In the first phase of the project the author investigated the performance of an MB-OFDM system under the effect of his proposed generic or technology non-specific interference. Avoiding the conventional Gaussian approximation, the author has employed an advanced stochastic method. A total of two approaches have been presented in this phase of the project. The first approach is an indirect one which involves the Moment Generating Functions (MGFs) of the Signal-to-Interference-plus-Noise-Ratio (SINR) and the Probability Density Function (pdf) of the SINR to calculate the Average Probabilities of Error of an MB-OFDM system under the influence of the proposed generic interference. This approach assumed a specific two-dimensional Poisson spatial/geometric placement of interferers around the victim MB-OFDM receiver. The second approach is a direct approach and extends the first approach by employing a wider class of generic interference. In the second phase of the work the author designed, simulated, prototyped and tested novel compact monopole planar antennas for UWB application. In this phase of the research, compact antennas for the UWB application are presented. These designs employ low-loss Rogers duroid substrates and are fed by Coplanar Waveguides. The antennas have a proposed transition region between the feed line and the main radiating element. This transition region is formed by a special step-generating function-set called the "Inverse Parabolic Step Sequence" or IPSS. These IPSS-based antennas are simulated, prototyped and then tested in the anechoic chamber. An empirical approach, aimed at further miniaturizing IPSS-based antennas, was also derived in this phase of the project. The empirical approach has been applied to derive the design of a further miniaturized antenna. Moreover, an electrical miniaturization limit has been concluded for the IPSS-based antennas. The third phase of the project has investigated the effect of the indoor furnishing on the distribution of the elevation Angle-of-Arrival (AOA) of the rays at the receiver. Previously, constant distributions for the AOA of the rays in the elevation direction had been reported. This phase of the research has proposed that the AOA distribution is not fixed. It is established by the author that the indoor elevation AOA distributions depend on the discrete levels of furnishing. A joint time-angle-furnishing channel model is presented in this research phase.
In addition, this phase of the thesis proposes two vectorial, any-direction AOA distributions for UWB indoor environments. Finally, the last phase of this thesis is presented. As stated earlier, the overall aim of the project has been to first look at three individual aspects of an MB-OFDM system and then at the holistic system. Therefore, this final phase of the research presents an end-to-end MB-OFDM system analysis. The interference analysis of the first phase of the project is revisited to re-calculate the probability of bit error with realistic/measured path loss exponents which have been reported in the existing literature. In this method, Gaussian Quadrature Rule-based approximations are computed for the average probability of bit error. Last but not least, an end-to-end or comprehensive system equation/impulse response is presented. The proposed system equation covers more aspects of an indoor UWB system than reported in the existing literature.
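As a rough illustration of the Gaussian-quadrature idea mentioned above, the sketch below averages the BPSK bit-error probability over Rayleigh fading with Gauss-Laguerre quadrature and checks it against the textbook closed form. This is a stand-in example, not the interference model or path-loss setting analysed in the thesis.

```python
import numpy as np
from math import erfc, sqrt

def q_func(x: float) -> float:
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def avg_ber_bpsk_rayleigh(mean_snr: float, order: int = 30) -> float:
    """Average BPSK bit-error probability over Rayleigh fading, i.e.
    E[Q(sqrt(2*gamma))] with exponentially distributed SNR gamma,
    evaluated with Gauss-Laguerre quadrature."""
    nodes, weights = np.polynomial.laguerre.laggauss(order)
    return float(sum(w * q_func(sqrt(2.0 * mean_snr * x)) for x, w in zip(nodes, weights)))

mean_snr = 10.0
closed_form = 0.5 * (1.0 - sqrt(mean_snr / (1.0 + mean_snr)))
print(avg_ber_bpsk_rayleigh(mean_snr), closed_form)   # the two values should agree closely
```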
|
140 |
Optimizing the Supply Chain Performance at Ericsson AB : A Study of Lead Time Reduction and Service Level Improvement / Optimering av försörjningskedjans prestanda hos Ericsson AB : En studie om ledtidsreducering och förbättrad servicenivå
Stenberg, Marcus, Larsson, Jesper January 2016 (has links)
Ericsson has recently experienced difficulties to meet the customer demand, which has led to lost market shares. This is mainly due to the long and unpredictable lead times within their supply chains. Therefore, Ericsson seeks to increase their ability to meet the customer demand by reducing the customer order lead time. A shorter lead time would imply a greater responsiveness and improved service level towards the customers. A directive from the company was to base the study on the supply chain for the customer Algeria Telecom Mobile. The purpose of the study is to give recommendations for improvements that reduce the total lead time in a supply chain perspective in order to improve the customer service level. To be able to fulfill the purpose, four objectives were distinguished and supported with existing frameworks for analyzing supply chains. The first step was to create a current state map, which was achieved by conducting 24 interviews with people working within the supply chain. The second step was to identify potentials for lead time reduction. This was done by categorizing the supply chain parts and the problems that were gathered during the current state mapping into meaningful groups, and thereafter prioritize the categories with the greatest potential. The third step was to generate alternative solutions by conducting a second literature review based on the potentials that was identified during the prior step. The general solutions were later modified in order to fit the current supply chain. It resulted in eight Ericsson specific solutions. The fourth step was to evaluate these solutions in combination, which led to a recommended combination of solutions that provided the greatest lead time reduction. Also the requirements for implementing these solutions were presented in this step. The recommendation for Ericsson is to rearrange their current supply chain for the studied customer and use two different supply chains; the Regional supply chain and the Alternative supply chain. The two arrangements will both be based on the implementation of a supply hub, which implies a movement of the customer order decoupling point closer to the customer. The Regional supply chain will cover the main flow and be used when the customer orders products from a product portfolio that has been agreed within the region. The Alternative supply chain will act as a complement and cover the flow of products outside the regional product portfolio. The estimated customer order lead time for the Regional supply chain is 17 days, which is a reduction of 80 % in the normal case for the studied supply chain. The lead time for the Alternative supply chain is more difficult to estimate precisely, but it will be reduced in comparison with the current situation. Moreover, the service level towards the customer will be increased for both the Regional and the Alternative supply chain. 
To summarize the recommendations that are forwarded to Ericsson, they are listed below:
- Implement a regional supply hub
- Agree on a regional product portfolio
- Implement time slots for inbound flows
- Use BPO as a payment method instead of Letter of Credit
- Use a CIP, DAP or DAT Incoterm
- Implement a product configurator and let the customer place orders on commercial descriptions or a solution id
- Integrate processes and activities throughout the supply chain and establish a greater information exchange
|