
Synthesis of the enterprise information infrastructure based on the architectural approach : master's thesis

Бабаян, В. А., Babayan, V. A. January 2022 (has links)
The master's thesis examines modern problems of information infrastructure management. The activities of the State Budgetary Institution "Scientific and Technical Center for Innovation and Technology" were studied, a model of its information infrastructure was drawn up, a project was developed to improve the efficiency of the enterprise, and a system-dynamics approach was applied to assess the implementation of a data center.
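The system-dynamics assessment mentioned in the abstract can be illustrated with a minimal stock-and-flow sketch. All names and parameters below are hypothetical, not taken from the thesis: a single stock (the share of workload migrated into the new data center) is filled by a rate that slows as migration completes, integrated with Euler steps.

```python
# Minimal system-dynamics sketch (hypothetical parameters, illustrative only):
# one stock, "migrated workload share", approaching 1.0 over time.
def simulate_migration(rate=0.3, dt=0.25, months=24):
    share = 0.0
    history = [share]
    for _ in range(int(months / dt)):
        inflow = rate * (1.0 - share)  # migration slows as it nears completion
        share += inflow * dt           # Euler integration step
        history.append(share)
    return history

final_share = simulate_migration()[-1]  # close to 1.0 after 24 simulated months
```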

Prototyping and manufacturing of air-controlled damper unit to improve cooling system operating efficiency for data centers

Nilsson, Peter January 2023 (has links)
More and more people use the internet for data processing, transfer, and storage, and with that comes a higher demand for computational power from data servers. Unsurprisingly, the data center industry is growing rapidly and is increasingly important to people's daily lives. Data centers account for about 2% of the world's total electricity consumption, and this share is expected to grow, so running data centers at optimal performance while operating as efficiently and sustainably as possible is of utmost importance. Data centers today are cooled by a CRAH unit consisting of cooling coils and a fan: the fan blows air over the cold coils to prevent damage to server components, and it also creates a high differential pressure over the servers to ensure the air flows in the right direction. The air is distributed uniformly over the servers, yet servers have different workloads and therefore generate different amounts of heat, so dynamic air-handling measures make it possible to match the cooling to individual servers. This thesis investigates manual workload redistribution between servers and the design of a server-mounted air-handling damper unit, and how the two together can reduce total power draw. Tests were run in a wind tunnel with room for six servers, with prototypes mounted on three of them. The main idea tested is that instead of running an even load on six servers, the same total load is redistributed onto only three servers; the servers now running idle have a damper unit blocking their rear side, so the CRAH fan uses less power to create the same differential pressure, and the total power draw of the servers is reduced as well. A test of the conventional way of cooling servers had a total power draw of 1362 watts, while the test with redistribution, dampers closed at the rear, and the idle servers turned off drew 951 watts, a 30% decrease.
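The reported savings can be checked directly from the two power-draw figures in the abstract:

```python
# Quick check of the reported savings (both wattages are from the abstract).
baseline_w = 1362       # six evenly stressed servers, conventional cooling
redistributed_w = 951   # load packed onto three servers, dampers closed on idle ones

savings = (baseline_w - redistributed_w) / baseline_w
print(f"{savings:.0%}")  # → 30%
```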

Pathways to servers of the future

Lehner, Wolfgang, Nagel, Wolfgang, Fettweis, Gerhard 11 January 2023 (has links)
The Special Session on "Pathways to Servers of the Future" outlines a new research program at Technische Universität Dresden that addresses the increasing energy demand of global internet usage and its resulting ecological impact. The program pursues a novel holistic approach that considers both hardware and software adaptivity to significantly increase energy efficiency while suitably addressing application demands. The session presents the research challenges and an industry perspective.

Design and Performance Evaluation of Resource Allocation Mechanisms in Optical Data Center Networks

Vikrant, Nikam January 2016 (has links)
A datacenter hosts hundreds of thousands of servers, and a huge amount of bandwidth is required to accommodate communication among them. Several packet-switched datacenter architectures have been proposed to cater to the high bandwidth requirement using multilayer network topologies, but at the cost of increased network complexity and high power consumption. In recent years, the focus has shifted from packet switching to optical circuit switching for building data center networks, as it can support on-demand connectivity and high bit rates with low power consumption. Meanwhile, with the advent of Software Defined Networking (SDN) and Network Function Virtualization (NFV), the role of datacenters has become even more crucial, increasing the need for dynamicity and flexibility within a datacenter and adding complexity to datacenter networking. With NFV, service chaining can be achieved in a datacenter, where virtualized network functions (VNFs) running on commodity servers are instantiated and terminated dynamically. A datacenter also needs to cater to large capacity requirements, as service chaining involves steering large aggregated flows. The use of optical circuit switching in data center networks is therefore quite promising for meeting such dynamic and high-capacity traffic requirements. In this thesis, a novel and modular optical data center network (DCN) architecture that uses multi-directional wavelength switches (MD-WSS) is introduced. The VNF service chaining use case is considered for evaluating this DCN, and the end-to-end service chaining problem is formulated as three interconnected sub-problems: multiplexing of VNF service chains, VNF placement in the datacenter, and routing and wavelength assignment. This thesis presents an integer linear programming (ILP) formulation and heuristics for solving these problems and evaluates them numerically.
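The routing-and-wavelength-assignment sub-problem can be illustrated with a minimal first-fit heuristic. This is an illustrative sketch under the standard wavelength-continuity constraint, not the thesis's actual heuristic or ILP: a lightpath must use the same free wavelength on every link of its route.

```python
# First-fit wavelength assignment sketch (illustrative, not the thesis's method).
# Each link offers num_wavelengths wavelengths; a demand is a list of links.
def first_fit_rwa(routes, num_wavelengths=4):
    used = {}        # (link, wavelength) -> occupied
    assignment = []  # chosen wavelength per demand, None if blocked
    for route in routes:
        for w in range(num_wavelengths):
            # wavelength-continuity: w must be free on every link of the route
            if all((link, w) not in used for link in route):
                for link in route:
                    used[(link, w)] = True
                assignment.append(w)
                break
        else:
            assignment.append(None)  # blocked: no common free wavelength
    return assignment

demands = [[("a", "b"), ("b", "c")], [("a", "b")], [("b", "c")]]
print(first_fit_rwa(demands, num_wavelengths=2))  # → [0, 1, 1]
```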

Theoretical Analysis and Design for the Series-Resonator Buck Converter

Tu, Cong 03 February 2023 (has links)
High step-down dc/dc converters are widely adopted in a variety of areas such as industrial, automotive, and telecommunication. The 48 V power delivery system becomes increasingly popular for powering high-current and low-voltage chips. The Series-Capacitor Buck (SCB) converter doubles the duty ratio and equalizes the current between the two phases. Hard switching has hindered efforts to reduce volume via increased switching frequency, although a monolithically integrated SCB converter has boosted current density. A Series-Resonator Buck (SRB) converter is realized by adding a resonant tank in series with the series capacitor Cs. All switches turn on at zero-voltage (ZVOn), and the low-side switches turn off at zero-current (ZCOff). The design of the SRB converter includes characterizing the design variables' impacts on the converter performances and designing low-loss resonant components as the series resonator. The Series-Resonator Buck converter belongs to the class of quasi-resonant converters. Its resonant frequency is higher than the switching frequency, and its waveforms are quasi-sinusoidal. This work develops a steady-state model of the SRB converter to calculate voltage gain, component peak voltages, and resonant inductor peak current. Each switching cycle is modeled based on the concept of generalized state-space averaging. The soft-switching condition of the high-side switches is derived. The ZVS condition depends on the normalized control variable and the load condition. The gain equation models the load-dependent characteristic and the peak gain boundary. The theoretical peak voltage gain of the SRB converter is smaller than the maximum gain of the SCB converter. A smaller normalized load condition results in a larger peak voltage gain of the SRB converter. The large-signal model of the SRB converter characterizes the low-frequency behavior of the low-pass filters with the series capacitor and the high-frequency behavior of the resonant elements. 
A design recommendation of t_off · f_r < 0.5 is suggested to avoid oscillation between the series capacitor Cs and the output inductors Lo; in other words, the off-duration of the low-side switches is kept below half of 1/f_r, which reduces the negative damping effect of the parallel resonant tank on the vCs response. The transfer functions of the SRB converter are presented and compared with those of the SCB converter; the series resonator adds an extra damping effect to the response of the output capacitor voltage. The analytical relationships among resonant-tank energy, voltage gain, and component stresses were used to guide the design of the converter's parameters. A normalized load condition of √2 minimizes the stresses of the series resonator by balancing the peak energy in the resonant elements Lr and Cr. The f_s variation with voltage gain M is less than 10%. The non-resonant components C_s, L_oa, and L_ob are designed according to the specified switching ripples. AC winding loss complicates the winding design of a resonant inductor; this work replaces the rectangular winding window with a rhombic window to reduce the eddy-current loss caused by the fringing effect, adding the window ratio k_y as a design variable. The impacts of the design variables on the inductance, core loss, and winding loss are discussed. The air-gap length l_g is designed to control the inductance. A larger k_y results in a shorter inductor length l_c and a smaller winding loss; the disadvantages are a lower energy density and a larger core loss due to the smaller cross-sectional area. In the design example presented in the thesis, the rhombic shape doubles the gap-to-winding distance and halves the y-component of the magnetic field.
The total inductor loss is reduced by 56% compared to a conventional design with a rectangular winding window, while keeping the same inductance and inductor volume. This dissertation implements a resonator in place of the series capacitor of an SCB converter. The resultant SRB converter shows a 30% reduction in loss and a 50% increase in power density. The root cause of the divergence issue is identified by modeling the negative damping effect caused by the resonant elements. The presented transient design guideline clears the barriers to closed-loop regulation and commercialization of the SRB converter. This work also reshapes the winding window from a rectangle to a rhombus, a low-cost change that reduces the magnetic loss by half. The theoretical analysis and design procedures are demonstrated in a 200 W prototype with a 7% peak-efficiency increase compared to a commonly used 30 W commercial SCB product. / Doctor of Philosophy / High step-down dc/dc converters are widely adopted in areas such as industry, automotive, and telecommunications. The 48 V power delivery system is becoming increasingly popular for powering high-current, low-voltage chips. The Series-Capacitor Buck (SCB) converter doubles the duty ratio and equalizes the current between the two phases. Hard switching has hindered efforts to reduce volume via increased switching frequency, although a monolithically integrated SCB converter has boosted current density. A Series-Resonator Buck (SRB) converter is realized by adding a resonant tank in series with the series capacitor Cs. All switches turn on at zero voltage (ZVOn), and the low-side switches turn off at zero current (ZCOff). The challenges in designing the SRB converter include characterizing the impacts of the design variables on converter performance and designing low-loss resonant components for the series resonator. The resultant SRB converter shows a 30% reduction in loss and a 50% increase in power density.
The root cause of the divergence issue is identified by modeling the negative damping effect caused by the resonant elements. The presented transient design guideline clears the barriers to closed-loop regulation and commercialization of the SRB converter. This work also reshapes the winding window from a rectangle to a rhombus, a low-cost change that reduces the magnetic loss by half. The theoretical analysis and design procedures are demonstrated in a 200 W prototype with a 7% peak-efficiency increase compared to a commonly used 30 W commercial SCB product.
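The t_off · f_r < 0.5 transient guideline can be sketched numerically. The component values below are hypothetical, chosen only for illustration, and are not the thesis's design:

```python
import math

# Resonant frequency of a series LC tank: f_r = 1 / (2π √(Lr Cr)).
def resonant_frequency(l_r, c_r):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_r * c_r))

# Transient guideline from the abstract: keep the low-side off-duration
# under half a resonant period, i.e. t_off * f_r < 0.5.
def meets_guideline(t_off, f_r):
    return t_off * f_r < 0.5

# Hypothetical tank: Lr = 100 nH, Cr = 1 µF  →  f_r ≈ 503 kHz.
f_r = resonant_frequency(l_r=100e-9, c_r=1e-6)
print(meets_guideline(t_off=0.8e-6, f_r=f_r))  # → True (0.8 µs · 503 kHz ≈ 0.40)
```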

Enterprise Information Life Cycle Management Strategy Research

黃順安, Huang, Shun An Unknown Date (has links)
With the prevalence of the Internet in recent years, information has grown explosively, driven by enterprise e-enablement, e-commerce application services, the rise of the digital home, and innovations in network application services such as video blogs. Beyond circulating this information over the network, individuals and enterprises alike, whether users or service providers, must manage enormous information storage services. With the rapid growth of digital information, files keep increasing in size and number. Advances in information technology have made storage media more varied and capacious, for instance a single SATA disk holds 500 GB and a Blu-ray disc holds up to 100 GB, but according to an IDC survey, global information exploded in 2006: the year's digital data from photos, audio and video files, email, web pages, instant messaging, and mobile phones reached 161 billion GB. Storage capacity therefore never seems to catch up with the growth of information. Enterprises whose IT rooms are scattered across branch offices face duplicated investment in staff and equipment and the difficulty of decentralized management. With the arrival of broadband networks, centralizing IT infrastructure into enterprise data centers has become a trend; Taiwan's government information reform, for example, plans to consolidate machine rooms into 13+1 data centers. Building a data center means centralizing storage systems under the trend toward consolidation and virtualization, which also centralizes enterprise information, and such large volumes of information and storage demand effective storage management. According to SNIA statistics, about 80% of the information in storage systems has not been accessed within 30 days. Rarely used, unimportant information not only wastes storage space but also indirectly degrades access efficiency, so under limited high-end online storage capacity, less-used information should be moved to lower-tier storage systems and unused information archived for retention. Information, too, has a life cycle. This study divides it into four stages: the emergence stage when information is created, the golden mature stage when it is actively used, the declining stage when it is only referenced, and the final stage when it is disposed of and archived. By classifying information by its value, distinguishing its importance to the enterprise, and integrating the evolution of the information life cycle, the study formulates an information life cycle management strategy that assists enterprises from the creation, access, backup, replication, and security of information through archiving to deletion, so that storage protection and access efficiency are optimized, information services remain uninterrupted, and the best return on storage investment is achieved.
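The four lifecycle stages and the SNIA 30-day observation suggest a simple tiering rule. The thresholds and tier names below are illustrative assumptions, not the study's actual policy:

```python
# Tiering sketch of the information-lifecycle idea (hypothetical thresholds):
# data untouched for 30 days leaves the high-end tier; untouched for a year,
# it is archived for retention.
def assign_tier(days_since_access):
    if days_since_access <= 30:
        return "online"    # mature stage: fast, high-end storage
    if days_since_access <= 365:
        return "nearline"  # declining stage: cheaper secondary storage
    return "archive"       # final stage: archived for retention

print([assign_tier(d) for d in (3, 90, 400)])  # → ['online', 'nearline', 'archive']
```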

Environmental Performance of the Försäkringskassan IT Infrastructure : A Green-IT case study for the Swedish Social Insurance Agency

Honée, Caspar January 2013 (has links)
This Green IT case study, commissioned by Försäkringskassan (FK), the Swedish National Social Insurance Agency, quantifies the environmental performance of the IT infrastructure (IT-IS) in use during 2010 from a lifecycle perspective. Adopting a system view in Green IT analysis can mitigate the risk of problem shifting. IT-IS covers the equipment that enables office automation and external web application services. The FK IT-IS comprises on the order of 300 branch offices with 14,000 PCs, 2,100 printers, and a 1 MW data centre hosting 1,200 servers and 5 petabytes of central data storage, serving about 80 key business applications. The carbon footprint of the FK IT-IS in 2010 amounts to 6.5 kilotons of CO2 equivalents. The total environmental impact is calculated across 18 themes and expressed as a single-indicator eco score of 822,000 ReCiPe points. The contribution of capital goods is large: 44% of the carbon footprint and 47% of the eco score are linked to emissions embedded in equipment. The environmental effects of distributed IT deployed at local office sites dominate, at two thirds of the total FK IT-IS impact. Important drivers in the local-office category are the relatively short economic lifespan of PC equipment and the significant volume of paper consumed in printing. Within the data centre category, operational processes, linked to intensive power use, dominate the environmental impacts. Compared with industry benchmark scores, the data centre infrastructure energy efficiency (DCiE) is relatively low at 57%, or 59% when credited for waste-heat utilisation. Airflow containment measures in the computer rooms are identified as an efficiency improvement; enhanced airflow control is also a prerequisite for better leveraging the opportunities for free cooling available at the northern European location. With regard to the IT hosted in the data centre, the environmental impacts linked to storage services dominate and remarkably exceed those of the servers.
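The DCiE metric used above follows directly from its definition, the ratio of IT equipment power to total facility power. The power figures below are illustrative assumptions chosen to match the reported 57%, not measurements from the study:

```python
# DCiE = IT equipment power / total facility power (illustrative figures).
def dcie(it_power_kw, facility_power_kw):
    return it_power_kw / facility_power_kw

print(f"{dcie(570.0, 1000.0):.0%}")  # → 57%
```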

Energy-Efficient Key/Value Store

Tena, Frezewd Lemma 11 September 2017 (has links) (PDF)
Energy conservation is a major concern in today's data centers, the data processing factories of the 21st century, where large and complex software systems such as distributed data management stores run and serve billions of users. The two main drivers of this concern are the environmental impact of data centers' waste heat and the enormous cost of their energy demand. Among the many subsystems of a data center, the storage system is one of the main sources of energy consumption, and among the many types of storage systems, key/value stores are among the most widely used. In this work, I investigate energy-saving techniques that enable a consistent-hashing-based key/value store to save energy during periods of low activity and whenever there is an opportunity to reuse the data center's waste heat.
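A consistent-hashing-based store, as the abstract describes, has the property that powering a node down remaps only that node's keys, which is what makes energy-saving node shutdowns cheap. A minimal ring sketch (illustrative, not the thesis's implementation):

```python
import bisect
import hashlib

# Map a string to a point on the hash ring.
def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring: a key belongs to the first node
    clockwise from its hash, wrapping around at the end of the ring."""
    def __init__(self, nodes):
        self._points = sorted((_h(n), n) for n in nodes)

    def node_for(self, key):
        hashes = [p for p, _ in self._points]
        i = bisect.bisect(hashes, _h(key)) % len(self._points)
        return self._points[i][1]

# Removing a node only remaps the keys that node owned; all other
# keys keep their owner, so a powered-down node causes minimal churn.
ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")  # deterministic for a fixed node set
```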

Resource monitoring in a Network Embedded Cloud : An extension to OSPF-TE

Roozbeh, Amir January 2013 (has links)
The notion of a "network embedded cloud", also known as a "network enabled cloud" or a "carrier cloud", is an emerging technology trend that aims to integrate network services while exploiting the on-demand nature of the cloud paradigm. A network embedded cloud is a distributed cloud environment where data centers are distributed at the edge of the operator's network. Distributing data centers or computing resources across the network introduces topological and geographical locality dependencies. In a network enabled cloud, in addition to information about available processing, memory, and storage capacity, resource management requires information about the network's topology and the available bandwidth on the links connecting the different nodes of the distributed cloud. This thesis project designed, implemented, and evaluated the use of open shortest path first with traffic engineering (OSPF-TE) for propagating resource status in a network enabled cloud. The information carried over OSPF-TE is used for network-aware scheduling of virtual machines. In particular, OSPF-TE was extended to convey virtualization- and processing-related information to all the nodes in the network enabled cloud. Modeling, emulation, and analysis show that the proposed solution can provide the required data to a cloud management system by sending a data center's resource information as a new opaque link-state advertisement with a minimum interval of 5 seconds; in this case, each embedded data center injects at most 38.4 bytes per second of additional traffic into the network.
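The reported overhead is consistent with one opaque LSA per data center every 5 seconds. The 192-byte LSA size below is an assumption derived from the abstract's own figures (38.4 B/s × 5 s), not a value stated in the thesis:

```python
# Back-of-envelope check of the reported control-plane overhead.
lsa_bytes = 192   # assumed opaque-LSA size, implied by 38.4 B/s at a 5 s interval
interval_s = 5    # minimum flooding interval from the abstract

print(lsa_bytes / interval_s)  # → 38.4 bytes per second per data center
```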
