1 |
Waste minimization in Hong Kong households and offices: how individuals can create less waste in their everyday lives and how different organizations can provide implementation support / Lai, Wan-kay, Irene. January 2005 (has links)
Thesis (M. Sc.)--University of Hong Kong, 2005. / Title proper from title frame. Also available in printed format.
|
2 |
Compressed Sensing via Partial L1 Minimization / Zhong, Lu. 27 April 2017 (has links)
Reconstructing sparse signals from undersampled measurements is a challenging problem that arises in many areas of data science, such as signal processing, circuit design, optical engineering and image processing. The most natural way to formulate such problems is by searching for sparse, or parsimonious, solutions in which the underlying phenomena can be represented using just a few parameters. Accordingly, a natural way to phrase such problems revolves around L0 minimization, in which the sparsity of the desired solution is controlled by directly counting the number of non-zero parameters. However, due to the nonconvexity and discontinuity of the L0 norm, such optimization problems can be quite difficult. One modern tactic is to leverage convex relaxations, such as exchanging the L0 norm for its convex analog, the L1 norm. However, to guarantee accurate reconstructions for L1 minimization, additional conditions must be imposed, such as the restricted isometry property. Accordingly, in this thesis, we propose a novel extension to current approaches revolving around truncated L1 minimization and demonstrate that such an approach can, in important cases, provide a better approximation of L0 minimization. Because the nonconvexity of the truncated L1 norm makes truncated L1 minimization unreliable in practice, we further generalize our method to partial L1 minimization, combining the convexity of L1 minimization with the robustness of L0 minimization. In addition, we provide a tractable iterative scheme via the augmented Lagrangian method to solve both optimization problems. Our empirical study on synthetic data and image data shows encouraging results for the proposed partial L1 minimization in comparison to L1 minimization.
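The contrast between plain L1 and the truncated idea can be sketched numerically. The toy below is not the thesis's augmented-Lagrangian scheme; the `ista` routine, the penalty weight and the problem sizes are illustrative choices. It recovers a sparse vector from random Gaussian measurements by iterative soft-thresholding, optionally leaving the largest entries unpenalized in the spirit of truncated/partial L1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

def ista(A, b, lam=0.01, t=0, iters=3000):
    """Iterative soft-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1.
    With t > 0 the t largest entries are left unpenalized each iteration,
    a crude stand-in for the truncated/partial L1 idea."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - b)) / L    # gradient step on the data term
        thresh = np.full_like(g, lam / L)
        if t > 0:
            thresh[np.argsort(np.abs(g))[-t:]] = 0.0  # skip the largest entries
        x = np.sign(g) * np.maximum(np.abs(g) - thresh, 0.0)
    return x

x_l1 = ista(A, b)                # plain L1 minimization
x_part = ista(A, b, t=k)         # truncated/partial variant
err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err(x_l1), err(x_part))    # both should be small; the truncated variant is typically less biased
```

Because the unpenalized entries are not shrunk, the truncated variant tends to remove the systematic bias that the L1 penalty puts on large coefficients.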
|
3 |
Transforming rubbish into nourishment in a no man's land : food wastage and recycling in Hong Kong / Wong, Man-yee, January 2001 (has links)
Thesis (M. Journ.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 35-36).
|
4 |
Quantifying Losses in Power Systems Using Different Types of FACTS Controllers / September 2013 (has links)
This thesis discusses the placement of conventional power flow controllers (namely the Fixed Series Capacitor (FSC) and the Phase Angle Regulating Transformer (PAR)) and Flexible AC Transmission System (FACTS) devices (namely the Thyristor Controlled Series Capacitor (TCSC), the Static Synchronous Series Compensator (SSSC), the Unified Power Flow Controller (UPFC) and the Sen Transformer (ST)) in bulk power systems to minimize transmission losses across the entire system. Placement first resolves line overloading and improves the overall voltage profile of the system; second, it minimizes transmission losses and thereby reduces the required generation, which results in additional dollar savings on fuel costs.
The FACTS devices were sized small to keep the utility's initial installation costs low. These reduced device ratings are noted as a benefit but are not included in the overall loss minimization calculations. Various types of FACTS devices were modeled and placed in the power system, and the economic benefits were discussed and compared for different power flow conditions.
The FSC, PAR, and TCSC are the devices most commonly used in the electric utility industry. In addition to these devices, the SSSC and UPFC were modeled in the popular PSS/E and PSAT software packages. The Sen Transformer was modeled using an electromagnetic transient simulation program (PSCAD/EMTDC). A line stability index was used to find the optimum location for placing each FACTS device. This thesis also provides a quantified value for the overall losses with the different FACTS devices, which is not available in the previous research literature.
The Sen Transformer is a new type of FACTS device developed in 2003 by Dr. Kalyan Sen, a former Westinghouse engineer. It is based on the same operating principle as the UPFC (i.e., independent active and reactive power control) but uses proven transformer technology instead. The benefit of the Sen Transformer is that it would cost approximately 30% of the UPFC cost. This thesis studies the Sen Transformer for loss minimization. Since the Sen Transformer relies on mature transformer technology, its maintenance costs are expected to be lower, and utilities would therefore be more comfortable using such a device instead of a UPFC.
A 12-bus test system proposed by the FACTS modeling working group was used for validating and testing the FACTS devices in this thesis. This test system is a composite model of the Manitoba Hydro, North Dakota, Minnesota, and Chicago area subsystems, and it exhibits a number of operating problems that electric utilities typically face. The system has been used for congestion management, voltage support and stability improvement studies with FACTS devices. The results show that compensating a short transmission line in this system is more effective in minimizing the overall losses and improving the voltage profile than the typical approach of compensating long lines. The results also show that the UPFC and the Sen Transformer are the most effective in minimizing the overall losses, with the Sen Transformer being the most cost-effective solution.
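The loss-minimization mechanism can be illustrated on a deliberately tiny example, not the thesis's 12-bus study: the two-line parameters and the DC power-flow approximation below are illustrative assumptions. Current naturally divides between parallel lines inversely with reactance, so a series capacitor (TCSC-style) that steers flow toward the low-resistance path reduces the total I²R loss:

```python
def losses(x1, x2, r1, r2, i_total):
    """Split i_total between two parallel lines inversely with reactance
    (DC power-flow approximation) and return the total I^2*R loss in watts."""
    i1 = i_total * x2 / (x1 + x2)
    i2 = i_total - i1
    return i1 ** 2 * r1 + i2 ** 2 * r2

R1, R2 = 1.0, 4.0          # line resistances (ohm) -- illustrative values
X1, X2 = 10.0, 5.0         # line reactances (ohm)
I = 100.0                  # total transfer (A)

base = losses(X1, X2, R1, R2, I)   # uncompensated flow split
# A series capacitor inserts capacitive reactance -xc in line 1, steering
# flow toward the low-resistance path; sweep the compensation level.
best = min(losses(X1 - xc, X2, R1, R2, I) for xc in [i * 0.25 for i in range(37)])
print(round(base), round(best))    # 18889 8000 -- compensation cuts losses by more than half
```

The sweep finds its minimum where the current split matches the loss-optimal split i1/i2 = R2/R1, which is exactly the kind of redispatch a series FACTS controller provides.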
|
5 |
Transforming rubbish into nourishment in a no man's land: food wastage and recycling in Hong Kong / Wong, Man-yee, January 2001 (has links)
Thesis (M.Journ.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 35-36). Also available in print.
|
6 |
Effective litter reduction / Levin, Elizabeth Morris. January 2006 (has links)
Thesis (M.A.)--Kutztown University of Pennsylvania, 2006. / Source: Masters Abstracts International, Volume: 45-06, page: 2924. Typescript. Abstract precedes thesis as 2 leaves (iii-iv). Includes bibliographical references (leaves 99-102).
|
7 |
Compressed Sensing for 3D Laser Radar / Fall, Erik. January 2014 (has links)
High resolution 3D images are of high interest in military operations, where data can be used to classify and identify targets. The Swedish Defence Research Agency (FOI) is interested in the latest research and technologies in this area. A drawback of normal 3D laser systems is the lack of high resolution for long range measurements. One technique for long range, high resolution laser radar is based on time correlated single photon counting (TCSPC). By repetitively sending out short laser pulses and measuring the time of flight (TOF) of single reflected photons, extremely accurate range measurements can be made. A drawback of this method is that it is hard to create single photon detectors with many pixels and high temporal resolution, hence a single detector is used. Scanning an entire scene with one detector is very time consuming; instead, as this thesis explores, the entire scene can be measured with fewer measurements than the number of pixels. To do this, a technique called compressed sensing (CS) is introduced. CS exploits the fact that signals are normally compressible and can be represented sparsely in some basis. CS imposes different requirements on the sampling than the classical Shannon-Nyquist sampling theorem. With a digital micromirror device (DMD), linear combinations of the scene can be reflected onto the single photon detector, creating scalar intensity values as measurements. This means that fewer DMD patterns than the number of pixels can reconstruct the entire 3D scene. In this thesis, a computer model of the laser system helps to evaluate different CS reconstruction methods under different scenarios for the laser system and the scene. The results show how many measurements are required to reconstruct scenes properly and how the DMD patterns affect the results. CS proves to enable a great reduction, 85-95%, of the required measurements compared to a pixel-by-pixel scanning system.
Total variation minimization proves to be the best choice of reconstruction method. / High resolution 3D images are of great interest in military operations, where data can be used for classification and identification of targets. The Swedish Defence Research Agency (FOI) has a strong interest in investigating the latest techniques in this area. A major problem with ordinary 3D laser systems is that they lack high resolution at long measurement ranges. One technique with high range resolution is time correlated single photon counting, which can count individual photons with extremely good accuracy. Such a system illuminates a scene with laser light, measures the reflection time of individual photons, and can thereby measure range. The problem with this method is detecting many pixels when only one detector can be used. Scanning an entire scene with one detector takes a very long time; instead, this thesis is about making fewer measurements than the number of pixels while still reconstructing the entire 3D scene. To accomplish this, a new technique called compressed sensing (CS) is used. CS exploits the fact that measurement data is normally compressible, and it departs from the traditional Shannon-Nyquist sampling requirement. With a digital micromirror device (DMD), linear combinations of the scene can be reflected onto the single photon detector, and with fewer DMD patterns than the number of pixels the entire 3D scene can be reconstructed. Using a purpose-built laser model, different CS reconstruction methods and different scenarios for the laser system are evaluated. The work shows that the basis representation determines how many measurements are needed and how different constructions of the DMD patterns affect the result. CS proves to enable imaging of entire 3D scenes with 85-95% fewer measurements than the number of pixels. Total variation minimization proves to be the best choice of reconstruction method.
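The single-detector measurement scheme can be sketched in a few lines. In this toy, the sizes, the ±1 mirror patterns, and the use of orthogonal matching pursuit (rather than the thesis's total variation minimization) are illustrative assumptions; a sparse "scene" is reconstructed from fewer pattern readings than pixels:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 40, 3                     # pixels, DMD patterns, sparse returns
scene = np.zeros(n)
scene[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # +/-1 mirror patterns
y = Phi @ scene                         # one scalar detector reading per pattern

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k columns that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0             # never re-pick a chosen column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

recon = omp(Phi, y, k)
rel_err = np.linalg.norm(recon - scene) / np.linalg.norm(scene)
print(rel_err)   # typically near zero: 40 readings recover all 64 pixels
```

The ±1 patterns mimic DMD mirror states (realizable in hardware as two complementary 0/1 measurements); the key point is that m < n scalar readings suffice when the scene is sparse.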
|
8 |
Waste minimization in Hong Kong households and offices: how individuals can create less waste in their everyday lives and how different organizations can provide implementation support / Lai, Wan-kay, Irene., 黎蘊琪. January 2005 (has links)
published_or_final_version / abstract / Architecture / Master / Master of Science in Interdisciplinary Design and Management
|
9 |
Controllable, non-oscillatory damping for deformable objects / Young, Herbert David. 05 1900 (has links)
This thesis presents a new method for the controllable damping of deformable objects. The method evolves from physically based techniques; however, it allows for non-physical, but visually plausible motion. This flexibility leads to a simple interface, with intuitive control over the behaviour of the material.
This method is particularly suited for strongly damped materials, which account for the majority of objects of interest to animation, since it produces non-oscillatory behaviour. This is similar to critical damping, except that it affects all modes independently. The new method is based on the minimization of a slightly modified version of total energy. This framework can be used to simulate many other physical phenomena, and therefore lends itself to coupling with other simulations.
Implementation details for a simple example are given. Results are shown for varying parameters and compared to those produced by a traditional method.
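The per-mode, critical-damping-like behaviour described above can be sketched on a toy system. The following is not the thesis's energy-minimization formulation; it simply decomposes a small mass-spring chain (unit masses, illustrative stiffness) into eigenmodes and advances each mode with the exact critically damped response, which by construction decays without oscillation:

```python
import numpy as np

# Toy 3-mass chain with fixed ends; K is the stiffness matrix, M = I.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
h = 0.1                                  # time step

# Eigen-decomposition gives independent modes with squared frequencies w2.
w2, V = np.linalg.eigh(K)                # K = V diag(w2) V^T
w = np.sqrt(w2)

def step(x, v):
    """Advance one step, damping every mode independently with the exact
    critically damped response exp(-w*h)*(a*(1+w*h) + b*h) per mode -- a
    sketch of the non-oscillatory idea, not the thesis's minimization."""
    a, b = V.T @ x, V.T @ v              # modal coordinates
    decay = np.exp(-w * h)
    a_new = decay * (a * (1 + w * h) + b * h)
    b_new = decay * (b * (1 - w * h) - a * w2 * h)
    return V @ a_new, V @ b_new

x, v = np.array([1.0, 0.0, 0.0]), np.zeros(3)
for _ in range(200):
    x, v = step(x, v)
print(np.abs(x).max())   # displacements decay toward rest without oscillation
```

Because each modal amplitude follows exp(-w*t)*(a0 + (b0 + w*a0)*t), no mode ever overshoots, which is exactly the "critical damping per mode" behaviour the abstract describes.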
|
10 |
A robust window-based multi-node minimization technique using Boolean relations / Cobb, Jeffrey Lee. 15 May 2009 (has links)
Multi-node optimization using Boolean relations is a powerful approach for network minimization. The approach has been studied in theory, but so far its superiority over single node optimization techniques has only been conjectured for practical designs. This is due to the highly memory intensive computations involved in calculating the Boolean relations that represent the multi-node optimization flexibility. In this thesis, an algorithm to perform Boolean relation-based multi-node optimization in a robust, fast and memory-efficient manner is presented. In particular, two nodes are simultaneously optimized at a time. Results are reported on large designs, demonstrating the power of this multi-node optimization algorithm. The robustness of the approach arises from the use of a window-based technique for computing these Boolean relations. Secondly, aggressive early quantification is performed during the computation, keeping memory utilization low. Finally, smart heuristics are employed for selecting the node pair to be optimized simultaneously. These features allow the approach to scale well and provide good results for large designs. Experiments are performed on a set of large benchmarks, and the algorithm's performance is compared to a SAT-based network optimization technique using complete don't cares. On average, the approach presented in this thesis achieves a 12% reduction in literal count across all the large designs compared to complete don't cares, while maintaining small runtimes and low memory usage.
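The notion of a Boolean relation capturing two-node flexibility can be made concrete on a toy network. This sketch brute-forces truth tables and is unrelated to the thesis's window-based machinery; the spec z = a·(b+c), the two-node OR structure, and the minterm-count cost are all illustrative choices:

```python
from itertools import product

inputs = list(product([0, 1], repeat=3))          # (a, b, c)
spec = {v: v[0] & (v[1] | v[2]) for v in inputs}  # output spec z = a*(b+c)

# Boolean relation: for each input, every (y1, y2) with y1 OR y2 == z(v) is
# acceptable -- this jointly captures the flexibility of both nodes at once.
relation = {v: [(y1, y2) for y1 in (0, 1) for y2 in (0, 1)
                if (y1 | y2) == spec[v]] for v in inputs}

def cost(tt):
    return sum(tt)  # crude cost proxy: number of ON-set minterms

best = None
for f1 in range(256):                   # truth table of node y1 over 8 inputs
    tt1 = [(f1 >> i) & 1 for i in range(8)]
    for f2 in range(256):               # node y2, optimized simultaneously
        tt2 = [(f2 >> i) & 1 for i in range(8)]
        if all((tt1[i], tt2[i]) in relation[v] for i, v in enumerate(inputs)):
            c = cost(tt1) + cost(tt2)
            if best is None or c < best[0]:
                best = (c, tt1, tt2)

c, tt1, tt2 = best
# Jointly, the two optimized nodes must still realize the spec at every input.
assert all((tt1[i] | tt2[i]) == spec[v] for i, v in enumerate(inputs))
print(c)   # 3: the cheapest compatible pair puts the whole ON-set on one node
```

The point of the relation (versus per-node don't cares) is that it permits pairs like "y1 covers a minterm, so y2 may drop it", which no single-node flexibility computation can express.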
|