Book Section records
https://feeds.library.caltech.edu/people/Jiang-Anxiao-Andrew/book_section.rss
A Caltech Library Repository Feed (RSS 2.0: http://www.rssboard.org/rss-specification; generated by python-feedgen; language: en)
Last updated: Tue, 16 Apr 2024 13:45:45 +0000

Optimal Content Placement for En-Route Web Caching
https://resolver.caltech.edu/CaltechPARADISE:ETR050
Authors: Anxiao (Andrew) Jiang (ORCID: 0000-0002-0120-7930); Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2003
DOI: 10.1109/NCA.2003.1201132
This paper studies the optimal placement of web files for en-route web caching. It is shown that existing placement policies all solve restricted partial versions of the file placement problem and therefore give only sub-optimal solutions. A low-complexity dynamic-programming algorithm that computes the optimal solution is presented. It is shown both analytically and experimentally that the file-placement solution output by our algorithm outperforms existing en-route caching policies. The optimal placement of web files can be implemented with a reasonable level of cache coordination and management overhead for en-route caching; importantly, it can be achieved with or without data prefetching.
https://authors.library.caltech.edu/records/t4bc1-d9a83

Floating Codes for Joint Information Storage in Write Asymmetric Memories
https://resolver.caltech.edu/CaltechAUTHORS:20170419-152416338
Authors: Anxiao (Andrew) Jiang; Vasken Bohossian; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2007
DOI: 10.1109/ISIT.2007.4557381
Memories whose storage cells transit irreversibly between states have been common since the beginning of data storage technology. In recent years, flash memories and other non-volatile memories based on floating-gate cells have become a very important family of such memories. We model them by the write asymmetric memory (WAM), a memory where each cell is in one of q states (0, 1, …, q − 1) and can only transit from a lower state to a higher state. Data stored in a WAM can be rewritten by shifting cells to higher states. Since the state transitions are irreversible, the number of rewrites is limited. When multiple variables are stored in a WAM, we study codes, which we call floating codes, that maximize the total number of times the variables can be written and rewritten. In this paper, we present several families of floating codes that either are optimal or approach optimality as the codes get longer. We also present bounds on the performance of general floating codes. The results show that floating codes can integrate the rewriting capabilities of different variables to a surprisingly high degree.
https://authors.library.caltech.edu/records/192v0-rz826

Buffer Coding for Asymmetric Multi-Level Memory
https://resolver.caltech.edu/CaltechAUTHORS:20170426-152709376
Authors: Vasken Bohossian; Anxiao (Andrew) Jiang; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2007
DOI: 10.1109/ISIT.2007.4557384
Certain storage media, such as flash memories, use write-asymmetric, multi-level storage elements. In such media, data is stored in a multi-level memory cell whose contents can only be increased or reset. The reset operation is expensive and should be delayed as much as possible. Mathematically, we consider the problem of writing a binary sequence into write-asymmetric q-ary cells while recording the last r bits written. We want to maximize t, the number of possible writes before a reset is needed. We introduce the term buffer code to describe the solution to this problem: a buffer code is a code that remembers the r most recent values of a variable. We present the construction of a single-cell (n = 1) buffer code that can store a binary (l = 2) variable with t = ⌊q/2^(r−1)⌋ + r − 2, along with a universal upper bound on the number of rewrites that any single-cell buffer code can achieve. We also present a binary buffer code for arbitrary n, q, and r; the code uses n q-ary cells to remember the r most recent values of one binary variable, and the number of rewrites it supports is asymptotically optimal in q and n. We then extend the code construction for the case r = 2, obtaining a code that can rewrite the variable t = (q − 1)(n − 2) + 1 times. When q = 2, the code is strictly optimal.
https://authors.library.caltech.edu/records/j7xa3-njj15

Universal rewriting in constrained memories
https://resolver.caltech.edu/CaltechAUTHORS:20170321-172544029
Authors: Anxiao (Andrew) Jiang; Michael Langberg (ORCID: 0000-0002-7470-0718); Moshe Schwartz (ORCID: 0000-0002-1449-0026); Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2009
DOI: 10.1109/ISIT.2009.5205981
A constrained memory is a storage device whose elements change their states under some constraints. A typical example is flash memories, in which cell levels are easy to increase but hard to decrease. In a general rewriting model, the stored data changes with some pattern determined by the application. In a constrained memory, an appropriate representation is needed for the stored data to enable efficient rewriting.
In this paper, we define the general rewriting problem using a graph model. This model generalizes many known rewriting models such as floating codes, WOM codes, buffer codes, etc. We present a novel rewriting scheme for the flash-memory model and prove it is asymptotically optimal in a wide range of scenarios.
We further apply randomization and probability distributions to data rewriting and study the expected performance. We present a randomized code for all rewriting sequences and a deterministic code for rewriting that follows any i.i.d. distribution. Both codes are shown to be asymptotically optimal.
https://authors.library.caltech.edu/records/4k7ny-2dc62

On the capacity of bounded rank modulation for flash memories
https://resolver.caltech.edu/CaltechAUTHORS:20100816-142932373
Authors: Zhiying Wang; Anxiao (Andrew) Jiang; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2009
DOI: 10.1109/ISIT.2009.5205972
Rank modulation has been introduced as a new information representation scheme for flash memories. Given the charge levels of a group of flash cells, sorting is used to induce a permutation, which in turn represents data. Motivated by the lower sorting complexity of smaller cell groups, we consider bounded rank modulation, where a sequence of permutations of given sizes is used to represent data. We study the capacity of bounded rank modulation under the condition that permutations can overlap for higher capacity.
https://authors.library.caltech.edu/records/7f210-bjx91

Data movement in flash memories
https://resolver.caltech.edu/CaltechAUTHORS:20170321-173656746
Authors: Anxiao (Andrew) Jiang; Michael Langberg (ORCID: 0000-0002-7470-0718); Robert Mateescu; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2009
DOI: 10.1109/ALLERTON.2009.5394879
NAND flash memories are the most widely used non-volatile memories, and data movement is common in flash storage systems. We study data movement solutions that minimize the number of block erasures, which are very important for the efficiency and longevity of flash memories. To move data among n blocks with the help of Δ auxiliary blocks, where every block contains m pages, we present algorithms that use Θ(n · min{m, log_Δ n}) erasures without coding. We prove this is almost the best possible for non-coding solutions by presenting a nearly matching lower bound. Optimal data movement can be achieved using coding, where only Θ(n) erasures are needed. We present a coding-based algorithm of very low coding complexity for optimal data movement. We further show the NP-hardness of both coding-based and non-coding schemes when the objective is to optimize data movement on a per-instance basis.
https://authors.library.caltech.edu/records/wtbra-ykx96

Data movement and aggregation in flash memories
https://resolver.caltech.edu/CaltechAUTHORS:20170309-135756699
Authors: Anxiao (Andrew) Jiang; Michael Langberg (ORCID: 0000-0002-7470-0718); Robert Mateescu; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2010
DOI: 10.1109/ISIT.2010.5513391
NAND flash memories have become the most widely used type of non-volatile memory. In a NAND flash memory, every block of memory cells consists of numerous pages, and rewriting a single page requires the whole block to be erased. As block erasures significantly reduce the longevity, speed, and power efficiency of flash memories, it is critical to minimize the number of erasures when data are reorganized. This leads to the data movement problem, where data need to be moved among blocks and the objective is to minimize the number of block erasures. It has been shown that optimal solutions can be obtained by coding. However, coding-based algorithms of minimal coding complexity remain an important topic of study.
In this paper, we present a very efficient data movement algorithm that codes over GF(2) and has the minimum storage requirement. We also study data movement with more auxiliary blocks and present its corresponding solution. Furthermore, we extend the study to the data aggregation problem, where data can not only be moved but also aggregated. We present both non-coding and coding-based solutions, and rigorously prove the performance gain obtained by coding.
https://authors.library.caltech.edu/records/kbrtc-fah11

Patterned cells for phase change memories
https://resolver.caltech.edu/CaltechAUTHORS:20170213-160905267
Authors: Anxiao (Andrew) Jiang; Hongchao Zhou; Zhiying Wang; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2011
DOI: 10.1109/ISIT.2011.6033979
Phase-change memory (PCM) is an emerging nonvolatile memory technology that promises very high performance. It currently uses discrete cell levels to represent data, controlled by a single amorphous/crystalline domain in a cell. To improve data density, more levels per cell are needed, but this raises a number of challenges, including cell-programming noise, drift of cell levels, and the high power required for cell programming. In this paper, we present a new cell structure called the patterned cell and explore its data representation schemes. Multiple domains per cell are used, and their connectivity is used to store data. We analyze its storage capacity, and study its error-correction capability and the construction of error-control codes.
https://authors.library.caltech.edu/records/dpzzx-7bf24

Content-assisted file decoding for nonvolatile memories
https://resolver.caltech.edu/CaltechAUTHORS:20170207-175141968
Authors: Yue Li; Yue Wang; Anxiao (Andrew) Jiang; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2012
DOI: 10.1109/ACSSC.2012.6489154
Nonvolatile memories (NVMs) such as flash memories play a significant role in meeting the data storage requirements of today's computation activities. The rapid increase in storage density for NVMs, however, raises reliability issues due to the closer alignment of adjacent cells on chip and the larger number of levels programmed into a cell. We propose a new method for error correction that uses the random-access capability of NVMs and the redundancy that inherently exists in information content. Although it is theoretically possible to remove this redundancy via data compression, existing source-coding algorithms do not remove all of it efficiently. We propose a method that can be combined with existing storage solutions for text files, namely content-assisted decoding. Using the statistical properties of words and phrases in the text of a given language, our decoder identifies the location of each subcodeword representing a word in a given noisy input codeword, and flips bits to compute the most likely word sequence. The decoder can be adapted to work together with traditional ECC decoders to keep the number of errors within the correction capability of traditional decoders. The combined decoding framework is evaluated on a set of benchmark files.
https://authors.library.caltech.edu/records/53g4r-0wb20

Correcting errors by natural redundancy
https://resolver.caltech.edu/CaltechAUTHORS:20170907-081956775
Authors: Anxiao (Andrew) Jiang; Pulakesh Upadhyaya; Erich F. Haratsch; Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2017
DOI: 10.1109/ITA.2017.8023455
The long-term reliability of big-data storage poses significant challenges. This paper studies how to use the natural redundancy in data for error correction, and how to combine it with error-correcting codes to effectively improve data reliability. It explores several aspects of natural redundancy, including the discovery of natural redundancy in compressed data, the efficient decoding of codes with random structures, the capacity of error-correcting codes that contain natural redundancy, and the time-complexity tradeoff between source coding and channel coding.
https://authors.library.caltech.edu/records/bm7m7-z9t46

Stopping Set Elimination for LDPC Codes
https://resolver.caltech.edu/CaltechAUTHORS:20180125-132316726
Authors: Anxiao (Andrew) Jiang; Pulakesh Upadhyaya; Ying Wang; Krishna R. Narayanan; Hongchao Zhou; Jin Sima (ORCID: 0000-0003-4588-9790); Jehoshua Bruck (ORCID: 0000-0001-8474-0812)
Year: 2017
DOI: 10.1109/ALLERTON.2017.8262806
This work studies the stopping-set elimination problem: given a stopping set, how to remove the fewest erasures so that the remaining erasures can be decoded by belief propagation in k iterations (including k = ∞). The problem is proven NP-hard. An approximation algorithm is presented for k = 1, and efficient exact algorithms are presented for general k when the stopping sets form trees.
https://authors.library.caltech.edu/records/vnf04-m3829

Improve Robustness of Deep Neural Networks by Coding
https://resolver.caltech.edu/CaltechAUTHORS:20201209-153308085
Authors: Kunping Huang; Netanel Raviv (ORCID: 0000-0002-1686-1994); Siddharth Jain (ORCID: 0000-0002-9164-6119); Pulakesh Upadhyaya (ORCID: 0000-0003-1054-1380); Jehoshua Bruck (ORCID: 0000-0001-8474-0812); Paul H. Siegel (ORCID: 0000-0002-2539-4646); Anxiao (Andrew) Jiang (ORCID: 0000-0002-0120-7930)
Year: 2020
DOI: 10.1109/ita50056.2020.9244998
Deep neural networks (DNNs) typically have many weights. When errors appear in their weights, which are usually stored in non-volatile memories, their performance can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, we use error-correcting codes as external redundancy to protect the weights from errors; a deep reinforcement learning algorithm is used to optimize the redundancy-performance tradeoff. In the second approach, internal redundancy is added to neurons via coding, enabling neurons to perform robust inference in noisy environments.
https://authors.library.caltech.edu/records/jqw53-1hh11

CodNN – Robust Neural Networks From Coded Classification
https://resolver.caltech.edu/CaltechAUTHORS:20200427-091804171
Authors: Netanel Raviv (ORCID: 0000-0002-1686-1994); Siddharth Jain (ORCID: 0000-0002-9164-6119); Pulakesh Upadhyaya (ORCID: 0000-0003-1054-1380); Jehoshua Bruck (ORCID: 0000-0001-8474-0812); Anxiao (Andrew) Jiang (ORCID: 0000-0002-0120-7930)
Year: 2020
DOI: 10.1109/ISIT44484.2020.9174480
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution, and yet their intrinsic properties remain a mystery. In particular, it is widely known that DNNs are highly sensitive to noise, whether adversarial or random. This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.

In this paper we construct robust DNNs via error-correcting codes. By our approach, either the data or internal layers of the DNN are coded with error-correcting codes, and successful computation under noise is guaranteed. Since DNNs can be seen as a layered concatenation of classification tasks, our research begins with the core task of classifying noisy coded inputs, and progresses towards robust DNNs.

We focus on binary data and linear codes. Our main result is that the prevalent parity code can guarantee robustness for a large family of DNNs, which includes the recently popularized binarized neural networks. Further, we show that the coded classification problem has a deep connection to Fourier analysis of Boolean functions.

In contrast to existing solutions in the literature, our results do not rely on altering the training process of the DNN, and provide mathematically rigorous guarantees rather than experimental evidence.
https://authors.library.caltech.edu/records/psvjm-vmv70
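As a rough illustration of the parity idea behind the last record (a hedged sketch only, not the CodNN construction or its guarantees): appending an even-parity bit to a binary input vector makes any single bit flip detectable before inference, which is the kind of coded redundancy a robust classifier can exploit. The function names below are hypothetical.

```python
# Sketch: even-parity coding of a binary input vector.
# Any single bit flip changes the parity and is therefore detectable.

def encode(bits):
    """Append an even-parity bit to a list of 0/1 values."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """True iff the codeword has even parity (no single-bit corruption)."""
    return sum(codeword) % 2 == 0

x = [1, 0, 1, 1]
cw = encode(x)              # -> [1, 0, 1, 1, 1]
assert parity_ok(cw)        # clean codeword passes the check

noisy = cw.copy()
noisy[2] ^= 1               # flip one bit
assert not parity_ok(noisy) # the single flip is detected
```

Detection alone does not correct the flip; the paper's contribution is the stronger claim that, for a large family of DNNs, computation on parity-coded inputs can itself be made robust.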