Columns: corpus_id, paper_id, title, abstract, source, bibtex, citation_key
arxiv-1
0704.0002
Sparsity-certifying Graph Decompositions
We describe a new algorithm, the $(k,\ell)$-pebble game with colors, and use it to obtain a characterization of the family of $(k,\ell)$-sparse graphs and algorithmic solutions to a family of problems concerning tree decompositions of graphs. Special instances of sparse graphs appear in rigidity theory and have received increased attention in recent years. In particular, our colored pebbles generalize and strengthen the previous results of Lee and Streinu and give a new proof of the Tutte-Nash-Williams characterization of arboricity. We also present a new decomposition that certifies sparsity based on the $(k,\ell)$-pebble game with colors. Our work also exposes connections between pebble game algorithms and previous sparse graph algorithms by Gabow, Gabow and Westermann and Hendrickson.
arxiv
@article{streinu2007sparsity-certifying, title={Sparsity-certifying Graph Decompositions}, author={Ileana Streinu and Louis Theran}, journal={arXiv preprint arXiv:0704.0002}, year={2007}, archivePrefix={arXiv}, eprint={0704.0002}, primaryClass={math.CO cs.CG} }
streinu2007sparsity-certifying
arxiv-2
0704.0046
A limit relation for entropy and channel capacity per unit cost
In a quantum mechanical model, Diosi, Feldmann and Kosloff arrived at a conjecture stating that the limit of the entropy of certain mixtures is the relative entropy as system size goes to infinity. The conjecture is proven in this paper for density matrices. The first proof is analytic and uses the quantum law of large numbers. The second one clarifies the relation to channel capacity per unit cost for classical-quantum channels. Both proofs lead to generalization of the conjecture.
arxiv
@article{csiszar2007a, title={A limit relation for entropy and channel capacity per unit cost}, author={I. Csiszar and F. Hiai and D. Petz}, journal={J. Math. Phys. 48 (2007), 092102}, year={2007}, doi={10.1063/1.2779138}, archivePrefix={arXiv}, eprint={0704.0046}, primaryClass={quant-ph cs.IT math.IT} }
csiszar2007a
arxiv-3
0704.0047
Intelligent location of simultaneously active acoustic emission sources: Part I
The intelligent acoustic emission locator is described in Part I, while Part II discusses blind source separation, time delay estimation and location of two simultaneously active continuous acoustic emission sources. The location of acoustic emission on complicated aircraft frame structures is a difficult problem of non-destructive testing. This article describes an intelligent acoustic emission source locator. The intelligent locator comprises a sensor antenna and a general regression neural network, which solves the location problem based on learning from examples. Locator performance was tested on different test specimens. Tests have shown that the accuracy of location depends on sound velocity and attenuation in the specimen, the dimensions of the tested area, and the properties of stored data. The location accuracy achieved by the intelligent locator is comparable to that obtained by the conventional triangulation method, while the applicability of the intelligent locator is more general since analysis of sonic ray paths is avoided. This is a promising method for non-destructive testing of aircraft frame structures by the acoustic emission method.
arxiv
@article{kosel2007intelligent, title={Intelligent location of simultaneously active acoustic emission sources: Part I}, author={T. Kosel and I. Grabec}, journal={arXiv preprint arXiv:0704.0047}, year={2007}, archivePrefix={arXiv}, eprint={0704.0047}, primaryClass={cs.NE cs.AI} }
kosel2007intelligent
arxiv-4
0704.0050
Intelligent location of simultaneously active acoustic emission sources: Part II
Part I describes an intelligent acoustic emission locator, while Part II discusses blind source separation, time delay estimation and location of two continuous acoustic emission sources. Acoustic emission (AE) analysis is used for characterization and location of developing defects in materials. AE sources often generate a mixture of various statistically independent signals. A difficult problem of AE analysis is separation and characterization of signal components when the signals from various sources and the mode of mixing are unknown. Recently, blind source separation (BSS) by independent component analysis (ICA) has been used to solve these problems. The purpose of this paper is to demonstrate the applicability of ICA to locate two independent simultaneously active acoustic emission sources on an aluminum band specimen. The method is promising for non-destructive testing of aircraft frame structures by acoustic emission analysis.
arxiv
@article{kosel2007intelligent, title={Intelligent location of simultaneously active acoustic emission sources: Part II}, author={T. Kosel and I. Grabec}, journal={arXiv preprint arXiv:0704.0050}, year={2007}, archivePrefix={arXiv}, eprint={0704.0050}, primaryClass={cs.NE cs.AI} }
kosel2007intelligent
arxiv-5
0704.0062
On-line Viterbi Algorithm and Its Relationship to Random Walks
In this paper, we introduce the on-line Viterbi algorithm for decoding hidden Markov models (HMMs) in much smaller than linear space. Our analysis on two-state HMMs suggests that the expected maximum memory used to decode a sequence of length $n$ with an $m$-state HMM can be as low as $\Theta(m\log n)$, without a significant slow-down compared to the classical Viterbi algorithm. The classical Viterbi algorithm requires $O(mn)$ space, which is impractical for analysis of long DNA sequences (such as complete human genome chromosomes) and for continuous data streams. We also experimentally demonstrate the performance of the on-line Viterbi algorithm on a simple HMM for gene finding on both simulated and real DNA sequences.
arxiv
@article{šrámek2007on-line, title={On-line Viterbi Algorithm and Its Relationship to Random Walks}, author={Rastislav \v{S}r\'amek and Bro\v{n}a Brejov\'a and Tom\'a\v{s} Vina\v{r}}, journal={Algorithms in Bioinformatics: 7th International Workshop (WABI), volume 4645 of Lecture Notes in Computer Science, pp. 240-251, Philadelphia, PA, USA, September 2007. Springer}, year={2007}, doi={10.1007/978-3-540-74126-8_23}, archivePrefix={arXiv}, eprint={0704.0062}, primaryClass={cs.DS} }
šrámek2007on-line
arxiv-6
0704.0090
Real Options for Project Schedules (ROPS)
Real Options for Project Schedules (ROPS) has three recursive sampling/optimization shells. An outer Adaptive Simulated Annealing (ASA) optimization shell optimizes parameters of strategic Plans containing multiple Projects containing ordered Tasks. A middle shell samples probability distributions of durations of Tasks. An inner shell samples probability distributions of costs of Tasks. PATHTREE is used to develop options on schedules. Algorithms used for Trading in Risk Dimensions (TRD) are applied to develop a relative risk analysis among projects.
arxiv
@article{ingber2007real, title={Real Options for Project Schedules (ROPS)}, author={Lester Ingber}, journal={arXiv preprint arXiv:0704.0090}, year={2007}, number={Report 2007:ROPS}, archivePrefix={arXiv}, eprint={0704.0090}, primaryClass={cs.CE cond-mat.stat-mech cs.MS cs.NA physics.data-an} }
ingber2007real
arxiv-7
0704.0098
Sparsely-spread CDMA - a statistical mechanics based analysis
Sparse Code Division Multiple Access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of non-zero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code.
arxiv
@article{raymond2007sparsely-spread, title={Sparsely-spread CDMA - a statistical mechanics based analysis}, author={Jack Raymond and David Saad}, journal={J. Phys. A: Math. Theor. 40 No 41 (12 October 2007) 12315-12333}, year={2007}, doi={10.1088/1751-8113/40/41/004}, archivePrefix={arXiv}, eprint={0704.0098}, primaryClass={cs.IT math.IT} }
raymond2007sparsely-spread
arxiv-8
0704.0108
Reducing SAT to 2-SAT
We describe a polynomial-time reduction of SAT to 2-SAT of polynomial size.
arxiv
@article{gubin2007reducing, title={Reducing SAT to 2-SAT}, author={Sergey Gubin}, journal={arXiv preprint arXiv:0704.0108}, year={2007}, archivePrefix={arXiv}, eprint={0704.0108}, primaryClass={cs.CC} }
gubin2007reducing
arxiv-9
0704.0213
Geometric Complexity Theory V: On deciding nonvanishing of a generalized Littlewood-Richardson coefficient
This article has been withdrawn because it has been merged with the earlier article GCT3 (arXiv: CS/0501076 [cs.CC]) in the series. The merged article is now available as: Geometric Complexity Theory III: on deciding nonvanishing of a Littlewood-Richardson Coefficient, Journal of Algebraic Combinatorics, vol. 36, issue 1, 2012, pp. 103-110. (Authors: Ketan Mulmuley, Hari Narayanan and Milind Sohoni) The new article in this GCT5 slot in the series is: Geometric Complexity Theory V: Equivalence between blackbox derandomization of polynomial identity testing and derandomization of Noether's Normalization Lemma, in the Proceedings of FOCS 2012 (abstract), arXiv:1209.5993 [cs.CC] (full version) (Author: Ketan Mulmuley)
arxiv
@article{narayanan2007geometric, title={Geometric Complexity Theory V: On deciding nonvanishing of a generalized Littlewood-Richardson coefficient}, author={Ketan D. Mulmuley and Hariharan Narayanan}, journal={arXiv preprint arXiv:0704.0213}, year={2007}, archivePrefix={arXiv}, eprint={0704.0213}, primaryClass={cs.CC} }
narayanan2007geometric
arxiv-10
0704.0217
Capacity of a Multiple-Antenna Fading Channel with a Quantized Precoding Matrix
Given a multiple-input multiple-output (MIMO) channel, feedback from the receiver can be used to specify a transmit precoding matrix, which selectively activates the strongest channel modes. Here we analyze the performance of Random Vector Quantization (RVQ), in which the precoding matrix is selected from a random codebook containing independent, isotropically distributed entries. We assume that channel elements are i.i.d. and known to the receiver, which relays the optimal (rate-maximizing) precoder codebook index to the transmitter using B bits. We first derive the large system capacity of beamforming (rank-one precoding matrix) as a function of B, where large system refers to the limit as B and the number of transmit and receive antennas all go to infinity with fixed ratios. With beamforming RVQ is asymptotically optimal, i.e., no other quantization scheme can achieve a larger asymptotic rate. The performance of RVQ is also compared with that of a simpler reduced-rank scalar quantization scheme in which the beamformer is constrained to lie in a random subspace. We subsequently consider a precoding matrix with arbitrary rank, and approximate the asymptotic RVQ performance with optimal and linear receivers (matched filter and Minimum Mean Squared Error (MMSE)). Numerical examples show that these approximations accurately predict the performance of finite-size systems of interest. Given a target spectral efficiency, numerical examples show that the amount of feedback required by the linear MMSE receiver is only slightly more than that required by the optimal receiver, whereas the matched filter can require significantly more feedback.
arxiv
@article{santipach2007capacity, title={Capacity of a Multiple-Antenna Fading Channel with a Quantized Precoding Matrix}, author={Wiroonsak Santipach and Michael L. Honig}, journal={IEEE Trans. Inf. Theory, vol. 55, no. 3, pp. 1218--1234, March 2009}, year={2007}, doi={10.1109/TIT.2008.2011437}, archivePrefix={arXiv}, eprint={0704.0217}, primaryClass={cs.IT math.IT} }
santipach2007capacity
arxiv-11
0704.0218
On Almost Periodicity Criteria for Morphic Sequences in Some Particular Cases
In some particular cases we give criteria for morphic sequences to be almost periodic (=uniformly recurrent). Namely, we deal with fixed points of non-erasing morphisms and with automatic sequences. In both cases a polynomial-time algorithm solving the problem is found. A result more or less supporting the conjecture of decidability of the general problem is given.
arxiv
@article{pritykin2007on, title={On Almost Periodicity Criteria for Morphic Sequences in Some Particular Cases}, author={Yuri Pritykin}, journal={arXiv preprint arXiv:0704.0218}, year={2007}, archivePrefix={arXiv}, eprint={0704.0218}, primaryClass={cs.DM cs.LO} }
pritykin2007on
arxiv-12
0704.0229
Geometric Complexity Theory VI: the flip via saturated and positive integer programming in representation theory and algebraic geometry
This article belongs to a series on geometric complexity theory (GCT), an approach to the P vs. NP and related problems through algebraic geometry and representation theory. The basic principle behind this approach is called the flip. In essence, it reduces the negative hypothesis in complexity theory (the lower bound problems), such as the P vs. NP problem in characteristic zero, to the positive hypothesis in complexity theory (the upper bound problems): specifically, to showing that the problems of deciding nonvanishing of the fundamental structural constants in representation theory and algebraic geometry, such as the well known plethysm constants--or rather certain relaxed forms of these decision problems--belong to the complexity class P. In this article, we suggest a plan for implementing the flip, i.e., for showing that these relaxed decision problems belong to P. This is based on the reduction of the preceding complexity-theoretic positive hypotheses to mathematical positivity hypotheses: specifically, to showing that there exist positive formulae--i.e. formulae with nonnegative coefficients--for the structural constants under consideration and certain functions associated with them. These turn out to be intimately related to the similar positivity properties of the Kazhdan-Lusztig polynomials and the multiplicative structural constants of the canonical (global crystal) bases in the theory of Drinfeld-Jimbo quantum groups. The known proofs of these positivity properties depend on the Riemann hypothesis over finite fields and the related results. Thus the reduction here, in conjunction with the flip, in essence, says that the validity of the P vs. NP conjecture in characteristic zero is intimately linked to the Riemann hypothesis over finite fields and related problems.
arxiv
@article{mulmuley2007geometric, title={Geometric Complexity Theory VI: the flip via saturated and positive integer programming in representation theory and algebraic geometry}, author={Ketan D. Mulmuley}, journal={arXiv preprint arXiv:0704.0229}, year={2007}, archivePrefix={arXiv}, eprint={0704.0229}, primaryClass={cs.CC} }
mulmuley2007geometric
arxiv-13
0704.0282
On Punctured Pragmatic Space-Time Codes in Block Fading Channel
This paper considers the use of punctured convolutional codes to obtain pragmatic space-time trellis codes over block-fading channel. We show that good performance can be achieved even when puncturing is adopted and that we can still employ the same Viterbi decoder of the convolutional mother code by using approximated metrics without increasing the complexity of the decoding operations.
arxiv
@article{bandi2007on, title={On Punctured Pragmatic Space-Time Codes in Block Fading Channel}, author={Samuele Bandi, Luca Stabellini, Andrea Conti and Velio Tralli}, journal={arXiv preprint arXiv:0704.0282}, year={2007}, archivePrefix={arXiv}, eprint={0704.0282}, primaryClass={cs.IT cs.CC math.IT} }
bandi2007on
arxiv-14
0704.0301
Differential Recursion and Differentially Algebraic Functions
Moore introduced a class of real-valued "recursive" functions by analogy with Kleene's formulation of the standard recursive functions. While his concise definition inspired a new line of research on analog computation, it contains some technical inaccuracies. Focusing on his "primitive recursive" functions, we pin down what is problematic and discuss possible attempts to remove the ambiguity regarding the behavior of the differential recursion operator on partial functions. It turns out that in any case the purported relation to differentially algebraic functions, and hence to Shannon's model of analog computation, fails.
arxiv
@article{kawamura2007differential, title={Differential Recursion and Differentially Algebraic Functions}, author={Akitoshi Kawamura}, journal={Revised and published in ACM Trans. Comput. Logic 10, Article 22, 2009, under the title "Differential Recursion".}, year={2007}, doi={10.1145/1507244.1507252}, archivePrefix={arXiv}, eprint={0704.0301}, primaryClass={cs.CC} }
kawamura2007differential
arxiv-15
0704.0304
The World as Evolving Information
This paper discusses the benefits of describing the world as information, especially in the study of the evolution of life and cognition. Traditional studies encounter problems because it is difficult to describe life and cognition in terms of matter and energy, since their laws are valid only at the physical scale. However, if matter and energy, as well as life and cognition, are described in terms of information, evolution can be described consistently as information becoming more complex. The paper presents eight tentative laws of information, valid at multiple scales, which are generalizations of Darwinian, cybernetic, thermodynamic, psychological, philosophical, and complexity principles. These are further used to discuss the notions of life, cognition and their evolution.
arxiv
@article{gershenson2007the, title={The World as Evolving Information}, author={Carlos Gershenson}, journal={Minai, A., Braha, D., and Bar-Yam, Y., eds. Unifying Themes in Complex Systems VII, pp. 100-115. Springer, Berlin Heidelberg, 2012}, year={2007}, doi={10.1007/978-3-642-18003-3_10}, archivePrefix={arXiv}, eprint={0704.0304}, primaryClass={cs.IT cs.AI math.IT q-bio.PE} }
gershenson2007the
arxiv-16
0704.0309
The Complexity of HCP in Digraps with Degree Bound Two
The Hamiltonian cycle problem (HCP) in digraphs D with degree bound two is solved by two mappings in this paper. The first bijection is between an incidence matrix C_{nm} of a simple digraph and an incidence matrix F of a balanced bipartite undirected graph G; the second mapping is from a perfect matching of G to a cycle of D. It proves that the complexity of HCP in D is polynomial, and that finding a second non-isomorphic Hamiltonian cycle from a given Hamiltonian digraph with degree bound two is also polynomial. Lastly, it deduces P=NP based on these results.
arxiv
@article{zhu2007the, title={The Complexity of HCP in Digraps with Degree Bound Two}, author={Guohun Zhu}, journal={arXiv preprint arXiv:0704.0309}, year={2007}, archivePrefix={arXiv}, eprint={0704.0309}, primaryClass={cs.CC cs.DM} }
zhu2007the
arxiv-17
0704.0361
Pseudo-random Puncturing: A Technique to Lower the Error Floor of Turbo Codes
It has been observed that particular rate-1/2 partially systematic parallel concatenated convolutional codes (PCCCs) can achieve a lower error floor than that of their rate-1/3 parent codes. Nevertheless, good puncturing patterns can only be identified by means of an exhaustive search, whilst convergence towards low bit error probabilities can be problematic when the systematic output of a rate-1/2 partially systematic PCCC is heavily punctured. In this paper, we present and study a family of rate-1/2 partially systematic PCCCs, which we call pseudo-randomly punctured codes. We evaluate their bit error rate performance and we show that they always yield a lower error floor than that of their rate-1/3 parent codes. Furthermore, we compare analytic results to simulations and we demonstrate that their performance converges towards the error floor region, owing to the moderate puncturing of their systematic output. Consequently, we propose pseudo-random puncturing as a means of improving the bandwidth efficiency of a PCCC and simultaneously lowering its error floor.
arxiv
@article{chatzigeorgiou2007pseudo-random, title={Pseudo-random Puncturing: A Technique to Lower the Error Floor of Turbo Codes}, author={Ioannis Chatzigeorgiou, Miguel R. D. Rodrigues, Ian J. Wassell and Rolando Carrasco}, journal={arXiv preprint arXiv:0704.0361}, year={2007}, doi={10.1109/ISIT.2007.4557299}, archivePrefix={arXiv}, eprint={0704.0361}, primaryClass={cs.IT math.IT} }
chatzigeorgiou2007pseudo-random
arxiv-18
0704.0468
Inapproximability of Maximum Weighted Edge Biclique and Its Applications
Given a bipartite graph $G = (V_1,V_2,E)$ where edges take on {\it both} positive and negative weights from set $\mathcal{S}$, the {\it maximum weighted edge biclique} problem, or $\mathcal{S}$-MWEB for short, asks to find a bipartite subgraph whose sum of edge weights is maximized. This problem has various applications in bioinformatics, machine learning and databases and its (in)approximability remains open. In this paper, we show that for a wide range of choices of $\mathcal{S}$, specifically when $| \frac{\min\mathcal{S}} {\max \mathcal{S}} | \in \Omega(\eta^{\delta-1/2}) \cap O(\eta^{1/2-\delta})$ (where $\eta = \max\{|V_1|, |V_2|\}$, and $\delta \in (0,1/2]$), no polynomial time algorithm can approximate $\mathcal{S}$-MWEB within a factor of $n^{\epsilon}$ for some $\epsilon > 0$ unless $\mathsf{RP = NP}$. This hardness result gives justification of the heuristic approaches adopted for various applied problems in the aforementioned areas, and indicates that good approximation algorithms are unlikely to exist. Specifically, we give two applications by showing that: 1) finding statistically significant biclusters in the SAMBA model, proposed in \cite{Tan02} for the analysis of microarray data, is $n^{\epsilon}$-inapproximable; and 2) no polynomial time algorithm exists for the Minimum Description Length with Holes problem \cite{Bu05} unless $\mathsf{RP=NP}$.
arxiv
@article{tan2007inapproximability, title={Inapproximability of Maximum Weighted Edge Biclique and Its Applications}, author={Jinsong Tan}, journal={LNCS 4978, TAMC 2008, pp 282-293}, year={2007}, archivePrefix={arXiv}, eprint={0704.0468}, primaryClass={cs.CC cs.DS} }
tan2007inapproximability
arxiv-19
0704.0492
Refuting the Pseudo Attack on the REESSE1+ Cryptosystem
We illustrate through examples 1 and 2 that the condition at theorem 1 in [8] dissatisfies necessity, and the converse proposition of fact 1.1 in [8] does not hold, namely the condition Z/M - L/Ak < 1/(2 Ak^2) is not sufficient for f(i) + f(j) = f(k). We illuminate through an analysis and ex. 3 that there is a logic error during deduction of fact 1.2, which causes each of fact 1.2, 1.3, 4 to be invalid. We demonstrate through ex. 4 and 5 that each or the combination of qu+1 > qu * D at fact 4 and table 1 at fact 2.2 is not sufficient for f(i) + f(j) = f(k), properties 1, 2, 3, 4, 5 each are invalid, and alg. 1 based on fact 4 and alg. 2 based on table 1 are disordered and wrong logically. Further, we manifest through a repeated experiment and ex. 5 that the data at table 2 is falsified, and the example in [8] is woven elaborately. We explain why Cx = Ax * W^f(x) (% M) is changed to Cx = (Ax * W^f(x))^d (% M) in REESSE1+ v2.1. To the signature fraud, we point out that [8] misunderstands the existence of T^-1 and Q^-1 % (M-1), and forging of Q can be easily avoided through moving H. Therefore, the conclusion of [8] that REESSE1+ is not secure at all (which connotes that [8] can extract a related private key from any public key in REESSE1+) is fully incorrect, and as long as the parameter Omega is fitly selected, REESSE1+ with Cx = Ax * W^f(x) (% M) is secure.
arxiv
@article{su2007refuting, title={Refuting the Pseudo Attack on the REESSE1+ Cryptosystem}, author={Shenghui Su and Shuwang Lu}, journal={arXiv preprint arXiv:0704.0492}, year={2007}, archivePrefix={arXiv}, eprint={0704.0492}, primaryClass={cs.CR} }
su2007refuting
arxiv-20
0704.0499
Optimal Routing for Decode-and-Forward based Cooperation in Wireless Networks
We investigate cooperative wireless relay networks in which the nodes can help each other in data transmission. We study different coding strategies in the single-source single-destination network with many relay nodes. Given the myriad of ways in which nodes can cooperate, there is a natural routing problem, i.e., determining an ordered set of nodes to relay the data from the source to the destination. We find that for a given route, the decode-and-forward strategy, which is an information theoretic cooperative coding strategy, achieves rates significantly higher than that achievable by the usual multi-hop coding strategy, which is a point-to-point non-cooperative coding strategy. We construct an algorithm to find an optimal route (in terms of rate maximizing) for the decode-and-forward strategy. Since the algorithm runs in factorial time in the worst case, we propose a heuristic algorithm that runs in polynomial time. The heuristic algorithm outputs an optimal route when the nodes transmit independent codewords. We implement these coding strategies using practical low density parity check codes to compare the performance of the strategies on different routes.
arxiv
@article{ong2007optimal, title={Optimal Routing for Decode-and-Forward based Cooperation in Wireless Networks}, author={Lawrence Ong and Mehul Motani}, journal={Proceedings of the 4th Annual IEEE Communications Society Conference on Sensor, Mesh, and Ad Hoc Communications and Networks (SECON 2007), San Diego, CA, pp. 334-343, Jun. 18-21 2007.}, year={2007}, doi={10.1109/SAHCN.2007.4292845}, archivePrefix={arXiv}, eprint={0704.0499}, primaryClass={cs.IT math.IT} }
ong2007optimal
arxiv-21
0704.0528
Many-to-One Throughput Capacity of IEEE 802.11 Multi-hop Wireless Networks
This paper investigates the many-to-one throughput capacity (and by symmetry, one-to-many throughput capacity) of IEEE 802.11 multi-hop networks. It has generally been assumed in prior studies that the many-to-one throughput capacity is upper-bounded by the link capacity L. We show that throughput capacity L is not achievable under 802.11. This paper introduces the notion of "canonical networks", which is a class of regularly-structured networks whose capacities can be analyzed more easily than unstructured networks. We show that the throughput capacity of canonical networks under 802.11 has an analytical upper bound of 3L/4 when the source nodes are two or more hops away from the sink; and simulated throughputs of 0.690L (0.740L) when the source nodes are many hops away. We conjecture that 3L/4 is also the upper bound for general networks. When all links have equal length, 2L/3 can be shown to be the upper bound for general networks. Our simulations show that 802.11 networks with random topologies operated with AODV routing can only achieve throughputs far below the upper bounds. Fortunately, by properly selecting routes near the gateway (or by properly positioning the relay nodes leading to the gateway) to fashion after the structure of canonical networks, the throughput can be improved significantly by more than 150%. Indeed, in a dense network, it is worthwhile to deactivate some of the relay nodes near the sink judiciously.
arxiv
@article{chan2007many-to-one, title={Many-to-One Throughput Capacity of IEEE 802.11 Multi-hop Wireless Networks}, author={Chi Pan Chan, Soung Chang Liew, An Chan}, journal={arXiv preprint arXiv:0704.0528}, year={2007}, archivePrefix={arXiv}, eprint={0704.0528}, primaryClass={cs.NI cs.IT math.IT} }
chan2007many-to-one
arxiv-22
0704.0540
On the Achievable Rate Regions for Interference Channels with Degraded Message Sets
The interference channel with degraded message sets (IC-DMS) refers to a communication model in which two senders attempt to communicate with their respective receivers simultaneously through a common medium, and one of the senders has complete and a priori (non-causal) knowledge about the message being transmitted by the other. A coding scheme that collectively has advantages of cooperative coding, collaborative coding, and dirty paper coding, is developed for such a channel. Using this coding scheme, achievable rate regions of the IC-DMS in both discrete memoryless and Gaussian cases are derived, which, in general, include several previously known rate regions. Numerical examples for the Gaussian case demonstrate that in the high-interference-gain regime, the derived achievable rate regions offer considerable improvements over these existing results.
arxiv
@article{jiang2007on, title={On the Achievable Rate Regions for Interference Channels with Degraded Message Sets}, author={Jinhua Jiang and Xin Yan}, journal={arXiv preprint arXiv:0704.0540}, year={2007}, archivePrefix={arXiv}, eprint={0704.0540}, primaryClass={cs.IT math.IT} }
jiang2007on
arxiv-23
0704.0590
A Low Complexity Algorithm and Architecture for Systematic Encoding of Hermitian Codes
We present an algorithm for systematic encoding of Hermitian codes. For a Hermitian code defined over GF(q^2), the proposed algorithm achieves a run time complexity of O(q^2) and is suitable for VLSI implementation. The encoder architecture uses as main blocks q varying-rate Reed-Solomon encoders and achieves a space complexity of O(q^2) in terms of finite field multipliers and memory elements.
arxiv
@article{agarwal2007a, title={A Low Complexity Algorithm and Architecture for Systematic Encoding of Hermitian Codes}, author={Rachit Agarwal, Ralf Koetter and Emanuel Popovici}, journal={arXiv preprint arXiv:0704.0590}, year={2007}, doi={10.1109/ISIT.2007.4557408}, archivePrefix={arXiv}, eprint={0704.0590}, primaryClass={cs.IT math.IT} }
agarwal2007a
arxiv-24
0704.0671
Learning from compressed observations
<|reference_start|>Learning from compressed observations: The problem of statistical learning is to construct a predictor of a random variable $Y$ as a function of a related random variable $X$ on the basis of an i.i.d. training sample from the joint distribution of $(X,Y)$. Allowable predictors are drawn from some specified class, and the goal is to approach asymptotically the performance (expected loss) of the best predictor in the class. We consider the setting in which one has perfect observation of the $X$-part of the sample, while the $Y$-part has to be communicated at some finite bit rate. The encoding of the $Y$-values is allowed to depend on the $X$-values. Under suitable regularity conditions on the admissible predictors, the underlying family of probability distributions and the loss function, we give an information-theoretic characterization of achievable predictor performance in terms of conditional distortion-rate functions. The ideas are illustrated on the example of nonparametric regression in Gaussian noise.<|reference_end|>
arxiv
@article{raginsky2007learning, title={Learning from compressed observations}, author={Maxim Raginsky}, journal={arXiv preprint arXiv:0704.0671}, year={2007}, doi={10.1109/ITW.2007.4313111}, archivePrefix={arXiv}, eprint={0704.0671}, primaryClass={cs.IT cs.LG math.IT} }
raginsky2007learning
arxiv-25
0704.0730
Revisiting the Issues On Netflow Sample and Export Performance
<|reference_start|>Revisiting the Issues On Netflow Sample and Export Performance: The high volume of packets and packet rates of traffic on some router links makes it exceedingly difficult for routers to examine every packet in order to keep detailed statistics about the traffic which is traversing the router. Sampling is commonly applied on routers in order to limit the load incurred by the collection of information that the router has to undertake when evaluating flow information for monitoring purposes. The sampling process in nearly all cases is a deterministic process of choosing 1 in every N packets on a per-interface basis, and then forming the flow statistics based on the collected sampled statistics. Even though this sampling may not significantly affect some statistics, such as packet rate, others can be severely distorted. However, it is important to consider the sampling techniques and their relative accuracy when applied to different traffic patterns. The main disadvantage of sampling is the loss of accuracy in the collected trace when compared to the original traffic stream. To date there has not been a detailed analysis of the impact of sampling at a router in various traffic profiles and flow criteria. In this paper, we assess the performance of the sampling process as used in NetFlow in detail, and we discuss some techniques for the compensation of loss of monitoring detail.<|reference_end|>
arxiv
@article{haddadi2007revisiting, title={Revisiting the Issues On Netflow Sample and Export Performance}, author={Hamed Haddadi, Raul Landa, Miguel Rio, Saleem Bhatti}, journal={arXiv preprint arXiv:0704.0730}, year={2007}, archivePrefix={arXiv}, eprint={0704.0730}, primaryClass={cs.PF cs.NI} }
haddadi2007revisiting
arxiv-26
0704.0788
Optimal Synthesis of Multiple Algorithms
<|reference_start|>Optimal Synthesis of Multiple Algorithms: In this paper we give a definition of "algorithm," "finite algorithm," "equivalent algorithms," and what it means for a single algorithm to dominate a set of algorithms. We define a derived algorithm which may have a smaller mean execution time than any of its component algorithms. We give an explicit expression for the mean execution time (when it exists) of the derived algorithm. We give several illustrative examples of derived algorithms with two component algorithms. We include mean execution time solutions for two-algorithm processors whose joint density of execution times are of several general forms. For the case in which the joint density for a two-algorithm processor is a step function, we give a maximum-likelihood estimation scheme with which to analyze empirical processing time data.<|reference_end|>
arxiv
@article{soileau2007optimal, title={Optimal Synthesis of Multiple Algorithms}, author={Kerry M. Soileau}, journal={arXiv preprint arXiv:0704.0788}, year={2007}, archivePrefix={arXiv}, eprint={0704.0788}, primaryClass={cs.DS cs.PF} }
soileau2007optimal
arxiv-27
0704.0802
Hybrid-ARQ in Multihop Networks with Opportunistic Relay Selection
<|reference_start|>Hybrid-ARQ in Multihop Networks with Opportunistic Relay Selection: This paper develops a contention-based opportunistic feedback technique towards relay selection in a dense wireless network. This technique enables the forwarding of additional parity information from the selected relay to the destination. For a given network, the effects of varying key parameters such as the feedback probability are presented and discussed. A primary advantage of the proposed technique is that relay selection can be performed in a distributed way. Simulation results show that its performance closely matches that of centralized schemes that, unlike the proposed method, rely on GPS information. The proposed relay selection method is also found to achieve throughput gains over a point-to-point transmission strategy.<|reference_end|>
arxiv
@article{lo2007hybrid-arq, title={Hybrid-ARQ in Multihop Networks with Opportunistic Relay Selection}, author={Caleb K. Lo, Robert W. Heath, Jr. and Sriram Vishwanath}, journal={arXiv preprint arXiv:0704.0802}, year={2007}, archivePrefix={arXiv}, eprint={0704.0802}, primaryClass={cs.IT math.IT} }
lo2007hybrid-arq
arxiv-28
0704.0805
Opportunistic Relay Selection with Limited Feedback
<|reference_start|>Opportunistic Relay Selection with Limited Feedback: It has been shown that a decentralized relay selection protocol based on opportunistic feedback from the relays yields good throughput performance in dense wireless networks. This selection strategy supports a hybrid-ARQ transmission approach where relays forward parity information to the destination in the event of a decoding error. Such an approach, however, suffers a loss compared to centralized strategies that select relays with the best channel gain to the destination. This paper closes the performance gap by adding another level of channel feedback to the decentralized relay selection problem. It is demonstrated that only one additional bit of feedback is necessary for good throughput performance. The performance impact of varying key parameters such as the number of relays and the channel feedback threshold is discussed. An accompanying bit error rate analysis demonstrates the importance of relay selection.<|reference_end|>
arxiv
@article{lo2007opportunistic, title={Opportunistic Relay Selection with Limited Feedback}, author={Caleb K. Lo, Robert W. Heath, Jr. and Sriram Vishwanath}, journal={arXiv preprint arXiv:0704.0805}, year={2007}, doi={10.1109/VETECS.2007.40}, archivePrefix={arXiv}, eprint={0704.0805}, primaryClass={cs.IT math.IT} }
lo2007opportunistic
arxiv-29
0704.0831
On packet lengths and overhead for random linear coding over the erasure channel
<|reference_start|>On packet lengths and overhead for random linear coding over the erasure channel: We assess the practicality of random network coding by illuminating the issue of overhead and considering it in conjunction with increasingly long packets sent over the erasure channel. We show that the transmission of increasingly long packets, consisting either of an increasing number of symbols per packet or of an increasing symbol alphabet size, results in a data rate approaching zero over the erasure channel. This result is due to an erasure probability that increases with packet length. Numerical results for a particular modulation scheme demonstrate a data rate of approximately zero for a large, but finite-length packet. Our results suggest a reduction in the performance gains offered by random network coding.<|reference_end|>
arxiv
@article{shrader2007on, title={On packet lengths and overhead for random linear coding over the erasure channel}, author={Brooke Shrader and Anthony Ephremides}, journal={arXiv preprint arXiv:0704.0831}, year={2007}, archivePrefix={arXiv}, eprint={0704.0831}, primaryClass={cs.IT math.IT} }
shrader2007on
arxiv-30
0704.0834
P-adic arithmetic coding
<|reference_start|>P-adic arithmetic coding: A new incremental algorithm for data compression is presented. For a sequence of input symbols, the algorithm incrementally constructs a p-adic integer number as an output. The decoding process starts with the less significant part of a p-adic integer and incrementally reconstructs the sequence of input symbols. The algorithm is based on certain features of p-adic numbers and the p-adic norm. The p-adic coding algorithm may be considered a generalization of a popular compression technique - arithmetic coding. It is shown that for p = 2 the algorithm works as an integer variant of arithmetic coding; for a special class of models it gives exactly the same codes as Huffman's algorithm, and for another special model and a specific alphabet it gives Golomb-Rice codes.<|reference_end|>
arxiv
@article{rodionov2007p-adic, title={P-adic arithmetic coding}, author={Anatoly Rodionov, Sergey Volkov}, journal={arXiv preprint arXiv:0704.0834}, year={2007}, archivePrefix={arXiv}, eprint={0704.0834}, primaryClass={cs.DS} }
rodionov2007p-adic
arxiv-31
0704.0838
Universal Source Coding for Monotonic and Fast Decaying Monotonic Distributions
<|reference_start|>Universal Source Coding for Monotonic and Fast Decaying Monotonic Distributions: We study universal compression of sequences generated by monotonic distributions. We show that for a monotonic distribution over an alphabet of size $k$, each probability parameter costs essentially $0.5 \log (n/k^3)$ bits, where $n$ is the coded sequence length, as long as $k = o(n^{1/3})$. Otherwise, for $k = O(n)$, the total average sequence redundancy is $O(n^{1/3+\epsilon})$ bits overall. We then show that there exists a sub-class of monotonic distributions over infinite alphabets for which redundancy of $O(n^{1/3+\epsilon})$ bits overall is still achievable. This class contains fast decaying distributions, including many distributions over the integers and geometric distributions. For some slower decays, including other distributions over the integers, redundancy of $o(n)$ bits overall is achievable, where a method to compute specific redundancy rates for such distributions is derived. The results are specifically true for finite entropy monotonic distributions. Finally, we study individual sequence redundancy behavior assuming a sequence is governed by a monotonic distribution. We show that for sequences whose empirical distributions are monotonic, individual redundancy bounds similar to those in the average case can be obtained. However, even if the monotonicity in the empirical distribution is violated, diminishing per symbol individual sequence redundancies with respect to the monotonic maximum likelihood description length may still be achievable.<|reference_end|>
arxiv
@article{shamir2007universal, title={Universal Source Coding for Monotonic and Fast Decaying Monotonic Distributions}, author={Gil I. Shamir}, journal={arXiv preprint arXiv:0704.0838}, year={2007}, archivePrefix={arXiv}, eprint={0704.0838}, primaryClass={cs.IT math.IT} }
shamir2007universal
arxiv-32
0704.0858
Lessons Learned from the deployment of a high-interaction honeypot
<|reference_start|>Lessons Learned from the deployment of a high-interaction honeypot: This paper presents an experimental study and the lessons learned from the observation of attackers when logged into a compromised machine. The results are based on a six-month period during which a controlled experiment has been run with a high-interaction honeypot. We correlate our findings with those obtained with a worldwide distributed system of low-interaction honeypots.<|reference_end|>
arxiv
@article{alata2007lessons, title={Lessons Learned from the deployment of a high-interaction honeypot}, author={Eric Alata (LAAS), Vincent Nicomette (LAAS), Mohamed Ka^aniche (LAAS), Marc Dacier (LAAS), Matthieu Herrb (LAAS)}, journal={Proc. 6th European Dependable Computing Conference (EDCC-6), Coimbra (Portugal), 18-20 October 2006 (18/10/2006) 39-44}, year={2007}, archivePrefix={arXiv}, eprint={0704.0858}, primaryClass={cs.CR} }
alata2007lessons
arxiv-33
0704.0860
Availability assessment of SunOS/Solaris Unix Systems based on Syslogd and wtmpx logfiles : a case study
<|reference_start|>Availability assessment of SunOS/Solaris Unix Systems based on Syslogd and wtmpx logfiles : a case study: This paper presents a measurement-based availability assessment study using field data collected during a 4-year period from 373 SunOS/Solaris Unix workstations and servers interconnected through a local area network. We focus on the estimation of machine uptimes, downtimes and availability based on the identification of failures that caused total service loss. Data corresponds to syslogd event logs that contain a large amount of information about the normal activity of the studied systems as well as their behavior in the presence of failures. It is widely recognized that the information contained in such event logs might be incomplete or imperfect. The solution investigated in this paper to address this problem is based on the use of auxiliary sources of data obtained from wtmpx files maintained by the SunOS/Solaris Unix operating system. The results obtained suggest that the combined use of wtmpx and syslogd log files provides more complete information on the state of the target systems that is useful to provide availability estimations that better reflect reality.<|reference_end|>
arxiv
@article{simache2007availability, title={Availability assessment of SunOS/Solaris Unix Systems based on Syslogd and wtmpx logfiles : a case study}, author={Cristina Simache (LAAS), Mohamed Kaaniche (LAAS)}, journal={Proc. 2005 IEEE Pacific Rim International Symposium on Dependable Computing (PRDC'2005), Changsha, Hunan (China), 12-14 December 2005 (18/12/2005) 49-56}, year={2007}, archivePrefix={arXiv}, eprint={0704.0860}, primaryClass={cs.PF} }
simache2007availability
arxiv-34
0704.0861
Empirical analysis and statistical modeling of attack processes based on honeypots
<|reference_start|>Empirical analysis and statistical modeling of attack processes based on honeypots: Honeypots are more and more used to collect data on malicious activities on the Internet and to better understand the strategies and techniques used by attackers to compromise target systems. Analysis and modeling methodologies are needed to support the characterization of attack processes based on the data collected from the honeypots. This paper presents some empirical analyses based on the data collected from the Leurr{\'e}.com honeypot platforms deployed on the Internet and presents some preliminary modeling studies aimed at fulfilling such objectives.<|reference_end|>
arxiv
@article{kaaniche2007empirical, title={Empirical analysis and statistical modeling of attack processes based on honeypots}, author={Mohamed Kaaniche (LAAS), Y. Deswarte (LAAS), Eric Alata (LAAS), Marc Dacier (SC), Vincent Nicomette (LAAS)}, journal={IEEE/IFIP International Conference on Dependable Systems and Networks (DSN-2006) (25/06/2006) 119-124}, year={2007}, archivePrefix={arXiv}, eprint={0704.0861}, primaryClass={cs.PF cs.CR} }
kaaniche2007empirical
arxiv-35
0704.0865
An architecture-based dependability modeling framework using AADL
<|reference_start|>An architecture-based dependability modeling framework using AADL: For efficiency reasons, software system designers wish to use an integrated set of methods and tools to describe specifications and designs, and also to perform dependability, schedulability and performance analyses. AADL (Architecture Analysis and Design Language) has proved to be efficient for software architecture modeling. In addition, AADL was designed to accommodate several types of analyses. This paper presents an iterative dependency-driven approach for dependability modeling using AADL. It is illustrated on a small example. This approach is part of a complete framework that allows the generation of dependability analysis and evaluation models from AADL models to support the analysis of software and system architectures, in critical application domains.<|reference_end|>
arxiv
@article{rugina2007an, title={An architecture-based dependability modeling framework using AADL}, author={Ana-Elena Rugina (LAAS), Karama Kanoun (LAAS), Mohamed Kaaniche (LAAS)}, journal={Proc. 10th IASTED International Conference on Software Engineering and Applications (SEA'2006), Dallas (USA), 13-15 November 2006 (13/11/2006) 222-227}, year={2007}, archivePrefix={arXiv}, eprint={0704.0865}, primaryClass={cs.PF cs.SE} }
rugina2007an
arxiv-36
0704.0879
A Hierarchical Approach for Dependability Analysis of a Commercial Cache-Based RAID Storage Architecture
<|reference_start|>A Hierarchical Approach for Dependability Analysis of a Commercial Cache-Based RAID Storage Architecture: We present a hierarchical simulation approach for the dependability analysis and evaluation of a highly available commercial cache-based RAID storage system. The architecture is complex and includes several layers of overlapping error detection and recovery mechanisms. Three abstraction levels have been developed to model the cache architecture, cache operations, and error detection and recovery mechanisms. The impact of faults and errors occurring in the cache and in the disks is analyzed at each level of the hierarchy. A simulation submodel is associated with each abstraction level. The models have been developed using DEPEND, a simulation-based environment for system-level dependability analysis, which provides facilities to inject faults into a functional behavior model, to simulate error detection and recovery mechanisms, and to evaluate quantitative measures. Several fault models are defined for each submodel to simulate cache component failures, disk failures, transmission errors, and data errors in the cache memory and in the disks. Some of the parameters characterizing fault injection in a given submodel correspond to probabilities evaluated from the simulation of the lower-level submodel. Based on the proposed methodology, we evaluate and analyze 1) the system behavior under a real workload and high error rate (focusing on error bursts), 2) the coverage of the error detection mechanisms implemented in the system and the error latency distributions, and 3) the accumulation of errors in the cache and in the disks.<|reference_end|>
arxiv
@article{kaaniche2007a, title={A Hierarchical Approach for Dependability Analysis of a Commercial Cache-Based RAID Storage Architecture}, author={Mohamed Kaaniche (LAAS), Luigi Romano (UIUC), Zbigniew Kalbarczyk (UIUC), Ravishankar Iyer (UIUC), Rick Karcich (STORAGETEK)}, journal={Proc. 28th IEEE International Symposium on Fault-Tolerant Computing (FTCS-28), Munich (Germany), IEEE Computer Society, June 1998, pp.6-15 (1998) 6-15}, year={2007}, archivePrefix={arXiv}, eprint={0704.0879}, primaryClass={cs.PF} }
kaaniche2007a
arxiv-37
0704.0954
Sensor Networks with Random Links: Topology Design for Distributed Consensus
<|reference_start|>Sensor Networks with Random Links: Topology Design for Distributed Consensus: In a sensor network, in practice, the communication among sensors is subject to: (1) errors or failures at random times; (2) costs; and (3) constraints, since sensors and networks operate under scarce resources, such as power, data rate, or communication. The signal-to-noise ratio (SNR) is usually a main factor in determining the probability of error (or of communication failure) in a link. These probabilities are then a proxy for the SNR under which the links operate. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures) to maximize the rate of convergence of average consensus, when the link communication costs are taken into account, and there is an overall communication budget constraint. To consider this problem, we address a number of preliminary issues: (1) model the network as a random topology; (2) establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail; and, in particular, (3) show that a necessary and sufficient condition for both mss and a.s. convergence is for the algebraic connectivity of the mean graph describing the network topology to be strictly positive. With these results, we formulate topology design, subject to random link failures and to a communication cost constraint, as a constrained convex optimization problem to which we apply semidefinite programming techniques. We show by an extensive numerical study that the optimal design improves significantly the convergence speed of the consensus algorithm and can achieve the asymptotic performance of a non-random network at a fraction of the communication cost.<|reference_end|>
arxiv
@article{kar2007sensor, title={Sensor Networks with Random Links: Topology Design for Distributed Consensus}, author={Soummya Kar and Jose M. F. Moura}, journal={arXiv preprint arXiv:0704.0954}, year={2007}, doi={10.1109/TSP.2008.920143}, archivePrefix={arXiv}, eprint={0704.0954}, primaryClass={cs.IT cs.LG math.IT} }
kar2007sensor
arxiv-38
0704.0967
Cross-Layer Optimization of MIMO-Based Mesh Networks with Gaussian Vector Broadcast Channels
<|reference_start|>Cross-Layer Optimization of MIMO-Based Mesh Networks with Gaussian Vector Broadcast Channels: MIMO technology is one of the most significant advances in the past decade to increase channel capacity and has a great potential to improve network capacity for mesh networks. In a MIMO-based mesh network, the links outgoing from each node sharing the common communication spectrum can be modeled as a Gaussian vector broadcast channel. Recently, researchers showed that ``dirty paper coding'' (DPC) is the optimal transmission strategy for Gaussian vector broadcast channels. So far, there has been little study on how this fundamental result will impact the cross-layer design for MIMO-based mesh networks. To fill this gap, we consider the problem of jointly optimizing DPC power allocation in the link layer at each node and multihop/multipath routing in a MIMO-based mesh network. It turns out that this optimization problem is a very challenging non-convex problem. To address this difficulty, we transform the original problem to an equivalent problem by exploiting the channel duality. For the transformed problem, we develop an efficient solution procedure that integrates Lagrangian dual decomposition method, conjugate gradient projection method based on matrix differential calculus, cutting-plane method, and subgradient method. In our numerical example, it is shown that we can achieve a network performance gain of 34.4% by using DPC.<|reference_end|>
arxiv
@article{liu2007cross-layer, title={Cross-Layer Optimization of MIMO-Based Mesh Networks with Gaussian Vector Broadcast Channels}, author={Jia Liu and Y. Thomas Hou}, journal={arXiv preprint arXiv:0704.0967}, year={2007}, archivePrefix={arXiv}, eprint={0704.0967}, primaryClass={cs.IT cs.AR math.IT} }
liu2007cross-layer
arxiv-39
0704.0985
Architecture for Pseudo Acausal Evolvable Embedded Systems
<|reference_start|>Architecture for Pseudo Acausal Evolvable Embedded Systems: Advances in semiconductor technology are contributing to the increasing complexity in the design of embedded systems. Architectures with novel techniques such as an evolvable nature and autonomous behavior have attracted a lot of attention. This paper demonstrates conceptually that evolvable embedded systems can be characterized based on their acausal nature. It is noted that in acausal systems, future input needs to be known; here we introduce a mechanism by which the system predicts the future inputs and exhibits a pseudo acausal nature. An embedded system that uses the theoretical framework of acausality is proposed. Our method aims at a novel architecture that features hardware evolvability and autonomous behavior alongside pseudo acausality. Various aspects of this architecture are discussed in detail along with the limitations.<|reference_end|>
arxiv
@article{abubakr2007architecture, title={Architecture for Pseudo Acausal Evolvable Embedded Systems}, author={Mohd Abubakr, R.M.Vinay}, journal={arXiv preprint arXiv:0704.0985}, year={2007}, archivePrefix={arXiv}, eprint={0704.0985}, primaryClass={cs.NE cs.AI} }
abubakr2007architecture
arxiv-40
0704.1020
The on-line shortest path problem under partial monitoring
<|reference_start|>The on-line shortest path problem under partial monitoring: The on-line shortest path problem is considered under various models of partial monitoring. Given a weighted directed acyclic graph whose edge weights can change in an arbitrary (adversarial) way, a decision maker has to choose in each round of a game a path between two distinguished vertices such that the loss of the chosen path (defined as the sum of the weights of its composing edges) be as small as possible. In a setting generalizing the multi-armed bandit problem, after choosing a path, the decision maker learns only the weights of those edges that belong to the chosen path. For this problem, an algorithm is given whose average cumulative loss in n rounds exceeds that of the best path, matched off-line to the entire sequence of the edge weights, by a quantity that is proportional to 1/\sqrt{n} and depends only polynomially on the number of edges of the graph. The algorithm can be implemented with linear complexity in the number of rounds n and in the number of edges. An extension to the so-called label efficient setting is also given, in which the decision maker is informed about the weights of the edges corresponding to the chosen path at a total of m << n time instances. Another extension is shown where the decision maker competes against a time-varying path, a generalization of the problem of tracking the best expert. A version of the multi-armed bandit setting for shortest path is also discussed where the decision maker learns only the total weight of the chosen path but not the weights of the individual edges on the path. Applications to routing in packet switched networks along with simulation results are also presented.<|reference_end|>
arxiv
@article{gyorgy2007the, title={The on-line shortest path problem under partial monitoring}, author={Andras Gyorgy, Tamas Linder, Gabor Lugosi, Gyorgy Ottucsak}, journal={arXiv preprint arXiv:0704.1020}, year={2007}, archivePrefix={arXiv}, eprint={0704.1020}, primaryClass={cs.LG cs.SC} }
gyorgy2007the
arxiv-41
0704.1028
A neural network approach to ordinal regression
<|reference_start|>A neural network approach to ordinal regression: Ordinal regression is an important type of learning, which has properties of both classification and regression. Here we describe a simple and effective approach to adapt a traditional neural network to learn ordinal categories. Our approach is a generalization of the perceptron method for ordinal regression. On several benchmark datasets, our method (NNRank) outperforms a neural network classification method. Compared with the ordinal regression methods using Gaussian processes and support vector machines, NNRank achieves comparable performance. Moreover, NNRank has the advantages of traditional neural networks: learning in both online and batch modes, handling very large training datasets, and making rapid predictions. These features make NNRank a useful and complementary tool for large-scale data processing tasks such as information retrieval, web page ranking, collaborative filtering, and protein ranking in Bioinformatics.<|reference_end|>
arxiv
@article{cheng2007a, title={A neural network approach to ordinal regression}, author={Jianlin Cheng}, journal={arXiv preprint arXiv:0704.1028}, year={2007}, archivePrefix={arXiv}, eprint={0704.1028}, primaryClass={cs.LG cs.AI cs.NE} }
cheng2007a
arxiv-42
0704.1043
On the Kolmogorov-Chaitin Complexity for short sequences
<|reference_start|>On the Kolmogorov-Chaitin Complexity for short sequences: A drawback of Kolmogorov-Chaitin complexity (K) as a function from s to the shortest program producing s is its noncomputability which limits its range of applicability. Moreover, when strings are short, the dependence of K on a particular universal Turing machine U can be arbitrary. In practice one can approximate it by computable compression methods. However, such compression methods do not always provide meaningful approximations--for strings shorter, for example, than typical compiler lengths. In this paper we suggest an empirical approach to overcome this difficulty and to obtain a stable definition of the Kolmogorov-Chaitin complexity for short sequences. Additionally, a correlation in terms of distribution frequencies was found across the output of two models of abstract machines, namely unidimensional cellular automata and deterministic Turing machine.<|reference_end|>
arxiv
@article{delahaye2007on, title={On the Kolmogorov-Chaitin Complexity for short sequences}, author={Jean-Paul Delahaye and Hector Zenil}, journal={arXiv preprint arXiv:0704.1043}, year={2007}, archivePrefix={arXiv}, eprint={0704.1043}, primaryClass={cs.CC cs.IT math.IT} }
delahaye2007on
arxiv-43
0704.1068
Fast paths in large-scale dynamic road networks
<|reference_start|>Fast paths in large-scale dynamic road networks: Efficiently computing fast paths in large scale dynamic road networks (where dynamic traffic information is known over a part of the network) is a practical problem faced by several traffic information service providers who wish to offer a realistic fast path computation to GPS terminal enabled vehicles. The heuristic solution method we propose is based on a highway hierarchy-based shortest path algorithm for static large-scale networks; we maintain a static highway hierarchy and perform each query on the dynamically evaluated network.<|reference_end|>
arxiv
@article{nannicini2007fast, title={Fast paths in large-scale dynamic road networks}, author={Giacomo Nannicini, Philippe Baptiste, Gilles Barbier, Daniel Krob, Leo Liberti}, journal={arXiv preprint arXiv:0704.1068}, year={2007}, archivePrefix={arXiv}, eprint={0704.1068}, primaryClass={cs.NI cs.DS} }
nannicini2007fast
arxiv-44
0704.1070
Differential Diversity Reception of MDPSK over Independent Rayleigh Channels with Nonidentical Branch Statistics and Asymmetric Fading Spectrum
<|reference_start|>Differential Diversity Reception of MDPSK over Independent Rayleigh Channels with Nonidentical Branch Statistics and Asymmetric Fading Spectrum: This paper is concerned with optimum diversity receiver structure and its performance analysis of differential phase shift keying (DPSK) with differential detection over nonselective, independent, nonidentically distributed, Rayleigh fading channels. The fading process in each branch is assumed to have an arbitrary Doppler spectrum with arbitrary Doppler bandwidth, but to have distinct, asymmetric fading power spectral density characteristic. Using 8-DPSK as an example, the average bit error probability (BEP) of the optimum diversity receiver is obtained by calculating the BEP for each of the three individual bits. The BEP results derived are given in exact, explicit, closed-form expressions which show clearly the behavior of the performance as a function of various system parameters.<|reference_end|>
arxiv
@article{fu2007differential, title={Differential Diversity Reception of MDPSK over Independent Rayleigh Channels with Nonidentical Branch Statistics and Asymmetric Fading Spectrum}, author={Hua Fu and Pooi Yuen Kam}, journal={arXiv preprint arXiv:0704.1070}, year={2007}, archivePrefix={arXiv}, eprint={0704.1070}, primaryClass={cs.IT cs.PF math.IT} }
fu2007differential
arxiv-45
0704.1158
Novelty and Collective Attention
<|reference_start|>Novelty and Collective Attention: The subject of collective attention is central to an information age where millions of people are inundated with daily messages. It is thus of interest to understand how attention to novel items propagates and eventually fades among large populations. We have analyzed the dynamics of collective attention among one million users of an interactive website -- \texttt{digg.com} -- devoted to thousands of novel news stories. The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.<|reference_end|>
arxiv
@article{wu2007novelty, title={Novelty and Collective Attention}, author={Fang Wu and Bernardo A. Huberman}, journal={arXiv preprint arXiv:0704.1158}, year={2007}, doi={10.1073/pnas.0704916104}, archivePrefix={arXiv}, eprint={0704.1158}, primaryClass={cs.CY cs.IR physics.soc-ph} }
wu2007novelty
arxiv-46
0704.1196
Novel algorithm to calculate hypervolume indicator of Pareto approximation set
<|reference_start|>Novel algorithm to calculate hypervolume indicator of Pareto approximation set: Hypervolume indicator is a commonly accepted quality measure for comparing Pareto approximation sets generated by multi-objective optimizers. The best known algorithm to calculate it for $n$ points in $d$-dimensional space has a run time of $O(n^{d/2})$ with special data structures. This paper presents a recursive, vertex-splitting algorithm for calculating the hypervolume indicator of a set of $n$ non-comparable points in $d>2$ dimensions. It splits out multiple child hyper-cuboids which cannot be dominated by a splitting reference point. In particular, the splitting reference point is carefully chosen to minimize the number of points in the child hyper-cuboids. The complexity analysis shows that the proposed algorithm achieves $O((\frac{d}{2})^n)$ time and $O(dn^2)$ space complexity in the worst case.<|reference_end|>
arxiv
@article{yang2007novel, title={Novel algorithm to calculate hypervolume indicator of Pareto approximation set}, author={Qing Yang and Shengchao Ding}, journal={arXiv preprint arXiv:0704.1196}, year={2007}, archivePrefix={arXiv}, eprint={0704.1196}, primaryClass={cs.CG cs.NE} }
yang2007novel
arxiv-47
0704.1198
A Doubly Distributed Genetic Algorithm for Network Coding
<|reference_start|>A Doubly Distributed Genetic Algorithm for Network Coding: We present a genetic algorithm which is distributed in two novel ways: along genotype and temporal axes. Our algorithm first distributes, for every member of the population, a subset of the genotype to each network node, rather than a subset of the population to each. This genotype distribution is shown to offer a significant gain in running time. Then, for efficient use of the computational resources in the network, our algorithm divides the candidate solutions into pipelined sets and thus the distribution is in the temporal domain, rather than in the spatial domain. This temporal distribution may lead to temporal inconsistency in selection and replacement; however, our experiments yield better efficiency in terms of the time to convergence without incurring significant penalties.<|reference_end|>
arxiv
@article{kim2007a, title={A Doubly Distributed Genetic Algorithm for Network Coding}, author={Minkyu Kim, Varun Aggarwal, Una-May O'Reilly, Muriel Medard}, journal={arXiv preprint arXiv:0704.1198}, year={2007}, archivePrefix={arXiv}, eprint={0704.1198}, primaryClass={cs.NE cs.NI} }
kim2007a
arxiv-48
0704.1267
Text Line Segmentation of Historical Documents: a Survey
<|reference_start|>Text Line Segmentation of Historical Documents: a Survey: There is a huge amount of historical documents in libraries and in various National Archives that have not been exploited electronically. Although automatic reading of complete pages remains, in most cases, a long-term objective, tasks such as word spotting, text/image alignment, authentication and extraction of specific fields are in use today. For all these tasks, a major step is document segmentation into text lines. Because of the low quality and the complexity of these documents (background noise, artifacts due to aging, interfering lines), automatic text line segmentation remains an open research field. The objective of this paper is to present a survey of existing methods, developed during the last decade, and dedicated to documents of historical interest.<|reference_end|>
arxiv
@article{likforman-sulem2007text, title={Text Line Segmentation of Historical Documents: a Survey}, author={Laurence Likforman-Sulem, Abderrazak Zahour, Bruno Taconet}, journal={International Journal on Document Analysis and Recognition (IJDAR), Vol. 9, no 2-4, April 2007, pp. 123-138}, year={2007}, doi={10.1007/s10032-006-0023-z}, archivePrefix={arXiv}, eprint={0704.1267}, primaryClass={cs.CV} }
likforman-sulem2007text
arxiv-49
0704.1269
Phase Transitions in the Coloring of Random Graphs
<|reference_start|>Phase Transitions in the Coloring of Random Graphs: We consider the problem of coloring the vertices of a large sparse random graph with a given number of colors so that no adjacent vertices have the same color. Using the cavity method, we present a detailed and systematic analytical study of the space of proper colorings (solutions). We show that for a fixed number of colors and as the average vertex degree (number of constraints) increases, the set of solutions undergoes several phase transitions similar to those observed in the mean field theory of glasses. First, at the clustering transition, the entropically dominant part of the phase space decomposes into an exponential number of pure states so that beyond this transition a uniform sampling of solutions becomes hard. Afterward, the space of solutions condenses over a finite number of the largest states and consequently the total entropy of solutions becomes smaller than the annealed one. Another transition takes place when in all the entropically dominant states a finite fraction of nodes freezes so that each of these nodes is allowed a single color in all the solutions inside the state. Eventually, above the coloring threshold, no more solutions are available. We compute all the critical connectivities for Erdos-Renyi and regular random graphs and determine their asymptotic values for large number of colors. Finally, we discuss the algorithmic consequences of our findings. We argue that the onset of computational hardness is not associated with the clustering transition and we suggest instead that the freezing transition might be the relevant phenomenon. We also discuss the performance of a simple local Walk-COL algorithm and of the belief propagation algorithm in the light of our results.<|reference_end|>
arxiv
@article{zdeborová2007phase, title={Phase Transitions in the Coloring of Random Graphs}, author={Lenka Zdeborov'a, Florent Krzakala}, journal={Phys. Rev. E 76, 031131 (2007)}, year={2007}, doi={10.1103/PhysRevE.76.031131}, archivePrefix={arXiv}, eprint={0704.1269}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.CC} }
zdeborová2007phase
arxiv-50
0704.1274
Parametric Learning and Monte Carlo Optimization
<|reference_start|>Parametric Learning and Monte Carlo Optimization: This paper uncovers and explores the close relationship between Monte Carlo Optimization of a parametrized integral (MCO), Parametric machine-Learning (PL), and `blackbox' or `oracle'-based optimization (BO). We make four contributions. First, we prove that MCO is mathematically identical to a broad class of PL problems. This identity potentially provides a new application domain for all broadly applicable PL techniques: MCO. Second, we introduce immediate sampling, a new version of the Probability Collectives (PC) algorithm for blackbox optimization. Immediate sampling transforms the original BO problem into an MCO problem. Accordingly, by combining these first two contributions, we can apply all PL techniques to BO. In our third contribution we validate this way of improving BO by demonstrating that cross-validation and bagging improve immediate sampling. Finally, conventional MC and MCO procedures ignore the relationship between the sample point locations and the associated values of the integrand; only the values of the integrand at those locations are considered. We demonstrate that one can exploit the sample location information using PL techniques, for example by forming a fit of the sample locations to the associated values of the integrand. This provides an additional way to apply PL techniques to improve MCO.<|reference_end|>
arxiv
@article{wolpert2007parametric, title={Parametric Learning and Monte Carlo Optimization}, author={David H. Wolpert and Dev G. Rajnarayan}, journal={arXiv preprint arXiv:0704.1274}, year={2007}, archivePrefix={arXiv}, eprint={0704.1274}, primaryClass={cs.LG} }
wolpert2007parametric
arxiv-51
0704.1294
A Disciplined Approach to Adopting Agile Practices: The Agile Adoption Framework
<|reference_start|>A Disciplined Approach to Adopting Agile Practices: The Agile Adoption Framework: Many organizations aspire to adopt agile processes to take advantage of the numerous benefits that it offers to an organization. Those benefits include, but are not limited to, quicker return on investment, better software quality, and higher customer satisfaction. To date however, there is no structured process (at least in the public domain) that guides organizations in adopting agile practices. To address this problem we present the Agile Adoption Framework. The framework consists of two components: an agile measurement index, and a 4-Stage process, that together guide and assist the agile adoption efforts of organizations. More specifically, the agile measurement index is used to identify the agile potential of projects and organizations. The 4-Stage process, on the other hand, helps determine (a) whether or not organizations are ready for agile adoption, and (b) guided by their potential, what set of agile practices can and should be introduced.<|reference_end|>
arxiv
@article{sidky2007a, title={A Disciplined Approach to Adopting Agile Practices: The Agile Adoption Framework}, author={Ahmed Sidky, James Arthur, Shawn Bohner}, journal={arXiv preprint arXiv:0704.1294}, year={2007}, archivePrefix={arXiv}, eprint={0704.1294}, primaryClass={cs.SE} }
sidky2007a
arxiv-52
0704.1308
Antenna Combining for the MIMO Downlink Channel
<|reference_start|>Antenna Combining for the MIMO Downlink Channel: A multiple antenna downlink channel where limited channel feedback is available to the transmitter is considered. In a vector downlink channel (single antenna at each receiver), the transmit antenna array can be used to transmit separate data streams to multiple receivers only if the transmitter has very accurate channel knowledge, i.e., if there is high-rate channel feedback from each receiver. In this work it is shown that channel feedback requirements can be significantly reduced if each receiver has a small number of antennas and appropriately combines its antenna outputs. A combining method that minimizes channel quantization error at each receiver, and thereby minimizes multi-user interference, is proposed and analyzed. This technique is shown to outperform traditional techniques such as maximum-ratio combining because minimization of interference power is more critical than maximization of signal power in the multiple antenna downlink. Analysis is provided to quantify the feedback savings, and the technique is seen to work well with user selection and is also robust to receiver estimation error.<|reference_end|>
arxiv
@article{jindal2007antenna, title={Antenna Combining for the MIMO Downlink Channel}, author={Nihar Jindal}, journal={arXiv preprint arXiv:0704.1308}, year={2007}, doi={10.1109/T-WC.2008.070383}, archivePrefix={arXiv}, eprint={0704.1308}, primaryClass={cs.IT math.IT} }
jindal2007antenna
arxiv-53
0704.1317
Low Density Lattice Codes
<|reference_start|>Low Density Lattice Codes: Low density lattice codes (LDLC) are novel lattice codes that can be decoded efficiently and approach the capacity of the additive white Gaussian noise (AWGN) channel. In LDLC a codeword x is generated directly at the n-dimensional Euclidean space as a linear transformation of a corresponding integer message vector b, i.e., x = Gb, where H, the inverse of G, is restricted to be sparse. The fact that H is sparse is utilized to develop a linear-time iterative decoding scheme which attains, as demonstrated by simulations, good error performance within ~0.5dB from capacity at block length of n = 100,000 symbols. The paper also discusses convergence results and implementation considerations.<|reference_end|>
arxiv
@article{sommer2007low, title={Low Density Lattice Codes}, author={Naftali Sommer, Meir Feder and Ofir Shalvi}, journal={arXiv preprint arXiv:0704.1317}, year={2007}, archivePrefix={arXiv}, eprint={0704.1317}, primaryClass={cs.IT math.IT} }
sommer2007low
arxiv-54
0704.1353
Supporting Knowledge and Expertise Finding within Australia's Defence Science and Technology Organisation
<|reference_start|>Supporting Knowledge and Expertise Finding within Australia's Defence Science and Technology Organisation: This paper reports on work aimed at supporting knowledge and expertise finding within a large Research and Development (R&D) organisation. The paper first discusses the nature of knowledge important to R&D organisations and presents a prototype information system developed to support knowledge and expertise finding. The paper then discusses a trial of the system within an R&D organisation, the implications and limitations of the trial, and discusses future research questions.<|reference_end|>
arxiv
@article{prekop2007supporting, title={Supporting Knowledge and Expertise Finding within Australia's Defence Science and Technology Organisation}, author={Paul Prekop}, journal={arXiv preprint arXiv:0704.1353}, year={2007}, archivePrefix={arXiv}, eprint={0704.1353}, primaryClass={cs.OH cs.DB cs.DL cs.HC} }
prekop2007supporting
arxiv-55
0704.1358
Distance preserving mappings from ternary vectors to permutations
<|reference_start|>Distance preserving mappings from ternary vectors to permutations: Distance-preserving mappings (DPMs) are mappings from the set of all q-ary vectors of a fixed length to the set of permutations of the same or longer length such that every two distinct vectors are mapped to permutations with the same or even larger Hamming distance than that of the vectors. In this paper, we propose a construction of DPMs from ternary vectors. The constructed DPMs improve the lower bounds on the maximal size of permutation arrays.<|reference_end|>
arxiv
@article{lin2007distance, title={Distance preserving mappings from ternary vectors to permutations}, author={Jyh-Shyan Lin, Jen-Chun Chang, Rong-Jaye Chen, Torleiv Kl{\o}ve}, journal={arXiv preprint arXiv:0704.1358}, year={2007}, archivePrefix={arXiv}, eprint={0704.1358}, primaryClass={cs.DM cs.IT math.IT} }
lin2007distance
arxiv-56
0704.1373
A Language-Based Approach for Improving the Robustness of Network Application Protocol Implementations
<|reference_start|>A Language-Based Approach for Improving the Robustness of Network Application Protocol Implementations: The secure and robust functioning of a network relies on the defect-free implementation of network applications. As network protocols have become increasingly complex, however, hand-writing network message processing code has become increasingly error-prone. In this paper, we present a domain-specific language, Zebu, for describing protocol message formats and related processing constraints. From a Zebu specification, a compiler automatically generates stubs to be used by an application to parse network messages. Zebu is easy to use, as it builds on notations used in RFCs to describe protocol grammars. Zebu is also efficient, as the memory usage is tailored to application needs and message fragments can be specified to be processed on demand. Finally, Zebu-based applications are robust, as the Zebu compiler automatically checks specification consistency and generates parsing stubs that include validation of the message structure. Using a mutation analysis in the context of SIP and RTSP, we show that Zebu significantly improves application robustness.<|reference_end|>
arxiv
@article{laurent2007a, title={A Language-Based Approach for Improving the Robustness of Network Application Protocol Implementations}, author={Burgy Laurent (INRIA Futurs), Laurent R'eveill`ere (INRIA Futurs), Julia Lawall (DIKU), Gilles Muller (INRIA Rennes)}, journal={arXiv preprint arXiv:0704.1373}, year={2007}, archivePrefix={arXiv}, eprint={0704.1373}, primaryClass={cs.PL} }
laurent2007a
arxiv-57
0704.1394
Calculating Valid Domains for BDD-Based Interactive Configuration
<|reference_start|>Calculating Valid Domains for BDD-Based Interactive Configuration: In these notes we formally describe the functionality of Calculating Valid Domains from the BDD representing the solution space of valid configurations. The formalization is largely based on the CLab configuration framework.<|reference_end|>
arxiv
@article{hadzic2007calculating, title={Calculating Valid Domains for BDD-Based Interactive Configuration}, author={Tarik Hadzic, Rune Moller Jensen, Henrik Reif Andersen}, journal={arXiv preprint arXiv:0704.1394}, year={2007}, archivePrefix={arXiv}, eprint={0704.1394}, primaryClass={cs.AI} }
hadzic2007calculating
arxiv-58
0704.1409
Preconditioned Temporal Difference Learning
<|reference_start|>Preconditioned Temporal Difference Learning: This paper has been withdrawn by the author. This draft is withdrawn for its poor quality in english, unfortunately produced by the author when he was just starting his science route. Look at the ICML version instead: http://icml2008.cs.helsinki.fi/papers/111.pdf<|reference_end|>
arxiv
@article{hengshuai2007preconditioned, title={Preconditioned Temporal Difference Learning}, author={Yao HengShuai}, journal={arXiv preprint arXiv:0704.1409}, year={2007}, archivePrefix={arXiv}, eprint={0704.1409}, primaryClass={cs.LG cs.AI} }
hengshuai2007preconditioned
arxiv-59
0704.1411
Trellis-Coded Quantization Based on Maximum-Hamming-Distance Binary Codes
<|reference_start|>Trellis-Coded Quantization Based on Maximum-Hamming-Distance Binary Codes: Most design approaches for trellis-coded quantization take advantage of the duality of trellis-coded quantization with trellis-coded modulation, and use the same empirically-found convolutional codes to label the trellis branches. This letter presents an alternative approach that instead takes advantage of maximum-Hamming-distance convolutional codes. The proposed source codes are shown to be competitive with the best in the literature for the same computational complexity.<|reference_end|>
arxiv
@article{cappellari2007trellis-coded, title={Trellis-Coded Quantization Based on Maximum-Hamming-Distance Binary Codes}, author={Lorenzo Cappellari}, journal={arXiv preprint arXiv:0704.1411}, year={2007}, archivePrefix={arXiv}, eprint={0704.1411}, primaryClass={cs.IT math.IT} }
cappellari2007trellis-coded
arxiv-60
0704.1455
A Better Good-Turing Estimator for Sequence Probabilities
<|reference_start|>A Better Good-Turing Estimator for Sequence Probabilities: We consider the problem of estimating the probability of an observed string drawn i.i.d. from an unknown distribution. The key feature of our study is that the length of the observed string is assumed to be of the same order as the size of the underlying alphabet. In this setting, many letters are unseen and the empirical distribution tends to overestimate the probability of the observed letters. To overcome this problem, the traditional approach to probability estimation is to use the classical Good-Turing estimator. We introduce a natural scaling model and use it to show that the Good-Turing sequence probability estimator is not consistent. We then introduce a novel sequence probability estimator that is indeed consistent under the natural scaling model.<|reference_end|>
arxiv
@article{wagner2007a, title={A Better Good-Turing Estimator for Sequence Probabilities}, author={Aaron B. Wagner, Pramod Viswanath, and Sanjeev R. Kulkarni}, journal={arXiv preprint arXiv:0704.1455}, year={2007}, archivePrefix={arXiv}, eprint={0704.1455}, primaryClass={cs.IT math.IT} }
wagner2007a
arxiv-61
0704.1524
GLRT-Optimal Noncoherent Lattice Decoding
<|reference_start|>GLRT-Optimal Noncoherent Lattice Decoding: This paper presents new low-complexity lattice-decoding algorithms for noncoherent block detection of QAM and PAM signals over complex-valued fading channels. The algorithms are optimal in terms of the generalized likelihood ratio test (GLRT). The computational complexity is polynomial in the block length, making GLRT-optimal noncoherent detection feasible for implementation. We also provide even lower complexity suboptimal algorithms. Simulations show that the suboptimal algorithms have performance indistinguishable from the optimal algorithms. Finally, we consider block based transmission, and propose to use noncoherent detection as an alternative to pilot assisted transmission (PAT). The new technique is shown to outperform PAT.<|reference_end|>
arxiv
@article{ryan2007glrt-optimal, title={GLRT-Optimal Noncoherent Lattice Decoding}, author={Daniel J. Ryan, Iain B. Collings and I. Vaughan L. Clarkson}, journal={arXiv preprint arXiv:0704.1524}, year={2007}, doi={10.1109/TSP.2007.894237}, archivePrefix={arXiv}, eprint={0704.1524}, primaryClass={cs.IT math.IT} }
ryan2007glrt-optimal
arxiv-62
0704.1571
On restrictions of balanced 2-interval graphs
<|reference_start|>On restrictions of balanced 2-interval graphs: The class of 2-interval graphs has been introduced for modelling scheduling and allocation problems, and more recently for specific bioinformatic problems. Some of those applications imply restrictions on the 2-interval graphs, and justify the introduction of a hierarchy of subclasses of 2-interval graphs that generalize line graphs: balanced 2-interval graphs, unit 2-interval graphs, and (x,x)-interval graphs. We provide instances that show that all the inclusions are strict. We extend the NP-completeness proof of recognizing 2-interval graphs to the recognition of balanced 2-interval graphs. Finally we give hints on the complexity of unit 2-interval graphs recognition, by studying relationships with other graph classes: proper circular-arc, quasi-line graphs, K_{1,5}-free graphs, ...<|reference_end|>
arxiv
@article{gambette2007on, title={On restrictions of balanced 2-interval graphs}, author={Philippe Gambette (LIAFA), St'ephane Vialette (LRI)}, journal={Dans Lecture Notes In Computer Science - 33rd International Workshop on Graph-Theoretic Concepts in Computer Science (WG'07), Dornburg : Allemagne (2007)}, year={2007}, doi={10.1007/978-3-540-74839-7_6}, archivePrefix={arXiv}, eprint={0704.1571}, primaryClass={cs.DM q-bio.QM} }
gambette2007on
arxiv-63
0704.1675
Exploiting Social Annotation for Automatic Resource Discovery
<|reference_start|>Exploiting Social Annotation for Automatic Resource Discovery: Information integration applications, such as mediators or mashups, that require access to information resources currently rely on users manually discovering and integrating them in the application. Manual resource discovery is a slow process, requiring the user to sift through results obtained via keyword-based search. Although search methods have advanced to include evidence from document contents, its metadata and the contents and link structure of the referring pages, they still do not adequately cover information sources -- often called ``the hidden Web''-- that dynamically generate documents in response to a query. The recently popular social bookmarking sites, which allow users to annotate and share metadata about various information sources, provide rich evidence for resource discovery. In this paper, we describe a probabilistic model of the user annotation process in a social bookmarking system del.icio.us. We then use the model to automatically find resources relevant to a particular information domain. Our experimental results on data obtained from \emph{del.icio.us} show this approach as a promising method for helping automate the resource discovery task.<|reference_end|>
arxiv
@article{plangprasopchok2007exploiting, title={Exploiting Social Annotation for Automatic Resource Discovery}, author={Anon Plangprasopchok and Kristina Lerman}, journal={arXiv preprint arXiv:0704.1675}, year={2007}, archivePrefix={arXiv}, eprint={0704.1675}, primaryClass={cs.AI cs.CY cs.DL} }
plangprasopchok2007exploiting
arxiv-64
0704.1676
Personalizing Image Search Results on Flickr
<|reference_start|>Personalizing Image Search Results on Flickr: The social media site Flickr allows users to upload their photos, annotate them with tags, submit them to groups, and also to form social networks by adding other users as contacts. Flickr offers multiple ways of browsing or searching it. One option is tag search, which returns all images tagged with a specific keyword. If the keyword is ambiguous, e.g., ``beetle'' could mean an insect or a car, tag search results will include many images that are not relevant to the sense the user had in mind when executing the query. We claim that users express their photography interests through the metadata they add in the form of contacts and image annotations. We show how to exploit this metadata to personalize search results for the user, thereby improving search performance. First, we show that we can significantly improve search precision by filtering tag search results by user's contacts or a larger social network that includes those contact's contacts. Secondly, we describe a probabilistic model that takes advantage of tag information to discover latent topics contained in the search results. The users' interests can similarly be described by the tags they used for annotating their images. The latent topics found by the model are then used to personalize search results by finding images on topics that are of interest to the user.<|reference_end|>
arxiv
@article{lerman2007personalizing, title={Personalizing Image Search Results on Flickr}, author={Kristina Lerman, Anon Plangprasopchok and Chio Wong}, journal={arXiv preprint arXiv:0704.1676}, year={2007}, archivePrefix={arXiv}, eprint={0704.1676}, primaryClass={cs.IR cs.AI cs.CY cs.DL cs.HC} }
lerman2007personalizing
arxiv-65
0704.1678
Settling the Complexity of Computing Two-Player Nash Equilibria
<|reference_start|>Settling the Complexity of Computing Two-Player Nash Equilibria: We settle a long-standing open question in algorithmic game theory. We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. This is the first of a series of results concerning the complexity of Nash equilibria. In particular, we prove the following theorems: Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results demonstrate that, even in the simplest form of non-cooperative games, equilibrium computation and approximation are polynomial-time equivalent to fixed point computation. Our results also have two broad complexity implications in mathematical economics and operations research: Arrow-Debreu market equilibria are PPAD-hard to compute. The P-Matrix Linear Complementarity Problem is computationally harder than convex programming unless every problem in PPAD is solvable in polynomial time.<|reference_end|>
arxiv
@article{chen2007settling, title={Settling the Complexity of Computing Two-Player Nash Equilibria}, author={Xi Chen, Xiaotie Deng, Shang-Hua Teng}, journal={arXiv preprint arXiv:0704.1678}, year={2007}, archivePrefix={arXiv}, eprint={0704.1678}, primaryClass={cs.GT cs.CC} }
chen2007settling
arxiv-66
0704.1694
Locally Decodable Codes From Nice Subsets of Finite Fields and Prime Factors of Mersenne Numbers
<|reference_start|>Locally Decodable Codes From Nice Subsets of Finite Fields and Prime Factors of Mersenne Numbers: A k-query Locally Decodable Code (LDC) encodes an n-bit message x as an N-bit codeword C(x), such that one can probabilistically recover any bit x_i of the message by querying only k bits of the codeword C(x), even after some constant fraction of codeword bits has been corrupted. The major goal of LDC related research is to establish the optimal trade-off between length and query complexity of such codes. Recently [Y] introduced a novel technique for constructing locally decodable codes and vastly improved the upper bounds for code length. The technique is based on Mersenne primes. In this paper we extend the work of [Y] and argue that further progress via these methods is tied to progress on an old number theory question regarding the size of the largest prime factors of Mersenne numbers. Specifically, we show that every Mersenne number m=2^t-1 that has a prime factor p>m^\gamma yields a family of k(\gamma)-query locally decodable codes of length Exp(n^{1/t}). Conversely, if for some fixed k and all \epsilon > 0 one can use the technique of [Y] to obtain a family of k-query LDCs of length Exp(n^\epsilon), then infinitely many Mersenne numbers have prime factors larger than currently known.<|reference_end|>
arxiv
@article{kedlaya2007locally, title={Locally Decodable Codes From Nice Subsets of Finite Fields and Prime Factors of Mersenne Numbers}, author={Kiran S. Kedlaya, Sergey Yekhanin}, journal={arXiv preprint arXiv:0704.1694}, year={2007}, archivePrefix={arXiv}, eprint={0704.1694}, primaryClass={cs.CC math.NT} }
kedlaya2007locally
arxiv-67
0704.1707
A Cut-free Sequent Calculus for Bi-Intuitionistic Logic: Extended Version
<|reference_start|>A Cut-free Sequent Calculus for Bi-Intuitionistic Logic: Extended Version: Bi-intuitionistic logic is the extension of intuitionistic logic with a connective dual to implication. Bi-intuitionistic logic was introduced by Rauszer as a Hilbert calculus with algebraic and Kripke semantics. But her subsequent ``cut-free'' sequent calculus for BiInt has recently been shown by Uustalu to fail cut-elimination. We present a new cut-free sequent calculus for BiInt, and prove it sound and complete with respect to its Kripke semantics. Ensuring completeness is complicated by the interaction between implication and its dual, similarly to future and past modalities in tense logic. Our calculus handles this interaction using extended sequents which pass information from premises to conclusions using variables instantiated at the leaves of failed derivation trees. Our simple termination argument allows our calculus to be used for automated deduction, although this is not its main purpose.<|reference_end|>
arxiv
@article{buisman2007a, title={A Cut-free Sequent Calculus for Bi-Intuitionistic Logic: Extended Version}, author={Linda Buisman and Rajeev Gor'e}, journal={arXiv preprint arXiv:0704.1707}, year={2007}, archivePrefix={arXiv}, eprint={0704.1707}, primaryClass={cs.LO} }
buisman2007a
arxiv-68
0704.1709
Traitement Des Donnees Manquantes Au Moyen De L'Algorithme De Kohonen
<|reference_start|>Traitement Des Donnees Manquantes Au Moyen De L'Algorithme De Kohonen: Nous montrons comment il est possible d'utiliser l'algorithme d'auto organisation de Kohonen pour traiter des donn\'ees avec valeurs manquantes et estimer ces derni\`eres. Apr\`es un rappel m\'ethodologique, nous illustrons notre propos \`a partir de trois applications \`a des donn\'ees r\'eelles. ----- We show how it is possible to use the Kohonen self-organizing algorithm to deal with data which contain missing values and to estimate them. After a methodological recall, we illustrate our purpose from three real databases applications.<|reference_end|>
arxiv
@article{cottrell2007traitement, title={Traitement Des Donnees Manquantes Au Moyen De L'Algorithme De Kohonen}, author={Marie Cottrell (SAMOS, Matisse) and Smail Ibbou (SAMOS, Matisse) and Patrick Letr\'emy (SAMOS, Matisse)}, journal={Actes de la dixi\`eme conf\'erence ACSEG 2003 (Nantes) (2003) 201-217}, year={2007}, archivePrefix={arXiv}, eprint={0704.1709}, primaryClass={stat.AP cs.NE} }
cottrell2007traitement
arxiv-69
0704.1748
Self-Organization applied to Dynamic Network Layout
<|reference_start|>Self-Organization applied to Dynamic Network Layout: As networks and their structure have become a major field of research, a strong demand for network visualization has emerged. We address this challenge by formalizing the well-established spring layout in terms of dynamic equations. We thus open up the design space for new algorithms. Drawing from the knowledge of systems design, we derive a layout algorithm that remedies several drawbacks of the original spring layout. This new algorithm relies on the balancing of two antagonistic forces. We thus call it {\em arf} for "attractive and repulsive forces". It is, as we claim, particularly suited for a dynamic layout of smaller networks ($n < 10^3$). We back this claim with several application examples from ongoing complex systems research.<|reference_end|>
arxiv
@article{geipel2007self-organization, title={Self-Organization applied to Dynamic Network Layout}, author={Markus M. Geipel}, journal={International Journal of Modern Physics C vol. 18, no. 10 (2007), pp. 1537-1549}, year={2007}, doi={10.1142/S0129183107011558}, archivePrefix={arXiv}, eprint={0704.1748}, primaryClass={physics.comp-ph cs.DS nlin.AO} }
geipel2007self-organization
arxiv-70
0704.1751
Information Theoretic Proofs of Entropy Power Inequalities
<|reference_start|>Information Theoretic Proofs of Entropy Power Inequalities: While most useful information theoretic inequalities can be deduced from the basic properties of entropy or mutual information, up to now Shannon's entropy power inequality (EPI) is an exception: Existing information theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn's identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman's Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai and Verd\'u used in earlier proofs. The result has the advantage of being very simple in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder's generalized EPI for linear transformations of the random variables, Takano and Johnson's EPI for dependent variables, Liu and Viswanath's covariance-constrained EPI, and Costa's concavity inequality for the entropy power.<|reference_end|>
arxiv
@article{rioul2007information, title={Information Theoretic Proofs of Entropy Power Inequalities}, author={Olivier Rioul}, journal={arXiv preprint arXiv:0704.1751}, year={2007}, doi={10.1109/TIT.2010.2090193}, archivePrefix={arXiv}, eprint={0704.1751}, primaryClass={cs.IT math.IT} }
rioul2007information
arxiv-71
0704.1756
The Invar Tensor Package
<|reference_start|>The Invar Tensor Package: The Invar package is introduced, a fast manipulator of generic scalar polynomial expressions formed from the Riemann tensor of a four-dimensional metric-compatible connection. The package can maximally simplify any polynomial containing tensor products of up to seven Riemann tensors within seconds. It has been implemented both in Mathematica and Maple algebraic systems.<|reference_end|>
arxiv
@article{martin-garcia2007the, title={The Invar Tensor Package}, author={Jose M. Martin-Garcia and Renato Portugal and Leon R. U. Manssur}, journal={Comp. Phys. Commun. 177 (2007) 640-648}, year={2007}, doi={10.1016/j.cpc.2007.05.015}, archivePrefix={arXiv}, eprint={0704.1756}, primaryClass={cs.SC gr-qc hep-th} }
martin-garcia2007the
arxiv-72
0704.1768
Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models
<|reference_start|>Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models: This paper aims to provide a practical example on the assessment and propagation of input uncertainty for option pricing when using tree-based methods. Input uncertainty is propagated into output uncertainty, reflecting that option prices are as unknown as the inputs they are based on. Option pricing formulas are tools whose validity is conditional not only on how close the model represents reality, but also on the quality of the inputs they use, and those inputs are usually not observable. We provide three alternative frameworks to calibrate option pricing tree models, propagating parameter uncertainty into the resulting option prices. We finally compare our methods with classical calibration-based results assuming that there is no options market established. These methods can be applied to pricing of instruments for which there is not an options market, as well as a methodological tool to account for parameter and model uncertainty in theoretical option pricing.<|reference_end|>
arxiv
@article{gzyl2007assessment, title={Assessment and Propagation of Input Uncertainty in Tree-based Option Pricing Models}, author={Henryk Gzyl and German Molina and Enrique ter Horst}, journal={arXiv preprint arXiv:0704.1768}, year={2007}, archivePrefix={arXiv}, eprint={0704.1768}, primaryClass={cs.CE cs.GT} }
gzyl2007assessment
arxiv-73
0704.1783
Unicast and Multicast QoS Routing with Soft Constraint Logic Programming
<|reference_start|>Unicast and Multicast QoS Routing with Soft Constraint Logic Programming: We present a formal model to represent and solve the unicast/multicast routing problem in networks with Quality of Service (QoS) requirements. To attain this, first we translate the network into a weighted graph (unicast) or an and-or graph (multicast), where the weight on a connector corresponds to the multidimensional cost of sending a packet on the related network link: each component of the weights vector represents a different QoS metric value (e.g. bandwidth, cost, delay, packet loss). The second step consists in writing this graph as a program in Soft Constraint Logic Programming (SCLP): the engine of this framework is then able to find the best paths/trees by optimizing their costs and solving the constraints imposed on them (e.g. delay < 40msec), thus finding a solution to QoS routing problems. Moreover, c-semiring structures are a convenient tool to model QoS metrics. Finally, we provide an implementation of the framework over scale-free networks and we suggest how the performance can be improved.<|reference_end|>
arxiv
@article{bistarelli2007unicast, title={Unicast and Multicast QoS Routing with Soft Constraint Logic Programming}, author={Stefano Bistarelli and Ugo Montanari and Francesca Rossi and Francesco Santini}, journal={arXiv preprint arXiv:0704.1783}, year={2007}, archivePrefix={arXiv}, eprint={0704.1783}, primaryClass={cs.LO cs.AI cs.NI} }
bistarelli2007unicast
arxiv-74
0704.1818
Low-density graph codes that are optimal for source/channel coding and binning
<|reference_start|>Low-density graph codes that are optimal for source/channel coding and binning: We describe and analyze the joint source/channel coding properties of a class of sparse graphical codes based on compounding a low-density generator matrix (LDGM) code with a low-density parity check (LDPC) code. Our first pair of theorems establish that there exist codes from this ensemble, with all degrees remaining bounded independently of block length, that are simultaneously optimal as both source and channel codes when encoding and decoding are performed optimally. More precisely, in the context of lossy compression, we prove that finite degree constructions can achieve any pair $(R, D)$ on the rate-distortion curve of the binary symmetric source. In the context of channel coding, we prove that finite degree codes can achieve any pair $(C, p)$ on the capacity-noise curve of the binary symmetric channel. Next, we show that our compound construction has a nested structure that can be exploited to achieve the Wyner-Ziv bound for source coding with side information (SCSI), as well as the Gelfand-Pinsker bound for channel coding with side information (CCSI). Although the current results are based on optimal encoding and decoding, the proposed graphical codes have sparse structure and high girth that renders them well-suited to message-passing and other efficient decoding procedures.<|reference_end|>
arxiv
@article{wainwright2007low-density, title={Low-density graph codes that are optimal for source/channel coding and binning}, author={Martin J. Wainwright and Emin Martinian}, journal={arXiv preprint arXiv:0704.1818}, year={2007}, number={Technical report 730}, archivePrefix={arXiv}, eprint={0704.1818}, primaryClass={cs.IT math.IT} }
wainwright2007low-density
arxiv-75
0704.1827
Transaction-Oriented Simulation In Ad Hoc Grids
<|reference_start|>Transaction-Oriented Simulation In Ad Hoc Grids: This paper analyses the possibilities of performing parallel transaction-oriented simulations with a special focus on the space-parallel approach and discrete event simulation synchronisation algorithms that are suitable for transaction-oriented simulation and the target environment of Ad Hoc Grids. To demonstrate the findings a Java-based parallel transaction-oriented simulator for the simulation language GPSS/H is implemented on the basis of the promising Shock Resistant Time Warp synchronisation algorithm and using the Grid framework ProActive. The validation of this parallel simulator shows that the Shock Resistant Time Warp algorithm can successfully reduce the number of rolled back Transaction moves but it also reveals circumstances in which the Shock Resistant Time Warp algorithm can be outperformed by the normal Time Warp algorithm. The conclusion of this paper suggests possible improvements to the Shock Resistant Time Warp algorithm to avoid such problems.<|reference_end|>
arxiv
@article{krafft2007transaction-oriented, title={Transaction-Oriented Simulation In Ad Hoc Grids}, author={Gerald Krafft}, journal={arXiv preprint arXiv:0704.1827}, year={2007}, archivePrefix={arXiv}, eprint={0704.1827}, primaryClass={cs.DC} }
krafft2007transaction-oriented
arxiv-76
0704.1829
On-line Chain Partitions of Up-growing Semi-orders
<|reference_start|>On-line Chain Partitions of Up-growing Semi-orders: On-line chain partition is a two-player game between Spoiler and Algorithm. Spoiler presents a partially ordered set, point by point. Algorithm assigns incoming points (immediately and irrevocably) to the chains which constitute a chain partition of the order. The value of the game for orders of width $w$ is a minimum number $\fVal(w)$ such that Algorithm has a strategy using at most $\fVal(w)$ chains on orders of width at most $w$. We analyze the chain partition game for up-growing semi-orders. Surprisingly, the golden ratio comes into play and the value of the game is $\lfloor\frac{1+\sqrt{5}}{2}\; w \rfloor$.<|reference_end|>
arxiv
@article{felsner2007on-line, title={On-line Chain Partitions of Up-growing Semi-orders}, author={Stefan Felsner and Kamil Kloch and Grzegorz Matecki and Piotr Micek}, journal={arXiv preprint arXiv:0704.1829}, year={2007}, archivePrefix={arXiv}, eprint={0704.1829}, primaryClass={cs.DM} }
felsner2007on-line
arxiv-77
0704.1833
Analysis of the 802.11e Enhanced Distributed Channel Access Function
<|reference_start|>Analysis of the 802.11e Enhanced Distributed Channel Access Function: The IEEE 802.11e standard revises the Medium Access Control (MAC) layer of the former IEEE 802.11 standard for Quality-of-Service (QoS) provision in the Wireless Local Area Networks (WLANs). The Enhanced Distributed Channel Access (EDCA) function of 802.11e defines multiple Access Categories (AC) with AC-specific Contention Window (CW) sizes, Arbitration Interframe Space (AIFS) values, and Transmit Opportunity (TXOP) limits to support MAC-level QoS and prioritization. We propose an analytical model for the EDCA function which incorporates an accurate CW, AIFS, and TXOP differentiation at any traffic load. The proposed model is also shown to capture the effect of MAC layer buffer size on the performance. Analytical and simulation results are compared to demonstrate the accuracy of the proposed approach for varying traffic loads, EDCA parameters, and MAC layer buffer space.<|reference_end|>
arxiv
@article{inan2007analysis, title={Analysis of the 802.11e Enhanced Distributed Channel Access Function}, author={Inanc Inan and Feyza Keceli and Ender Ayanoglu}, journal={arXiv preprint arXiv:0704.1833}, year={2007}, archivePrefix={arXiv}, eprint={0704.1833}, primaryClass={cs.NI} }
inan2007analysis
arxiv-78
0704.1838
Performance Analysis of the IEEE 802.11e Enhanced Distributed Coordination Function using Cycle Time Approach
<|reference_start|>Performance Analysis of the IEEE 802.11e Enhanced Distributed Coordination Function using Cycle Time Approach: The recently ratified IEEE 802.11e standard defines the Enhanced Distributed Channel Access (EDCA) function for Quality-of-Service (QoS) provisioning in the Wireless Local Area Networks (WLANs). The EDCA uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) and slotted Binary Exponential Backoff (BEB) mechanism. We present a simple mathematical analysis framework for the EDCA function. Our analysis considers the fact that the distributed random access systems exhibit cyclic behavior where each station successfully transmits a packet in a cycle. Our analysis shows that an AC-specific cycle time exists for the EDCA function. Validating the theoretical results via simulations, we show that the proposed analysis accurately captures EDCA saturation performance in terms of average throughput, medium access delay, and packet loss ratio. The cycle time analysis is a simple and insightful substitute for previously proposed more complex EDCA models.<|reference_end|>
arxiv
@article{inan2007performance, title={Performance Analysis of the IEEE 802.11e Enhanced Distributed Coordination Function using Cycle Time Approach}, author={Inanc Inan and Feyza Keceli and Ender Ayanoglu}, journal={arXiv preprint arXiv:0704.1838}, year={2007}, archivePrefix={arXiv}, eprint={0704.1838}, primaryClass={cs.OH} }
inan2007performance
arxiv-79
0704.1842
Fairness Provision in the IEEE 802.11e Infrastructure Basic Service Set
<|reference_start|>Fairness Provision in the IEEE 802.11e Infrastructure Basic Service Set: Most of the deployed IEEE 802.11e Wireless Local Area Networks (WLANs) use infrastructure Basic Service Set (BSS) in which an Access Point (AP) serves as a gateway between wired and wireless domains. We present the unfairness problem between the uplink and the downlink flows of any Access Category (AC) in the 802.11e Enhanced Distributed Channel Access (EDCA) when the default settings of the EDCA parameters are used. We propose a simple analytical model to calculate the EDCA parameter settings that achieve weighted fair resource allocation for all uplink and downlink flows. We also propose a simple model-assisted measurement-based dynamic EDCA parameter adaptation algorithm. Moreover, our dynamic solution addresses the differences in the transport layer and the Medium Access Control (MAC) layer interactions of User Datagram Protocol (UDP) and Transmission Control Protocol (TCP). We show that proposed Contention Window (CW) and Transmit Opportunity (TXOP) limit adaptation at the AP provides fair UDP and TCP access between uplink and downlink flows of the same AC while preserving prioritization among ACs.<|reference_end|>
arxiv
@article{keceli2007fairness, title={Fairness Provision in the IEEE 802.11e Infrastructure Basic Service Set}, author={Feyza Keceli and Inanc Inan and Ender Ayanoglu}, journal={arXiv preprint arXiv:0704.1842}, year={2007}, archivePrefix={arXiv}, eprint={0704.1842}, primaryClass={cs.OH} }
keceli2007fairness
arxiv-80
0704.1873
An Achievable Rate Region for Interference Channels with Conferencing
<|reference_start|>An Achievable Rate Region for Interference Channels with Conferencing: In this paper, we propose an achievable rate region for discrete memoryless interference channels with conferencing at the transmitter side. We employ superposition block Markov encoding, combined with simultaneous superposition coding, dirty paper coding, and random binning to obtain the achievable rate region. We show that, under respective conditions, the proposed achievable region reduces to Han and Kobayashi achievable region for interference channels, the capacity region for degraded relay channels, and the capacity region for the Gaussian vector broadcast channel. Numerical examples for the Gaussian case are given.<|reference_end|>
arxiv
@article{cao2007an, title={An Achievable Rate Region for Interference Channels with Conferencing}, author={Yi Cao and Biao Chen}, journal={arXiv preprint arXiv:0704.1873}, year={2007}, archivePrefix={arXiv}, eprint={0704.1873}, primaryClass={cs.IT math.IT} }
cao2007an
arxiv-81
0704.1886
An algebraic generalization of Kripke structures
<|reference_start|>An algebraic generalization of Kripke structures: The Kripke semantics of classical propositional normal modal logic is made algebraic via an embedding of Kripke structures into the larger class of pointed stably supported quantales. This algebraic semantics subsumes the traditional algebraic semantics based on lattices with unary operators, and it suggests natural interpretations of modal logic, of possible interest in the applications, in structures that arise in geometry and analysis, such as foliated manifolds and operator algebras, via topological groupoids and inverse semigroups. We study completeness properties of the quantale based semantics for the systems K, T, K4, S4, and S5, in particular obtaining an axiomatization for S5 which does not use negation or the modal necessity operator. As additional examples we describe intuitionistic propositional modal logic, the logic of programs PDL, and the ramified temporal logic CTL.<|reference_end|>
arxiv
@article{marcelino2007an, title={An algebraic generalization of Kripke structures}, author={S\'ergio Marcelino and Pedro Resende}, journal={Math. Proc. Cambridge Philos. Soc. 145 (2008) 549-577}, year={2007}, doi={10.1017/S0305004108001667}, archivePrefix={arXiv}, eprint={0704.1886}, primaryClass={math.LO cs.LO math.RA} }
marcelino2007an
arxiv-82
0704.1925
Blind Identification of Distributed Antenna Systems with Multiple Carrier Frequency Offsets
<|reference_start|>Blind Identification of Distributed Antenna Systems with Multiple Carrier Frequency Offsets: In spatially distributed multiuser antenna systems, the received signal contains multiple carrier-frequency offsets (CFOs) arising from mismatch between the oscillators of transmitters and receivers. This results in a time-varying rotation of the data constellation, which needs to be compensated at the receiver before symbol recovery. In this paper, a new approach for blind CFO estimation and symbol recovery is proposed. The received base-band signal is over-sampled, and its polyphase components are used to formulate a virtual Multiple-Input Multiple-Output (MIMO) problem. By applying blind MIMO system estimation techniques, the system response can be estimated and decoupled versions of the user symbols can be recovered, each one of which contains a distinct CFO. By applying a decision feedback Phase Lock Loop (PLL), the CFO can be mitigated and the transmitted symbols can be recovered. The estimated MIMO system response provides information about the CFOs that can be used to initialize the PLL, speed up its convergence, and avoid ambiguities usually linked with PLL.<|reference_end|>
arxiv
@article{yu2007blind, title={Blind Identification of Distributed Antenna Systems with Multiple Carrier Frequency Offsets}, author={Yuanning Yu and Athina P. Petropulu and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.1925}, year={2007}, archivePrefix={arXiv}, eprint={0704.1925}, primaryClass={cs.IT math.IT} }
yu2007blind
arxiv-83
0704.2010
A study of structural properties on profiles HMMs
<|reference_start|>A study of structural properties on profiles HMMs: Motivation: Profile hidden Markov Models (pHMMs) are a popular and very useful tool in the detection of the remote homologue protein families. Unfortunately, their performance is not always satisfactory when proteins are in the 'twilight zone'. We present HMMER-STRUCT, a model construction algorithm and tool that tries to improve pHMM performance by using structural information while training pHMMs. As a first step, HMMER-STRUCT constructs a set of pHMMs. Each pHMM is constructed by weighting each residue in an aligned protein according to a specific structural property of the residue. Properties used were primary, secondary and tertiary structures, accessibility and packing. HMMER-STRUCT then prioritizes the results by voting. Results: We used the SCOP database to perform our experiments. Throughout, we apply leave-one-family-out cross-validation over protein superfamilies. First, we used the MAMMOTH-mult structural aligner to align the training set proteins. Then, we performed two sets of experiments. In a first experiment, we compared structure weighted models against standard pHMMs and against each other. In a second experiment, we compared the voting model against individual pHMMs. We compare method performance through ROC curves and through Precision/Recall curves, and assess significance through the paired two tailed t-test. Our results show significant performance improvements of all structurally weighted models over default HMMER, and a significant improvement in sensitivity of the combined models over both the original model and the structurally weighted models.<|reference_end|>
arxiv
@article{bernardes2007a, title={A study of structural properties on profiles HMMs}, author={Juliana S Bernardes and Alberto Davila and Vitor Santos Costa and Gerson Zaverucha}, journal={arXiv preprint arXiv:0704.2010}, year={2007}, archivePrefix={arXiv}, eprint={0704.2010}, primaryClass={cs.AI} }
bernardes2007a
arxiv-84
0704.2014
Extensive Games with Possibly Unaware Players
<|reference_start|>Extensive Games with Possibly Unaware Players: Standard game theory assumes that the structure of the game is common knowledge among players. We relax this assumption by considering extensive games where agents may be unaware of the complete structure of the game. In particular, they may not be aware of moves that they and other agents can make. We show how such games can be represented; the key idea is to describe the game from the point of view of every agent at every node of the game tree. We provide a generalization of Nash equilibrium and show that every game with awareness has a generalized Nash equilibrium. Finally, we extend these results to games with awareness of unawareness, where a player i may be aware that a player j can make moves that i is not aware of, and to subjective games, where players may have no common knowledge regarding the actual game and their beliefs are incompatible with a common prior.<|reference_end|>
arxiv
@article{halpern2007extensive, title={Extensive Games with Possibly Unaware Players}, author={Joseph Y. Halpern and Leandro C. R\^ego}, journal={arXiv preprint arXiv:0704.2014}, year={2007}, archivePrefix={arXiv}, eprint={0704.2014}, primaryClass={cs.GT cs.MA} }
halpern2007extensive
arxiv-85
0704.2017
Large System Analysis of Game-Theoretic Power Control in UWB Wireless Networks with Rake Receivers
<|reference_start|>Large System Analysis of Game-Theoretic Power Control in UWB Wireless Networks with Rake Receivers: This paper studies the performance of partial-Rake (PRake) receivers in impulse-radio ultrawideband wireless networks when an energy-efficient power control scheme is adopted. Due to the large bandwidth of the system, the multipath channel is assumed to be frequency-selective. By using noncooperative game-theoretic models and large system analysis, explicit expressions are derived in terms of network parameters to measure the effects of self- and multiple-access interference at a receiving access point. Performance of the PRake is compared in terms of achieved utilities and loss to that of the all-Rake receiver.<|reference_end|>
arxiv
@article{bacci2007large, title={Large System Analysis of Game-Theoretic Power Control in UWB Wireless Networks with Rake Receivers}, author={G. Bacci and M. Luise and H.V. Poor}, journal={arXiv preprint arXiv:0704.2017}, year={2007}, doi={10.1109/SPAWC.2007.4401311}, archivePrefix={arXiv}, eprint={0704.2017}, primaryClass={cs.IT cs.GT math.IT} }
bacci2007large
arxiv-86
0704.2083
Introduction to Arabic Speech Recognition Using CMUSphinx System
<|reference_start|>Introduction to Arabic Speech Recognition Using CMUSphinx System: In this paper Arabic was investigated from the speech recognition problem point of view. We propose a novel approach to build an Arabic Automated Speech Recognition System (ASR). This system is based on the open source CMU Sphinx-4, from the Carnegie Mellon University. CMU Sphinx is a large-vocabulary, speaker-independent, continuous speech recognition system based on discrete Hidden Markov Models (HMMs). We build a model using utilities from the open source CMU Sphinx. We will demonstrate the possible adaptability of this system to Arabic voice recognition.<|reference_end|>
arxiv
@article{satori2007introduction, title={Introduction to Arabic Speech Recognition Using CMUSphinx System}, author={H. Satori and M. Harti and N. Chenfour}, journal={arXiv preprint arXiv:0704.2083}, year={2007}, archivePrefix={arXiv}, eprint={0704.2083}, primaryClass={cs.CL cs.AI} }
satori2007introduction
arxiv-87
0704.2092
A Note on the Inapproximability of Correlation Clustering
<|reference_start|>A Note on the Inapproximability of Correlation Clustering: We consider inapproximability of the correlation clustering problem defined as follows: Given a graph $G = (V,E)$ where each edge is labeled either "+" (similar) or "-" (dissimilar), correlation clustering seeks to partition the vertices into clusters so that the number of pairs correctly (resp. incorrectly) classified with respect to the labels is maximized (resp. minimized). The two complementary problems are called MaxAgree and MinDisagree, respectively, and have been studied on complete graphs, where every edge is labeled, and general graphs, where some edge might not have been labeled. Natural edge-weighted versions of both problems have been studied as well. Let S-MaxAgree denote the weighted problem where all weights are taken from a set S; we show that S-MaxAgree with weights bounded by $O(|V|^{1/2-\delta})$ essentially belongs to the same hardness class in the following sense: if there is a polynomial time algorithm that approximates S-MaxAgree within a factor of $\lambda = O(\log{|V|})$ with high probability, then for any choice of S', S'-MaxAgree can be approximated in polynomial time within a factor of $(\lambda + \epsilon)$, where $\epsilon > 0$ can be arbitrarily small, with high probability. A similar statement also holds for S-MinDisagree. This result implies it is hard (assuming $NP \neq RP$) to approximate unweighted MaxAgree within a factor of $80/79-\epsilon$, improving upon a previously known factor of $116/115-\epsilon$ by Charikar et al. \cite{Chari05}.<|reference_end|>
arxiv
@article{tan2007a, title={A Note on the Inapproximability of Correlation Clustering}, author={Jinsong Tan}, journal={Information Processing Letters, 108: 331-335, 2008}, year={2007}, archivePrefix={arXiv}, eprint={0704.2092}, primaryClass={cs.LG cs.DS} }
tan2007a
arxiv-88
0704.2201
Arabic Speech Recognition System using CMU-Sphinx4
<|reference_start|>Arabic Speech Recognition System using CMU-Sphinx4: In this paper we present the creation of an Arabic version of Automated Speech Recognition System (ASR). This system is based on the open source Sphinx-4, from Carnegie Mellon University, which is a speech recognition system based on discrete hidden Markov models (HMMs). We investigate the changes that must be made to the model to adapt it to Arabic voice recognition. Keywords: Speech recognition, Acoustic model, Arabic language, HMMs, CMUSphinx-4, Artificial intelligence.<|reference_end|>
arxiv
@article{satori2007arabic, title={Arabic Speech Recognition System using CMU-Sphinx4}, author={H. Satori and M. Harti and N. Chenfour}, journal={arXiv preprint arXiv:0704.2201}, year={2007}, archivePrefix={arXiv}, eprint={0704.2201}, primaryClass={cs.CL cs.AI} }
satori2007arabic
arxiv-89
0704.2258
On the Hardness of Approximating Stopping and Trapping Sets in LDPC Codes
<|reference_start|>On the Hardness of Approximating Stopping and Trapping Sets in LDPC Codes: We prove that approximating the size of stopping and trapping sets in Tanner graphs of linear block codes, and more restrictively, the class of low-density parity-check (LDPC) codes, is NP-hard. The ramifications of our findings are that methods used for estimating the height of the error-floor of moderate- and long-length LDPC codes based on stopping and trapping set enumeration cannot provide accurate worst-case performance predictions.<|reference_end|>
arxiv
@article{mcgregor2007on, title={On the Hardness of Approximating Stopping and Trapping Sets in LDPC Codes}, author={Andrew McGregor and Olgica Milenkovic}, journal={arXiv preprint arXiv:0704.2258}, year={2007}, archivePrefix={arXiv}, eprint={0704.2258}, primaryClass={cs.IT math.IT} }
mcgregor2007on
arxiv-90
0704.2259
The Wiretap Channel with Feedback: Encryption over the Channel
<|reference_start|>The Wiretap Channel with Feedback: Encryption over the Channel: In this work, the critical role of noisy feedback in enhancing the secrecy capacity of the wiretap channel is established. Unlike previous works, where a noiseless public discussion channel is used for feedback, the feed-forward and feedback signals share the same noisy channel in the present model. Quite interestingly, this noisy feedback model is shown to be more advantageous in the current setting. More specifically, the discrete memoryless modulo-additive channel with a full-duplex destination node is considered first, and it is shown that the judicious use of feedback increases the perfect secrecy capacity to the capacity of the source-destination channel in the absence of the wiretapper. In the achievability scheme, the feedback signal corresponds to a private key, known only to the destination. In the half-duplex scheme, a novel feedback technique that always achieves a positive perfect secrecy rate (even when the source-wiretapper channel is less noisy than the source-destination channel) is proposed. These results hinge on the modulo-additive property of the channel, which is exploited by the destination to perform encryption over the channel without revealing its key to the source. Finally, this scheme is extended to the continuous real valued modulo-$\Lambda$ channel where it is shown that the perfect secrecy capacity with feedback is also equal to the capacity in the absence of the wiretapper.<|reference_end|>
arxiv
@article{lai2007the, title={The Wiretap Channel with Feedback: Encryption over the Channel}, author={Lifeng Lai, Hesham El Gamal and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.2259}, year={2007}, doi={10.1109/TIT.2008.929914}, archivePrefix={arXiv}, eprint={0704.2259}, primaryClass={cs.IT cs.CR math.IT} }
lai2007the
arxiv-91
0704.2282
Kekul\'e Cells for Molecular Computation
<|reference_start|>Kekul\'e Cells for Molecular Computation: The configurations of single and double bonds in polycyclic hydrocarbons are abstracted as Kekul\'e states of graphs. Sending a so-called soliton over an open channel between ports (external nodes) of the graph changes the Kekul\'e state and therewith the set of open channels in the graph. This switching behaviour is proposed as a basis for molecular computation. The proposal is highly speculative but may have tremendous impact. Kekul\'e states with the same boundary behaviour (port assignment) can be regarded as equivalent. This gives rise to the abstraction of Kekul\'e cells. The basic theory of Kekul\'e states and Kekul\'e cells is developed here, up to the classification of Kekul\'e cells with $\leq 4$ ports. To put the theory in context, we generalize Kekul\'e states to semi-Kekul\'e states, which form the solutions of a linear system of equations over the field of the bits 0 and 1. We briefly study so-called omniconjugated graphs, in which every port assignment of the right signature has a Kekul\'e state. Omniconjugated graphs may be useful as connectors between computational elements. We finally investigate some examples with potentially useful switching behaviour.<|reference_end|>
arxiv
@article{hesselink2007kekule, title={Kekul\'e Cells for Molecular Computation}, author={W.H. Hesselink and J.C. Hummelen and H.T. Jonkman and H.G. Reker and G.R. Renardel de Lavalette and M.H. van der Veen}, journal={arXiv preprint arXiv:0704.2282}, year={2007}, archivePrefix={arXiv}, eprint={0704.2282}, primaryClass={cs.OH cs.DM} }
hesselink2007kekule
arxiv-92
0704.2295
Using Image Attributes for Human Identification Protocols
<|reference_start|>Using Image Attributes for Human Identification Protocols: A secure human identification protocol aims at authenticating human users to a remote server when even the users' inputs are not hidden from an adversary. Recently, the authors proposed a human identification protocol in the RSA Conference 2007, which is loosely based on the ability of humans to efficiently process an image. The advantage being that an automated adversary is not effective in attacking the protocol without human assistance. This paper extends that work by trying to solve some of the open problems. First, we analyze the complexity of defeating the proposed protocols by quantifying the workload of a human adversary. Secondly, we propose a new construction based on textual CAPTCHAs (Reverse Turing Tests) in order to make the generation of automated challenges easier. We also present a brief experiment involving real human users to find out the number of possible attributes in a given image and give some guidelines for the selection of challenge questions based on the results. Finally, we analyze the previously proposed protocol in detail for the relationship between the secrets. Our results show that we can construct human identification protocols based on image evaluation with reasonably ``quantified'' security guarantees based on our model.<|reference_end|>
arxiv
@article{jameel2007using, title={Using Image Attributes for Human Identification Protocols}, author={Hassan Jameel, Heejo Lee and Sungyoung Lee}, journal={arXiv preprint arXiv:0704.2295}, year={2007}, archivePrefix={arXiv}, eprint={0704.2295}, primaryClass={cs.CR} }
jameel2007using
arxiv-93
0704.2344
Parallel computing for the finite element method
<|reference_start|>Parallel computing for the finite element method: A finite element method is presented to compute time harmonic microwave fields in three dimensional configurations. Nodal-based finite elements have been coupled with an absorbing boundary condition to solve open boundary problems. This paper describes how the modeling of large devices has been made possible using parallel computation. New algorithms are then proposed to implement this formulation on a cluster of workstations (10 DEC ALPHA 300X) and on a CRAY C98. Analysis of the computation efficiency is performed using simple problems. The electromagnetic scattering of a plane wave by a perfect electric conducting airplane is finally given as an example.<|reference_end|>
arxiv
@article{vollaire2007parallel, title={Parallel computing for the finite element method}, author={Christian Vollaire (CEGELY), Laurent Nicolas (CEGELY), Alain Nicolas (CEGELY)}, journal={EUROPEAN PHYSICAL JOURNAL Applied Physics 1, 3 (03/1998) 305-314}, year={2007}, doi={10.1051/epjap:1998151}, archivePrefix={arXiv}, eprint={0704.2344}, primaryClass={cs.DC} }
vollaire2007parallel
arxiv-94
0704.2351
Parallel computation of the rank of large sparse matrices from algebraic K-theory
<|reference_start|>Parallel computation of the rank of large sparse matrices from algebraic K-theory: This paper deals with the computation of the rank and of some integer Smith forms of a series of sparse matrices arising in algebraic K-theory. The number of non zero entries in the considered matrices ranges from 8 to 37 millions. The largest rank computation took more than 35 days on 50 processors. We report on the actual algorithms we used to build the matrices, their link to the motivic cohomology and the linear algebra and parallelizations required to perform such huge computations. In particular, these results are part of the first computation of the cohomology of the linear group GL_7(Z).<|reference_end|>
arxiv
@article{dumas2007parallel, title={Parallel computation of the rank of large sparse matrices from algebraic K-theory}, author={Jean-Guillaume Dumas (LMC - IMAG), Philippe Elbaz-Vincent (I3M), Pascal Giorgi (LP2A), Anna Urbanska (LMC - IMAG)}, journal={arXiv preprint arXiv:0704.2351}, year={2007}, archivePrefix={arXiv}, eprint={0704.2351}, primaryClass={math.KT cs.DC cs.SC math.NT} }
dumas2007parallel
arxiv-95
0704.2353
Scaling Laws of Cognitive Networks
<|reference_start|>Scaling Laws of Cognitive Networks: We consider a cognitive network consisting of n random pairs of cognitive transmitters and receivers communicating simultaneously in the presence of multiple primary users. Of interest is how the maximum throughput achieved by the cognitive users scales with n. Furthermore, how far these users must be from a primary user to guarantee a given primary outage. Two scenarios are considered for the network scaling law: (i) when each cognitive transmitter uses constant power to communicate with a cognitive receiver at a bounded distance away, and (ii) when each cognitive transmitter scales its power according to the distance to a considered primary user, allowing the cognitive transmitter-receiver distances to grow. Using single-hop transmission, suitable for cognitive devices of opportunistic nature, we show that, in both scenarios, with path loss larger than 2, the cognitive network throughput scales linearly with the number of cognitive users. We then explore the radius of a primary exclusive region void of cognitive transmitters. We obtain bounds on this radius for a given primary outage constraint. These bounds can help in the design of a primary network with exclusive regions, outside of which cognitive users may transmit freely. Our results show that opportunistic secondary spectrum access using single-hop transmission is promising.<|reference_end|>
arxiv
@article{vu2007scaling, title={Scaling Laws of Cognitive Networks}, author={Mai Vu, Natasha Devroye, Masoud Sharif and Vahid Tarokh}, journal={arXiv preprint arXiv:0704.2353}, year={2007}, doi={10.1109/CROWNCOM.2007.4549764}, archivePrefix={arXiv}, eprint={0704.2353}, primaryClass={cs.IT math.IT} }
vu2007scaling
arxiv-96
0704.2355
A Nice Labelling for Tree-Like Event Structures of Degree 3
<|reference_start|>A Nice Labelling for Tree-Like Event Structures of Degree 3: We address the problem of finding nice labellings for event structures of degree 3. We develop a minimum theory by which we prove that the labelling number of an event structure of degree 3 is bounded by a linear function of the height. The main theorem we present in this paper states that event structures of degree 3 whose causality order is a tree have a nice labelling with 3 colors. Finally, we exemplify how to use this theorem to construct upper bounds for the labelling number of other event structures of degree 3.<|reference_end|>
arxiv
@article{santocanale2007a, title={A Nice Labelling for Tree-Like Event Structures of Degree 3}, author={Luigi Santocanale (LIF)}, journal={arXiv preprint arXiv:0704.2355}, year={2007}, archivePrefix={arXiv}, eprint={0704.2355}, primaryClass={cs.DC} }
santocanale2007a
arxiv-97
0704.2375
Power control algorithms for CDMA networks based on large system analysis
<|reference_start|>Power control algorithms for CDMA networks based on large system analysis: Power control is a fundamental task accomplished in any wireless cellular network; its aim is to set the transmit power of any mobile terminal, so that each user is able to achieve its own target SINR. While conventional power control algorithms require knowledge of a number of parameters of the signal of interest and of the multiaccess interference, in this paper it is shown that in a large CDMA system much of this information can be dispensed with, and effective distributed power control algorithms may be implemented with very little information on the user of interest. An uplink CDMA system subject to flat fading is considered with a focus on the cases in which a linear MMSE receiver and a non-linear MMSE serial interference cancellation receiver are adopted; for the latter case new formulas are also given for the system SINR in the large system asymptote. Experimental results show an excellent agreement between the performance and the power profile of the proposed distributed algorithms and that of conventional ones that require much greater prior knowledge.<|reference_end|>
arxiv
@article{buzzi2007power, title={Power control algorithms for CDMA networks based on large system analysis}, author={Stefano Buzzi and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.2375}, year={2007}, archivePrefix={arXiv}, eprint={0704.2375}, primaryClass={cs.IT math.IT} }
buzzi2007power
arxiv-98
0704.2383
Power control and receiver design for energy efficiency in multipath CDMA channels with bandlimited waveforms
<|reference_start|>Power control and receiver design for energy efficiency in multipath CDMA channels with bandlimited waveforms: This paper is focused on the cross-layer design problem of joint multiuser detection and power control for energy-efficiency optimization in a wireless data network through a game-theoretic approach. Building on work of Meshkati, et al., wherein the tools of game-theory are used in order to achieve energy-efficiency in a simple synchronous code division multiple access system, system asynchronism, the use of bandlimited chip-pulses, and the multipath distortion induced by the wireless channel are explicitly incorporated into the analysis. Several non-cooperative games are proposed wherein users may vary their transmit power and their uplink receiver in order to maximize their utility, which is defined here as the ratio of data throughput to transmit power. In particular, the case in which a linear multiuser detector is adopted at the receiver is considered first, and then, the more challenging case in which non-linear decision feedback multiuser detectors are employed is considered. The proposed games are shown to admit a unique Nash equilibrium point, while simulation results show the effectiveness of the proposed solutions, as well as that the use of a decision-feedback multiuser receiver brings remarkable performance improvements.<|reference_end|>
arxiv
@article{buzzi2007power, title={Power control and receiver design for energy efficiency in multipath CDMA channels with bandlimited waveforms}, author={Stefano Buzzi, Valeria Massaro, and H. Vincent Poor}, journal={arXiv preprint arXiv:0704.2383}, year={2007}, archivePrefix={arXiv}, eprint={0704.2383}, primaryClass={cs.IT math.IT} }
buzzi2007power
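Note that the two Buzzi records above end up with the same citation key, `buzzi2007power`: throughout this corpus the keys appear to follow the pattern first-author surname + year + first title word, all lowercased. A minimal sketch of that apparent pattern (the helper name is illustrative; the dataset's actual key-generation code is not shown here):

```python
def citation_key(first_author: str, year: int, title: str) -> str:
    """Derive a key as <surname><year><first title word>, lowercased --
    the pattern the corpus citation keys appear to follow."""
    surname = first_author.split()[-1].lower()
    first_word = title.split()[0].lower()
    return f"{surname}{year}{first_word}"
```

Because only the first title word is used, two 2007 papers by the same first author whose titles both start with "Power" collide, which is exactly what happens with the two records above.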
arxiv-99
0704.2386
Bounded Pushdown dimension vs Lempel Ziv information density
<|reference_start|>Bounded Pushdown dimension vs Lempel Ziv information density: In this paper we introduce a variant of pushdown dimension called bounded pushdown (BPD) dimension, that measures the density of information contained in a sequence, relative to a BPD automata, i.e. a finite state machine equipped with an extra infinite memory stack, with the additional requirement that every input symbol only allows a bounded number of stack movements. BPD automata are a natural real-time restriction of pushdown automata. We show that BPD dimension is a robust notion by giving an equivalent characterization of BPD dimension in terms of BPD compressors. We then study the relationships between BPD compression, and the standard Lempel-Ziv (LZ) compression algorithm, and show that in contrast to the finite-state compressor case, LZ is not universal for bounded pushdown compressors in a strong sense: we construct a sequence that LZ fails to compress significantly, but that is compressed by at least a factor 2 by a BPD compressor. As a corollary we obtain a strong separation between finite-state and BPD dimension.<|reference_end|>
arxiv
@article{albert2007bounded, title={Bounded Pushdown dimension vs Lempel Ziv information density}, author={Pilar Albert, Elvira Mayordomo, and Philippe Moser}, journal={arXiv preprint arXiv:0704.2386}, year={2007}, archivePrefix={arXiv}, eprint={0704.2386}, primaryClass={cs.CC cs.IT math.IT} }
albert2007bounded
arxiv-100
0704.2448
Light Logics and Optimal Reduction: Completeness and Complexity
<|reference_start|>Light Logics and Optimal Reduction: Completeness and Complexity: Typing of lambda-terms in Elementary and Light Affine Logic (EAL, LAL, resp.) has been studied for two different reasons: on the one hand the evaluation of typed terms using LAL (EAL, resp.) proof-nets admits a guaranteed polynomial (elementary, resp.) bound; on the other hand these terms can also be evaluated by optimal reduction using the abstract version of Lamping's algorithm. The first reduction is global while the second one is local and asynchronous. We prove that for LAL (EAL, resp.) typed terms, Lamping's abstract algorithm also admits a polynomial (elementary, resp.) bound. We also show its soundness and completeness (for EAL and LAL with type fixpoints), by using a simple geometry of interaction model (context semantics).<|reference_end|>
arxiv
@article{baillot2007light, title={Light Logics and Optimal Reduction: Completeness and Complexity}, author={Patrick Baillot, Paolo Coppola and Ugo Dal Lago}, journal={arXiv preprint arXiv:0704.2448}, year={2007}, archivePrefix={arXiv}, eprint={0704.2448}, primaryClass={cs.LO cs.PL} }
baillot2007light
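Each corpus record above stores its abstract wrapped in `<|reference_start|>Title: abstract<|reference_end|>` markers, with the title repeated before the first ": ". A minimal parsing sketch (the function name and regex are illustrative, not dataset tooling; the first-colon split is a heuristic, since titles can themselves contain ": ", as in the wiretap-channel record above):

```python
import re

# Records store the abstract as "<|reference_start|>Title: abstract<|reference_end|>".
REF_PATTERN = re.compile(r"<\|reference_start\|>(.*?)<\|reference_end\|>", re.DOTALL)

def split_reference(field: str) -> tuple[str, str]:
    """Extract (title, abstract) from one marker-wrapped corpus field."""
    match = REF_PATTERN.search(field)
    if match is None:
        raise ValueError("no reference markers found")
    body = match.group(1)
    # Heuristic: the title ends at the first ": " separator.
    title, _, abstract = body.partition(": ")
    return title.strip(), abstract.strip()
```

For example, `split_reference` applied to the record for arXiv:0704.2353 would return the pair ("Scaling Laws of Cognitive Networks", "We consider a cognitive network ...").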

ScholarCopilot-Data-v1

ScholarCopilot-Data-v1 contains the corpus data and embedded vectors of Scholar Copilot. Scholar Copilot improves the academic writing process by seamlessly integrating automatic text completion and intelligent citation suggestions into a cohesive, human-in-the-loop AI-driven pipeline. Designed to enhance productivity and creativity, it provides researchers with high-quality text generation and precise citation recommendations powered by iterative and context-aware Retrieval-Augmented Generation (RAG).

The current version of Scholar Copilot leverages a state-of-the-art 7-billion-parameter large language model (LLM) trained on the complete arXiv full-paper corpus. This unified model for retrieval and generation is adept at making context-sensitive decisions about when to cite, what to cite, and how to generate coherent content based on reference papers.
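The "what to cite" step amounts to nearest-neighbor search over the embedded vectors shipped with this dataset. A hedged sketch of cosine-similarity retrieval (the array shapes, names, and NumPy-only setup are assumptions; the actual retrieval stack is not specified here):

```python
import numpy as np

def top_k_citations(query_vec: np.ndarray, corpus_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k corpus papers whose embeddings are most
    cosine-similar to the query embedding."""
    # Normalize so that dot products equal cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q
    # argpartition avoids a full sort; then order just the top k descending.
    top = np.argpartition(-sims, k - 1)[:k]
    return top[np.argsort(-sims[top])]
```

At serving time the query vector would be the embedding of the in-progress text, and the returned indices map back to corpus records like the ones listed above.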

🌟 Key Features

  • ** πŸ“ Next-3-Sentence Suggestions: Facilitates writing by predicting the next sentences with automatic retrieval and citation of relevant reference papers.
  • ** πŸ“š Citation Suggestions on Demand: Provides precise, contextually appropriate paper citations whenever needed.
  • ** ✨ Full Section Auto-Completion: Assists in brainstorming and drafting comprehensive paper content and structure.

The current version of ScholarCopilot primarily focuses on the introduction and related work sections of academic papers. We will support full-paper writing in future releases.
